Column              Type    Length / count
query_id            string  32 to 32 characters
query               string  6 to 5.38k characters
positive_passages   list    1 to 17 passages
negative_passages   list    9 to 100 passages
subset              string  7 distinct values
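Each row below is one retrieval/reranking record with these five fields. As a minimal sketch of how such records could be read and inspected, assuming they have been exported one JSON object per line to a file named scidocsrr.jsonl (the filename and the JSONL export are assumptions, not part of the dataset itself):

```python
import json
from pathlib import Path

# Hypothetical JSONL export of the rows shown below: one JSON object per line
# with the fields query_id, query, positive_passages, negative_passages, subset.
DATA_FILE = Path("scidocsrr.jsonl")


def load_records(path):
    """Yield one record (dict) per non-empty line of a JSONL file."""
    with path.open(encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line:
                yield json.loads(line)


if __name__ == "__main__":
    for record in load_records(DATA_FILE):
        print(record["query_id"], record["subset"])
        print("  query:", record["query"][:80])
        print("  positives:", len(record["positive_passages"]),
              "negatives:", len(record["negative_passages"]))
```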
7370e36cddefd67a8bb8250286d22c20
The RowHammer problem and other issues we may face as memory becomes denser
[ { "docid": "c97fe8ccd39a1ad35b5f09377f45aaa2", "text": "With continued scaling of NAND flash memory process technology and multiple bits programmed per cell, NAND flash reliability and endurance are degrading. In our research, we experimentally measure, characterize, analyze, and model error patterns in nanoscale flash memories. Based on the understanding developed using real flash memory chips, we design techniques for more efficient and effective error management than traditionally used costly error correction codes.", "title": "" }, { "docid": "73284fdf9bc025672d3b97ca5651084a", "text": "With continued scaling of NAND flash memory process technology and multiple bits programmed per cell, NAND flash reliability and endurance are degrading. Understanding, characterizing, and modeling the distribution of the threshold voltages across different cells in a modern multi-level cell (MLC) flash memory can enable the design of more effective and efficient error correction mechanisms to combat this degradation. We show the first published experimental measurement-based characterization of the threshold voltage distribution of flash memory. To accomplish this, we develop a testing infrastructure that uses the read retry feature present in some 2Y-nm (i.e., 20-24nm) flash chips. We devise a model of the threshold voltage distributions taking into account program/erase (P/E) cycle effects, analyze the noise in the distributions, and evaluate the accuracy of our model. A key result is that the threshold voltage distribution can be modeled, with more than 95% accuracy, as a Gaussian distribution with additive white noise, which shifts to the right and widens as P/E cycles increase. The novel characterization and models provided in this paper can enable the design of more effective error tolerance mechanisms for future flash memories.", "title": "" }, { "docid": "3763da6b72ee0a010f3803a901c9eeb2", "text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.", "title": "" } ]
[ { "docid": "dc76a4d28841e703b961a1126bd28a39", "text": "In this work, we study the problem of anomaly detection of the trajectories of objects in a visual scene. For this purpose, we propose a novel representation for trajectories utilizing covariance features. Representing trajectories via co-variance features enables us to calculate the distance between the trajectories of different lengths. After setting this proposed representation and calculation of distances, anomaly detection is achieved by sparse representations on nearest neighbours. Conducted experiments on both synthetic and real datasets show that the proposed method yields results which are outperforming or comparable with state of the art.", "title": "" }, { "docid": "9b45bb1734e9afc34b14fa4bc47d8fba", "text": "To achieve complex solutions in the rapidly changing world of e-commerce, it is impossible to go it alone. This explains the latest trend in IT outsourcing---global and partner-based alliances. But where do we go from here?", "title": "" }, { "docid": "5772e4bfb9ced97ff65b5fdf279751f4", "text": "Deep convolutional neural networks excel at sentiment polarity classification, but tend to require substantial amounts of training data, which moreover differs quite significantly between domains. In this work, we present an approach to feed generic cues into the training process of such networks, leading to better generalization abilities given limited training data. We propose to induce sentiment embeddings via supervision on extrinsic data, which are then fed into the model via a dedicated memorybased component. We observe significant gains in effectiveness on a range of different datasets in seven different languages.", "title": "" }, { "docid": "fe89c8a17676b7767cfa40e7822b8d25", "text": "Previous machine comprehension (MC) datasets are either too small to train endto-end deep learning models, or not difficult enough to evaluate the ability of current MC techniques. The newly released SQuAD dataset alleviates these limitations, and gives us a chance to develop more realistic MC models. Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM) model, which is an end-to-end system that directly predicts the answer beginning and ending points in a passage. Our model first adjusts each word-embedding vector in the passage by multiplying a relevancy weight computed against the question. Then, we encode the question and weighted passage by using bi-directional LSTMs. For each point in the passage, our model matches the context of this point against the encoded question from multiple perspectives and produces a matching vector. Given those matched vectors, we employ another bi-directional LSTM to aggregate all the information and predict the beginning and ending points. Experimental result on the test set of SQuAD shows that our model achieves a competitive result on the leaderboard.", "title": "" }, { "docid": "4bee6ec901c365f3780257ed62b7c020", "text": "There is no explicitly known example of a triple (g, a, x), where g ≥ 3 is an integer, a a digit in {0, . . . , g − 1} and x a real algebraic irrational number, for which one can claim that the digit a occurs infinitely often in the g–ary expansion of x. In 1909 and later in 1950, É. Borel considered such questions and suggested that the g–ary expansion of any algebraic irrational number in any base g ≥ 2 satisfies some of the laws that are satisfied by almost all numbers. 
For instance, the frequency where a given finite sequence of digits occurs should depend only on the base and on the length of the sequence. Hence there is a huge gap between the established theory and the expected state of the art. However, some progress have been made recently, mainly thanks to clever use of the Schmidt’s subspace Theorem. We review some of these results.", "title": "" }, { "docid": "efd3280939a90041f50c4938cf886deb", "text": "A distributed double integrator discrete time consensus protocol is presented along with stability analysis. The protocol will achieve consensus when the communication topology contains at least a directed spanning tree. Average consensus is achieved when the communication topology is strongly connected and balanced, where average consensus for double integrator systems is discussed. For second order systems average consensus occurs when the information states tend toward the average of the current information states not their initial values. Lastly, perturbation to the consensus protocol is addressed. Using a designed perturbation input, an algorithm is presented that accurately tracks the center of a vehicle formation in a decentralized manner.", "title": "" }, { "docid": "6421979368a138e4b21ab7d9602325ff", "text": "In recent years, despite several risk management models proposed by different researchers, software projects still have a high degree of failures. Improper risk assessment during software development was the major reason behind these unsuccessful projects as risk analysis was done on overall projects. This work attempts in identifying key risk factors and risk types for each of the development phases of SDLC, which would help in identifying the risks at a much early stage of development.", "title": "" }, { "docid": "0963b6b27b57575bd34ff8f5bd330536", "text": "The human ocular surface spans from the conjunctiva to the cornea and plays a critical role in visual perception. Cornea, the anterior portion of the eye, is transparent and provides the eye with two-thirds of its focusing power and protection of ocular integrity. The cornea consists of five main layers, namely, corneal epithelium, Bowman’s layer, corneal stroma, Descemet’s membrane and corneal endothelium. The outermost layer of the cornea, which is exposed to the external environment, is the corneal epithelium. Corneal epithelial integrity and transparency are maintained by somatic stem cells (SC) that reside in the limbus. The limbus, an anatomical structure 1-2 mm wide, circumscribes the peripheral cornea and separates it from the conjunctiva (Cotsarelis et al., 1989, Davanger and Evensen, 1971) (Figure 1). Any damage to the ocular surface by burns, or various infections, can threaten vision. The most insidious of such damaging conditions is limbal stem cell deficiency (LSCD). Clinical signs of LSCD include corneal vascularization, chronic stromal inflammation, ingrowth of conjunctival epithelium onto the corneal surface and persistent epithelial defects (Lavker et al., 2004). Primary limbal stem cell deficiency is associated with aniridia and ectodermal dysplasia. Acquired limbal stem cell deficiency has been associated with inflammatory conditions (Stevens–Johnson syndrome (SJS), ocular cicatricial pemphigoid), ocular trauma (chemical and thermal burns), contact lens wear, corneal infection, neoplasia, peripheral ulcerative corneal disease and neurotrophic keratopathy (Dua et al., 2000, Jeng et al., 2011). 
Corneal stem cells and/or their niche are known to play important anti-angiogenic and anti-inflamatory roles in maintaining a normal corneal microenvironment, the destruction of which in LSCD, tips the balance toward pro-angiogenic conditions (Lim et al., 2009). For a long time, the primary treatment for LSCD has been transplantation of healthy keratolimbal tissue from autologous, allogenic, or cadaveric sources. In the late 1990s, cultured, autologous, limbal epithelial cell implants were used successfully to improve vision in two patients with chemical injury-induced LSCD (Pellegrini et al., 1997). Since then, transplantation of cultivated epithelial (stem) cells has become a treatment of choice for numerous LSCD patients worldwide. While the outcomes are promising, the variability of methodologies used to expand the cells, points to an underlying need for better standardization of ex vivo cultivation-based therapies and their outcome measures (Sangwan et al., 2005, Ti et al., 2004, Grueterich et al., 2002b, Kolli et al., 2010).", "title": "" }, { "docid": "9a6ce56536585e54d3e15613b2fa1197", "text": "This paper discusses the Urdu script characteristics, Urdu Nastaleeq and a simple but a novel and robust technique to recognize the printed Urdu script without a lexicon. Urdu being a family of Arabic script is cursive and complex script in its nature, the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters change when it is placed at initial, middle or at the end of a word. The characters recognition technique presented here is using the inherited complexity of Urdu script to solve the problem. A word is scanned and analyzed for the level of its complexity, the point where the level of complexity changes is marked for a character, segmented and feeded to Neural Networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on the average. Keywords— Cursive Script, OCR, Urdu.", "title": "" }, { "docid": "8c4d4567cf772a76e99aa56032f7e99e", "text": "This paper discusses current perspectives on play and leisure and proposes that if play and leisure are to be accepted as viable occupations, then (a) valid and reliable measures of play must be developed, (b) interventions must be examined for inclusion of the elements of play, and (c) the promotion of play and leisure must be an explicit goal of occupational therapy intervention. Existing tools used by occupational therapists to assess clients' play and leisure are evaluated for the aspects of play and leisure they address and the aspects they fail to address. An argument is presented for the need for an assessment of playfulness, rather than of play or leisure activities. A preliminary model for the development of such an assessment is proposed.", "title": "" }, { "docid": "e0320fc4031a4d1d09c9255012c3d03c", "text": "We develop a model of premium sharing for firms that offer multiple insurance plans. We assume that firms offer one low quality plan and one high quality plan. Under the assumption of wage rigidities we found that the employee's contribution to each plan is an increasing function of that plan's premium. The effect of the other plan's premium is ambiguous. We test our hypothesis using data from the Employer Health Benefit Survey. Restricting the analysis to firms that offer both HMO and PPO plans, we measure the amount of the premium passed on to employees in response to a change in both premiums. 
We find evidence of large and positive effects of the increase in the plan's premium on the amount of the premium passed on to employees. The effect of the alternative plan's premium is negative but statistically significant only for the PPO plans.", "title": "" }, { "docid": "dcd116e601c9155d60364c19a1f0dfb7", "text": "The DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure was developed to aid clinicians with a dimensional assessment of psychopathology; however, this measure resembles a screening tool for several symptomatic domains. The objective of the current study was to examine the basic parameters of sensitivity, specificity, positive and negative predictive power of the measure as a screening tool. One hundred and fifty patients in a correctional community center filled out the measure prior to a psychiatric evaluation, including the Mini International Neuropsychiatric Interview screen. The above parameters were calculated for the domains of depression, mania, anxiety, and psychosis. The results showed that the sensitivity and positive predictive power of the studied domains was poor because of a high rate of false positive answers on the measure. However, when the lowest threshold on the Cross-Cutting Symptom Measure was used, the sensitivity of the anxiety and psychosis domains and the negative predictive values for mania, anxiety and psychosis were good. In conclusion, while it is foreseeable that some clinicians may use the DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure as a screening tool, it should not be relied on to identify positive findings. It functioned well in the negative prediction of mania, anxiety and psychosis symptoms.", "title": "" }, { "docid": "5da2747dd2c3fe5263d8bfba6e23de1f", "text": "We propose to transfer the content of a text written in a certain style to an alternative text written in a different style, while maintaining as much as possible of the original meaning. Our work is inspired by recent progress of applying style transfer to images, as well as attempts to replicate the results to text. Our model is a deep neural network based on Generative Adversarial Networks (GAN). Our novelty is replacing the discrete next-word prediction with prediction in the embedding space, which provides two benefits (1) train the GAN without using gradient approximations and (2) provide semantically related results even for failure cases.", "title": "" }, { "docid": "b059f6d2e9f10e20417f97c05d92c134", "text": "We present a hybrid analog/digital very large scale integration (VLSI) implementation of a spiking neural network with programmable synaptic weights. The synaptic weight values are stored in an asynchronous Static Random Access Memory (SRAM) module, which is interfaced to a fast current-mode event-driven DAC for producing synaptic currents with the appropriate amplitude values. These currents are further integrated by current-mode integrator synapses to produce biophysically realistic temporal dynamics. The synapse output currents are then integrated by compact and efficient integrate and fire silicon neuron circuits with spike-frequency adaptation and adjustable refractory period and spike-reset voltage settings. The fabricated chip comprises a total of 32 × 32 SRAM cells, 4 × 32 synapse circuits and 32 × 1 silicon neurons. It acts as a transceiver, receiving asynchronous events in input, performing neural computation with hybrid analog/digital circuits on the input spikes, and eventually producing digital asynchronous events in output. 
Input, output, and synaptic weight values are transmitted to/from the chip using a common communication protocol based on the Address Event Representation (AER). Using this representation it is possible to interface the device to a workstation or a micro-controller and explore the effect of different types of Spike-Timing Dependent Plasticity (STDP) learning algorithms for updating the synaptic weights values in the SRAM module. We present experimental results demonstrating the correct operation of all the circuits present on the chip.", "title": "" }, { "docid": "d06c91afbfd79e40d0d6fe326e3be957", "text": "This meta-analysis included 66 studies (N = 4,176) on parental antecedents of attachment security. The question addressed was whether maternal sensitivity is associated with infant attachment security, and what the strength of this relation is. It was hypothesized that studies more similar to Ainsworth's Baltimore study (Ainsworth, Blehar, Waters, & Wall, 1978) would show stronger associations than studies diverging from this pioneering study. To create conceptually homogeneous sets of studies, experts divided the studies into 9 groups with similar constructs and measures of parenting. For each domain, a meta-analysis was performed to describe the central tendency, variability, and relevant moderators. After correction for attenuation, the 21 studies (N = 1,099) in which the Strange Situation procedure in nonclinical samples was used, as well as preceding or concurrent observational sensitivity measures, showed a combined effect size of r(1,097) = .24. According to Cohen's (1988) conventional criteria, the association is moderately strong. It is concluded that in normal settings sensitivity is an important but not exclusive condition of attachment security. Several other dimensions of parenting are identified as playing an equally important role. In attachment theory, a move to the contextual level is required to interpret the complex transactions between context and sensitivity in less stable and more stressful settings, and to pay more attention to nonshared environmental influences.", "title": "" }, { "docid": "b92484f67bf2d3f71d51aee9fb7abc86", "text": "This research addresses the kinds of matching elements that determine analogical relatedness and literal similarity. Despite theoretical agreement on the importance of relational match, the empirical evidence is neither systematic nor definitive. In 3 studies, participants performed online evaluations of relatedness of sentence pairs that varied in either the object or relational match. Results show a consistent focus on relational matches as the main determinant of analogical acceptance. In addition, analogy does not require strict overall identity of relational concepts. Semantically overlapping but nonsynonymous relations were commonly accepted, but required more processing time. Finally, performance in a similarity rating task partly paralleled analogical acceptance; however, relatively more weight was given to object matches. Implications for psychological theories of analogy and similarity are addressed.", "title": "" }, { "docid": "f1681e1c8eef93f15adb5a4d7313c94c", "text": "The paper investigates techniques for extracting data from HTML sites through the use of automatically generated wrappers. To automate the wrapper generation and the data extraction process, the paper develops a novel technique to compare HTML pages and generate a wrapper based on their similarities and differences. 
Experimental results on real-life data-intensive Web sites confirm the feasibility of the approach.", "title": "" }, { "docid": "139ecd9ff223facaec69ad6532f650db", "text": "Student retention in open and distance learning (ODL) is comparatively poor to traditional education and, in some contexts, embarrassingly low. Literature on the subject of student retention in ODL indicates that even when interventions are designed and undertaken to improve student retention, they tend to fall short. Moreover, this area has not been well researched. The main aim of our research, therefore, is to better understand and measure students’ attitudes and perceptions towards the effectiveness of mobile learning. Our hope is to determine how this technology can be optimally used to improve student retention at Bachelor of Science programmes at Indira Gandhi National Open University (IGNOU) in India. For our research, we used a survey. Results of this survey clearly indicate that offering mobile learning could be one method improving retention of BSc students, by enhancing their teaching/ learning and improving the efficacy of IGNOU’s existing student support system. The biggest advantage of this technology is that it can be used anywhere, anytime. Moreover, as mobile phone usage in India explodes, it offers IGNOU easy access to a larger number of learners. This study is intended to help inform those who are seeking to adopt mobile learning systems with the aim of improving communication and enriching students’ learning experiences in their ODL institutions.", "title": "" }, { "docid": "b5aad69e6a0f672cdaa1f81187a48d57", "text": "In this paper, we propose novel methodologies for the automatic segmentation and recognition of multi-food images. The proposed methods implement the first modules of a carbohydrate counting and insulin advisory system for type 1 diabetic patients. Initially the plate is segmented using pyramidal mean-shift filtering and a region growing algorithm. Then each of the resulted segments is described by both color and texture features and classified by a support vector machine into one of six different major food classes. Finally, a modified version of the Huang and Dom evaluation index was proposed, addressing the particular needs of the food segmentation problem. The experimental results prove the effectiveness of the proposed method achieving a segmentation accuracy of 88.5% and recognition rate equal to 87%.", "title": "" } ]
scidocsrr
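A record such as the one above pairs a single query (here, a paper title) with judged positive and negative passages, which is the shape most passage rerankers are trained on. Below is a minimal sketch that flattens one record into (query, passage text, label) tuples; record is assumed to be a dict parsed as in the earlier snippet, and the field names follow the row above.

```python
def record_to_pairs(record):
    """Flatten one reranking record into (query, passage_text, label) tuples.

    label is 1 for entries under positive_passages and 0 for entries under
    negative_passages; the "title" field is ignored because it is empty in
    the rows shown here.
    """
    query = record["query"]
    pairs = []
    for passage in record.get("positive_passages", []):
        pairs.append((query, passage["text"], 1))
    for passage in record.get("negative_passages", []):
        pairs.append((query, passage["text"], 0))
    return pairs
```

These tuples can then be scored by any query-passage relevance model; nothing about the dataset prescribes a particular one.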
47b0cae56e5e04ca4fa7e91be1b8c7d1
Empathy and Its Modulation in a Virtual Human
[ { "docid": "8efee8d7c3bf229fa5936209c43a7cff", "text": "This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers.", "title": "" } ]
[ { "docid": "ec8ffeb175dbd392e877d7704705f44e", "text": "Business Intelligence (BI) solutions commonly aim at assisting decision-making processes by providing a comprehensive view over a company’s core business data and suitable abstractions thereof. Decision-making based on BI solutions therefore builds on the assumption that providing users with targeted, problemspecific fact data enables them to make informed and, hence, better decisions in their everyday businesses. In order to really provide users with all the necessary details to make informed decisions, we however believe that – in addition to conventional reports – it is essential to also provide users with information about the quality, i.e. with quality metadata, regarding the data from which reports are generated. Identifying a lack of support for quality metadata management in conventional BI solutions, in this paper we propose the idea of quality-aware reports and a possible architecture for quality-aware BI, able to involve the users themselves into the quality metadata management process, by explicitly soliciting and exploiting user feedback.", "title": "" }, { "docid": "2d86a717ef4f83ff0299f15ef1df5b1b", "text": "Proactive interference (PI) refers to the finding that memory for recently studied (target) information can be vastly impaired by the previous study of other (nontarget) information. PI can be reduced in a number of ways, for instance, by directed forgetting of the prior nontarget information, the testing of the prior nontarget information, or an internal context change before study of the target information. Here we report the results of four experiments, in which we demonstrate that all three forms of release from PI are accompanied by a decrease in participants’ response latencies. Because response latency is a sensitive index of the size of participants’ mental search set, the results suggest that release from PI can reflect more focused memory search, with the previously studied nontarget items being largely eliminated from the search process. Our results thus provide direct evidence for a critical role of retrieval processes in PI release. 2012 Elsevier Inc. All rights reserved. Introduction buildup of PI is caused by a failure to distinguish items Proactive interference (PI) refers to the finding that memory for recently studied information can be vastly impaired by the previous study of further information (e.g., Underwood, 1957). In a typical PI experiment, participants study a (target) list of items and are later tested on it. In the PI condition, participants study further (nontarget) lists that precede encoding of the target information, whereas in the no-PI condition participants engage in an unrelated distractor task. Typically, recall of the target list is worse in the PI condition than the no-PI condition, which reflects the PI finding. PI has been extensively studied in the past century, has proven to be a very robust finding, and has been suggested to be one of the major causes of forgetting in everyday life (e.g., Underwood, 1957; for reviews, see Anderson & Neely, 1996; Crowder, 1976). Over the years, a number of theories have been put forward to account for PI, most of them suggesting a critical role of retrieval processes in this form of forgetting. For instance, temporal discrimination theory suggests that . All rights reserved. ie.uni-regensburg.de from the most recent target list from items that appeared on the earlier nontarget lists. 
Specifically, the theory assumes that at test participants are unable to restrict their memory search to the target list and instead search the entire set of items that have previously been exposed (Baddeley, 1990; Crowder, 1976; Wixted & Rohrer, 1993). Another retrieval account attributes PI to a generation failure. Here, reduced recall levels of the target items are thought to be due to the impaired ability to access the material’s correct memory representation (Dillon & Thomas, 1975). In contrast to these retrieval explanations of PI, some theories also suggested a role of encoding factors in PI, assuming that the prior study of other lists impairs subsequent encoding of the target list. For instance, attentional resources may deteriorate across item lists and cause the target material to be less well processed in the presence than the absence of the preceding lists (e.g., Crowder, 1976).", "title": "" }, { "docid": "b085860a27df6604c6dc38cd9fbd0b75", "text": "A number of factors are considered during the analysis of automobile transportation with respect to increasing safety. One of the vital factors for night-time travel is temporary blindness due to increase in the headlight intensity. While headlight intensity provides better visual acuity, it simultaneously affects oncoming traffic. This problem is encountered when both drivers are using a higher headlight intensity setting. Also, increased speed of the vehicles due to decreased traffic levels at night increases the severity of accidents. In order to reduce accidents due to temporary driver blindness, a wireless sensor network (WSN) based controller could be developed to transmit sensor data in a faster and an efficient way between cars. Low latency allows faster headlight intensity adjustment between the vehicles to drastically reduce the cause of temporary blindness. An attempt has been made to come up with a system which would sense the intensity of the headlight of the oncoming vehicle and depending on the threshold headlight intensity being set in the system it would automatically reduce the intensity of the headlight of the oncoming vehicle using wireless sensor network thus reducing the condition of temporary blindness caused due to excessive exposure to headlights.", "title": "" }, { "docid": "c68397cdbe538fd22fe88c0ff4e47879", "text": "With the higher demand of the three dimensional (3D) imaging, a high definition real-time 3D video system based on FPGA is proposed. The system is made up of CMOS image sensors, DDR2 SDRAM, High Definition Multimedia Interface (HDMI) transmitter and Field Programmable Gate Array (FPGA). CMOS image sensor produces digital video streaming. DDR2 SDRAM buffers large amount of video data. FPGA processes the video streaming and realizes 3D data format conversion. HDMI transmitter is utilized to transmit 3D format data. Using the active 3D display device and shutter glasses, the system can achieve the living effect of real-time 3D high definition imaging. The resolution of the system is 720p@60Hz in 3D mode.", "title": "" }, { "docid": "bd4dde3f5b7ec9dcd711a538b973ef1e", "text": "Evaluation of MT evaluation measures is limited by inconsistent human judgment data. Nonetheless, machine translation can be evaluated using the well-known measures precision, recall, and their average, the F-measure. The unigrambased F-measure has significantly higher correlation with human judgments than recently proposed alternatives. 
More importantly, this standard measure has an intuitive graphical interpretation, which can facilitate insight into how MT systems might be improved. The relevant software is publicly available from http://nlp.cs.nyu.edu/GTM/.", "title": "" }, { "docid": "c9b278eea7f915222cf8e99276fb5af2", "text": "Pseudorandom generators based on linear feedback shift registers (LFSR) are a traditional building block for cryptographic stream ciphers. In this report, we review the general idea for such generators, as well as the most important techniques of cryptanalysis.", "title": "" }, { "docid": "738f60fbfe177eec52057c8e5ab43e55", "text": "From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains.", "title": "" }, { "docid": "d353db098a7ca3bd9dc73b803e7369a2", "text": "DevOps community advocates collaboration between development and operations staff during software deployment. However this collaboration may cause a conceptual deficit. This paper proposes a Unified DevOps Model (UDOM) in order to overcome the conceptual deficit. Firstly, the origin of conceptual deficit is discussed. Secondly, UDOM model is introduced that includes three sub-models: application and data model, workflow execution model and infrastructure model. UDOM model can help to scale down deployment time, mitigate risk, satisfy customer requirements, and improve productivity. Finally, this paper can be a roadmap for standardization DevOps terminologies, concepts, patterns, cultures, and tools.", "title": "" }, { "docid": "1be6aecdc3200ed70ede2d5e96cb43be", "text": "In this paper we are exploring different models and methods for improving the performance of text independent speaker identification system for mobile devices. The major issues in speaker recognition for mobile devices are (i) presence of varying background environment, (ii) effect of speech coding introduced by the mobile device, and (iii) impairments due to wireless channel. 
In this paper, we are proposing multi-SNR multi-environment speaker models and speech enhancement (preprocessing) methods for improving the performance of speaker recognition system in mobile environment. For this study, we have simulated five different background environments (Car, Factory, High frequency, pink noise and white Gaussian noise) using NOISEX data. Speaker recognition studies are carried out on TIMIT, cellular, and microphone speech databases. Autoassociative neural network models are explored for developing these multi-SNR multi-environment speaker models. The results indicate that the proposed multi-SNR multi-environment speaker models and speech enhancement preprocessing methods have enhanced the speaker recognition performance in the presence of different noisy environments.", "title": "" }, { "docid": "63339fb80c01c38911994cd326e483a3", "text": "Older adults are becoming a significant percentage of the world's population. A multitude of factors, from the normal aging process to the progression of chronic disease, influence the nutrition needs of this very diverse group of people. Appropriate micronutrient intake is of particular importance but is often suboptimal. Here we review the available data regarding micronutrient needs and the consequences of deficiencies in the ever growing aged population.", "title": "" }, { "docid": "dfc7a31461a382f0574fadf36a8fd211", "text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Road Traffic Accident is very serious matter of life. The World Health Organization (WHO) reports that about 1.24 million people of the world die annually on the roads. The Institute for Health Metrics and Evaluation (IHME) estimated about 907,900, 1.3 million and 1.4 million deaths from road traffic injuries in 1990, 2010 and 2013, respectively. Uttar Pradesh in particular one of the state of India, experiences the highest rate of such accidents. Thus, methods to reduce accident severity are of great interest to traffic agencies and the public at large. In this paper, we applied data mining technologies to link recorded road characteristics to accident severity and developed a set of rules that could be used by the Indian Traffic Agency to improve safety and could help to save precious life.", "title": "" }, { "docid": "689f1a8a6e8a1267dd45db32f3b711f6", "text": "Today, the digitalization strides tremendously on all the sides of the modern society. One of the enablers to keep this process secure is the authentication. It touches many different areas of the connected world including payments, communications, and access right management. This manuscript attempts to shed the light on the authentication systems' evolution towards Multi-factor Authentication (MFA) from Singlefactor Authentication (SFA) and through Two-factor Authentication (2FA). Particularly, MFA is expected to be utilized for the user and vehicle-to-everything (V2X) interaction which is selected as descriptive scenario. The manuscript is focused on already available and potentially integrated sensors (factor providers) to authenticate the occupant from inside the vehicle. The survey on existing vehicular systems suitable for MFA is given. Finally, the MFA system based on reversed Lagrange polynomial, utilized in Shamir's Secret Sharing (SSS), was proposed to enable flexible in-car authentication. 
The solution was further extended covering the cases of authenticating the user even if some of the factors are mismatched or absent. The framework allows to qualify the missing factor and authenticate the user without providing the sensitive biometric data to the verification entity. The proposed is finally compared to conventional SSS.", "title": "" }, { "docid": "9d3778091b10c6352559fb51faace714", "text": "Aims to provide an analysis of the introduction of Internet-based skills into small firms. Seeks to contribute to the wider debate on the content and style of training most appropriate for employees and managers of SMEs.", "title": "" }, { "docid": "5867f20ff63506be7eccb6c209ca03cc", "text": "When creating a virtual environment open to the public a number of challenges have to be addressed. The equipment has to be chosen carefully in order to be be able to withstand hard everyday usage, and the application has not only to be robust and easy to use, but has also to be appealing to the user, etc. The current paper presents findings gathered from the creation of a multi-thematic virtual museum environment to be offered to visitors of real world museums. A number of design and implementation aspects are described along with an experiment designed to evaluate alternative approaches for implementing the navigation in a virtual museum environment. The paper is concluded with insights gained from the development of the virtual museum and portrays future research plans.", "title": "" }, { "docid": "64f4a275dce1963b281cd0143f5eacdc", "text": "Camera shake during exposure time often results in spatially variant blur effect of the image. The non-uniform blur effect is not only caused by the camera motion, but also the depth variation of the scene. The objects close to the camera sensors are likely to appear more blurry than those at a distance in such cases. However, recent non-uniform deblurring methods do not explicitly consider the depth factor or assume fronto-parallel scenes with constant depth for simplicity. While single image non-uniform deblurring is a challenging problem, the blurry results in fact contain depth information which can be exploited. We propose to jointly estimate scene depth and remove non-uniform blur caused by camera motion by exploiting their underlying geometric relationships, with only single blurry image as input. To this end, we present a unified layer-based model for depth-involved deblurring. We provide a novel layer-based solution using matting to partition the layers and an expectation-maximization scheme to solve this problem. This approach largely reduces the number of unknowns and makes the problem tractable. Experiments on challenging examples demonstrate that both depth and camera shake removal can be well addressed within the unified framework.", "title": "" }, { "docid": "a1b50cf02ef0e37aed3d941ea281b885", "text": "Collaborative filtering and content-based methods are two main approaches for recommender systems, and hybrid models use advantages of both. In this paper, we made a comparison of a hybrid model, which uses Bayesian Staked Denoising Autoencoders for content learning, and a collaborative filtering method, Bayesian Nonnegative Matrix Factorisation. 
It is shown that the tightly coupled hybrid model, Collaborative Deep Learning, gave more successful results comparing to collaborative filtering methods.", "title": "" }, { "docid": "d9c4bdd95507ef497db65fc80d3508c5", "text": "3D content creation is referred to as one of the most fundamental tasks of computer graphics. And many 3D modeling algorithms from 2D images or curves have been developed over the past several decades. Designers are allowed to align some conceptual images or sketch some suggestive curves, from front, side, and top views, and then use them as references in constructing a 3D model automatically or manually. However, to the best of our knowledge, no studies have investigated on 3D human body reconstruction in a similar manner. In this paper, we propose a deep learning based reconstruction of 3D human body shape from 2D orthographic views. A novel CNN-based regression network, with two branches corresponding to frontal and lateral views respectively, is designed for estimating 3D human body shape from 2D mask images. We train our networks separately to decouple the feature descriptors which encode the body parameters from different views, and fuse them to estimate an accurate human body shape. In addition, to overcome the shortage of training data required for this purpose, we propose some significantly data augmentation schemes for 3D human body shapes, which can be used to promote further research on this topic. Extensive experimental results demonstrate that visually realistic and accurate reconstructions can be achieved effectively using our algorithm. Requiring only binary mask images, our method can help users create their own digital avatars quickly, and also make it easy to create digital human body for 3D game, virtual reality, online fashion shopping.", "title": "" }, { "docid": "39ed08e9a08b7d71a4c177afe8f0056a", "text": "This paper proposes an anticipation model of potential customers’ purchasing behavior. This model is inferred from past purchasing behavior of loyal customers and the web server log files of loyal and potential customers by means of clustering analysis and association rules analysis. Clustering analysis collects key characteristics of loyal customers’ personal information; these are used to locate other potential customers. Association rules analysis extracts knowledge of loyal customers’ purchasing behavior, which is used to detect potential customers’ near-future interest in a star product. Despite using offline analysis to filter out potential customers based on loyal customers’ personal information and generate rules of loyal customers’ click streams based on loyal customers’ web log data, an online analysis which observes potential customers’ web logs and compares it with loyal customers’ click stream rules can more readily target potential customers who may be interested in the star products in the near future. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a40c00b1dc4a8d795072e0a8cec09d7a", "text": "Summary form only given. Most of current job scheduling systems for supercomputers and clusters provide batch queuing support. With the development of metacomputing and grid computing, users require resources managed by multiple local job schedulers. Advance reservations are becoming essential for job scheduling systems to be utilized within a large-scale computing environment with geographically distributed resources. 
COSY is a lightweight implementation of such a local job scheduler with support for both queue scheduling and advance reservations. COSY queue scheduling utilizes the FCFS algorithm with backfilling mechanisms and priority management. Advance reservations with COSY can provide effective QoS support for exact start time and latest completion time. Scheduling polices are defined to reject reservations with too short notice time so that there is no start time advantage to making a reservation over submitting to a queue. Further experimental results show that as a larger percentage of reservation requests are involved, a longer mandatory shortest notice time for advance reservations must be applied in order not to sacrifice queue scheduling efficiency.", "title": "" }, { "docid": "e65d522f6b08eeebb8a488b133439568", "text": "We propose a bootstrap learning algorithm for salient object detection in which both weak and strong models are exploited. First, a weak saliency map is constructed based on image priors to generate training samples for a strong model. Second, a strong classifier based on samples directly from an input image is learned to detect salient pixels. Results from multiscale saliency maps are integrated to further improve the detection performance. Extensive experiments on six benchmark datasets demonstrate that the proposed bootstrap learning algorithm performs favorably against the state-of-the-art saliency detection methods. Furthermore, we show that the proposed bootstrap learning approach can be easily applied to other bottom-up saliency models for significant improvement.", "title": "" } ]
scidocsrr
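Records in this format are typically used to check whether a scoring model ranks the positive passages above the much larger pool of negatives for each query. The sketch below computes the reciprocal rank of the best-ranked positive for one record; score(query, text) is a placeholder for whatever relevance model is being evaluated and is an assumption, not something the dataset defines.

```python
def reciprocal_rank(record, score):
    """Rank all passages of one record by score(query, text), descending,
    and return 1 / rank of the highest-ranked positive passage (0.0 if the
    record has no positives)."""
    query = record["query"]
    scored = []
    for passage in record.get("positive_passages", []):
        scored.append((score(query, passage["text"]), True))
    for passage in record.get("negative_passages", []):
        scored.append((score(query, passage["text"]), False))
    scored.sort(key=lambda item: item[0], reverse=True)
    for rank, (_, is_positive) in enumerate(scored, start=1):
        if is_positive:
            return 1.0 / rank
    return 0.0
```

Averaging this value over all records in a subset gives the mean reciprocal rank (MRR) for that subset.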
7db887e32b328c1d584dcef17552323a
Lares: An Architecture for Secure Active Monitoring Using Virtualization
[ { "docid": "d1c46994c5cfd59bdd8d52e7d4a6aa83", "text": "Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.", "title": "" }, { "docid": "14dd650afb3dae58ffb1a798e065825a", "text": "Copilot is a coprocessor-based kernel integrity monitor for commodity systems. Copilot is designed to detect malicious modifications to a host’s kernel and has correctly detected the presence of 12 real-world rootkits, each within 30 seconds of their installation with less than a 1% penalty to the host’s performance. Copilot requires no modifications to the protected host’s software and can be expected to operate correctly even when the host kernel is thoroughly compromised – an advantage over traditional monitors designed to run on the host itself.", "title": "" } ]
[ { "docid": "5bc2b92a3193c36bac5ae848da7974a3", "text": "Robust real-time tracking of non-rigid objects is a challenging task. Particle filtering has proven very successful for non-linear and nonGaussian estimation problems. The article presents the integration of color distributions into particle filtering, which has typically been used in combination with edge-based image features. Color distributions are applied, as they are robust to partial occlusion, are rotation and scale invariant and computationally efficient. As the color of an object can vary over time dependent on the illumination, the visual angle and the camera parameters, the target model is adapted during temporally stable image observations. An initialization based on an appearance condition is introduced since tracked objects may disappear and reappear. Comparisons with the mean shift tracker and a combination between the mean shift tracker and Kalman filtering show the advantages and limitations of the new approach. q 2002 Published by Elsevier Science B.V.", "title": "" }, { "docid": "133b2f033245dad2a2f35ff621741b2f", "text": "In wireless sensor networks (WSNs), long lifetime requirement of different applications and limited energy storage capability of sensor nodes has led us to find out new horizons for reducing power consumption upon nodes. To increase sensor node's lifetime, circuit and protocols have to be energy efficient so that they can make a priori reactions by estimating and predicting energy consumption. The goal of this study is to present and discuss several strategies such as power-aware protocols, cross-layer optimization, and harvesting technologies used to alleviate power consumption constraint in WSNs.", "title": "" }, { "docid": "99ba1fd6c96dad6d165c4149ac2ce27a", "text": "In order to solve unsupervised domain adaptation problem, recent methods focus on the use of adversarial learning to learn the common representation among domains. Although many designs are proposed, they seem to ignore the negative influence of domain-specific characteristics in transferring process. Besides, they also tend to obliterate these characteristics when extracted, although they are useful for other tasks and somehow help preserve the data. Take into account these issues, in this paper, we want to design a novel domainadaptation architecture which disentangles learned features into multiple parts to answer the questions: what features to transfer across domains and what to preserve within domains for other tasks. Towards this, besides jointly matching domain distributions in both image-level and feature-level, we offer new idea on feature exchange across domains combining with a novel feed-back loss and a semantic consistency loss to not only enhance the transferability of learned common feature but also preserve data and semantic information during exchange process. By performing domain adaptation on two standard digit datasets – MNIST and USPS, we show that our architecture can solve not only the full transfer problem but also partial transfer problem efficiently. The translated image results also demonstrate the potential of our architecture in image style transfer application.", "title": "" }, { "docid": "f321ba1ee0f68612d7c463a37708a1e7", "text": "Non-orthogonal multiple access (NOMA) is a promising technique for the fifth generation mobile communication due to its high spectral efficiency. 
By applying superposition coding and successive interference cancellation techniques at the receiver, multiple users can be multiplexed on the same subchannel in NOMA systems. Previous works focus on subchannel assignment and power allocation to achieve the maximization of sum rate; however, the energy-efficient resource allocation problem has not been well studied for NOMA systems. In this paper, we aim to optimize subchannel assignment and power allocation to maximize the energy efficiency for the downlink NOMA network. Assuming perfect knowledge of the channel state information at base station, we propose a low-complexity suboptimal algorithm, which includes energy-efficient subchannel assignment and power proportional factors determination for subchannel multiplexed users. We also propose a novel power allocation across subchannels to further maximize energy efficiency. Since both optimization problems are non-convex, difference of convex programming is used to transform and approximate the original non-convex problems to convex optimization problems. Solutions to the resulting optimization problems can be obtained by solving the convex sub-problems iteratively. Simulation results show that the NOMA system equipped with the proposed algorithms yields much better sum rate and energy efficiency performance than the conventional orthogonal frequency division multiple access scheme.", "title": "" }, { "docid": "e982aa23c644bad4870bafaf7344d15a", "text": "In this work we introduce a structured prediction model that endows the Deep Gaussian Conditional Random Field (G-CRF) with a densely connected graph structure. We keep memory and computational complexity under control by expressing the pairwise interactions as inner products of low-dimensional, learnable embeddings. The G-CRF system matrix is therefore low-rank, allowing us to solve the resulting system in a few milliseconds on the GPU by using conjugate gradient. As in G-CRF, inference is exact, the unary and pairwise terms are jointly trained end-to-end by using analytic expressions for the gradients, while we also develop even faster, Potts-type variants of our embeddings. We show that the learned embeddings capture pixel-to-pixel affinities in a task-specific manner, while our approach achieves state of the art results on three challenging benchmarks, namely semantic segmentation, human part segmentation, and saliency estimation. Our implementation is fully GPU based, built on top of the Caffe library, and is available at https://github.com/siddharthachandra/gcrf-v2.0.", "title": "" }, { "docid": "d6b3969a6004b5daf9781c67c2287449", "text": "Lotilaner is a new oral ectoparasiticide from the isoxazoline class developed for the treatment of flea and tick infestations in dogs. It is formulated as pure S-enantiomer in flavoured chewable tablets (Credelio™). The pharmacokinetics of lotilaner were thoroughly determined after intravenous and oral administration and under different feeding regimens in dogs. Twenty-six adult beagle dogs were enrolled in a pharmacokinetic study evaluating either intravenous or oral administration of lotilaner. Following the oral administration of 20 mg/kg, under fed or fasted conditions, or intravenous administration of 3 mg/kg, blood samples were collected up to 35 days after treatment. The effects of timing of offering food and the amount of food consumed prior or after dosing on bioavailability were assessed in a separate study in 25 adult dogs. 
Lotilaner blood concentrations were measured using a validated liquid chromatography/tandem mass spectrometry (LC-MS/MS) method. Pharmacokinetic parameters were calculated by non-compartmental analysis. In addition, in vivo enantiomer stability was evaluated in an analytical study. Following oral administration in fed animals, lotilaner was readily absorbed and peak blood concentrations reached within 2 hours. The terminal half-life was 30.7 days. Food enhanced the absorption, providing an oral bioavailability above 80% and reduced the inter-individual variability. Moreover, the time of feeding with respect to dosing (fed 30 min prior, fed at dosing or fed 30 min post-dosing) or the reduction of the food ration to one-third of the normal daily ration did not impact bioavailability. Following intravenous administration, lotilaner had a low clearance of 0.18 l/kg/day, large volumes of distribution Vz and Vss of 6.35 and 6.45 l/kg, respectively and a terminal half-life of 24.6 days. In addition, there was no in vivo racemization of lotilaner. The pharmacokinetic properties of lotilaner administered orally as a flavoured chewable tablet (Credelio™) were studied in detail. With a Tmax of 2 h and a terminal half-life of 30.7 days under fed conditions, lotilaner provides a rapid onset of flea and tick killing activity with consistent and sustained efficacy for at least 1 month.", "title": "" }, { "docid": "68693c88cb62ce28514344d15e9a6f09", "text": "New types of document collections are being developed by various web services. The service providers keep track of non-textual features such as click counts. In this paper, we present a framework to use non-textual features to predict the quality of documents. We also show our quality measure can be successfully incorporated into the language modeling-based retrieval model. We test our approach on a collection of question and answer pairs gathered from a community based question answering service where people ask and answer questions. Experimental results using our quality measure show a significant improvement over our baseline.", "title": "" }, { "docid": "22b2eda49d67e83a1aa526abf9074734", "text": "A new member of polyhydroxyalkanoates (PHA) family, namely, a terpolyester abbreviated as PHBVHHx consisting of 3-hydroxybutyrate (HB), 3-hydroxyvalerate (HV) and 3-hydroxyhexanoate (HHx) that can be produced by recombinant microorganisms, was found to have proper thermo- and mechanical properties for possible skin tissue engineering, as demonstrated by its strong ability to support the growth of human keratinocyte cell line HaCaT. In this study, HaCaT cells showed the strongest viability and the highest growth activity on PHBVHHx film compared with PLA, PHB, PHBV, PHBHHx and P3HB4HB, even the tissue culture plates were grown with less HaCaT cells compared with that on PHBVHHx. To understand its superior biocompatibility, PHBVHHx nanoparticles ranging from 200 to 350nm were prepared. It was found that the nanoparticles could increase the cellular activities by stimulating a rapid increase of cytosolic calcium influx in HaCaT cells, leading to enhanced cell growth. At the same time, 3-hydroxybutyrate (HB), a degradation product and the main component of PHBVHHx, was also shown to promote HaCaT proliferation. Morphologically, under the same preparation conditions, PHBVHHx film showed the most obvious surface roughness under atomic force microscopy (AFM), accompanied by the lowest surface energy compared with all other well studied biopolymers tested above. 
These results explain the superior ability of PHBVHHx to support the growth of skin HaCaT cells. Therefore, PHBVHHx is suitable for development into a skin tissue-engineering material.", "title": "" }, { "docid": "59a06c71efeb218e85955f17edc42bf1", "text": "Toyota Hybrid System is the innovative powertrain used in the current best-selling hybrid vehicle on the market—the Prius. It uses a split-type hybrid configuration which contains both a parallel and a serial power path to achieve the benefits of both. The main purpose of this paper is to develop a dynamic model to investigate the unique design of THS, which will be used to analyze the control strategy, and explore the potential of further improvement. A Simulink model is developed and a control algorithm is derived. Simulations confirm our model captures the fundamental behavior of THS reasonably well.", "title": "" }, { "docid": "0f4ac688367d3ea43643472b7d75ffc9", "text": "Many non-photorealistic rendering techniques exist to produce artistic effects from given images. Inspired by various artists, interesting effects can be produced by using a minimal rendering, where the minimum refers to the number of tones as well as the number and complexity of the primitives used for rendering. Our method is based on various computer vision techniques, and uses a combination of refined lines and blocks (potentially simplified), as well as a small number of tones, to produce abstracted artistic rendering with sufficient elements from the original image. We also considered a variety of methods to produce different artistic styles, such as colour and two-tone drawings, and use semantic information to improve renderings for faces. By changing some intuitive parameters a wide range of visually pleasing results can be produced. Our method is fully automatic. We demonstrate the effectiveness of our method with extensive experiments and a user study.", "title": "" }, { "docid": "dcf8c1a5445ad3c2e475b296cb72b18e", "text": "No wonder your activities are, reading will always be needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading markov decision processes discrete stochastic dynamic programming is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for other people with those meaningful benefits.", "title": "" }, { "docid": "39598533576bdd3fa94df5a6967b9b2d", "text": "Genetic Algorithm (GA) and other Evolutionary Algorithms (EAs) have been successfully applied to solve constrained minimum spanning tree (MST) problems in communication network design and have been used extensively in a wide variety of communication network design problems. Choosing an appropriate representation of candidate solutions is the essential issue in applying GAs to real-world network design problems, since the encoding and the interaction of the encoding with the crossover and mutation operators have a strong influence on the success of GAs. In this paper, we investigate the effect of a new encoding and of crossover and mutation operators on the performance of GAs for the design of minimum spanning tree problems. Based on the performance analysis of these encoding methods in GAs, we improve predecessor-based encoding, in which initialization depends on an underlying random spanning-tree algorithm. 
The proposed crossover and mutation operators offer locality, heritability, and computational efficiency. We compare the approach with others that encode candidate spanning trees via the Prüfer-number-based and edge-set-based encodings, and demonstrate better results on larger instances of the communication spanning tree design problem. Key words: minimum spanning tree (MST), communication network design, genetic algorithm (GA), node-based encoding", "title": "" }, { "docid": "1aa27b05e046927d75a8cabb60506a9e", "text": "Accurate road detection and centerline extraction from very high resolution (VHR) remote sensing imagery are of central importance in a wide range of applications. Due to the complex backgrounds and occlusions of trees and cars, most road detection methods produce heterogeneous segments; moreover, for the centerline extraction task, most current approaches fail to extract a satisfactory centerline network that is smooth, complete, and of single-pixel width. To address these complex issues, we propose a novel deep model, i.e., a cascaded end-to-end convolutional neural network (CasNet), to simultaneously cope with the road detection and centerline extraction tasks. Specifically, CasNet consists of two networks. One aims at the road detection task, whose strong representation ability is well able to tackle the complex backgrounds and occlusions of trees and cars. The other is cascaded to the former, making full use of the feature maps produced earlier to obtain good centerline extraction. Finally, a thinning algorithm is proposed to obtain a smooth, complete, single-pixel-width road centerline network. Extensive experiments demonstrate that CasNet greatly outperforms the state-of-the-art methods in learning quality and learning speed. That is, CasNet exceeds the competing methods by a large margin in quantitative performance, and it is nearly 25 times faster than them. Moreover, as another contribution, a large and challenging road centerline data set for VHR remote sensing images will be made publicly available for further studies.", "title": "" }, { "docid": "5b7483a4dea12d8b07921c150ccc66ee", "text": "OBJECTIVE\nWe reviewed the efficacy of occupational therapy-related interventions for adults with rheumatoid arthritis.\n\n\nMETHOD\nWe examined 51 Level I studies (19 physical activity, 32 psychoeducational) published 2000-2014 and identified from five databases. Interventions that focused solely on the upper or lower extremities were not included.\n\n\nRESULTS\nFindings related to key outcomes (activities of daily living, ability, pain, fatigue, depression, self-efficacy, disease symptoms) are presented. Strong evidence supports the use of aerobic exercise, resistive exercise, and aquatic therapy. Mixed to limited evidence supports dynamic exercise, Tai Chi, and yoga. 
Among the psychoeducational interventions, strong evidence supports the use of patient education, self-management, cognitive-behavioral approaches, multidisciplinary approaches, and joint protection, and limited or mixed evidence supports the use of assistive technology and emotional disclosure.\n\n\nCONCLUSION\nThe evidence supports interventions within the scope of occupational therapy practice for rheumatoid arthritis, but few interventions were occupation based.", "title": "" }, { "docid": "719ca13e95b9b4a1fc68772746e436d9", "text": "The increased chance of deception in computer-mediated communication and the potential risk of taking action based on deceptive information call for automatic detection of deception. To achieve the ultimate goal of automatic prediction of deception, we selected four common classification methods and empirically compared their performance in predicting deception. The deception and truth data were collected during two experimental studies. The results suggest that all four methods were promising for predicting deception with cues to deception. Among them, neural networks exhibited consistent performance and were robust across test settings. The comparisons also highlighted the importance of selecting important input variables and removing noise in an attempt to enhance the performance of classification methods. The selected cues offer both methodological and theoretical contributions to the body of deception and information systems research.", "title": "" }, { "docid": "f75ace78cc5c82e49ee5d5481f294dbf", "text": "This paper presents the design and fabrication of a Sierpinski gasket fractal antenna with a defected ground structure (DGS) and a center frequency of 5.8 GHz. A slot was used as the DGS. The antenna was designed and simulated using Computer Simulation Technology (CST) software and fabricated on an FR-4 board with a substrate thickness of 1.6 mm, a dielectric constant εr of 5.0 and a dielectric loss tangent of 0.025. Measurement of the antenna parameters was carried out using a Vector Network Analyzer. The results show good agreement between simulation and measurement, and a compact antenna was realized.", "title": "" }, { "docid": "42cfbb2b2864e57d59a72ec91f4361ff", "text": "Objective. This prospective open trial aimed to evaluate the efficacy and safety of isotretinoin (13-cis-retinoic acid) in patients with Cushing's disease (CD). Methods. Sixteen patients with CD and persistent or recurrent hypercortisolism after transsphenoidal surgery were given isotretinoin orally for 6-12 months. The drug was started at 20 mg daily and the dosage was increased up to 80 mg daily if needed and tolerated. Clinical, biochemical, and hormonal parameters were evaluated at baseline and monthly for 6-12 months. Results. Of the 16 subjects, 4 (25%) persisted with normal urinary free cortisol (UFC) levels at the end of the study. UFC reductions of up to 52.1% were found in the rest. Only patients with UFC levels below 2.5-fold of the upper limit of normal achieved sustained UFC normalization. Improvements of clinical and biochemical parameters were also noted, mostly in responsive patients. Typical isotretinoin side-effects were experienced by 7 patients (43.7%), though they were mild and mostly transient. We also observed that the combination of isotretinoin with cabergoline, in relatively low doses, may occasionally be more effective than either drug alone. Conclusions. 
Isotretinoin may be an effective and safe therapy for some CD patients, particularly those with mild hypercortisolism.", "title": "" }, { "docid": "dba5777004cf43d08a58ef3084c25bd3", "text": "This paper investigates the problem of automatic humour recognition, and provides an in-depth analysis of two of the most frequently observed features of humorous text: human-centeredness and negative polarity. Through experiments performed on two collections of humorous texts, we show that these properties of verbal humour are consistent across different data sets.", "title": "" }, { "docid": "c3195ff8dc6ca8c130f5a96ebe763947", "text": "The recent emergence of Cloud Computing has drastically altered everyone’s perception of infrastructure architectures, software delivery and development models. Positioned as an evolutionary step, following the transition from mainframe computers to client/server deployment models, cloud computing encompasses elements from grid computing, utility computing and autonomic computing into an innovative deployment architecture. This rapid transition towards the clouds has fuelled concerns about a critical issue for the success of information systems: communication and information security. From a security perspective, a number of uncharted risks and challenges have been introduced by this relocation to the clouds, deteriorating much of the effectiveness of traditional protection mechanisms. As a result, the aim of this paper is twofold: firstly, to evaluate cloud security by identifying unique security requirements and, secondly, to attempt to present a viable solution that eliminates these potential threats. This paper proposes introducing a Trusted Third Party, tasked with assuring specific security characteristics within a cloud environment. The proposed solution calls upon cryptography, specifically Public Key Infrastructure operating in concert with SSO and LDAP, to ensure the authentication, integrity and confidentiality of involved data and communications. The solution presents a horizontal level of service, available to all implicated entities, that realizes a security mesh within which essential trust is maintained.", "title": "" } ]
scidocsrr
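The genetic-algorithm record above (the predecessor-based spanning-tree encoding) notes that population initialization depends on an underlying random spanning-tree algorithm. As a rough, self-contained illustration of that single step (not code from the cited paper), the following Python sketch draws random spanning trees with a shuffled-Kruskal pass over a toy graph; the graph, population size, and function names are invented for the example.

```python
# Illustrative sketch (not from the cited papers): initialize a GA population
# with random spanning trees, the step the predecessor-based encoding relies on.
import random


def random_spanning_tree(n_nodes, edges):
    """Return a random spanning tree as a list of edges (u, v).

    Shuffled-Kruskal: visit edges in random order and keep an edge whenever
    it joins two components that are still disconnected (union-find).
    """
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    shuffled = edges[:]
    random.shuffle(shuffled)
    for u, v in shuffled:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
            if len(tree) == n_nodes - 1:
                break
    return tree


if __name__ == "__main__":
    # Small complete graph on 6 nodes as a toy communication network.
    nodes = 6
    edges = [(u, v) for u in range(nodes) for v in range(u + 1, nodes)]
    population = [random_spanning_tree(nodes, edges) for _ in range(10)]
    for chromosome in population[:3]:
        print(chromosome)
```

Each returned edge list could serve as one chromosome under an edge-set encoding; a predecessor-based encoding would instead record, for each node, its parent in the generated tree.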
7c944862dfcc3f89cd284ac16b50f486
Grouping Synonymous Sentences from a Parallel Corpus
[ { "docid": "4361b4d2d77d22f46b9cd5920a4822c8", "text": "While paraphrasing is critical both for interpretation and generation of natural language, current systems use manual or semi-automatic methods to collect paraphrases. We present an unsupervised learning algorithm for identification of paraphrases from a corpus of multiple English translations of the same source text. Our approach yields phrasal and single word lexical paraphrases as well as syntactic paraphrases.", "title": "" } ]
[ { "docid": "ee5eb52575cf01b825b244d9391c6f5c", "text": "We present a data-driven framework called generative adversarial privacy (GAP). Inspired by recent advancements in generative adversarial networks (GANs), GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. We show that for appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. We also evaluate the performance of GAP on multi-dimensional Gaussian mixture models and the GENKI face database. KeywordsData Privacy, Differential Privacy, Adversarial Learning, Generative Adversarial Networks, Minimax Games, Information Theory", "title": "" }, { "docid": "30ba59e335d9b448b29d2528b5e08a5c", "text": "Classification of alcoholic electroencephalogram (EEG) signals is a challenging job in biomedical research for diagnosis and treatment of brain diseases of alcoholic people. The aim of this study was to introduce a robust method that can automatically identify alcoholic EEG signals based on time–frequency (T–F) image information as they convey key characteristics of EEG signals. In this paper, we propose a new hybrid method to classify automatically the alcoholic and control EEG signals. The proposed scheme is based on time–frequency images, texture image feature extraction and nonnegative least squares classifier (NNLS). In T–F analysis, the spectrogram of the short-time Fourier transform is considered. The obtained T–F images are then converted into 8-bit grayscale images. Co-occurrence of the histograms of oriented gradients (CoHOG) and Eig(Hess)-CoHOG features are extracted from T–F images. Finally, obtained features are fed into NNLS classifier as input for classify alcoholic and control EEG signals. To verify the effectiveness of the proposed approach, we replace the NNLS classifier by artificial neural networks, k-nearest neighbor, linear discriminant analysis and support vector machine classifier separately, with the same features. Experimental outcomes along with comparative evaluations with the state-of-the-art algorithms manifest that the proposed method outperforms competing algorithms. The experimental outcomes are promising, and it can be anticipated that upon its implementation in clinical practice, the proposed scheme will alleviate the onus of the physicians and expedite neurological diseases diagnosis and research.", "title": "" }, { "docid": "37c005b87b3ccdfad86c760ecba7b8de", "text": "Intelligent processing of complex signals such as images is often performed by a hierarchy of nonlinear processing layers, such as a deep net or an object recognition cascade. Joint estimation of the parameters of all the layers is a difficult nonconvex optimization. We describe a general strategy to learn the parameters and, to some extent, the architecture of nested systems, which we call themethod of auxiliary coordinates (MAC) . This replaces the original problem involving a deeply nested function with a constrained problem involving a different function in an augmented space without nesting. The constrained problem may be solved with penalty-based methods using alternating optimization over the parameters and the auxiliary coordinates. 
MAC has provable convergence, is easy to implement reusing existing algorithms for single layers, can be parallelized trivially and massively, applies even when parameter derivatives are not available or not desirable, can perform some model selection on the fly, and is competitive with state-of-the-art nonlinear optimizers even in the serial computation setting, often providing reasonable models within a few iterations. The continued increase in recent years in data availability and processing power has enabled the development and practical applicability of ever more powerful models in statistical machine learning, for example to recognize faces or speech, or to translate natural language. However, physical limitations in serial computation suggest that scalable processing will require algorithms that can be massively parallelized, so they can profit from the thousands of inexpensive processors available in cloud computing. We focus on hierarchical, or nested, processing architectures. As a particular but important example, consider deep neural nets (fig. 1), which were originally inspired by biological systems such as the visual and auditory cortex in the mammalian brain (Serre et al., 2007), and which have been proven very successful at learning sophisticated tasks, such as recognizing faces or speech, when trained on data.", "title": "" }, { "docid": "82535c102f41dc9d47aa65bd71ca23be", "text": "We report on an experiment that examined the influence of anthropomorphism and perceived agency on presence, copresence, and social presence in a virtual environment. The experiment varied the level of anthropomorphism of the image of interactants: high anthropomorphism, low anthropomorphism, or no image. Perceived agency was manipulated by telling the participants that the image was either an avatar controlled by a human, or an agent controlled by a computer. The results support the prediction that people respond socially to both human and computer-controlled entities, and that the existence of a virtual image increases tele-presence. Participants interacting with the less-anthropomorphic image reported more copresence and social presence than those interacting with partners represented by either no image at all or by a highly anthropomorphic image of the other, indicating that the more anthropomorphic images set up higher expectations that lead to reduced presence when these expectations were not met.", "title": "" }, { "docid": "10d8bbea398444a3fb6e09c4def01172", "text": "INTRODUCTION\nRecent years have witnessed a growing interest in improving bus safety operations worldwide. While in the United States buses are considered relatively safe, the number of bus accidents is far from being negligible, triggering the introduction of the Motorcoach Enhanced Safety Act of 2011.\n\n\nMETHOD\nThe current study investigates the underlying risk factors of bus accident severity in the United States by estimating a generalized ordered logit model. 
Data for the analysis are retrieved from the General Estimates System (GES) database for the years 2005-2009.\n\n\nRESULTS\nResults show that accident severity increases: (i) for young bus drivers under the age of 25; (ii) for drivers beyond the age of 55, and most prominently for drivers over 65 years old; (iii) for female drivers; (iv) for very high (over 65 mph) and very low (under 20 mph) speed limits; (v) at intersections; (vi) because of inattentive and risky driving.", "title": "" }, { "docid": "0798ed2ff387823bcd7572a9ddf6a5e1", "text": "We present a novel algorithm for point cloud segmentation using group convolutions. Our approach uses a radial basis function (RBF) based variational autoencoder (VAE) network. We transform unstructured point clouds into regular voxel grids and use subvoxels within each voxel to encode the local geometry using a VAE architecture. In order to handle sparse distribution of points within each voxel, we use RBF to compute a local, continuous representation within each subvoxel. We extend group equivariant convolutions to 3D point cloud processing and increase the expressive capacity of the neural network. The combination of RBF and VAE results in a good volumetric representation that can handle noisy point cloud datasets and is more robust for learning. We highlight the performance on standard benchmarks and compare with prior methods. In practice, our approach outperforms state-of-the-art segmentation algorithms on the ShapeNet and S3DIS datasets.", "title": "" }, { "docid": "3f9e5be7bfe8c28291758b0670afc61c", "text": "Grayscale error diffusion introduces nonlinear distortion (directional artifacts and false textures), linear distortion (sharpening), and additive noise. In color error diffusion what color to render is a major concern in addition to finding optimal dot patterns. This article presents a survey of key methods for artifact reduction in grayscale and color error diffusion. The linear gain model by Kite et al. replaces the thresholding quantizer with a scalar gain plus additive noise. They show that the sharpening is proportional to the scalar gain. Kite et al. derive the sharpness control parameter value in threshold modulation (Eschbach and Knox, 1991) to compensate linear distortion. False textures at mid-gray (Fan and Eschbach, 1994) are due to limit cycles, which can be broken up by using a deterministic bit flipping quantizer (Damera-Venkata and Evans, 2001). Several other variations on grayscale error diffusion have been proposed to reduce false textures in shadow and highlight regions, including green noise halftoning (Levien, 1993) and tone-dependent error diffusion (Li and Allebach, 2002). Color error diffusion ideally requires the quantization error to be diffused to frequencies and colors, to which the HVS is least sensitive. We review the following approaches: color plane separable (Kolpatzik and Bouman 1992) design; perceptual quantization (Shaked et al. 1996, Haneishi et al. 1996); green noise extensions (Lau et al. 2000); and matrix-valued error filters (Damera-Venkata and Evans, 2001).", "title": "" }, { "docid": "ebaf73ec27127016f3327e6a0b88abff", "text": "A hospital is a health care organization providing patient treatment by expert physicians, surgeons and equipments. A report from a health care accreditation group says that miscommunication between patients and health care providers is the reason for the gap in providing emergency medical care to people in need. 
In developing countries, illiteracy is the major key root for deaths resulting from uncertain diseases constituting a serious public health problem. Mentally affected, differently abled and unconscious patients can’t communicate about their medical history to the medical practitioners. Also, Medical practitioners can’t edit or view DICOM images instantly. Our aim is to provide palm vein pattern recognition based medical record retrieval system, using cloud computing for the above mentioned people. Distributed computing technology is coming in the new forms as Grid computing and Cloud computing. These new forms are assured to bring Information Technology (IT) as a service. In this paper, we have described how these new forms of distributed computing will be helpful for modern health care industries. Cloud Computing is germinating its benefit to industrial sectors especially in medical scenarios. In Cloud Computing, IT-related capabilities and resources are provided as services, via the distributed computing on-demand. This paper is concerned with sprouting software as a service (SaaS) by means of Cloud computing with an aim to bring emergency health care sector in an umbrella with physical secured patient records. In framing the emergency healthcare treatment, the crucial thing considered necessary to decide about patients is their previous health conduct records. Thus a ubiquitous access to appropriate records is essential. Palm vein pattern recognition promises a secured patient record access. Likewise our paper reveals an efficient means to view, edit or transfer the DICOM images instantly which was a challenging task for medical practitioners in the past years. We have developed two services for health care. 1. Cloud based Palm vein recognition system 2. Distributed Medical image processing tools for medical practitioners.", "title": "" }, { "docid": "4eb1e28d62af4a47a2e8dc795b89cc09", "text": "This paper describes a new computational finance approach. This approach combines pattern recognition techniques with an evolutionary computation kernel applied to financial markets time series in order to optimize trading strategies. Moreover, for pattern matching a template-based approach is used in order to describe the desired trading patterns. The parameters for the pattern templates, as well as, for the decision making rules are optimized using a genetic algorithm kernel. The approach was tested considering actual data series and presents a robust profitable trading strategy which clearly beats the market, S&P 500 index, reducing the investment risk significantly.", "title": "" }, { "docid": "764eba2c2763db6dce6c87170e06d0f8", "text": "Kansei Engineering was developed as a consumer-oriented technology for new product development. It is defined as \"translating technology of a consumer's feeling and image for a product into design elements\". Kansei Engineering (KE) technology is classified into three types, KE Type I, II, and III. KE Type I is a category classification on the new product toward the design elements. Type II utilizes the current computer technologies such as Expert System, Neural Network Model and Genetic Algorithm. Type III is a model using a mathematical structure. Kansei Engineering has permeated Japanese industries, including automotive, electrical appliance, construction, clothing and so forth. The successful companies using Kansei Engineering benefited from good sales regarding the new consumer-oriented products. 
Relevance to industry Kansei Engineering is utilized in the automotive, electrical appliance, construction, clothing and other industries. This paper provides help to potential product designers in these industries.", "title": "" }, { "docid": "132880bc2af0e8ce5e0dc04b0ff397f6", "text": "The need to have equitable access to quality healthcare is enshrined in the United Nations (UN) Sustainable Development Goals (SDGs), which defines the developmental agenda of the UN for the next 15 years. In particular, the third SDG focuses on the need to “ensure healthy lives and promote well-being for all at all ages”. In this paper, we build the case that 5G wireless technology, along with concomitant emerging technologies (such as IoT, big data, artificial intelligence and machine learning), will transform global healthcare systems in the near future. Our optimism around 5G-enabled healthcare stems from a confluence of significant technical pushes that are already at play: apart from the availability of high-throughput low-latency wireless connectivity, other significant factors include the democratization of computing through cloud computing; the democratization of Artificial Intelligence (AI) and cognitive computing (e.g., IBM Watson); and the commoditization of data through crowdsourcing and digital exhaust. These technologies together can finally crack a dysfunctional healthcare system that has largely been impervious to technological innovations. We highlight the persistent deficiencies of the current healthcare system and then demonstrate how the 5G-enabled healthcare revolution can fix these deficiencies. We also highlight open technical research challenges, and potential pitfalls, that may hinder the development of such a 5G-enabled health revolution.", "title": "" }, { "docid": "066eef8e511fac1f842c699f8efccd6b", "text": "In this paper, we propose a new model that is capable of recognizing overlapping mentions. We introduce a novel notion of mention separators that can be effectively used to capture how mentions overlap with one another. On top of a novel multigraph representation that we introduce, we show that efficient and exact inference can still be performed. We present some theoretical analysis on the differences between our model and a recently proposed model for recognizing overlapping mentions, and discuss the possible implications of the differences. Through extensive empirical analysis on standard datasets, we demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "cd8cad6445b081e020d90eb488838833", "text": "Heavy metal pollution has become one of the most serious environmental problems today. The treatment of heavy metals is of special concern due to their recalcitrance and persistence in the environment. In recent years, various methods for heavy metal removal from wastewater have been extensively studied. This paper reviews the current methods that have been used to treat heavy metal wastewater and evaluates these techniques. These technologies include chemical precipitation, ion-exchange, adsorption, membrane filtration, coagulation-flocculation, flotation and electrochemical methods. About 185 published studies (1988-2010) are reviewed in this paper. 
It is evident from the literature survey articles that ion-exchange, adsorption and membrane filtration are the most frequently studied for the treatment of heavy metal wastewater.", "title": "" }, { "docid": "062149cd37d1e9f04f32bd6b713f10ab", "text": "Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an ``inverse model,'' a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN. Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model and quantify GAN performance, based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide codes for all of our experiments in the website (https://github.com/ToniCreswell/InvertingGAN).", "title": "" }, { "docid": "8bdd02547be77f4c825c9aed8016ddf8", "text": "Global terrestrial ecosystems absorbed carbon at a rate of 1–4 Pg yr-1 during the 1980s and 1990s, offsetting 10–60 per cent of the fossil-fuel emissions. The regional patterns and causes of terrestrial carbon sources and sinks, however, remain uncertain. With increasing scientific and political interest in regional aspects of the global carbon cycle, there is a strong impetus to better understand the carbon balance of China. This is not only because China is the world’s most populous country and the largest emitter of fossil-fuel CO2 into the atmosphere, but also because it has experienced regionally distinct land-use histories and climate trends, which together control the carbon budget of its ecosystems. Here we analyse the current terrestrial carbon balance of China and its driving mechanisms during the 1980s and 1990s using three different methods: biomass and soil carbon inventories extrapolated by satellite greenness measurements, ecosystem models and atmospheric inversions. The three methods produce similar estimates of a net carbon sink in the range of 0.19–0.26 Pg carbon (PgC) per year, which is smaller than that in the conterminous United States but comparable to that in geographic Europe. We find that northeast China is a net source of CO2 to the atmosphere owing to overharvesting and degradation of forests. By contrast, southern China accounts for more than 65 per cent of the carbon sink, which can be attributed to regional climate change, large-scale plantation programmes active since the 1980s and shrub recovery. Shrub recovery is identified as the most uncertain factor contributing to the carbon sink. 
Our data and model results together indicate that China’s terrestrial ecosystems absorbed 28–37 per cent of its cumulated fossil carbon emissions during the 1980s and 1990s.", "title": "" }, { "docid": "79ff4bd891538a0d1b5a002d531257f2", "text": "Reverse conducting IGBTs are fabricated in a large productive volume for soft switching applications, such as inductive heaters, microwave ovens or lamp ballast, since several years. To satisfy the requirements of hard switching applications, such as inverters in refrigerators, air conditioners or general purpose drives, the reverse recovery behavior of the integrated diode has to be optimized. Two promising concepts for such an optimization are based on a reduction of the charge- carrier lifetime or the anti-latch p+ implantation dose. It is shown that a combination of both concepts will lead to a device with a good reverse recovery behavior, low forward and reverse voltage drop and excellent over current turn- off capability of a trench field-stop IGBT.", "title": "" }, { "docid": "c3eaaa0812eb9ab7e5402339733daa28", "text": "BACKGROUND\nHypovitaminosis D and a low calcium intake contribute to increased parathyroid function in elderly persons. Calcium and vitamin D supplements reduce this secondary hyperparathyroidism, but whether such supplements reduce the risk of hip fractures among elderly people is not known.\n\n\nMETHODS\nWe studied the effects of supplementation with vitamin D3 (cholecalciferol) and calcium on the frequency of hip fractures and other nonvertebral fractures, identified radiologically, in 3270 healthy ambulatory women (mean [+/- SD] age, 84 +/- 6 years). Each day for 18 months, 1634 women received tricalcium phosphate (containing 1.2 g of elemental calcium) and 20 micrograms (800 IU) of vitamin D3, and 1636 women received a double placebo. We measured serial serum parathyroid hormone and 25-hydroxyvitamin D (25(OH)D) concentrations in 142 women and determined the femoral bone mineral density at base line and after 18 months in 56 women.\n\n\nRESULTS\nAmong the women who completed the 18-month study, the number of hip fractures was 43 percent lower (P = 0.043) and the total number of nonvertebral fractures was 32 percent lower (P = 0.015) among the women treated with vitamin D3 and calcium than among those who received placebo. The results of analyses according to active treatment and according to intention to treat were similar. In the vitamin D3-calcium group, the mean serum parathyroid hormone concentration had decreased by 44 percent from the base-line value at 18 months (P < 0.001) and the serum 25(OH)D concentration had increased by 162 percent over the base-line value (P < 0.001). The bone density of the proximal femur increased 2.7 percent in the vitamin D3-calcium group and decreased 4.6 percent in the placebo group (P < 0.001).\n\n\nCONCLUSIONS\nSupplementation with vitamin D3 and calcium reduces the risk of hip fractures and other nonvertebral fractures among elderly women.", "title": "" }, { "docid": "0ff3e49a700a776c1a8f748d78bc4b73", "text": "Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. 
We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals had either demonstrated an increased or unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.", "title": "" }, { "docid": "895f912a24f00984922c586880f77dee", "text": "Massive multiple-input multiple-output technology has been considered a breakthrough in wireless communication systems. It consists of equipping a base station with a large number of antennas to serve many active users in the same time-frequency block. Among its underlying advantages is the possibility to focus transmitted signal energy into very short-range areas, which will provide huge improvements in terms of system capacity. However, while this new concept renders many interesting benefits, it brings up new challenges that have called the attention of both industry and academia: channel state information acquisition, channel feedback, instantaneous reciprocity, statistical reciprocity, architectures, and hardware impairments, just to mention a few. This paper presents an overview of the basic concepts of massive multiple-input multiple-output, with a focus on the challenges and opportunities, based on contemporary research.", "title": "" }, { "docid": "122e31e413efd0f96860661d461ce780", "text": "Recent years have seen a dramatic increase in research and development of scientific workflow systems. These systems promise to make scientists more productive by automating data-driven and computeintensive analyses. Despite many early achievements, the long-term success of scientific workflow technology critically depends on making these systems useable by ‘‘mere mortals’’, i.e., scientists who have a very good idea of the analysis methods they wish to assemble, but who are neither software developers nor scripting-language experts. With these users in mind, we identify a set of desiderata for scientific workflow systems crucial for enabling scientists to model and design the workflows they wish to automate themselves. As a first step towards meeting these requirements, we also show how the collection-oriented modeling and design (comad) approach for scientific workflows, implemented within the Kepler system, can help provide these critical, design-oriented capabilities to scientists. © 2008 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
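Several passages in the record above concern halftoning by error diffusion. For readers unfamiliar with the baseline those methods build on, here is a minimal Floyd-Steinberg error-diffusion sketch in NumPy; it is the classic textbook algorithm, not one of the artifact-reduction variants the survey reviews, and the test image is synthetic.

```python
# Minimal Floyd-Steinberg error diffusion (baseline grayscale halftoning).
import numpy as np


def floyd_steinberg(image):
    """Binarize a grayscale image in [0, 1] by diffusing the quantization error
    to the right/lower neighbours with the usual 7/16, 3/16, 5/16, 1/16 weights."""
    img = image.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out


if __name__ == "__main__":
    # Synthetic horizontal gray ramp as a test image.
    ramp = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))
    halftone = floyd_steinberg(ramp)
    print("mean gray:", ramp.mean(), "-> mean of halftone:", halftone.mean())
```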
a90986c95d2e4c08094b461909151d99
Web-Service Clustering with a Hybrid of Ontology Learning and Information-Retrieval-Based Term Similarity
[ { "docid": "639bbe7b640c514ab405601c7c3cfa01", "text": "Measuring the semantic similarity between words is an important component in various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. Despite the usefulness of semantic similarity measures in these applications, accurately measuring semantic similarity between two words (or entities) remains a challenging task. We propose an empirical method to estimate semantic similarity using page counts and text snippets retrieved from a web search engine for two words. Specifically, we define various word co-occurrence measures using page counts and integrate those with lexical patterns extracted from text snippets. To identify the numerous semantic relations that exist between two given words, we propose a novel pattern extraction algorithm and a pattern clustering algorithm. The optimal combination of page counts-based co-occurrence measures and lexical pattern clusters is learned using support vector machines. The proposed method outperforms various baselines and previously proposed web-based semantic similarity measures on three benchmark data sets showing a high correlation with human ratings. Moreover, the proposed method significantly improves the accuracy in a community mining task.", "title": "" } ]
[ { "docid": "27488ded8276967b9fd71ec40eec28d8", "text": "This paper discusses the use of modern 2D spectral estimation algorithms for synthetic aperture radar (SAR) imaging. The motivation for applying power spectrum estimation methods to SAR imaging is to improve resolution, remove sidelobe artifacts, and reduce speckle compared to what is possible with conventional Fourier transform SAR imaging techniques. This paper makes two principal contributions to the field of adaptive SAR imaging. First, it is a comprehensive comparison of 2D spectral estimation methods for SAR imaging. It provides a synopsis of the algorithms available, discusses their relative merits for SAR imaging, and illustrates their performance on simulated and collected SAR imagery. Some of the algorithms presented or their derivations are new, as are some of the insights into or analyses of the algorithms. Second, this work develops multichannel variants of four related algorithms, minimum variance method (MVM), reduced-rank MVM (RRMVM), adaptive sidelobe reduction (ASR) and space variant apodization (SVA) to estimate both reflectivity intensity and interferometric height from polarimetric displaced-aperture interferometric data. All of these interferometric variants are new. In the interferometric contest, adaptive spectral estimation can improve the height estimates through a combination of adaptive nulling and averaging. Examples illustrate that MVM, ASR, and SVA offer significant advantages over Fourier methods for estimating both scattering intensity and interferometric height, and allow empirical comparison of the accuracies of Fourier, MVM, ASR, and SVA interferometric height estimates.", "title": "" }, { "docid": "7c8f38386322d9095b6950c4f31515a0", "text": "Due to the limited amount of training samples, finetuning pre-trained deep models online is prone to overfitting. In this paper, we propose a sequential training method for convolutional neural networks (CNNs) to effectively transfer pre-trained deep features for online applications. We regard a CNN as an ensemble with each channel of the output feature map as an individual base learner. Each base learner is trained using different loss criterions to reduce correlation and avoid over-training. To achieve the best ensemble online, all the base learners are sequentially sampled into the ensemble via important sampling. To further improve the robustness of each base learner, we propose to train the convolutional layers with random binary masks, which serves as a regularization to enforce each base learner to focus on different input features. The proposed online training method is applied to visual tracking problem by transferring deep features trained on massive annotated visual data and is shown to significantly improve tracking performance. Extensive experiments are conducted on two challenging benchmark data set and demonstrate that our tracking algorithm can outperform state-of-the-art methods with a considerable margin.", "title": "" }, { "docid": "23ef781d3230124360f24cc6e38fb15f", "text": "Exploration of ANNs for the economic purposes is described and empirically examined with the foreign exchange market data. For the experiments, panel data of the exchange rates (USD/EUR, JPN/USD, USD/ GBP) are examined and optimized to be used for time-series predictions with neural networks. In this stage the input selection, in which the processing steps to prepare the raw data to a suitable input for the models are investigated. 
The neural network with the best forecasting ability is selected, based on a certain performance measure. Visual graphs of the experimental data set are presented after the processing steps to illustrate the results. The out-of-sample results are compared with the training ones. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "301fb951bb2720ebc71202ee7be37be2", "text": "This work incorporates concepts from the behavioral confirmation tradition, self tradition, and interdependence tradition to identify an interpersonal process termed the Michelangelo phenomenon. The Michelangelo phenomenon describes the means by which the self is shaped by a close partner's perceptions and behavior. Specifically, self movement toward the ideal self is described as a product of partner affirmation, or the degree to which a partner's perceptions of the self and behavior toward the self are congruent with the self's ideal. The results of 4 studies revealed strong associations between perceived partner affirmation and self movement toward the ideal self, using a variety of participant populations and measurement methods. In addition, perceived partner affirmation--particularly perceived partner behavioral affirmation--was strongly associated with quality of couple functioning and stability in ongoing relationships.", "title": "" }, { "docid": "b4ac5df370c0df5fdb3150afffd9158b", "text": "The aggregation of many independent estimates can outperform the most accurate individual judgement [1-3]. This century-old finding [1,2], popularly known as the 'wisdom of crowds' [3], has been applied to problems ranging from the diagnosis of cancer [4] to financial forecasting [5]. It is widely believed that social influence undermines collective wisdom by reducing the diversity of opinions within the crowd. Here, we show that if a large crowd is structured in small independent groups, deliberation and social influence within groups improve the crowd’s collective accuracy. We asked a live crowd (N = 5,180) to respond to general-knowledge questions (for example, \"What is the height of the Eiffel Tower?\"). Participants first answered individually, then deliberated and made consensus decisions in groups of five, and finally provided revised individual estimates. We found that averaging consensus decisions was substantially more accurate than aggregating the initial independent opinions. Remarkably, combining as few as four consensus choices outperformed the wisdom of thousands of individuals. The collective wisdom of crowds often provides better answers to problems than individual judgements. Here, a large experiment that split a crowd into many small deliberative groups produced better estimates than the average of all answers in the crowd.", "title": "" }, { "docid": "39e38d7825ff7a74e6bbf9975826ddea", "text": "Online display advertising has become a billion-dollar industry, and it keeps growing. Advertisers attempt to send marketing messages to attract potential customers via graphic banner ads on publishers' webpages. Advertisers are charged for each view of a page that delivers their display ads. However, recent studies have discovered that more than half of the ads are never shown on users' screens due to insufficient scrolling. Thus, advertisers waste a great amount of money on these ads that do not bring any return on investment. Given this situation, the Interactive Advertising Bureau calls for a shift toward charging by viewable impression, i.e., charging for ads that are viewed by users. 
With this new pricing model, it is helpful to predict the viewability of an ad. This paper proposes two probabilistic latent class models (PLC) that predict the viewability of any given scroll depth for a user-page pair. Using a real-life dataset from a large publisher, the experiments demonstrate that our models outperform comparison systems.", "title": "" }, { "docid": "8ed6c9e82c777aa092a78959391a37b2", "text": "The trie data structure has many properties which make it especially attractive for representing large files of data. These properties include fast retrieval time, quick unsuccessful search determination, and finding the longest match to a given identifier. The main drawback is the space requirement. In this paper the concept of trie compaction is formalized. An exact algorithm for optimal trie compaction and three algorithms for approximate trie compaction are given, and an analysis of the three algorithms is done. The analysis indicate that for actual tries, reductions of around 70 percent in the space required by the uncompacted trie can be expected. The quality of the compaction is shown to be insensitive to the number of nodes, while a more relevant parameter is the alphabet size of the key.", "title": "" }, { "docid": "4e2b0b82a6f7e342f10d1a66795e57f6", "text": "A fully electrical startup boost converter is presented in this paper. With a three-stage stepping-up architecture, the proposed circuit is capable of performing thermoelectric energy harvesting at an input voltage as low as 50 mV. Due to the zero-current-switching (ZCS) operation of the boost converter and automatic shutdown of the low-voltage starter and the auxiliary converter, conversion efficiency up to 73% is demonstrated. The boost converter does not require bulky transformers or mechanical switches for kick-start, making it very attractive for body area sensor network applications.", "title": "" }, { "docid": "51fbebff61232e46381b243023c35dc5", "text": "In this paper, mechanical design of a novel spherical wheel shape for a omni-directional mobile robot is presented. The wheel is used in a omnidirectional mobile robot realizing high step-climbing capability with its hemispherical wheel. Conventional Omniwheels can realize omnidirectional motion, however they have a poor step overcoming ability due to the sub-wheel small size. The proposed design solves this drawback by means of a 4 wheeled design. \"Omni-Ball\" is formed by two passive rotational hemispherical wheels and one active rotational axis. An actual prototype model has been developed to illustrate the concept and to perform preliminary motion experiments, through which the basic performance of the Omnidirectional vehicle with this proposed Omni-Ball mechanism was confirmed. An prototype has been developed to illustrate the concept. Motion experiments, with a test vehicle are also presented.", "title": "" }, { "docid": "c1c177ee96a0da0a4bbc6749364a14e5", "text": "Knowledge graphs are used to represent relational information in terms of triples. To enable learning about domains, embedding models, such as tensor factorization models, can be used to make predictions of new triples. Often there is background taxonomic information (in terms of subclasses and subproperties) that should also be taken into account. We show that existing fully expressive (a.k.a. universal) models cannot provably respect subclass and subproperty information. 
We show that minimal modifications to an existing knowledge graph completion method enables injection of taxonomic information. Moreover, we prove that our model is fully expressive, assuming a lower-bound on the size of the embeddings. Experimental results on public knowledge graphs show that despite its simplicity our approach is surprisingly effective. The AI community has long noticed the importance of structure in data. While traditional machine learning techniques have been mostly focused on feature-based representations, the primary form of data in the subfield of Statistical Relational AI (STARAI) (Getoor and Taskar, 2007; Raedt et al., 2016) is in the form of entities and relationships among them. Such entity-relationships are often in the form of (head, relationship, tail) triples, which can also be expressed in the form of a graph, with nodes as entities and labeled directed edges as relationships among entities. Predicting the existence, identity, and attributes of entities and their relationships are among the main goals of StaRAI. Knowledge Graphs (KGs) are graph structured knowledge bases that store facts about the world. A large number of KGs have been created such as NELL (Carlson et al., 2010), FREEBASE (Bollacker et al., 2008), and Google Knowledge Vault (Dong et al., 2014). These KGs have applications in several fields including natural language processing, search, automatic question answering and recommendation systems. Since accessing and storing all the facts in the world is difficult, KGs are incomplete. The goal of link prediction for KGs – a.k.a. KG completion – is to predict the unknown links or relationships in a KG based on the existing ones. This often amounts to infer (the probability of) new triples from the existing triples. Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. A common approach to apply machine learning to symbolic data, such as text, graph and entity-relationships, is through embeddings. Word, sentence and paragraph embeddings (Mikolov et al., 2013; Pennington, Socher, and Manning, 2014), which vectorize words, sentences and paragraphs using context information, are widely used in a variety of natural language processing tasks from syntactic parsing to sentiment analysis. Graph embeddings (Hoff, Raftery, and Handcock, 2002; Grover and Leskovec, 2016; Perozzi, Al-Rfou, and Skiena, 2014) are used in social network analysis for link prediction and community detection. In relational learning, embeddings for entities and relationships are used to generalize from existing data. These embeddings are often formulated in terms of tensor factorization (Nickel, Tresp, and Kriegel, 2012; Bordes et al., 2013; Trouillon et al., 2016; Kazemi and Poole, 2018c). Here, the embeddings are learned such that their interaction through (tensor-)products best predicts the (probability of the) existence of the observed triples; see (Nguyen, 2017; Wang et al., 2017) for details and discussion. Tensor factorization methods have been very successful, yet they rely on a large number of annotated triples to learn useful representations. There is often other information in ontologies which specifies the meaning of the symbols used in a knowledge base. One type of ontological information is represented in a hierarchical structure called a taxonomy. 
For example, a knowledge base might contain information that DJTrump, whose name is “Donald Trump”, is a president, but may not contain information that he is a person, a mammal and an animal, because these are implied by taxonomic knowledge. Being told that mammals are chordates lets us conclude that DJTrump is also a chordate, without needing to have triples specifying this about multiple mammals. We could also have information about subproperties, such as that being president is a subproperty of “managing”, which in turn is a subproperty of “interacts with”. This paper is about combining taxonomic information in the form of subclass and subproperty (e.g., managing implies interaction) into relational embedding models. We show that existing factorization models that are fully expressive cannot reflect such constraints for all legal entity embeddings. We propose a model that is provably fully expressive and can represent such taxonomic information, and evaluate its performance on real-world datasets.
Factorization and Embedding: Let E represent the set of entities and R represent the set of relations. Let W be a set of triples (h, r, t) that are true in the world, where h, t ∈ E are head and tail, and r ∈ R is the relation in the triple. We use W̄ to represent the triples that are false – i.e., W̄ ≐ {(h, r, t) ∈ E × R × E ∣ (h, r, t) ∉ W}. An example of a triple in W can be (Paris, CapitalCityOfCountry, France) and an example of a triple in W̄ can be (Paris, CapitalCityOfCountry, Germany). A KG K ⊆ W is a subset of all the facts. The problem of KG completion is to infer W from its subset KG. There exist a variety of methods for KG completion. Here, we consider embedding methods and in particular using tensor-factorization. For a broader review of existing KG completion methods that can use background information, see Related Work. Embeddings: An embedding is a function from an entity or a relation to a vector (or sometimes higher order tensors) over a field. We use bold lower-case for vectors – that is s ∈ R is an embedding of an entity and r ∈ R is an embedding of a relation. Taxonomies: It is common to have structure over the symbols used in the triples, see (e.g., Shoham, 2016). The Ontology Web Language (OWL) (Hitzler et al., 2012) defines (among many other meta-relations) subproperties and subclasses, where p1 is a subproperty of p2 if ∀x, y ∶ (x, p1, y)→ (x, p2, y), that is whenever p1 is true, p2 is also true. Classes can be defined either as a set with a class assertion (often called “type”) between an entity and a class, e.g., saying x is in class C using (x, type,C) or in terms of the characteristic function of the class, a function that is true of elements of the class. If c is the characteristic function of class C, then x is in class c is written (x, c, true). For representations that treat entities and properties symmetrically, the two ways to define classes are essentially the same. C1 is a subclass of C2 if every entity in class C1 is in class C2, that is, ∀x ∶ (x, type,C1) → (x, type,C2) or ∀x ∶ (x, c1, true) → (x, c2, true). If we treat true as an entity, then subclass can be seen as a special case of subproperty. For the rest of the paper we will refer to subsumption in terms of subproperty (and so also of subclass). A non-trivial subsumption is one which is not symmetric; p1 is a subproperty of p2 and there is some relation that is true of p1 that is not true of p2. 
We want the subsumption to be over all possible entities; those entities that have a legal embedding according to the representation used, not just those we know exist. Let E∗ be the set of all possible entities with a legal embedding according to the representation used. Tensor factorization: For KG completion a tensor factorization defines a function μ ∶ R ×R ×R → [0,1] that takes the embeddings h, r and t of a triple (h, r, t) as input, and generates a prediction, e.g., a probability, of the triple being true (h, r, t) ∈ W . In particular, μ is often a nonlinearity applied to a multi-linear function of h, r, t. The family of methods that we study uses the following multilinear form: Let x, y, and z be vectors of length k. Define ⟨x,y,z⟩ to be the sum of their element-wise product, namely", "title": "" }, { "docid": "7b6c93b9e787ab0ba512cc8aaff185af", "text": "INTRODUCTION The field of second (or foreign) language teaching has undergone many fluctuations and dramatic shifts over the years. As opposed to physics or chemistry, where progress is more or less steady until a major discovery causes a radical theoretical revision (Kuhn, 1970), language teaching is a field where fads and heroes have come and gone in a manner fairly consistent with the kinds of changes that occur in youth culture. I believe that one reason for the frequent changes that have been taking place until recently is the fact that very few language teachers have even the vaguest sense of history about their profession and are unclear concerning the historical bases of the many methodological options they currently have at their disposal. It is hoped that this brief and necessarily oversimplified survey will encourage many language teachers to learn more about the origins of their profession. Such knowledge will give some healthy perspective in evaluating the socalled innovations or new approaches to methodology that will continue to emerge over time.", "title": "" }, { "docid": "78c477aeb6a27cf5b4de028c0ecd7b43", "text": "This paper addresses the problem of speaker clustering in telephone conversations. Recently, a new clustering algorithm named affinity propagation (AP) is proposed. It exhibits fast execution speed and finds clusters with low error. However, AP is an unsupervised approach which may make the resulting number of clusters different from the actual one. This deteriorates the speaker purity dramatically. This paper proposes a modified method named supervised affinity propagation (SAP), which automatically reruns the AP procedure to make the final number of clusters converge to the specified number. Experiments are carried out to compare SAP with traditional k-means and agglomerative hierarchical clustering on 4-hour summed channel conversations in the NIST 2004 Speaker Recognition Evaluation. Experiment results show that the SAP method leads to a noticeable speaker purity improvement with slight cluster purity decrease compared with AP.", "title": "" }, { "docid": "c84a0f630b4fb2e547451d904e1c63a5", "text": "Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on “informative” examples, and reduces the variance of the stochastic gradients during training. 
Our contribution is twofold: first, we derive a tractable upper bound to the persample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.", "title": "" }, { "docid": "8b2d6ce5158c94f2e21ff4ebd54af2b5", "text": "Chambers and Jurafsky (2009) demonstrated that event schemas can be automatically induced from text corpora. However, our analysis of their schemas identifies several weaknesses, e.g., some schemas lack a common topic and distinct roles are incorrectly mixed into a single actor. It is due in part to their pair-wise representation that treats subjectverb independently from verb-object. This often leads to subject-verb-object triples that are not meaningful in the real-world. We present a novel approach to inducing open-domain event schemas that overcomes these limitations. Our approach uses cooccurrence statistics of semantically typed relational triples, which we call Rel-grams (relational n-grams). In a human evaluation, our schemas outperform Chambers’s schemas by wide margins on several evaluation criteria. Both Rel-grams and event schemas are freely available to the research community.", "title": "" }, { "docid": "864adf6f82a0d1af98339f92035b15fc", "text": "Typically in neuroimaging we are looking to extract some pertinent information from imperfect, noisy images of the brain. This might be the inference of percent changes in blood flow in perfusion FMRI data, segmentation of subcortical structures from structural MRI, or inference of the probability of an anatomical connection between an area of cortex and a subthalamic nucleus using diffusion MRI. In this article we will describe how Bayesian techniques have made a significant impact in tackling problems such as these, particularly in regards to the analysis tools in the FMRIB Software Library (FSL). We shall see how Bayes provides a framework within which we can attempt to infer on models of neuroimaging data, while allowing us to incorporate our prior belief about the brain and the neuroimaging equipment in the form of biophysically informed or regularising priors. It allows us to extract probabilistic information from the data, and to probabilistically combine information from multiple modalities. Bayes can also be used to not only compare and select between models of different complexity, but also to infer on data using committees of models. Finally, we mention some analysis scenarios where Bayesian methods are impractical, and briefly discuss some practical approaches that we have taken in these cases.", "title": "" }, { "docid": "205a44a35cc1af14f2b40424cc2654bc", "text": "This paper focuses on human-pose estimation using a stationary depth sensor. The main challenge concerns reducing the feature ambiguity and modeling human poses in high-dimensional human-pose space because of the curse of dimensionality. 
We propose a 3-D-point-cloud system that captures the geometric properties (orientation and shape) of the 3-D point cloud of a human to reduce the feature ambiguity, and use the result from action classification to discover low-dimensional manifolds in human-pose space in estimating the underlying probability distribution of human poses. In the proposed system, a 3-D-point-cloud feature called viewpoint and shape feature histogram (VISH) is proposed to extract the 3-D points from a human and arrange them into a tree structure that preserves the global and local properties of the 3-D points. A nonparametric action-mixture model (AMM) is then proposed to model human poses using low-dimensional manifolds based on the concept of distributed representation. Since human poses estimated using the proposed AMM are in discrete space, a kinematic model is added in the last stage of the proposed system to model the spatial relationship of body parts in continuous space to reduce the quantization error in the AMM. The proposed system has been trained and evaluated on a benchmark dataset. Computer-simulation results showed that the overall error and standard deviation of the proposed 3-D-point-cloud system were reduced compared with some existing approaches without action classification.", "title": "" }, { "docid": "63d19f75bc0baee93404488a1d307a32", "text": "Mitochondria can unfold importing precursor proteins by unraveling them from their N-termini. However, how this unraveling is induced is not known. Two candidates for the unfolding activity are the electrical potential across the inner mitochondrial membrane and mitochondrial Hsp70 in the matrix. Here, we propose that many precursors are unfolded by the electrical potential acting directly on positively charged amino acid side chains in the targeting sequences. Only precursor proteins with targeting sequences that are long enough to reach the matrix at the initial interaction with the import machinery are unfolded by mitochondrial Hsp70, and this unfolding occurs even in the absence of a membrane potential.", "title": "" }, { "docid": "cd55fc3fafe2618f743a845d89c3a796", "text": "According to the notation proposed by the International Federation for the Theory of Mechanisms and Machines IFToMM (Ionescu, 2003); a parallel manipulator is a mechanism where the motion of the end-effector, namely the moving or movable platform, is controlled by means of at least two kinematic chains. If each kinematic chain, also known popularly as limb or leg, has a single active joint, then the mechanism is called a fully-parallel mechanism, in which clearly the nominal degree of freedom equates the number of limbs. Tire-testing machines (Gough & Whitehall, 1962) and flight simulators (Stewart, 1965), appear to be the first transcendental applications of these complex mechanisms. Parallel manipulators, and in general mechanisms with parallel kinematic architectures, due to benefits --over their serial counterparts-such as higher stiffness and accuracy, have found interesting applications such as walking machines, pointing devices, multi-axis machine tools, micro manipulators, and so on. The pioneering contributions of Gough and Stewart, mainly the theoretical paper of Stewart (1965), influenced strongly the development of parallel manipulators giving birth to an intensive research field. 
In that way, recently several parallel mechanisms for industrial purposes have been constructed using the, now, classical hexapod as a base mechanism: Octahedral Hexapod HOH-600 (Ingersoll), HEXAPODE CMW 300 (CMW), Cosmo Center PM-600 (Okuma), F-200i (FANUC) and so on. On the other hand one cannot ignore that this kind of parallel kinematic structures have a limited and complex-shaped workspace. Furthermore, their rotation and position capabilities are highly coupled and therefore the control and calibration of them are rather complicated. It is well known that many industrial applications do not require the six degrees of freedom of a parallel manipulator. Thus in order to simplify the kinematics, mechanical assembly and control of parallel manipulators, an interesting trend is the development of the so called defective parallel manipulators, in other words, spatial parallel manipulators with fewer than six degrees of freedom. Special mention deserves the Delta robot, invented by Clavel (1991); which proved that parallel robotic manipulators are an excellent option for industrial applications where the accuracy and stiffness are fundamental characteristics. Consider for instance that the Adept Quattro robot, an application of the Delta robot, developed by Francois Pierrot in collaboration with Fatronik (Int. patent appl. WO/2006/087399), has a", "title": "" }, { "docid": "8ddb7c62f032fb07116e7847e69b51d1", "text": "Software requirements are the foundations from which quality is measured. Measurement enables to improve the software process; assist in planning, tracking and controlling the software project and assess the quality of the software thus produced. Quality issues such as accuracy, security and performance are often crucial to the success of a software system. Quality should be maintained from starting phase of software development. Requirements management, play an important role in maintaining quality of software. A project can deliver the right solution on time and within budget with proper requirements management. Software quality can be maintained by checking quality attributes in requirements document. Requirements metrics such as volatility, traceability, size and completeness are used to measure requirements engineering phase of software development lifecycle. Manual measurement is expensive, time consuming and prone to error therefore automated tools should be used. Automated requirements tools are helpful in measuring requirements metrics. The aim of this paper is to study, analyze requirements metrics and automated requirements tools, which will help in choosing right metrics to measure software development based on the evaluation of Automated Requirements Tools", "title": "" }, { "docid": "32b96d4d23a03b1828f71496e017193e", "text": "Camera-based lane detection algorithms are one of the key enablers for many semi-autonomous and fullyautonomous systems, ranging from lane keep assist to level-5 automated vehicles. Positioning a vehicle between lane boundaries is the core navigational aspect of a self-driving car. Even though this should be trivial, given the clarity of lane markings on most standard roadway systems, the process is typically mired with tedious pre-processing and computational effort. We present an approach to estimate lane positions directly using a deep neural network that operates on images from laterally-mounted down-facing cameras. To create a diverse training set, we present a method to generate semi-artificial images. 
Besides the ability to distinguish whether there is a lane-marker present or not, the network is able to estimate the position of a lane marker with sub-centimeter accuracy at an average of 100 frames/s on an embedded automotive platform, requiring no pre-or post-processing. This system can be used not only to estimate lane position for navigation, but also provide an efficient way to validate the robustness of driver-assist features which depend on lane information.", "title": "" } ]
scidocsrr
3489d1d49350cc9ce296c29ba1c5d1cf
Economics of Internet of Things (IoT): An Information Market Approach
[ { "docid": "24a164e7d6392b052f8a36e20e9c4f69", "text": "The initial vision of the Internet of Things was of a world in which all physical objects are tagged and uniquely identified by RFID transponders. However, the concept has grown into multiple dimensions, encompassing sensor networks able to provide real-world intelligence and goal-oriented collaboration of distributed smart objects via local networks or global interconnections such as the Internet. Despite significant technological advances, difficulties associated with the evaluation of IoT solutions under realistic conditions in real-world experimental deployments still hamper their maturation and significant rollout. In this article we identify requirements for the next generation of IoT experimental facilities. While providing a taxonomy, we also survey currently available research testbeds, identify existing gaps, and suggest new directions based on experience from recent efforts in this field.", "title": "" } ]
[ { "docid": "1885ee33c09d943736b03895f41cea06", "text": "Since the late 1990s, there has been a burst of research on robotic devices for poststroke rehabilitation. Robot-mediated therapy produced improvements on recovery of motor capacity; however, so far, the use of robots has not shown qualitative benefit over classical therapist-led training sessions, performed on the same quantity of movements. Multidegree-of-freedom robots, like the modern upper-limb exoskeletons, enable a distributed interaction on the whole assisted limb and can exploit a large amount of sensory feedback data, potentially providing new capabilities within standard rehabilitation sessions. Surprisingly, most publications in the field of exoskeletons focused only on mechatronic design of the devices, while little details were given to the control aspects. On the contrary, we believe a paramount aspect for robots potentiality lies on the control side. Therefore, the aim of this review is to provide a taxonomy of currently available control strategies for exoskeletons for neurorehabilitation, in order to formulate appropriate questions toward the development of innovative and improved control strategies.", "title": "" }, { "docid": "2683c65d587e8febe45296f1c124e04d", "text": "We present a new autoencoder-type architecture, that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the canonical distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures.", "title": "" }, { "docid": "48096a9a7948a3842afc082fa6e223a6", "text": "We present a method for using previously-trained ‘teacher’ agents to kickstart the training of a new ‘student’ agent. To this end, we leverage ideas from policy distillation (Rusu et al., 2015; Parisotto et al., 2015) and population based training (Jaderberg et al., 2017). Our method places no constraints on the architecture of the teacher or student agents, and it regulates itself to allow the students to surpass their teachers in performance. We show that, on a challenging and computationally-intensive multi-task benchmark (Beattie et al., 2016), kickstarted training improves the data efficiency of new agents, making it significantly easier to iterate on their design. We also show that the same kickstarting pipeline can allow a single student agent to leverage multiple ‘expert’ teachers which specialise on individual tasks. In this setting kickstarting yields surprisingly large gains, with the kickstarted agent matching the performance of an agent trained from scratch in almost 10× fewer steps, and surpassing its final performance by 42%. 
Kickstarting is conceptually simple and can easily be incorporated into reinforcement learning experiments.", "title": "" }, { "docid": "9694bc859dd5295c40d36230cf6fd1b9", "text": "In the past two decades, the synthetic style and fashion drug \"crystal meth\" (\"crystal\", \"meth\"), chemically representing the crystalline form of the methamphetamine hydrochloride, has become more and more popular in the United States, in Eastern Europe, and just recently in Central and Western Europe. \"Meth\" is cheap, easy to synthesize and to market, and has an extremely high potential for abuse and dependence. As a strong sympathomimetic, \"meth\" has the potency to switch off hunger, fatigue and, pain while simultaneously increasing physical and mental performance. The most relevant side effects are heart and circulatory complaints, severe psychotic attacks, personality changes, and progressive neurodegeneration. Another effect is \"meth mouth\", defined as serious tooth and oral health damage after long-standing \"meth\" abuse; this condition may become increasingly relevant in dentistry and oral- and maxillofacial surgery. There might be an association between general methamphetamine abuse and the development of osteonecrosis, similar to the medication-related osteonecrosis of the jaws (MRONJ). Several case reports concerning \"meth\" patients after tooth extractions or oral surgery have presented clinical pictures similar to MRONJ. This overview summarizes the most relevant aspect concerning \"crystal meth\" abuse and \"meth mouth\".", "title": "" }, { "docid": "c0dbd6356ead3a9542c9ec20dd781cc7", "text": "This paper aims to address the importance of supportive teacher–student interactions within the learning environment. This will be explored through the three elements of the NSW Quality Teaching Model; Intellectual Quality, Quality Learning Environment and Significance. The paper will further observe the influences of gender on the teacher–student relationship, as well as the impact that this relationship has on student academic outcomes and behaviour. Teacher–student relationships have been found to have immeasurable effects on students’ learning and their schooling experience. This paper examines the ways in which educators should plan to improve their interactions with students, in order to allow for quality learning. This journal article is available in Journal of Student Engagement: Education Matters: http://ro.uow.edu.au/jseem/vol2/iss1/2 Journal of Student Engagement: Education matters 2012, 2 (1), 2–9 Lauren Liberante 2 The importance of teacher–student relationships, as explored through the lens of the NSW Quality Teaching Model", "title": "" }, { "docid": "bb482edabdb07f412ca13a728b7fd25c", "text": "This paper addresses the problem of category-level 3D object detection. Given a monocular image, our aim is to localize the objects in 3D by enclosing them with tight oriented 3D bounding boxes. We propose a novel approach that extends the deformable part-based model [1] to reason in 3D. Our model represents an object class as a deformable 3D cuboid composed of faces and parts, which are both allowed to deform with respect to their anchors on the 3D box. We model the appearance of each face in fronto-parallel coordinates, thus effectively factoring out the appearance variation induced by viewpoint. We train the cuboid model jointly and discriminatively. In inference we slide and rotate the box in 3D to score the object hypotheses. 
We evaluate our approach in indoor and outdoor scenarios, and show that our approach outperforms the state-of-the-art in both 2D [1] and 3D object detection [3].", "title": "" }, { "docid": "dd51cc2138760f1dcdce6e150cabda19", "text": "Breast cancer is the most common cancer in women worldwide. The most common screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNN) for automatic feature learning and classifier building. In computer-aided mammography, deep CNN classifiers cannot be trained directly on full mammogram images because of the loss of image details from resizing at input layers. Instead, our classifiers are trained on labelled image patches and then adapted to work on full mammogram images for localizing the abnormalities. State-of-the-art deep convolutional neural networks are compared on their performance of classifying the abnormalities. Experimental results indicate that VGGNet receives the best overall accuracy at 92.53% in classifications. For localizing abnormalities, ResNet is selected for computing class activation maps because it is ready to be deployed without structural change or further training. Our approach demonstrates that deep convolutional neural network classifiers have remarkable localization capabilities despite no supervision on the location of abnormalities is provided.", "title": "" }, { "docid": "a839016be99c3cb93d30fa48403086d8", "text": "At synapses of the mammalian central nervous system, release of neurotransmitter occurs at rates transiently as high as 100 Hz, putting extreme demands on nerve terminals with only tens of functional vesicles at their disposal. Thus, the presynaptic vesicle cycle is particularly critical to maintain neurotransmission. To understand vesicle cycling at the most fundamental level, we studied single vesicles undergoing exo/endocytosis and tracked the fate of newly retrieved vesicles. This was accomplished by minimally stimulating boutons in the presence of the membrane-fluorescent styryl dye FM1-43, then selecting for terminals that contained only one dye-filled vesicle. We then observed the kinetics of dye release during single action potential stimulation. We found that most vesicles lost only a portion of their total dye during a single fusion event, but were able to fuse again soon thereafter. We interpret this as direct evidence of \"kiss-and-run\" followed by rapid reuse. Other interpretations such as \"partial loading\" and \"endosomal splitting\" were largely excluded on the basis of multiple lines of evidence. Our data placed an upper bound of <1.4 s on the lifetime of the kiss-and-run fusion event, based on the assumption that aqueous departitioning is rate limiting. The repeated use of individual vesicles held over a range of stimulus frequencies up to 30 Hz and was associated with neurotransmitter release. A small percentage of fusion events did release a whole vesicle's worth of dye in one action potential, consistent with a classical picture of exocytosis as fusion followed by complete collapse or at least very slow retrieval.", "title": "" }, { "docid": "342bcd2509b632480c4f4e8059cfa6a1", "text": "This paper introduces the design and development of a novel axial-flux permanent magnet generator (PMG) using a printed circuit board (PCB) stator winding. 
This design has the mechanical rigidity, high efficiency and zero cogging torque required for a low speed water current turbine. The PCB stator has simplified the design and construction and avoids any slip rings. The flexible PCB winding represents an ultra thin electromagnetic exciting source where coils are wound in a wedge shape. The proposed multi-poles generator can be used for various low speed applications especially in small marine current energy conversion systems.", "title": "" }, { "docid": "abbafaaf6a93e2a49a692690d4107c9a", "text": "Virtual teams have become a ubiquitous form of organizing, but the impact of social structures within and between teams on group performance remains understudied. This paper uses the case study of a massively multiplayer online game and server log data from over 10,000 players to examine the connection between group social capital (operationalized through guild network structure measures) and team effectiveness, given a variety of in-game social networks. Three different networks, social, task, and exchange networks, are compared and contrasted while controlling for group size, group age, and player experience. Team effectiveness is maximized at a roughly moderate level of closure across the networks, suggesting that this is the optimal level of the group’s network density. Guilds with high brokerage, meaning they have diverse connections with other groups, were more effective in achievement-oriented networks. In addition, guilds with central leaders were more effective when they teamed up with other guild leaders.", "title": "" }, { "docid": "3d7eb095e68a9500674493ee58418789", "text": "Hundreds of scholarly studies have investigated various aspects of the immensely popular Wikipedia. Although a number of literature reviews have provided overviews of this vast body of research, none of them has specifically focused on the readers of Wikipedia and issues concerning its readership. In this systematic literature review, we review 99 studies to synthesize current knowledge regarding the readership of Wikipedia and also provide an analysis of research methods employed. The scholarly research has found that Wikipedia is popular not only for lighter topics such as entertainment, but also for more serious topics such as health information and legal background. Scholars, librarians and students are common users of Wikipedia, and it provides a unique opportunity for educating students in digital", "title": "" }, { "docid": "763983ae894e3b98932233ef0b465164", "text": "In the rapidly developing world of information technology, computers have been used in various settings for clinical medicine application. Studies have focused on computerized physician order entry (CPOE) system interface design and functional development to achieve a successful technology adoption process. Therefore, the purpose of this study was to evaluate physician satisfaction with the CPOE system. This survey included user attitude toward interface design, operation functions/usage effectiveness, interface usability, and user satisfaction. We used questionnaires for data collection from June to August 2008, and 225 valid questionnaires were returned with a response rate of 84.5 %. Canonical correlation was applied to explore the relationship of personal attributes and usability with user satisfaction. 
The results of the data analysis revealed that certain demographic groups showed higher acceptance and satisfaction levels, especially residents, those with less pressure when using computers or those with less experience with the CPOE systems. Additionally, computer use pressure and usability were the best predictors of user satisfaction. Based on the study results, it is suggested that future CPOE development should focus on interface design and content links, as well as providing educational training programs for the new users; since a learning curve period should be considered as an indispensable factor for CPOE adoption.", "title": "" }, { "docid": "f94ff39136c71cf2a36253381a042195", "text": "We present Autonomous Rssi based RElative poSitioning and Tracking (ARREST), a new robotic sensing system for tracking and following a moving, RF-emitting object, which we refer to as the Leader, solely based on signal strength information. Our proposed tracking agent, which we refer to as the TrackBot, uses a single rotating, off-the-shelf, directional antenna, novel angle and relative speed estimation algorithms, and Kalman filtering to continually estimate the relative position of the Leader with decimeter level accuracy (which is comparable to a state-of-the-art multiple access point based RF-localization system) and the relative speed of the Leader with accuracy on the order of 1 m/s. The TrackBot feeds the relative position and speed estimates into a Linear Quadratic Gaussian (LQG) controller to generate a set of control outputs to control the orientation and the movement of the TrackBot. We perform an extensive set of real world experiments with a full-fledged prototype to demonstrate that the TrackBot is able to stay within 5m of the Leader with: (1) more than 99% probability in line of sight scenarios, and (2) more than 75% probability in no line of sight scenarios, when it moves 1.8X faster than the Leader.", "title": "" }, { "docid": "e14b936ecee52765078d77088e76e643", "text": "In this paper, a novel code division multiplexing (CDM) algorithm-based reversible data hiding (RDH) scheme is presented. The covert data are denoted by different orthogonal spreading sequences and embedded into the cover image. The original image can be completely recovered after the data have been extracted exactly. The Walsh Hadamard matrix is employed to generate orthogonal spreading sequences, by which the data can be overlappingly embedded without interfering with each other, and multilevel data embedding can be utilized to enlarge the embedding capacity. Furthermore, most elements of different spreading sequences are mutually cancelled when they are overlappingly embedded, which maintains the image in good quality even with a high embedding payload. A location-map free method is presented in this paper to save more space for data embedding, and the overflow/underflow problem is solved by shrinking the distribution of the image histogram on both the ends. This would further improve the embedding performance. Experimental results have demonstrated that the CDM-based RDH scheme can achieve the best performance at the moderate-to-high embedding capacity compared with other state-of-the-art schemes.", "title": "" }, { "docid": "50d63f05e453468f8e5234910e3d86d1", "text": "
Classifying streaming data requires the development of methods which are computationally efficient and able to cope with changes in the underlying distribution of the stream, a phenomenon known in the literature as concept drift. We propose a new method for detecting concept drift which uses an exponentially weighted moving average (EWMA) chart to monitor the misclassification rate of a streaming classifier. Our approach is modular and can hence be run in parallel with any underlying classifier to provide an additional layer of concept drift detection. Moreover, our method is computationally efficient with overhead O(1) and works in a fully online manner with no need to store data points in memory. Unlike many existing approaches to concept drift detection, our method allows the rate of false positive detections to be controlled and kept constant over time.", "title": "" }, { "docid": "bbad2fa7a85b7f90d9589adee78a08d7", "text": "Haze has become a yearly occurrence in Malaysia. There exist three dimensions to the problems associated with air pollution: public ignorance on quality of air, impact of air pollution towards health, and difficulty in obtaining information related to air pollution. This research aims to analyse and visually identify areas and associated level of air pollutant. This study applies the air pollutant index (API) data retrieved from Malaysia Department of Environment (DOE) and Geographic Information System (GIS) via Inverse Distance Weighted (IDW) interpolation method in ArcGIS 10.1 software to enable haze monitoring visualisation. In this research, the study area is narrowed to five major cities in Selangor, Malaysia.", "title": "" }, { "docid": "78ce9ddb8fbfeb801455a76a3a6b0af2", "text": "Deeply embedded domain-specific languages (EDSLs) intrinsically compromise programmer experience for improved program performance. Shallow EDSLs complement them by trading program performance for good programmer experience. We present Yin-Yang, a framework for DSL embedding that uses Scala macros to reliably translate shallow EDSL programs to the corresponding deep EDSL programs. The translation allows program prototyping and development in the user friendly shallow embedding, while the corresponding deep embedding is used where performance is important. The reliability of the translation completely conceals the deep embedding from the user. For the DSL author, Yin-Yang automatically generates the deep DSL embeddings from their shallow counterparts by reusing the core translation. This obviates the need for code duplication and leads to reliability by construction.", "title": "" }, { "docid": "72e6d897e8852fca481d39237cf04e36", "text": "CONTEXT\nPrimary care physicians report high levels of distress, which is linked to burnout, attrition, and poorer quality of care. Programs to reduce burnout before it results in impairment are rare; data on these programs are scarce.\n\n\nOBJECTIVE\nTo determine whether an intensive educational program in mindfulness, communication, and self-awareness is associated with improvement in primary care physicians' well-being, psychological distress, burnout, and capacity for relating to patients.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nBefore-and-after study of 70 primary care physicians in Rochester, New York, in a continuing medical education (CME) course in 2007-2008. 
The course included mindfulness meditation, self-awareness exercises, narratives about meaningful clinical experiences, appreciative interviews, didactic material, and discussion. An 8-week intensive phase (2.5 h/wk, 7-hour retreat) was followed by a 10-month maintenance phase (2.5 h/mo).\n\n\nMAIN OUTCOME MEASURES\nMindfulness (2 subscales), burnout (3 subscales), empathy (3 subscales), psychosocial orientation, personality (5 factors), and mood (6 subscales) measured at baseline and at 2, 12, and 15 months.\n\n\nRESULTS\nOver the course of the program and follow-up, participants demonstrated improvements in mindfulness (raw score, 45.2 to 54.1; raw score change [Delta], 8.9; 95% confidence interval [CI], 7.0 to 10.8); burnout (emotional exhaustion, 26.8 to 20.0; Delta = -6.8; 95% CI, -4.8 to -8.8; depersonalization, 8.4 to 5.9; Delta = -2.5; 95% CI, -1.4 to -3.6; and personal accomplishment, 40.2 to 42.6; Delta = 2.4; 95% CI, 1.2 to 3.6); empathy (116.6 to 121.2; Delta = 4.6; 95% CI, 2.2 to 7.0); physician belief scale (76.7 to 72.6; Delta = -4.1; 95% CI, -1.8 to -6.4); total mood disturbance (33.2 to 16.1; Delta = -17.1; 95% CI, -11 to -23.2), and personality (conscientiousness, 6.5 to 6.8; Delta = 0.3; 95% CI, 0.1 to 5 and emotional stability, 6.1 to 6.6; Delta = 0.5; 95% CI, 0.3 to 0.7). Improvements in mindfulness were correlated with improvements in total mood disturbance (r = -0.39, P < .001), perspective taking subscale of physician empathy (r = 0.31, P < .001), burnout (emotional exhaustion and personal accomplishment subscales, r = -0.32 and 0.33, respectively; P < .001), and personality factors (conscientiousness and emotional stability, r = 0.29 and 0.25, respectively; P < .001).\n\n\nCONCLUSIONS\nParticipation in a mindful communication program was associated with short-term and sustained improvements in well-being and attitudes associated with patient-centered care. Because before-and-after designs limit inferences about intervention effects, these findings warrant randomized trials involving a variety of practicing physicians.", "title": "" }, { "docid": "3d23e7b9d8c0e1a3b4916c069bf6f7d6", "text": "In recent years, depth cameras have become a widely available sensor type that captures depth images at real-time frame rates. Even though recent approaches have shown that 3D pose estimation from monocular 2.5D depth images has become feasible, there are still challenging problems due to strong noise in the depth data and self-occlusions in the motions being captured. In this paper, we present an efficient and robust pose estimation framework for tracking full-body motions from a single depth image stream. Following a data-driven hybrid strategy that combines local optimization with global retrieval techniques, we contribute several technical improvements that lead to speed-ups of an order of magnitude compared to previous approaches. In particular, we introduce a variant of Dijkstra's algorithm to efficiently extract pose features from the depth data and describe a novel late-fusion scheme based on an efficiently computable sparse Hausdorff distance to combine local and global pose estimates. 
Our experiments show that the combination of these techniques facilitates real-time tracking with stable results even for fast and complex motions, making it applicable to a wide range of inter-active scenarios.", "title": "" }, { "docid": "b18d03e17f05cb0a2bb7a852a53df8cc", "text": "Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. Therefore, it is important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of data is available in the domain.", "title": "" } ]
scidocsrr
513ecae3dde0ac74c17e01d0aad02629
Automatic program repair with evolutionary computation
[ { "docid": "c15492fea3db1af99bc8a04bdff71fdc", "text": "The high cost of locating faults in programs has motivated the development of techniques that assist in fault localization by automating part of the process of searching for faults. Empirical studies that compare these techniques have reported the relative effectiveness of four existing techniques on a set of subjects. These studies compare the rankings that the techniques compute for statements in the subject programs and the effectiveness of these rankings in locating the faults. However, it is unknown how these four techniques compare with Tarantula, another existing fault-localization technique, although this technique also provides a way to rank statements in terms of their suspiciousness. Thus, we performed a study to compare the Tarantula technique with the four techniques previously compared. This paper presents our study---it overviews the Tarantula technique along with the four other techniques studied, describes our experiment, and reports and discusses the results. Our studies show that, on the same set of subjects, the Tarantula technique consistently outperforms the other four techniques in terms of effectiveness in fault localization, and is comparable in efficiency to the least expensive of the other four techniques.", "title": "" }, { "docid": "552545ea9de47c26e1626efc4a0f201e", "text": "For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. We propose a principle for the identification of nontriviality. We demonstrated this approach by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double-pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the \"alphabet\" used to describe those systems.", "title": "" }, { "docid": "f8742208fef05beb86d77f1d5b5d25ef", "text": "The latest book on Genetic Programming, Poli, Langdon and McPhee’s (with contributions from John R. Koza) A Field Guide to Genetic Programming represents an exciting landmark with the authors choosing to make their work freely available by publishing using a form of the Creative Commons License[1]. In so doing they have created a must-read resource which is, to use their words, ’aimed at both newcomers and old-timers’. The book is freely available from the authors companion website [2] and Lulu.com [3] in both pdf and html form. For those who desire the more traditional page turning exercise, inexpensive printed copies can be ordered from Lulu.com. The Field Guides companion website also provides a link to the TinyGP code printed over eight pages of Appendix B, and a Discussion Group centered around the book. The book is divided into four parts with fourteen chapters and two appendices. 
Part I introduces the basics of Genetic Programming, Part II overviews more advanced topics, Part III highlights some of the real world applications and discusses issues facing the GP researcher or practitioner, while Part IV contains two appendices, the first introducing some key resources and the second appendix describes the TinyGP code. The pdf and html forms of the book have an especially useful feature, providing links to the articles available on-line at the time of publication, and to bibtex entries of the GP Bibliography. Following an overview of the book in chapter 1, chapter 2 introduces the basic concepts of GP focusing on the tree representation, initialisation, selection, and the search operators. Chapter 3 is centered around the preparatory steps in applying GP to a problem, which is followed by an outline of a sample run of GP on a simple instance of symbolic regression in Chapter 4. Overall these chapters provide a compact and useful introduction to GP. The first of the Advanced GP chapters in Part II looks at alternative strategies for initialisation and the search operators for tree-based GP. An overview of Modular, Grammatical and Developmental GP is provided in Chapter 6. While the chapter title", "title": "" }, { "docid": "2b471e61a6b95221d9ca9c740660a726", "text": "We propose a low-overhead sampling infrastructure for gathering information from the executions experienced by a program's user community. Several example applications illustrate ways to use sampled instrumentation to isolate bugs. Assertion-dense code can be transformed to share the cost of assertions among many users. Lacking assertions, broad guesses can be made about predicates that predict program errors and a process of elimination used to whittle these down to the true bug. Finally, even for non-deterministic bugs such as memory corruption, statistical modeling based on logistic regression allows us to identify program behaviors that are strongly correlated with failure and are therefore likely places to look for the error.", "title": "" } ]
[ { "docid": "11a28e11ba6e7352713b8ee63291cd9c", "text": "This review focuses on discussing the main changes on the upcoming fourth edition of the WHO Classification of Tumors of the Pituitary Gland emphasizing histopathological and molecular genetics aspects of pituitary neuroendocrine (i.e., pituitary adenomas) and some of the non-neuroendocrine tumors involving the pituitary gland. Instead of a formal review, we introduced the highlights of the new WHO classification by answering select questions relevant to practising pathologists. The revised classification of pituitary adenomas, in addition to hormone immunohistochemistry, recognizes the role of other immunohistochemical markers including but not limited to pituitary transcription factors. Recognizing this novel approach, the fourth edition of the WHO classification has abandoned the concept of \"a hormone-producing pituitary adenoma\" and adopted a pituitary adenohypophyseal cell lineage designation of the adenomas with subsequent categorization of histological variants according to hormone content and specific histological and immunohistochemical features. This new classification does not require a routine ultrastructural examination of these tumors. The new definition of the Null cell adenoma requires the demonstration of immunonegativity for pituitary transcription factors and adenohypophyseal hormones Moreover, the term of atypical pituitary adenoma is no longer recommended. In addition to the accurate tumor subtyping, assessment of the tumor proliferative potential by mitotic count and Ki-67 index, and other clinical parameters such as tumor invasion, is strongly recommended in individual cases for consideration of clinically aggressive adenomas. This classification also recognizes some subtypes of pituitary neuroendocrine tumors as \"high-risk pituitary adenomas\" due to the clinical aggressive behavior; these include the sparsely granulated somatotroph adenoma, the lactotroph adenoma in men, the Crooke's cell adenoma, the silent corticotroph adenoma, and the newly introduced plurihormonal Pit-1-positive adenoma (previously known as silent subtype III pituitary adenoma). An additional novel aspect of the new WHO classification was also the definition of the spectrum of thyroid transcription factor-1 expressing pituitary tumors of the posterior lobe as representing a morphological spectrum of a single nosological entity. These tumors include the pituicytoma, the spindle cell oncocytoma, the granular cell tumor of the neurohypophysis, and the sellar ependymoma.", "title": "" }, { "docid": "054b5be56ae07c58b846cf59667734fc", "text": "Optical motion capture systems have become a widely used technology in various fields, such as augmented reality, robotics, movie production, etc. Such systems use a large number of cameras to triangulate the position of optical markers. The marker positions are estimated with high accuracy. However, especially when tracking articulated bodies, a fraction of the markers in each timestep is missing from the reconstruction. In this paper, we propose to use a neural network approach to learn how human motion is temporally and spatially correlated, and reconstruct missing markers positions through this model. We experiment with two different models, one LSTM-based and one time-window-based. Both methods produce state-of-the-art results, while working online, as opposed to most of the alternative methods, which require the complete sequence to be known. 
The implementation is publicly available at https://github.com/Svitozar/NN-for-Missing-Marker-Reconstruction.", "title": "" }, { "docid": "0f25a4cd8a0a94f6666caadb6d4be3d3", "text": "The tradeoff between the switching energy and electro-thermal robustness is explored for 1.2-kV SiC MOSFET, silicon power MOSFET, and 900-V CoolMOS body diodes at different temperatures. The maximum forward current for dynamic avalanche breakdown is decreased with increasing supply voltage and temperature for all technologies. The CoolMOS exhibited the largest latch-up current followed by the SiC MOSFET and silicon power MOSFET; however, when expressed as current density, the SiC MOSFET comes first followed by the CoolMOS and silicon power MOSFET. For the CoolMOS, the alternating p and n pillars of the superjunctions in the drift region suppress BJT latch-up during reverse recovery by minimizing lateral currents and providing low-resistance paths for carriers. Hence, the temperature dependence of the latch-up current for CoolMOS was the lowest. The switching energy of the CoolMOS body diode is the largest because of its superjunction architecture which means the drift region have higher doping, hence more reverse charge. In spite of having a higher thermal resistance, the SiC MOSFET has approximately the same latch-up current while exhibiting the lowest switching energy because of the least reverse charge. The silicon power MOSFET exhibits intermediate performance on switching energy with lowest dynamic latching current.", "title": "" }, { "docid": "c02fb121399e1ed82458fb62179d2560", "text": "Most coreference resolution models determine if two mentions are coreferent using a single function over a set of constraints or features. This approach can lead to incorrect decisions as lower precision features often overwhelm the smaller number of high precision ones. To overcome this problem, we propose a simple coreference architecture based on a sieve that applies tiers of deterministic coreference models one at a time from highest to lowest precision. Each tier builds on the previous tier’s entity cluster output. Further, our model propagates global information by sharing attributes (e.g., gender and number) across mentions in the same cluster. This cautious sieve guarantees that stronger features are given precedence over weaker ones and that each decision is made using all of the information available at the time. The framework is highly modular: new coreference modules can be plugged in without any change to the other modules. In spite of its simplicity, our approach outperforms many state-of-the-art supervised and unsupervised models on several standard corpora. This suggests that sievebased approaches could be applied to other NLP tasks.", "title": "" }, { "docid": "44e7ba0be5275047587e9afd22f1de2a", "text": "Dialogue state tracking plays an important role in statistical dialogue management. Domain-independent rule-based approaches are attractive due to their efficiency, portability and interpretability. However, recent rule-based models are still not quite competitive to statistical tracking approaches. In this paper, a novel framework is proposed to formulate rule-based models in a general way. In the framework, a rule is considered as a special kind of polynomial function satisfying certain linear constraints. Under some particular definitions and assumptions, rule-based models can be seen as feasible solutions of an integer linear programming problem. 
Experiments showed that the proposed approach can not only achieve competitive performance compared to statistical approaches, but also have good generalisation ability. It is one of the only two entries that outperformed all the four baselines in the third Dialog State Tracking Challenge.", "title": "" }, { "docid": "42bc10578e76a0d006ee5d11484b1488", "text": "In this paper, we present a wrapper-based acoustic group feature selection system for the INTERSPEECH 2015 Computational Paralinguistics Challenge (ComParE) 2015, Eating Condition (EC) Sub-challenge. The wrapper-based method has two components: the feature subset evaluation and the feature space search. The feature subset evaluation is performed using Support Vector Machine (SVM) classifiers. The wrapper method combined with complex algorithms such as SVM is computationally intensive. To address this, the feature space search uses Best Incremental Ranked Subset (BIRS), a fast and efficient algorithm. Moreover, we investigate considering the feature space in meaningful groups rather than individually. The acoustic feature space is partitioned into groups with each group representing a Low Level Descriptor (LLD). This partitioning reduces the time complexity of the search algorithm and makes the problem more tractable while attempting to gain insight into the relevant acoustic feature groups. Our wrapper-based system achieves improvement over the challenge baseline on the EC Sub-challenge test set using a variant of BIRS algorithm and LLD groups.", "title": "" }, { "docid": "9f32b1e95e163c96ebccb2596a2edb8d", "text": "This paper is devoted to the control of a cable driven redundant parallel manipulator, which is a challenging problem due the optimal resolution of its inherent redundancy. Additionally to complicated forward kinematics, having a wide workspace makes it difficult to directly measure the pose of the end-effector. The goal of the controller is trajectory tracking in a large and singular free workspace, and to guarantee that the cables are always under tension. A control topology is proposed in this paper which is capable to fulfill the stringent positioning requirements for these type of manipulators. Closed-loop performance of various control topologies are compared by simulation of the closed-loop dynamics of the KNTU CDRPM, while the equations of parallel manipulator dynamics are implicit in structure and only special integration routines can be used for their integration. It is shown that the proposed joint space controller is capable to satisfy the required tracking performance, despite the inherent limitation of task space pose measurement.", "title": "" }, { "docid": "4bf6c59cdd91d60cf6802ae99d84c700", "text": "This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block’s contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems. We have built a prototype of the system and present some preliminary performance results. 
The system uses magnetic disks as the storage technology, resulting in an access time for archival data that is comparable to non-archival data. The feasibility of the write-once model for storage is demonstrated using data from over a decade’s use of two Plan 9 file systems.", "title": "" }, { "docid": "19c5d5563e41fac1fd29833662ad0b6c", "text": "This paper discusses our contribution to the third RTE Challenge – the SALSA RTE system. It builds on an earlier system based on a relatively deep linguistic analysis, which we complement with a shallow component based on word overlap. We evaluate their (combined) performance on various data sets. However, earlier observations that the combination of features improves the overall accuracy could be replicated only partly.", "title": "" }, { "docid": "17cc2f4ae2286d36748b203492d406e6", "text": "In this paper, we consider sentence simplification as a special form of translation with the complex sentence as the source and the simple sentence as the target. We propose a Tree-based Simplification Model (TSM), which, to our knowledge, is the first statistical simplification model covering splitting, dropping, reordering and substitution integrally. We also describe an efficient method to train our model with a large-scale parallel dataset obtained from the Wikipedia and Simple Wikipedia. The evaluation shows that our model achieves better readability scores than a set of baseline systems.", "title": "" }, { "docid": "04644fb390a5d3690295551491f63167", "text": "Massive graphs, such as online social networks and communication networks, have become common today. To efficiently analyze such large graphs, many distributed graph computing systems have been developed. These systems employ the \"think like a vertex\" programming paradigm, where a program proceeds in iterations and at each iteration, vertices exchange messages with each other. However, using Pregel's simple message passing mechanism, some vertices may send/receive significantly more messages than others due to either the high degree of these vertices or the logic of the algorithm used. This forms the communication bottleneck and leads to imbalanced workload among machines in the cluster. In this paper, we propose two effective message reduction techniques: (1)vertex mirroring with message combining, and (2)an additional request-respond API. These techniques not only reduce the total number of messages exchanged through the network, but also bound the number of messages sent/received by any single vertex. We theoretically analyze the effectiveness of our techniques, and implement them on top of our open-source Pregel implementation called Pregel+. Our experiments on various large real graphs demonstrate that our message reduction techniques significantly improve the performance of distributed graph computation.", "title": "" }, { "docid": "ce3cd1edffb0754e55658daaafe18df6", "text": "Fact finders in legal trials often need to evaluate a mass of weak, contradictory and ambiguous evidence. There are two general ways to accomplish this task: by holistically forming a coherent mental representation of the case, or by atomistically assessing the probative value of each item of evidence and integrating the values according to an algorithm. 
Parallel constraint satisfaction (PCS) models of cognitive coherence posit that a coherent mental representation is created by discounting contradicting evidence, inflating supporting evidence and interpreting ambivalent evidence in a way coherent with the emerging decision. This leads to inflated support for whichever hypothesis the fact finder accepts as true. Using a Bayesian network to model the direct dependencies between the evidence, the intermediate hypotheses and the main hypothesis, parameterised with (conditional) subjective probabilities elicited from the subjects, I demonstrate experimentally how an atomistic evaluation of evidence leads to a convergence of the computed posterior degrees of belief in the guilt of the defendant of those who convict and those who acquit. The atomistic evaluation preserves the inherent uncertainty that largely disappears in a holistic evaluation. Since the fact finders’ posterior degree of belief in the guilt of the defendant is the relevant standard of proof in many legal systems, this result implies that using an atomistic evaluation of evidence, the threshold level of posterior belief in guilt required for a conviction may often not be reached.", "title": "" }, { "docid": "b49698c3df4e432285448103cda7f2dd", "text": "Acoustic emission (AE)-signal-based techniques have recently been attracting researchers' attention to rotational machine health monitoring and diagnostics due to the advantages of the AE signals over the extensively used vibration signals. Unlike vibration-based methods, the AE-based techniques are in their infant stage of development. From the perspective of machine health monitoring and fault detection, developing an AE-based methodology is important. In this paper, a methodology for rotational machine health monitoring and fault detection using empirical mode decomposition (EMD)-based AE feature quantification is presented. The methodology incorporates a threshold-based denoising technique into EMD to increase the signal-to-noise ratio of the AE bursts. Multiple features are extracted from the denoised signals and then fused into a single compressed AE feature. The compressed AE features are then used for fault detection based on a statistical method. A gear fault detection case study is conducted on a notional split-torque gearbox using AE signals to demonstrate the effectiveness of the methodology.", "title": "" }, { "docid": "7e08ddffc3a04c6dac886e14b7e93907", "text": "The paper introduces a penalized matrix estimation procedure aiming at solutions which are sparse and low-rank at the same time. Such structures arise in the context of social networks or protein interactions where underlying graphs have adjacency matrices which are block-diagonal in the appropriate basis. We introduce a convex mixed penalty which involves ℓ1-norm and trace norm simultaneously. We obtain an oracle inequality which indicates how the two effects interact according to the nature of the target matrix. We bound generalization error in the link prediction problem. 
We also develop proximal descent strategies to solve the optimization problem efficiently and evaluate performance on synthetic and real data sets.", "title": "" }, { "docid": "43184dfe77050618402900bc309203d5", "text": "A prototype of Air Gap RLSA has been designed and simulated using hybrid air gap and FR4 dielectric material. A 28% wide bandwidth has been recorded through this approach. A 12.35dBi directive gain was also recorded from the simulation. The 13.3 degree beamwidth of the radiation pattern is sufficient for high directional application. Since the proposed application was for Point to Point Link, this study concluded that the Air Gap RLSA is a new candidate for this application.", "title": "" }, { "docid": "2488c17b39dd3904e2f17448a8519817", "text": "Young healthy participants spontaneously use different strategies in a virtual radial maze, an adaptation of a task typically used with rodents. Functional magnetic resonance imaging confirmed previously that people who used spatial memory strategies showed increased activity in the hippocampus, whereas response strategies were associated with activity in the caudate nucleus. Here, voxel based morphometry was used to identify brain regions covarying with the navigational strategies used by individuals. Results showed that spatial learners had significantly more gray matter in the hippocampus and less gray matter in the caudate nucleus compared with response learners. Furthermore, the gray matter in the hippocampus was negatively correlated to the gray matter in the caudate nucleus, suggesting a competitive interaction between these two brain areas. In a second analysis, the gray matter of regions known to be anatomically connected to the hippocampus, such as the amygdala, parahippocampal, perirhinal, entorhinal and orbitofrontal cortices were shown to covary with gray matter in the hippocampus. Because low gray matter in the hippocampus is a risk factor for Alzheimer's disease, these results have important implications for intervention programs that aim at functional recovery in these brain areas. In addition, these data suggest that spatial strategies may provide protective effects against degeneration of the hippocampus that occurs with normal aging.", "title": "" }, { "docid": "570eca9884edb7e4a03ed95763be20aa", "text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.", "title": "" }, { "docid": "b23d7f18a7abcaa6d3984ef7ca0609e0", "text": "The FFT algorithm is the popular software design for spectrum analyzers, but it doesn't work well for parallel hardware systems due to complex calculation and huge memory requirements. Observing that the key components of a spectrum analyzer are the intensities for the respective frequencies, we propose a Goertzel algorithm to directly extract the intensity factors for the respective frequency components in the input signal. 
The Goertzel algorithm dispenses with the memory for z^-1 and z^-2 processing, and only needs two multipliers and three adders for real number calculation. In this paper, we present the spectrum extraction algorithm and implement a spectrum extractor with high speed and low area consumption in an FPGA (field programmable gate array) chip. It proves the feasibility of implementing a handheld concurrent multi-channel real-time spectrum analysis IP into a low gate count and low power consumption CPLD (complex programmable logic device) chip.", "title": "" }, { "docid": "1a4d07d9a48668f7fa3bcf301c25f7f2", "text": "A novel low-loss planar dielectric waveguide is proposed. It is based on a high-permittivity dielectric slab parallel to a metal ground. The guiding channel is limited at the sides by a number of air holes which lower the effective permittivity. A mode with the electric field primarily parallel to the ground plane is used, similar to the E11x mode of an insulated image guide. A rather thick gap layer between the ground and the high-permittivity slab makes this mode show the highest effective permittivity. The paper discusses the mode dispersion behaviour and presents measured characteristics of a power divider circuit operating at a frequency of about 8 GHz. Low leakage of about 14% is observed at the discontinuities forming the power divider. Using a compact dipole antenna structure, excitation efficiency of more than 90% is obtained.", "title": "" }, { "docid": "3fc2ec702c66501de0eea9f5f0cac511", "text": "Emotional eating is a change in consumption of food in response to emotional stimuli, and has been linked to negative physical and psychological outcomes. Observers have noticed over the years a correlation between emotions, mood and food choice, in ways that vary from strong and overt to subtle and subconscious. Specific moods such as anger, fear, sadness and joy have been found to affect eating responses, and eating itself can play a role in influencing one's emotions. With such an obvious link between emotions and eating behavior, the research over the years continues to delve further into the phenomenon. This includes investigating individuals of different weight categories, as well as children, adolescents and parenting styles.", "title": "" } ]
scidocsrr
90de74b88910549d837e827ce6061567
ALL OUR SONS: THE DEVELOPMENTAL NEUROBIOLOGY AND NEUROENDOCRINOLOGY OF BOYS AT RISK.
[ { "docid": "7340866fa3965558e1571bcc5294b896", "text": "The human stress response has been characterized, both physiologically and behaviorally, as \"fight-or-flight.\" Although fight-or-flight may characterize the primary physiological responses to stress for both males and females, we propose that, behaviorally, females' responses are more marked by a pattern of \"tend-and-befriend.\" Tending involves nurturant activities designed to protect the self and offspring that promote safety and reduce distress; befriending is the creation and maintenance of social networks that may aid in this process. The biobehavioral mechanism that underlies the tend-and-befriend pattern appears to draw on the attachment-caregiving system, and neuroendocrine evidence from animal and human studies suggests that oxytocin, in conjunction with female reproductive hormones and endogenous opioid peptide mechanisms, may be at its core. This previously unexplored stress regulatory system has manifold implications for the study of stress.", "title": "" } ]
[ { "docid": "e189f36ba0fcb91d0608d0651c60516e", "text": "In this paper, we describe the progressive design of the gesture recognition module of an automated food journaling system -- Annapurna. Annapurna runs on a smartwatch and utilises data from the inertial sensors to first identify eating gestures, and then captures food images which are presented to the user in the form of a food journal. We detail the lessons we learnt from multiple in-the-wild studies, and show how eating recognizer is refined to tackle challenges such as (i) high gestural diversity, and (ii) non-eating activities with similar gestural signatures. Annapurna is finally robust (identifying eating across a wide diversity in food content, eating styles and environments) and accurate (false-positive and false-negative rates of 6.5% and 3.3% respectively)", "title": "" }, { "docid": "b99c42f412408610e1bfd414f4ea6b9f", "text": "ADPfusion combines the usual high-level, terse notation of Haskell with an underlying fusion framework. The result is a parsing library that allows the user to write algorithms in a style very close to the notation used in formal languages and reap the performance benefits of automatic program fusion. Recent developments in natural language processing and computational biology have lead to a number of works that implement algorithms that process more than one input at the same time. We provide an extension of ADPfusion that works on extended index spaces and multiple input sequences, thereby increasing the number of algorithms that are amenable to implementation in our framework. This allows us to implement even complex algorithms with a minimum of overhead, while enjoying all the guarantees that algebraic dynamic programming provides to the user.", "title": "" }, { "docid": "d81282c41c609b980442f481d0a7fa3d", "text": "Some of the recent applications in the field of the power supplies use multiphase converters to achieve fast dynamic response, smaller input/output filters, or better packaging. Typically, these converters have several paralleled power stages, with a current loop in each phase and a single voltage loop. The presence of the current loops avoids current imbalance among phases. The purpose of this paper is to demonstrate that, in CCM, with a proper design, there is an intrinsic mechanism of self-balance that reduces the current imbalance. Thus, in the buck converter, if natural zero-voltage switching (ZVS) is achieved in both transitions, the instantaneous inductor current compensates partially the different DC currents through the phases. The need for using n current loops will be finally determined by the application but not by the converter itself. Using the buck converter as a base, a multiphase converter has been developed. Several tests have been carried out in the laboratory and the results show clearly that, when the conditions are met, the phase currents are very well balanced even during transient conditions.", "title": "" }, { "docid": "f752d156cc1c606e5b06cf99a90b2a49", "text": "We study the relationship between Facebook popularity (number of contacts) and personality traits on a large number of subjects. We test to which extent two prevalent viewpoints hold. That is, popular users (those with many social contacts) are the ones whose personality traits either predict many offline (real world) friends or predict propensity to maintain superficial relationships. 
We find that the predictor for number of friends in the real world (Extraversion) is also a predictor for number of Facebook contacts. We then test whether people who have many social contacts on Facebook are the ones who are able to adapt themselves to new forms of communication, present themselves in likable ways, and have propensity to maintain superficial relationships. We show that there is no statistical evidence to support such a conjecture.", "title": "" }, { "docid": "1158e01718dd8eed415dd5b3513f4e30", "text": "Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup to disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma. Thus, the accurate and automatic segmentation of optic disc (OD) and optic cup (OC) from fundus images is a fundamental task. Most existing methods segment them separately, and rely on hand-crafted visual feature from fundus images. In this paper, we propose a deep learning architecture, named M-Net, which solves the OD and OC segmentation jointly in a one-stage multi-label system. The proposed M-Net mainly consists of multi-scale input layer, U-shape convolutional network, side-output layer, and multi-label loss function. The multi-scale input layer constructs an image pyramid to achieve multiple level receptive field sizes. The U-shape convolutional network is employed as the main body network structure to learn the rich hierarchical representation, while the side-output layer acts as an early classifier that produces a companion local prediction map for different scale layers. Finally, a multi-label loss function is proposed to generate the final segmentation map. For improving the segmentation performance further, we also introduce the polar transformation, which provides the representation of the original image in the polar coordinate system. The experiments show that our M-Net system achieves state-of-the-art OD and OC segmentation result on ORIGA data set. Simultaneously, the proposed method also obtains the satisfactory glaucoma screening performances with calculated CDR value on both ORIGA and SCES datasets.", "title": "" }, { "docid": "993590032de592f4bb69d9c906ff76a8", "text": "The evolution toward 5G mobile networks will be characterized by an increasing number of wireless devices, increasing device and service complexity, and the requirement to access mobile services ubiquitously. Two key enablers will allow the realization of the vision of 5G: very dense deployments and centralized processing. This article discusses the challenges and requirements in the design of 5G mobile networks based on these two key enablers. It discusses how cloud technologies and flexible functionality assignment in radio access networks enable network densification and centralized operation of the radio access network over heterogeneous backhaul networks. The article describes the fundamental concepts, shows how to evolve the 3GPP LTE a", "title": "" }, { "docid": "4a69a0c5c225d9fbb40373aebaeb99be", "text": "The hyperlink structure of Wikipedia constitutes a key resource for many Natural Language Processing tasks and applications, as it provides several million semantic annotations of entities in context. Yet only a small fraction of mentions across the entire Wikipedia corpus is linked. 
In this paper we present the automatic construction and evaluation of a Semantically Enriched Wikipedia (SEW) in which the overall number of linked mentions has been more than tripled solely by exploiting the structure of Wikipedia itself and the wide-coverage sense inventory of BabelNet. As a result we obtain a sense-annotated corpus with more than 200 million annotations of over 4 million different concepts and named entities. We then show that our corpus leads to competitive results on multiple tasks, such as Entity Linking and Word Similarity.", "title": "" }, { "docid": "90e5eaa383c00a0551a5161f07c683e7", "text": "The importance of the Translation Lookaside Buffer (TLB) on system performance is well known. There have been numerous prior efforts addressing TLB design issues for cutting down access times and lowering miss rates. However, it was only recently that the first exploration [26] on prefetching TLB entries ahead of their need was undertaken and a mechanism called Recency Prefetching was proposed. There is a large body of literature on prefetching for caches, and it is not clear how they can be adapted (or if the issues are different) for TLBs, how well suited they are for TLB prefetching, and how they compare with the recency prefetching mechanism. This paper presents the first detailed comparison of different prefetching mechanisms (previously proposed for caches) - arbitrary stride prefetching, and Markov prefetching - for TLB entries, and evaluates their pros and cons. In addition, this paper proposes a novel prefetching mechanism, called Distance Prefetching, that attempts to capture patterns in the reference behavior in a smaller space than earlier proposals. Using detailed simulations of a wide variety of applications (56 in all) from different benchmark suites and all the SPEC CPU2000 applications, this paper demonstrates the benefits of distance prefetching.", "title": "" }, { "docid": "022a2f42669fdb337cfb4646fed9eb09", "text": "A mobile agent with the task of classifying its sensor pattern has to cope with ambiguous information. Active recognition of three-dimensional objects involves the observer in a search for discriminative evidence, e.g., by changing its viewpoint. This paper defines the recognition process as a sequential decision problem with the objective of disambiguating initial object hypotheses. Reinforcement learning then provides an efficient method to autonomously develop near-optimal decision strategies in terms of sensorimotor mappings. The proposed system learns object models from visual appearance and uses a radial basis function (RBF) network for a probabilistic interpretation of the two-dimensional views. The information gain in fusing successive object hypotheses provides a utility measure to reinforce actions leading to discriminative viewpoints. The system is verified in experiments with 16 objects and two degrees of freedom in sensor motion. Crucial improvements in performance are gained using the learned rather than random camera placements.", "title": "" }, { "docid": "1d8f11b742dd810f228b80747ec2a0bd", "text": "The particle swarm optimization algorithm was shown to converge rapidly during the initial stages of a global search, but around the global optimum, the search process becomes very slow. On the contrary, the gradient descent method can achieve a faster convergence speed around the global optimum, and at the same time, the convergence accuracy can be higher. 
So in this paper, a hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the back-propagation (BP) algorithm, also referred to as the PSO–BP algorithm, is proposed to train the weights of a feedforward neural network (FNN). The hybrid algorithm can make use of not only the strong global searching ability of the PSOA, but also the strong local searching ability of the BP algorithm. In this paper, a novel selection strategy for the inertial weight is introduced to the PSO algorithm. In the proposed PSO–BP algorithm, we adopt a heuristic way to give a transition from particle swarm search to gradient descent search. In this paper, we also give three kinds of encoding strategies for particles, and describe the different problem areas in which each encoding strategy is used. The experimental results show that the proposed hybrid PSO–BP algorithm is better than the adaptive particle swarm optimization algorithm (APSOA) and the BP algorithm in convergence speed and convergence accuracy.", "title": "" }, { "docid": "a75e29521b04d5e09228918e4ed560a6", "text": "This study assessed motives for social network site (SNS) use, group belonging, collective self-esteem, and gender effects among older adolescents. Communication with peer group members was the most important motivation for SNS use. Participants high in positive collective self-esteem were strongly motivated to communicate with peer group via SNS. Females were more likely to report high positive collective self-esteem, greater overall use, and SNS use to communicate with peers. Females also posted higher means for group-in-self, passing time, and entertainment. Negative collective self-esteem correlated with social compensation, suggesting that those who felt negatively about their social group used SNS as an alternative to communicating with other group members. Males were more likely than females to report negative collective self-esteem and SNS use for social compensation and social identity gratifications.", "title": "" }, { "docid": "88c592bdd7bb9c9348545734a9508b7b", "text": "During the past few years, enterprises have been increasingly aggressive in moving mission-critical and performance-sensitive applications to the cloud, while at the same time many new mobile, social, and analytics applications are directly developed and operated on cloud computing platforms. These two movements are encouraging the shift of the value proposition of cloud computing from cost reduction to simultaneous agility and optimization. These requirements (agility and optimization) are driving the recent disruptive trend of software defined computing, for which the entire computing infrastructure - compute, storage and network - is becoming software defined and dynamically programmable. The key elements within software defined environments include capability-based resource abstraction, goal-based and policy-based workload definition, and outcome-based continuous mapping of the workload to the available resources. Furthermore, software defined environments provide the tooling and capabilities to compose workloads from existing components that are then continuously and autonomously mapped onto the underlying programmable infrastructure. 
These elements enable software defined environments to achieve agility, efficiency, and continuous outcome-optimized provisioning and management, plus continuous assurance for resiliency and security. This paper provides an overview and introduction to the key elements and challenges of software defined environments.", "title": "" }, { "docid": "6c7284ca77809210601c213ee8a685bb", "text": "Patients with non-small cell lung cancer (NSCLC) require careful staging at the time of diagnosis to determine prognosis and guide treatment recommendations. The seventh edition of the TNM Classification of Malignant Tumors is scheduled to be published in 2009 and the International Association for the Study of Lung Cancer (IASLC) created the Lung Cancer Staging Project (LCSP) to guide revisions to the current lung cancer staging system. These recommendations will be submitted to the American Joint Committee on Cancer (AJCC) and to the Union Internationale Contre le Cancer (UICC) for consideration in the upcoming edition of the staging manual. Data from over 100,000 patients with lung cancer were submitted for analysis and several modifications were suggested for the T descriptors and the M descriptors although the current N descriptors remain unchanged. These recommendations will further define homogeneous patient subsets with similar survival rates. More importantly, these revisions will help guide clinicians in making optimal, stage-specific, treatment recommendations.", "title": "" }, { "docid": "86dfbb8dc8682f975ccb3cfce75eac3a", "text": "BACKGROUND\nAlthough many precautions have been introduced into early burn management, post burn contractures are still significant problems in burn patients. In this study, a form of Z-plasty in combination with relaxing incision was used for the correction of contractures.\n\n\nMETHODS\nPreoperatively, a Z-advancement rotation flap combined with a relaxing incision was drawn on the contracture line. Relaxing incision created a skin defect like a rhomboid. Afterwards, both limbs of the Z flap were incised. After preparation of the flaps, advancement and rotation were made in order to cover the rhomboid defect. Besides subcutaneous tissue, skin edges were closely approximated with sutures.\n\n\nRESULTS\nThis study included sixteen patients treated successfully with this flap. It was used without encountering any major complications such as infection, hematoma, flap loss, suture dehiscence or flap necrosis. All rotated and advanced flaps healed uneventfully. In all but one patient, effective contracture release was achieved by means of using one or two Z-plasty. In one patient suffering severe left upper extremity contracture, a little residual contracture remained due to inadequate release.\n\n\nCONCLUSION\nWhen dealing with this type of Z-plasty for mild contractures, it offers a new option for the correction of post burn contractures, which is safe, simple and effective.", "title": "" }, { "docid": "8760b523ca90dccf7a9a197622bda043", "text": "The increasing need for better performance, protection, and reliability in shipboard power distribution systems, and the increasing availability of power semiconductors is generating the potential for solid state circuit breakers to replace traditional electromechanical circuit breakers. This paper reviews various solid state circuit breaker topologies that are suitable for low and medium voltage shipboard system protection. 
Depending on the application solid state circuit breakers can have different main circuit topologies, fault detection methods, commutation methods of power semiconductor devices, and steady state operation after tripping. This paper provides recommendations on the solid state circuit breaker topologies that provides the best performance-cost tradeoff based on the application.", "title": "" }, { "docid": "54fc5bc85ef8022d099fff14ab1b7ce0", "text": "Automatic inspection of Mura defects is a challenging task in thin-film transistor liquid crystal display (TFT-LCD) defect detection, which is critical for LCD manufacturers to guarantee high standard quality control. In this paper, we propose a set of automatic procedures to detect mura defects by using image processing and computer vision techniques. Singular Value Decomposition (SVD) and Discrete Cosine Transformation(DCT) techniques are employed to conduct image reconstruction, based on which we are able to obtain the differential image of LCD Cells. In order to detect different types of mura defects accurately, we then design a method that employs different detection modules adaptively, which can overcome the disadvantage of simply using a single threshold value. Finally, we provide the experimental results to validate the effectiveness of the proposed method in mura detection.", "title": "" }, { "docid": "ddc6a5e9f684fd13aec56dc48969abc2", "text": "During debugging, a developer must repeatedly and manually reproduce faulty behavior in order to inspect different facets of the program's execution. Existing tools for reproducing such behaviors prevent the use of debugging aids such as breakpoints and logging, and are not designed for interactive, random-access exploration of recorded behavior. This paper presents Timelapse, a tool for quickly recording, reproducing, and debugging interactive behaviors in web applications. Developers can use Timelapse to browse, visualize, and seek within recorded program executions while simultaneously using familiar debugging tools such as breakpoints and logging. Testers and end-users can use Timelapse to demonstrate failures in situ and share recorded behaviors with developers, improving bug report quality by obviating the need for detailed reproduction steps. Timelapse is built on Dolos, a novel record/replay infrastructure that ensures deterministic execution by capturing and reusing program inputs both from the user and from external sources such as the network. Dolos introduces negligible overhead and does not interfere with breakpoints and logging. In a small user evaluation, participants used Timelapse to accelerate existing reproduction activities, but were not significantly faster or more successful in completing the larger tasks at hand. Together, the Dolos infrastructure and Timelapse developer tool support systematic bug reporting and debugging practices.", "title": "" }, { "docid": "6ff034e2ff0d54f7e73d23207789898d", "text": "This letter presents two high-gain, multidirector Yagi-Uda antennas for use within the 24.5-GHz ISM band, realized through a multilayer, purely additive inkjet printing fabrication process on a flexible substrate. Multilayer material deposition is used to realize these 3-D antenna structures, including a fully printed 120- μm-thick dielectric substrate for microstrip-to-slotline feeding conversion. The antennas are fabricated, measured, and compared to simulated results showing good agreement and highlighting the reliable predictability of the printing process. 
An endfire realized gain of 8 dBi is achieved within the 24.5-GHz ISM band, presenting the highest-gain inkjet-printed antenna at this end of the millimeter-wave regime. The results of this work further demonstrate the feasibility of utilizing inkjet printing for low-cost, vertically integrated antenna structures for on-chip and on-package integration throughout the emerging field of high-frequency wireless electronics.", "title": "" }, { "docid": "dade322206eeab84bfdae7d45fe043ca", "text": "Lung cancer has the highest death rate among all cancers in the USA. In this work we focus on improving the ability of computer-aided diagnosis (CAD) systems to predict the malignancy of nodules from cropped CT images of lung nodules. We evaluate the effectiveness of very deep convolutional neural networks at the task of expert-level lung nodule malignancy classification. Using the state-of-the-art ResNet architecture as our basis, we explore the effect of curriculum learning, transfer learning, and varying network depth on the accuracy of malignancy classification. Due to a lack of public datasets with standardized problem definitions and train/test splits, studies in this area tend to not compare directly against other existing work. This makes it hard to know the relative improvement in the new solution. In contrast, we directly compare our system against two state-of-the-art deep learning systems for nodule classification on the LIDC/IDRI dataset using the same experimental setup and data set. The results show that our system achieves the highest performance in terms of all metrics measured including sensitivity, specificity, precision, AUROC, and accuracy. The proposed method of combining deep residual learning, curriculum learning, and transfer learning translates to high nodule classification accuracy. This reveals a promising new direction for effective pulmonary nodule CAD systems that mirrors the success of recent deep learning advances in other image-based application domains.", "title": "" }, { "docid": "7e949c7cd50d1e381f58fe26f9736124", "text": "Mental illness is one of the most undertreated health problems worldwide. Previous work has shown that there are remarkably strong cues to mental illness in short samples of the voice. These cues are evident in severe forms of illness, but it would be most valuable to make earlier diagnoses from a richer feature set. Furthermore there is an abstraction gap between these voice cues and the diagnostic cues used by practitioners. We believe that by closing this gap, we can build more effective early diagnostic systems for mental illness. In order to develop improved monitoring, we need to translate the high-level cues used by practitioners into features that can be analyzed using signal processing and machine learning techniques. In this paper we describe the elicitation process that we used to tap the practitioners' knowledge. We borrow from both AI (expert systems) and HCI (contextual inquiry) fields in order to perform this knowledge transfer. The paper highlights an unusual and promising role for HCI - the analysis of interaction data for health diagnosis.", "title": "" } ]
scidocsrr
452a8dd80aff6209e6f6b9783a8a8340
PReMVOS : Proposal-generation , Refinement and Merging for the DAVIS Challenge on Video Object Segmentation 2018
[ { "docid": "254b82dc2ee6f0d753803c4a90dcd8b7", "text": "Most previous bounding-box-based segmentation methods assume the bounding box tightly covers the object of interest. However it is common that a rectangle input could be too large or too small. In this paper, we propose a novel segmentation approach that uses a rectangle as a soft constraint by transforming it into an Euclidean distance map. A convolutional encoder-decoder network is trained end-to-end by concatenating images with these distance maps as inputs and predicting the object masks as outputs. Our approach gets accurate segmentation results given sloppy rectangles while being general for both interactive segmentation and instance segmentation. We show our network extends to curve-based input without retraining. We further apply our network to instance-level semantic segmentation and resolve any overlap using a conditional random field. Experiments on benchmark datasets demonstrate the effectiveness of the proposed approaches.", "title": "" }, { "docid": "bb404c0e94cde80436d2c5bd331c7816", "text": "Conventional video segmentation methods often rely on temporal continuity to propagate masks. Such an assumption suffers from issues like drifting and inability to handle large displacement. To overcome these issues, we formulate an effective mechanism to prevent the target from being lost via adaptive object re-identification. Specifically, our Video Object Segmentation with Re-identification (VSReID) model includes a mask propagation module and a ReID module. The former module produces an initial probability map by flow warping while the latter module retrieves missing instances by adaptive matching. With these two modules iteratively applied, our VS-ReID records a global mean (Region Jaccard and Boundary F measure) of 0.699, the best performance in 2017 DAVIS Challenge.", "title": "" }, { "docid": "33de1981b2d9a0aa1955602006d09db9", "text": "The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50%. It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.", "title": "" } ]
[ { "docid": "597e00855111c6ccb891c96e28f23585", "text": "Global food demand is increasing rapidly, as are the environmental impacts of agricultural expansion. Here, we project global demand for crop production in 2050 and evaluate the environmental impacts of alternative ways that this demand might be met. We find that per capita demand for crops, when measured as caloric or protein content of all crops combined, has been a similarly increasing function of per capita real income since 1960. This relationship forecasts a 100-110% increase in global crop demand from 2005 to 2050. Quantitative assessments show that the environmental impacts of meeting this demand depend on how global agriculture expands. If current trends of greater agricultural intensification in richer nations and greater land clearing (extensification) in poorer nations were to continue, ~1 billion ha of land would be cleared globally by 2050, with CO(2)-C equivalent greenhouse gas emissions reaching ~3 Gt y(-1) and N use ~250 Mt y(-1) by then. In contrast, if 2050 crop demand was met by moderate intensification focused on existing croplands of underyielding nations, adaptation and transfer of high-yielding technologies to these croplands, and global technological improvements, our analyses forecast land clearing of only ~0.2 billion ha, greenhouse gas emissions of ~1 Gt y(-1), and global N use of ~225 Mt y(-1). Efficient management practices could substantially lower nitrogen use. Attainment of high yields on existing croplands of underyielding nations is of great importance if global crop demand is to be met with minimal environmental impacts.", "title": "" }, { "docid": "a733ec1769f40b0d7580409ef2705682", "text": "BACKGROUND\nBiomedical data, e.g. from knowledge bases and ontologies, is increasingly made available following open linked data principles, at best as RDF triple data. This is a necessary step towards unified access to biological data sets, but this still requires solutions to query multiple endpoints for their heterogeneous data to eventually retrieve all the meaningful information. Suggested solutions are based on query federation approaches, which require the submission of SPARQL queries to endpoints. Due to the size and complexity of available data, these solutions have to be optimised for efficient retrieval times and for users in life sciences research. Last but not least, over time, the reliability of data resources in terms of access and quality have to be monitored. Our solution (BioFed) federates data over 130 SPARQL endpoints in life sciences and tailors query submission according to the provenance information. BioFed has been evaluated against the state of the art solution FedX and forms an important benchmark for the life science domain.\n\n\nMETHODS\nThe efficient cataloguing approach of the federated query processing system 'BioFed', the triple pattern wise source selection and the semantic source normalisation forms the core to our solution. It gathers and integrates data from newly identified public endpoints for federated access. Basic provenance information is linked to the retrieved data. Last but not least, BioFed makes use of the latest SPARQL standard (i.e., 1.1) to leverage the full benefits for query federation. 
The evaluation is based on 10 simple and 10 complex queries, which address data in 10 major and very popular data sources (e.g., Dugbank, Sider).\n\n\nRESULTS\nBioFed is a solution for a single-point-of-access for a large number of SPARQL endpoints providing life science data. It facilitates efficient query generation for data access and provides basic provenance information in combination with the retrieved data. BioFed fully supports SPARQL 1.1 and gives access to the endpoint's availability based on the EndpointData graph. Our evaluation of BioFed against FedX is based on 20 heterogeneous federated SPARQL queries and shows competitive execution performance in comparison to FedX, which can be attributed to the provision of provenance information for the source selection.\n\n\nCONCLUSION\nDeveloping and testing federated query engines for life sciences data is still a challenging task. According to our findings, it is advantageous to optimise the source selection. The cataloguing of SPARQL endpoints, including type and property indexing, leads to efficient querying of data resources over the Web of Data. This could even be further improved through the use of ontologies, e.g., for abstract normalisation of query terms.", "title": "" }, { "docid": "e34ad4339934d9b9b4019fad37f8dd4e", "text": "This paper presents a technique for estimating the threedimensional velocity vector field that describes the motion of each visible scene point (scene flow). The technique presented uses two consecutive image pairs from a stereo sequence. The main contribution is to decouple the position and velocity estimation steps, and to estimate dense velocities using a variational approach. We enforce the scene flow to yield consistent displacement vectors in the left and right images. The decoupling strategy has two main advantages: Firstly, we are independent in choosing a disparity estimation technique, which can yield either sparse or dense correspondences, and secondly, we can achieve frame rates of 5 fps on standard consumer hardware. The approach provides dense velocity estimates with accurate results at distances up to 50 meters.", "title": "" }, { "docid": "c112d026a15e2ace201b12fa8ac98fe6", "text": "Disturbance of mineral status in patients with chronic renal failure (CRF) is one of many complications of this disease. Trace elements analysis in hair is sometime used by clinicians for a diagnosis of mineral status. In the present study concentration of magnesium and other trace elements was determined in serum, erythrocytes, and hair of patients with CRF undergoing hemodialysis (n = 31) and with impaired renal function but non-dialyzed (n = 15). Measurements of mineral content were performed by the atomic absorption spectrometry method (AAS). In serum of hemodialyzed patients as well as in erythrocytes and hair we found significantly increased levels of almost all tested elements, especially for Mg, Al, and Cr, compared to the control group. No significant differences were observed between these groups only in the Cd content in the examined samples. However, a significant correlation between its concentration in serum and erythrocytes was only found in the case of this element. Hair analysis reflected well the changes of mineral distribution in patients with CRF and may be used to diagnose these anomalies, in particular, with regard to Ca, Mg, Fe, and Cr. However, a strong variability of the concentration for these elements was found. 
In conclusion, our results confirm that renal failure as well as dialysis provoke imbalances of elemental status in physiological fluids and tissues, which should be monitored.", "title": "" }, { "docid": "40e1ead45e4b5328c76ec991a1e8a81b", "text": "This paper presents a game-theoretic and learning approach to security risk management based on a model that captures the diffusion of risk in an organization with multiple technical and business processes. Of particular interest is the way the interdependencies between processes affect the evolution of the organization's risk profile as time progresses, which is first developed as a probabilistic risk framework and then studied within a discrete Markov model. Using zero-sum dynamic Markov games, we analyze the interaction between a malicious adversary whose actions increases the risk level of the organization and a defender agent, e.g. security and risk management division of the organization, which aims to mitigate risks. We derive min-max (saddle point) solutions of this game to obtain the optimal risk management strategies for the organization to achieve a certain level of performance. This methodology also applies to worst-case scenario analysis where the adversary can be interpreted as a nature player in the game. In practice, the parameters of the Markov game may not be known due to the costly nature of collecting and processing information about the adversary as well an organization with many components itself. We apply ideas from Q-learning to analyze the behavior of the agents when little information is known about the environment in which the attacker and defender interact. The framework developed and results obtained are illustrated with a small example scenario and numerical analysis.", "title": "" }, { "docid": "6875d41e412d71f45d6d4ea43697ed80", "text": "Context Emergency department visits by older adults are often due to adverse drug events, but the proportion of these visits that are the result of drugs designated as inappropriate for use in this population is unknown. Contribution Analyses of a national surveillance study of adverse drug events and a national outpatient survey estimate that Americans age 65 years or older have more than 175000 emergency department visits for adverse drug events yearly. Three commonly prescribed drugs accounted for more than one third of visits: warfarin, insulin, and digoxin. Caution The study was limited to adverse events in the emergency department. Implication Strategies to decrease adverse drug events among older adults should focus on warfarin, insulin, and digoxin. The Editors Adverse drug events cause clinically significant morbidity and mortality and are associated with large economic costs (15). They are common in older adults, regardless of whether they live in the community, reside in long-term care facilities, or are hospitalized (59). Most physicians recognize that prescribing medications to older patients requires special considerations, but nongeriatricians are typically unfamiliar with the most commonly used measure of medication appropriateness for older patients: the Beers criteria (1012). The Beers criteria are a consensus-based list of medications identified as potentially inappropriate for use in older adults. The criteria were introduced in 1991 to help researchers evaluate prescription quality in nursing homes (10). 
The Beers criteria were updated in 1997 and 2003 to apply to all persons age 65 years or older, to include new medications judged to be ineffective or to pose unnecessarily high risk, and to rate the severity of adverse outcomes (11, 12). Prescription rates of Beers criteria medications have become a widely used measure of quality of care for older adults in research studies in the United States and elsewhere (1326). The application of the Beers criteria as a measure of health care quality and safety has expanded beyond research studies. The Centers for Medicare & Medicaid Services incorporated the Beers criteria into federal safety regulations for long-term care facilities in 1999 (27). The prescription rate of potentially inappropriate medications is one of the few medication safety measures in the National Healthcare Quality Report (28) and has been introduced as a Health Plan and Employer Data and Information Set quality measure for managed care plans (29). Despite widespread adoption of the Beers criteria to measure prescription quality and safety, as well as proposals to apply these measures to additional settings, such as medication therapy management services under Medicare Part D (30), population-based data on the effect of adverse events from potentially inappropriate medications are sparse and do not compare the risks for adverse events from Beers criteria medications against those from other medications (31, 32). Adverse drug events that lead to emergency department visits are clinically significant adverse events (5) and result in increased health care resource utilization and expense (6). We used nationally representative public health surveillance data to estimate the number of emergency department visits for adverse drug events involving Beers criteria medications and compared the number with that for adverse drug events involving other medications. We also estimated the frequency of outpatient prescription of Beers criteria medications and other medications to calculate and compare the risks for emergency department visits for adverse drug events per outpatient prescription visit. Methods Data Sources National estimates of emergency department visits for adverse drug events were based on data from the 58 nonpediatric hospitals participating in the National Electronic Injury Surveillance SystemCooperative Adverse Drug Event Surveillance (NEISS-CADES) System, a nationally representative, size-stratified probability sample of hospitals (excluding psychiatric and penal institutions) in the United States and its territories with a minimum of 6 beds and a 24-hour emergency department (Figure 1) (3335). As described elsewhere (5, 34), trained coders at each hospital reviewed clinical records of every emergency department visit to report physician-diagnosed adverse drug events. Coders reported clinical diagnosis, medication implicated in the adverse event, and narrative descriptions of preceding circumstances. Data collection, management, quality assurance, and analyses were determined to be public health surveillance activities by the Centers for Disease Control and Prevention (CDC) and U.S. Food and Drug Administration human subjects oversight bodies and, therefore, did not require human subject review or institutional review board approval. Figure 1. Data sources and descriptions. 
NAMCS= National Ambulatory Medical Care Survey (36); NEISS-CADES= National Electronic Injury Surveillance SystemCooperative Adverse Drug Event Surveillance System (5, 3335); NHAMCS = National Hospital Ambulatory Medical Care Survey (37). *The NEISS-CADES is a 63-hospital national probability sample, but 5 pediatric hospitals were not included in this analysis. National estimates of outpatient prescription were based on 2 cross-sectional surveys, the National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS), designed to provide information on outpatient office visits and visits to hospital outpatient clinics and emergency departments (Figure 1) (36, 37). These surveys have been previously used to document the prescription rates of inappropriate medications (17, 3840). Definition of Potentially Inappropriate Medications The most recent iteration of the Beers criteria (12) categorizes 41 medications or medication classes as potentially inappropriate under any circumstances (always potentially inappropriate) and 7 medications or medication classes as potentially inappropriate when used in certain doses, frequencies, or durations (potentially inappropriate in certain circumstances). For example, ferrous sulfate is considered to be potentially inappropriate only when used at dosages greater than 325 mg/d, but not potentially inappropriate if used at lower dosages. For this investigation, we included the Beers criteria medications listed in Table 1. Because medication dose, duration, and frequency were not always available in NEISS-CADES and are not reported in NAMCS and NHAMCS, we included medications regardless of dose, duration, or frequency of use. We excluded 3 medications considered to be potentially inappropriate when used in specific formulations (short-acting nifedipine, short-acting oxybutynin, and desiccated thyroid) because NEISS-CADES, NAMCS, and NHAMCS do not reliably identify these formulations. Table 1. Potentially Inappropriate Medications for Individuals Age 65 Years or Older The updated Beers criteria identify additional medications as potentially inappropriate if they are prescribed to patients who have certain preexisting conditions. We did not include these medications because they have rarely been used in previous studies or safety measures and NEISS-CADES, NAMCS, and NHAMCS do not reliably identify preexisting conditions. Identification of Emergency Department Visits for Adverse Drug Events We defined an adverse drug event case as an incident emergency department visit by a patient age 65 years or older, from 1 January 2004 to 31 December 2005, for a condition that the treating physician explicitly attributed to the use of a drug or for a drug-specific effect (5). Adverse events include allergic reactions (immunologically mediated effects) (41), adverse effects (undesirable pharmacologic or idiosyncratic effects at recommended doses) (41), unintentional overdoses (toxic effects linked to excess dose or impaired excretion) (41), or secondary effects (such as falls and choking). We excluded cases of intentional self-harm, therapeutic failures, therapy withdrawal, drug abuse, adverse drug events that occurred as a result of medical treatment received during the emergency department visit, and follow-up visits for a previously diagnosed adverse drug event. We defined an adverse drug event from Beers criteria medications as an emergency department visit in which a medication from Table 1 was implicated. 
Identification of Outpatient Prescription Visits We used the NAMCS and NHAMCS public use data files for the most recent year available (2004) to identify outpatient prescription visits. We defined an outpatient prescription visit as any outpatient office, hospital clinic, or emergency department visit at which treatment with a medication of interest was either started or continued. We identified medications by generic name for those with a single active ingredient and by individual active ingredients for combination products. We categorized visits with at least 1 medication identified in Table 1 as involving Beers criteria medications. Statistical Analysis Each NEISS-CADES, NAMCS, and NHAMCS case is assigned a sample weight on the basis of the inverse probability of selection (33, 4244). We calculated national estimates of emergency department visits and prescription visits by summing the corresponding sample weights, and we calculated 95% CIs by using the SURVEYMEANS procedure in SAS, version 9.1 (SAS Institute, Cary, North Carolina), to account for the sampling strata and clustering by site. To obtain annual estimates of visits for adverse events, we divided NEISS-CADES estimates for 20042005 and corresponding 95% CI end points by 2. Estimates based on small numbers of cases (<20 cases for NEISS-CADES and <30 cases for NAMCS and NHAMCS) or with a coefficient of variation greater than 30% are considered statistically unstable and are identified in the tables. To estimate the risk for adverse events relative to outpatient prescription", "title": "" }, { "docid": "215dc8ac0f9e30ff4bb7da1cc1996a21", "text": "Social neuroscience benefits from the experimental manipulation of neuronal activity. One possible manipulation, neurofeedback, is an operant conditioning-based technique in which individuals sense, interact with, and manage their own physiological and mental states. Neurofeedback has been applied to a wide variety of psychiatric illnesses, as well as to treat sub-clinical symptoms, and even to enhance performance in healthy populations. Despite growing interest, there persists a level of distrust and/or bias in the medical and research communities in the USA toward neurofeedback and other functional interventions. As a result, neurofeedback has been largely ignored, or disregarded within social neuroscience. We propose a systematic, empirically-based approach for assessing the effectiveness, and utility of neurofeedback. To that end, we use the term perturbative physiologic plasticity to suggest that biological systems function as an integrated whole that can be perturbed and guided, either directly or indirectly, into different physiological states. When the intention is to normalize the system, e.g., via neurofeedback, we describe it as self-directed neuroplasticity, whose outcome is persistent functional, structural, and behavioral changes. We argue that changes in physiological, neuropsychological, behavioral, interpersonal, and societal functioning following neurofeedback can serve as objective indices and as the metrics necessary for assessing levels of efficacy. In this chapter, we examine the effects of neurofeedback on functional connectivity in a few clinical disorders as case studies for this approach. 
We believe this broader perspective will open new avenues of investigation, especially within social neuroscience, to further elucidate the mechanisms and effectiveness of these types of interventions, and their relevance to basic research.", "title": "" }, { "docid": "57cb465ba54502fd5685f37b37812d71", "text": "Solving logistic regression with L1-regularization in distributed settings is an important problem. This problem arises when training dataset is very large and cannot fit the memory of a single machine. We present d-GLMNET, a new algorithm solving logistic regression with L1-regularization in the distributed settings. We empirically show that it is superior over distributed online learning via truncated gradient.", "title": "" }, { "docid": "4191648ada97ecc5a906468369c12bf4", "text": "Dermoscopy is a widely used technique whose role in the clinical (and preoperative) diagnosis of melanocytic and non-melanocytic skin lesions has been well established in recent years. The aim of this paper is to clarify the correlations between the \"local\" dermoscopic findings in melanoma and the underlying histology, in order to help clinicians in routine practice.", "title": "" }, { "docid": "6291f21727c70d3455a892a8edd3b18c", "text": "Given a single column of values, existing approaches typically employ regex-like rules to detect errors by finding anomalous values inconsistent with others. Such techniques make local decisions based only on values in the given input column, without considering a more global notion of compatibility that can be inferred from large corpora of clean tables. We propose \\sj, a statistics-based technique that leverages co-occurrence statistics from large corpora for error detection, which is a significant departure from existing rule-based methods. Our approach can automatically detect incompatible values, by leveraging an ensemble of judiciously selected generalization languages, each of which uses different generalizations and is sensitive to different types of errors. Errors so detected are based on global statistics, which is robust and aligns well with human intuition of errors. We test \\sj on a large set of public Wikipedia tables, as well as proprietary enterprise Excel files. While both of these test sets are supposed to be of high-quality, \\sj makes surprising discoveries of over tens of thousands of errors in both cases, which are manually verified to be of high precision (over 0.98). Our labeled benchmark set on Wikipedia tables is released for future research.", "title": "" }, { "docid": "077287f3cdf841d7998c35ec13568645", "text": "We present an approach for blind image deblurring, which handles non-uniform blurs. Our algorithm has two main components: (i) A new method for recovering the unknown blur-field directly from the blurry image, and (ii) A method for deblurring the image given the recovered non-uniform blur-field. Our blur-field estimation is based on analyzing the spectral content of blurry image patches by Re-blurring them. Being unrestricted by any training data, it can handle a large variety of blur sizes, yielding superior blur-field estimation results compared to training-based deep-learning methods. Our non-uniform deblurring algorithm is based on the internal image-specific patch-recurrence prior. It attempts to recover a sharp image which, on one hand – results in the blurry image under our estimated blur-field, and on the other hand – maximizes the internal recurrence of patches within and across scales of the recovered sharp image. 
The combination of these two components gives rise to a blind-deblurring algorithm, which exceeds the performance of state-of-the-art CNN-based blind-deblurring by a significant margin, without the need for any training data.", "title": "" }, { "docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d", "text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.", "title": "" }, { "docid": "df00815ab7f96a286ca336ecd85ed821", "text": "In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct a MR image with good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K +K log n) for standard K-sparse data with length n. However, few of existing algorithms have utilized this for CS-MRI, while most of them model the problem with total variation and wavelet sparse regularization. On the other side, some algorithms have been proposed for tree sparse regularization, but few of them have validated the benefit of wavelet tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems then each of the subproblems can be efficiently solved with an iterative scheme. Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms, and gain better reconstructions results on real MR images than general tree based solvers or algorithms.", "title": "" }, { "docid": "fb6fabe03dd309e07e20d9b235384dc8", "text": "Unmanned Aircraft Systems (UAS) is an emerging technology with a tremendous potential to revolutionize warfare and to enable new civilian applications. It is integral part of future urban civil and military applications. It technologically matures enough to be integrated into civil society. The importance of UAS in scientific applications has been thoroughly demonstrated in recent years (DoD, 2010). Whatever missions are chosen for the UAS, their number and use will significantly increase in the future. UAS today play an increasing role in many public missions such as border surveillance, wildlife surveys, military training, weather monitoring, and local law enforcement. 
Challenges such as the lack of an on-board pilot to see and avoid other aircraft and the wide variation in unmanned aircraft missions and capabilities must be addressed in order to fully integrate UAS operations in the NAS in the Next Gen time frame. UAVs are better suited for dull, dirty, or dangerous missions than manned aircraft. UAS are mainly used for intelligence, surveillance and reconnaissance (ISR), border security, counter insurgency, attack and strike, target identification and designation, communications relay, electronic attack, law enforcement and security applications, environmental monitoring and agriculture, remote sensing, aerial mapping and meteorology. Although armed forces around the world continue to strongly invest in researching and developing technologies with the potential to advance the capabilities of UAS.", "title": "" }, { "docid": "36d0776ad44592db640bd205acee8e39", "text": "1. A review of the literature shows that in nearly all cases tropical rain forest fragmentation has led to a local loss of species. Isolated fragments suffer reductions in species richness with time after excision from continuous forest, and small fragments often have fewer species recorded for the same effort of observation than large fragments or areas of continuous forest. 2. Birds have been the most frequently studied taxonomic group with respect to the effects of tropical forest fragmentation. 3. The mechanisms of fragmentation-related extinction include the deleterious effects of human disturbance during and after deforestation, the reduction of population sizes, the reduction of immigration rates, forest edge effects, changes in community structure (second- and higher-order effects) and the immigration of exotic species. 4. The relative importance of these mechanisms remains obscure. 5. Animals that are large, sparsely or patchily distributed, or very specialized and intolerant of the vegetation surrounding fragments, are particularly prone to local extinction. 6. The large number of indigenous species that are very sparsely distributed and intolerant of conditions outside the forest make evergreen tropical rain forest particularly susceptible to species loss through fragmentation. 7. Much more research is needed to study what is probably the major threat to global biodiversity.", "title": "" }, { "docid": "4591003089a1ccecd46fb1ac80ab3bb7", "text": "Pre-season rugby training develops the physical requisites for competition and consists of a high volume of resistance training and anaerobic and aerobic conditioning. However, the effects of a rugby union pre-season in professional athletes are currently unknown. Therefore, the purpose of this investigation was to determine the effects of a 4-week pre-season on 33 professional rugby union players. Bench press and box squat increased moderately (13.6 kg, 90% confidence limits +/-2.9 kg and 17.6 +/- 8.0 kg, respectively) over the training phase. Small decreases in bench throw (70.6 +/- 53.5 W), jump squat (280.1 +/- 232.4 W), and fat mass (1.4 +/- 0.4 kg) were observed. In addition, small increases were seen in fat-free mass (2.0 +/- 0.6 kg) and flexed upper-arm girth (0.6 +/- 0.2 cm), while moderate increases were observed in mid-thigh girth (1.9 +/- 0.5 cm) and perception of fatigue (0.6 +/- 0.4 units). Increases in strength and body composition were observed in elite rugby union players after 4 weeks of intensive pre-season training, but this may have been the result of a return to fitness levels prior to the off-season. 
Decreases in power may reflect high training volumes and increases in perceived of fatigue.", "title": "" }, { "docid": "dd38d76f208d26e681c00f63b50492e5", "text": "An anti-louse shampoo (Licener®) based on a neem seed extract was tested in vivo and in vitro on its efficacy to eliminate head louse infestation by a single treatment. The hair of 12 children being selected from a larger group due to their intense infestation with head lice were incubated for 10 min with the neem seed extract-containing shampoo. It was found that after this short exposition period, none of the lice had survived, when being observed for 22 h. In all cases, more than 50–70 dead lice had been combed down from each head after the shampoo had been washed out with normal tap water. A second group of eight children had been treated for 20 min with identical results. Intense combing of the volunteers 7 days after the treatment did not result in the finding of any motile louse neither in the 10-min treated group nor in the group the hair of which had been treated for 20 min. Other living head lice were in vitro incubated within the undiluted product (being placed inside little baskets the floor of which consisted of a fine net of gauze). It was seen that a total submersion for only 3 min prior to washing 3× for 2 min with tap water was sufficient to kill all motile stages (larvae and adults). The incubation of nits at 30°C into the undiluted product for 3, 10, and 20 min did not show differences. In all cases, there was no eyespot development or hatching larvae within 7–10 days of observation. This and the fact that the hair of treated children (even in the short-time treated group of only 10 min) did not reveal freshly hatched larval stages of lice indicate that there is an ovicidal activity of the product, too.", "title": "" }, { "docid": "adeb7bdbe9e903ae7041f93682b0a27c", "text": "Self -- Management systems are the main objective of Autonomic Computing (AC), and it is needed to increase the running system's reliability, stability, and performance. This field needs to investigate some issues related to complex systems such as, self-awareness system, when and where an error state occurs, knowledge for system stabilization, analyze the problem, healing plan with different solutions for adaptation without the need for human intervention. This paper focuses on self-healing which is the most important component of Autonomic Computing. Self-healing is a technique that aims to detect, analyze, and repair existing faults within the system. All of these phases are accomplished in real-time system. In this approach, the system is capable of performing a reconfiguration action in order to recover from a permanent fault. Moreover, self-healing system should have the ability to modify its own behavior in response to changes within the environment. Recursive neural network has been proposed and used to solve the main challenges of self-healing, such as monitoring, interpretation, resolution, and adaptation.", "title": "" }, { "docid": "b0c694eb683c9afb41242298fdd4cf63", "text": "We have demonstrated 8.5-11.5 GHz class-E MMIC high-power amplifiers (HPAs) with a peak power-added-efficiency (PAE) of 61% and drain efficiency (DE) of 70% with an output power of 3.7 W in a continuous-mode operation. At 5 W output power, PAE and DE of 58% and 67% are measured, respectively, which implies MMIC power density of 5 W/mm at Vds = 30 V. The peak gain is 11 dB, with an associated gain of 9 dB at the peak PAE. 
At an output power of 9 W, DE and PAE of 59% and 51% were measured, respectively. In order to improve the linearity, we have designed and simulated X-band class-E MMIC PAs similar to a Doherty configuration. The Doherty-based class-E amplifiers show an excellent cancellation of a third-order intermodulation product (IM3), which improved the simulated two-tone linearity C/IM3 to > 50 dBc.", "title": "" }, { "docid": "032f444d4844c4fa9a3e948cbbc0818a", "text": "This paper presents a microstrip dual-band bandpass filter (BPF) based on cross-shaped resonator and spurline. It is shown that spurlines added into input/output ports of a cross-shaped resonator generate an additional notch band. Using even and odd-mode analysis the proposed structure is realized and designed. The proposed bandpass filter has dual passband from 1.9 GHz to 2.4 GHz and 9.5 GHz to 11.5 GHz.", "title": "" } ]
scidocsrr
d17f2cc0093908c1a716ab0b788169e8
RoarNet: A Robust 3D Object Detection based on RegiOn Approximation Refinement
[ { "docid": "df609125f353505fed31eee302ac1742", "text": "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].", "title": "" }, { "docid": "a214ed60c288762210189f14a8cf8256", "text": "We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans that has the ground truth captured with a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset shows better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations in tandem with algorithmic and data contributions is crucial for general 3D body pose estimation.", "title": "" }, { "docid": "73a62915c29942d2fac0570cac7eb3e0", "text": "In this paper, we present a novel approach, called Deep MANTA (Deep Many-Tasks), for many-task vehicle analysis from a given image. A robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation. Its architecture is based on a new coarse-to-fine object proposal that boosts the vehicle detection. Moreover, the Deep MANTA network is able to localize vehicle parts even if these parts are not visible. In the inference, the networks outputs are used by a real time robust pose estimation algorithm for fine orientation estimation and 3D vehicle localization. 
We show in experiments that our method outperforms monocular state-of-the-art approaches on vehicle detection, orientation and 3D location tasks on the very challenging KITTI benchmark.", "title": "" } ]
[ { "docid": "2fd7cc65c34551c90a72fc3cb4665336", "text": "Generating natural language requires conveying content in an appropriate style. We explore two related tasks on generating text of varying formality: monolingual formality transfer and formality-sensitive machine translation. We propose to solve these tasks jointly using multi-task learning, and show that our models achieve state-of-the-art performance for formality transfer and are able to perform formality-sensitive translation without being explicitly trained on styleannotated translation examples.", "title": "" }, { "docid": "ee865e3291eff95b5977b54c22b59f19", "text": "Fuzzing is a process where random, almost valid, input streams are automatically generated and fed into computer systems in order to test the robustness of userexposed interfaces. We fuzz the Linux kernel system call interface; unlike previous work that attempts to generically fuzz all of an operating system’s system calls, we explore the effectiveness of using specific domain knowledge and focus on finding bugs and security issues related to a single Linux system call. The perf event open() system call was introduced in 2009 and has grown to be a complex interface with over 40 arguments that interact in subtle ways. By using detailed knowledge of typical perf event usage patterns we develop a custom tool, perf fuzzer, that has found bugs that more generic, system-wide, fuzzers have missed. Numerous crashing bugs have been found, including a local root exploit. Fixes for these bugs have been merged into the main Linux source tree. Testing continues to find new bugs, although they are increasingly hard to isolate, requiring development of new isolation techniques and helper utilities. We describe the development of perf fuzzer, examine the bugs found, and discuss ways that this work can be extended to find more bugs and cover other system calls.", "title": "" }, { "docid": "661d5db6f4a8a12b488d6f486ea5995e", "text": "Reliability and high availability have always been a major concern in distributed systems. Providing highly available and reliable services in cloud computing is essential for maintaining customer confidence and satisfaction and preventing revenue losses. Although various solutions have been proposed for cloud availability and reliability, but there are no comprehensive studies that completely cover all different aspects in the problem. This paper presented a ‘Reference Roadmap’ of reliability and high availability in cloud computing environments. A big picture was proposed which was divided into four steps specifying through four pivotal questions starting with ‘Where?’, ‘Which?’, ‘When?’ and ‘How?’ keywords. The desirable result of having a highly available and reliable cloud system could be gained by answering these questions. Each step of this reference roadmap proposed a specific concern of a special portion of the issue. Two main research gaps were proposed by this reference roadmap.", "title": "" }, { "docid": "cef79010b9772639d42351c960b68c83", "text": "In many real world elections, agents are not required to rank all candidates. We study three of the most common meth ods used to modify voting rules to deal with such partial votes. These methods modify scoring rules (like the Borda count), e limination style rules (like single transferable vote) and rule s based on the tournament graph (like Copeland) respectively. 
We argue that with an elimination style voting rule like single transferable vote, partial voting does not change the situations where strategic voting is possible. However, with scoring rules and rules based on the tournament graph, partial voting can increase the situations where strategic voting is possible. As a consequence, the computational complexity of computing a strategic vote can change. For example, with Borda count, the complexity of computing a strategic vote can decrease or stay the same depending on how we score partial votes.", "title": "" }, { "docid": "8af777a64f8f2127552a05c8ea462416", "text": "This work addresses the issue of fire and smoke detection in a scene within a video surveillance framework. Detection of fire and smoke pixels is at first achieved by means of a motion detection algorithm. In addition, separation of smoke and fire pixels using colour information (within appropriate spaces, specifically chosen in order to enhance specific chromatic features) is performed. In parallel, a pixel selection based on the dynamics of the area is carried out in order to reduce false detection. The output of the three parallel algorithms are eventually fused by means of a MLP.", "title": "" }, { "docid": "fca58dee641af67f9bb62958b5b088f2", "text": "This work explores the possibility of mixing two different fingerprints, pertaining to two different fingers, at the image level in order to generate a new fingerprint. To mix two fingerprints, each fingerprint pattern is decomposed into two different components, viz., the continuous and spiral components. After prealigning the components of each fingerprint, the continuous component of one fingerprint is combined with the spiral component of the other fingerprint. Experiments on the West Virginia University (WVU) and FVC2002 datasets show that mixing fingerprints has several benefits: (a) it can be used to generate virtual identities from two different fingers; (b) it can be used to obscure the information present in an individual's fingerprint image prior to storing it in a central database; and (c) it can be used to generate a cancelable fingerprint template, i.e., the template can be reset if the mixed fingerprint is compromised.", "title": "" }, { "docid": "b14010454fe4b9f9712c13cbf9a5e23b", "text": "In this paper we propose an approach to Part of Speech (PoS) tagging using a combination of Hidden Markov Model and error driven learning. For the NLPAI joint task, we also implement a chunker using Conditional Random Fields (CRFs). The results for the PoS tagging and chunking task are separately reported along with the results of the joint task.", "title": "" }, { "docid": "44ffac24ef4d30a8104a2603bb1cdcb1", "text": "Most object detectors contain two important components: a feature extractor and an object classifier. The feature extractor has rapidly evolved with significant research efforts leading to better deep convolutional architectures. The object classifier, however, has not received much attention and many recent systems (like SPPnet and Fast/Faster R-CNN) use simple multi-layer perceptrons. This paper demonstrates that carefully designing deep networks for object classification is just as important. We experiment with region-wise classifier networks that use shared, region-independent convolutional features. We call them “Networks on Convolutional feature maps” (NoCs). 
We discover that aside from deep feature maps, a deep and convolutional per-region classifier is of particular importance for object detection, whereas latest superior image classification models (such as ResNets and GoogLeNets) do not directly lead to good detection accuracy without using such a per-region classifier. We show by experiments that despite the effective ResNets and Faster R-CNN systems, the design of NoCs is an essential element for the 1st-place winning entries in ImageNet and MS COCO challenges 2015.", "title": "" }, { "docid": "69d8d5b38456b30d3252d95cb43734cf", "text": "Article prepared for a revised edition of the ENCYCLOPEDIA OF ARTIFICIAL INTELLIGENCE, S. Shapiro (editor), to be published by John Wiley, 1992. Final Draft; DO NOT REPRODUCE OR CIRCULATE. This copy is for review only. Please do not cite or copy. Prepared using troff, pic, eqn, tbl and bib under Unix 4.3 BSD.", "title": "" }, { "docid": "e61a0ba24db737d42a730d5738583ffa", "text": "We present a logical formalism for expressing properties of continuous time Markov chains. The semantics for such properties arise as a natural extension of previous work on discrete time Markov chains to continuous time. The major result is that the veriication problem is decidable; this is shown using results in algebraic and transcendental number theory.", "title": "" }, { "docid": "533b8bf523a1fb69d67939607814dc9c", "text": "Docker is an open platform for developers and system administrators to build, ship, and run distributed applications using Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows. The main advantage is that, Docker can get code tested and deployed into production as fast as possible. Different applications can be run over Docker containers with language independency. In this paper the performance of these Docker containers are evaluated based on their system performance. That is based on system resource utilization. Different benchmarking tools are used for this. Performance based on file system is evaluated using Bonnie++. Other system resources such as CPU utilization, memory utilization etc. are evaluated based on the benchmarking code (using psutil) developed using python. Detail results obtained from all these tests are also included in this paper. The results include CPU utilization, memory utilization, CPU count, CPU times, Disk partition, network I/O counter etc.", "title": "" }, { "docid": "68b2608c91525f3147f74b41612a9064", "text": "Protective effects of sweet orange (Citrus sinensis) peel and their bioactive compounds on oxidative stress were investigated. According to HPLC-DAD and HPLC-MS/MS analysis, hesperidin (HD), hesperetin (HT), nobiletin (NT), and tangeretin (TT) were present in water extracts of sweet orange peel (WESP). The cytotoxic effect in 0.2mM t-BHP-induced HepG2 cells was inhibited by WESP and their bioactive compounds. The protective effect of WESP and their bioactive compounds in 0.2mM t-BHP-induced HepG2 cells may be associated with positive regulation of GSH levels and antioxidant enzymes, decrease in ROS formation and TBARS generation, increase in the mitochondria membrane potential and Bcl-2/Bax ratio, as well as decrease in caspase-3 activation. 
Overall, WESP displayed a significant cytoprotective effect against oxidative stress, which may be most likely because of the phenolics-related bioactive compounds in WESP, leading to maintenance of the normal redox status of cells.", "title": "" }, { "docid": "dea52c761a9f4d174e9bd410f3f0fa38", "text": "Much computational work has been done on identifying and interpreting the meaning of metaphors, but little work has been done on understanding the motivation behind the use of metaphor. To computationally model discourse and social positioning in metaphor, we need a corpus annotated with metaphors relevant to speaker intentions. This paper reports a corpus study as a first step towards computational work on social and discourse functions of metaphor. We use Amazon Mechanical Turk (MTurk) to annotate data from three web discussion forums covering distinct domains. We then compare these to annotations from our own annotation scheme which distinguish levels of metaphor with the labels: nonliteral, conventionalized, and literal. Our hope is that this work raises questions about what new work needs to be done in order to address the question of how metaphors are used to achieve social goals in interaction.", "title": "" }, { "docid": "a03d0772d8c3e1fd5c954df2b93757e3", "text": "The tumor microenvironment is a complex system, playing an important role in tumor development and progression. Besides cellular stromal components, extracellular matrix fibers, cytokines, and other metabolic mediators are also involved. In this review we outline the potential role of hypoxia, a major feature of most solid tumors, within the tumor microenvironment and how it contributes to immune resistance and immune suppression/tolerance and can be detrimental to antitumor effector cell functions. We also outline how hypoxic stress influences immunosuppressive pathways involving macrophages, myeloid-derived suppressor cells, T regulatory cells, and immune checkpoints and how it may confer tumor resistance. Finally, we discuss how microenvironmental hypoxia poses both obstacles and opportunities for new therapeutic immune interventions.", "title": "" }, { "docid": "e0b8b4e916f5e4799ad2ab95d71b0b26", "text": "Automation plays a very important role in every field of human life. This paper contains the proposal of a fully automated menu ordering system in which the paper based menu is replaced by a user friendly Touchscreen based menu card. The system has PIC microcontroller which is interfaced with the input and output modules. The input module is the touchscreen sensor which is placed on GLCD (Graphical Liquid Crystal Display) to have a graphic image display, which takes the input from the user and provides the same information to the microcontroller. The output module is a Zigbee module which is used for communication between system at the table and system for receiving section. Microcontroller also displays the menu items on the GLCD. At the receiving end the selected items will be displayed on the LCD and by using the conveyer belt the received order will send to the particular table.", "title": "" }, { "docid": "257ffbc75578916dc89a703598ac0447", "text": "Implant surgery in mandibular anterior region may turn from an easy minor surgery into a complicated one for the surgeon, due to inadequate knowledge of the anatomy of the surgical area and/or ignorance toward the required surgical protocol. 
Hence, the purpose of this article is to present an overview on the: (a) Incidence of massive bleeding and its consequences after implant placement in mandibular anterior region. (b) Its etiology, the precautionary measures to be taken to avoid such an incidence in clinical practice and management of such a hemorrhage if at all happens. An inclusion criterion for selection of article was defined, and an electronic Medline search through different database using different keywords and manual search in journals and books was executed. Relevant articles were selected based upon inclusion criteria to form the valid protocols for implant surgery in the anterior mandible. Further, from the selected articles, 21 articles describing case reports were summarized separately in a table to alert the dental surgeons about the morbidity they could come across while operating in this region. If all the required adequate measures for diagnosis and treatment planning are taken and appropriate surgical protocol is followed, mandibular anterior region is no doubt a preferable area for implant placement.", "title": "" }, { "docid": "f3e9858900dd75c86d106856e63f1ab2", "text": "In the near future, new storage-class memory (SCM) technologies -- such as phase-change memory and memristors -- will radically change the nature of long-term storage. These devices will be cheap, non-volatile, byte addressable, and near DRAM density and speed. While SCM offers enormous opportunities, profiting from them will require new storage systems specifically designed for SCM's properties.\n This paper presents Echo, a persistent key-value storage system designed to leverage the advantages and address the challenges of SCM. The goals of Echo include high performance for both small and large data objects, recoverability after failure, and scalability on multicore systems. Echo achieves its goals through the use of a two-level memory design targeted for memory systems containing both DRAM and SCM, exploitation of SCM's byte addressability for fine-grained transactions in non-volatile memory, and the use of snapshot isolation for concurrency, consistency, and versioning. Our evaluation demonstrates that Echo's SCM-centric design achieves the durability guarantees of the best disk-based stores with the performance characteristics approaching the best in-memory key-value stores.", "title": "" }, { "docid": "809392d489af5e1f8e85a9ad8a8ba9e0", "text": "Although a large number of ion channels are now believed to be regulated by phosphoinositides, particularly phosphoinositide 4,5-bisphosphate (PIP2), the mechanisms involved in phosphoinositide regulation are unclear. For the TRP superfamily of ion channels, the role and mechanism of PIP2 modulation has been especially difficult to resolve. Outstanding questions include: is PIP2 the endogenous regulatory lipid; does PIP2 potentiate all TRPs or are some TRPs inhibited by PIP2; where does PIP2 interact with TRP channels; and is the mechanism of modulation conserved among disparate subfamilies? We first addressed whether the PIP2 sensor resides within the primary sequence of the channel itself, or, as recently proposed, within an accessory integral membrane protein called Pirt. Here we show that Pirt does not alter the phosphoinositide sensitivity of TRPV1 in HEK-293 cells, that there is no FRET between TRPV1 and Pirt, and that dissociated dorsal root ganglion neurons from Pirt knock-out mice have an apparent affinity for PIP2 indistinguishable from that of their wild-type littermates. 
We followed by focusing on the role of the C terminus of TRPV1 in sensing PIP2. Here, we show that the distal C-terminal region is not required for PIP2 regulation, as PIP2 activation remains intact in channels in which the distal C-terminal has been truncated. Furthermore, we used a novel in vitro binding assay to demonstrate that the proximal C-terminal region of TRPV1 is sufficient for PIP2 binding. Together, our data suggest that the proximal C-terminal region of TRPV1 can interact directly with PIP2 and may play a key role in PIP2 regulation of the channel.", "title": "" }, { "docid": "b19e77ddb2c2ca5cc18bd8ba5425a698", "text": "In pharmaceutical formulations, phospholipids obtained from plant or animal sources and synthetic phospholipids are used. Natural phospholipids are purified from, e.g., soybeans or egg yolk using non-toxic solvent extraction and chromatographic procedures with low consumption of energy and minimum possible waste. Because of the use of validated purification procedures and sourcing of raw materials with consistent quality, the resulting products differing in phosphatidylcholine content possess an excellent batch to batch reproducibility with respect to phospholipid and fatty acid composition. The natural phospholipids are described in pharmacopeias and relevant regulatory guidance documentation of the Food and Drug Administration (FDA) and European Medicines Agency (EMA). Synthetic phospholipids with specific polar head group, fatty acid composition can be manufactured using various synthesis routes. Synthetic phospholipids with the natural stereochemical configuration are preferably synthesized from glycerophosphocholine (GPC), which is obtained from natural phospholipids, using acylation and enzyme catalyzed reactions. Synthetic phospholipids play compared to natural phospholipid (including hydrogenated phospholipids), as derived from the number of drug products containing synthetic phospholipids, a minor role. Only in a few pharmaceutical products synthetic phospholipids are used. Natural phospholipids are used in oral, dermal, and parenteral products including liposomes. Natural phospholipids instead of synthetic phospholipids should be selected as phospholipid excipients for formulation development, whenever possible, because natural phospholipids are derived from renewable sources and produced with more ecologically friendly processes and are available in larger scale at relatively low costs compared to synthetic phospholipids. Practical applications: For selection of phospholipid excipients for pharmaceutical formulations, natural phospholipids are preferred compared to synthetic phospholipids because they are available at large scale with reproducible quality at lower costs of goods. They are well accepted by regulatory authorities and are produced using less chemicals and solvents at higher yields. In order to avoid scale up problems during pharmaceutical development and production, natural phospholipid excipients instead of synthetic phospholipids should be selected whenever possible.", "title": "" }, { "docid": "d372c1fba12412dac5dc850baf3267b9", "text": "Smart grid is an intelligent power network featured by its two-way flows of electricity and information. With an integrated communication infrastructure, smart grid manages the operation of all connected components to provide reliable and sustainable electricity supplies. Many advanced communication technologies have been identified for their applications in different domains of smart grid networks. 
This paper focuses on wireless communication networking technologies for smart grid neighborhood area networks (NANs). In particular, we aim to offer a comprehensive survey to address various important issues on implementation of smart grid NANs, including network topology, gateway deployment, routing algorithms, and security. We will identify four major challenges for the implementation of NANs, including timeliness management, security assurance, compatibility design, and cognitive spectrum access, based on which the future research directions are suggested.", "title": "" } ]
scidocsrr
6f0a0c26eb5e6e645d04f6a23421dedc
VANet security challenges and solutions: A survey
[ { "docid": "a84143b7aa2d42f3297d81a036dc0f5e", "text": "Vehicular Ad hoc Networks (VANETs) have emerged recently as one of the most attractive topics for researchers and automotive industries due to their tremendous potential to improve traffic safety, efficiency and other added services. However, VANETs are themselves vulnerable against attacks that can directly lead to the corruption of networks and then possibly provoke big losses of time, money, and even lives. This paper presents a survey of VANETs attacks and solutions in carefully considering other similar works as well as updating new attacks and categorizing them into different classes.", "title": "" } ]
[ { "docid": "20b7dfaa400433b6697393d4e265d78d", "text": "Security Operation Centers (SOCs) are being operated by universities, government agencies, and corporations to defend their enterprise networks in general and in particular to identify malicious behaviors in both networks and hosts. The success of a SOC depends on having the right tools, processes and, most importantly, efficient and effective analysts. One of the worrying issues in recent times has been the consistently high burnout rates of security analysts in SOCs. Burnout results in analysts making poor judgments when analyzing security events as well as frequent personnel turnovers. In spite of high awareness of this problem, little has been known so far about the factors leading to burnout. Various coping strategies employed by SOC management such as career progression do not seem to address the problem but rather deal only with the symptoms. In short, burnout is a manifestation of one or more underlying issues in SOCs that are as of yet unknown. In this work we performed an anthropological study of a corporate SOC over a period of six months and identified concrete factors contributing to the burnout phenomenon. We use Grounded Theory to analyze our fieldwork data and propose a model that explains the burnout phenomenon. Our model indicates that burnout is a human capital management problem resulting from the cyclic interaction of a number of human, technical, and managerial factors. Specifically, we identified multiple vicious cycles connecting the factors affecting the morale of the analysts. In this paper we provide detailed descriptions of the various vicious cycles and suggest ways to turn these cycles into virtuous ones. We further validated our results on the fieldnotes from a SOC at a higher education institution. The proposed model is able to successfully capture and explain the burnout symptoms in this other SOC as well. Copyright is held by the author/owner. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee. Symposium on Usable Privacy and Security (SOUPS) 2015, July 22–24, 2015, Ottawa, Canada.", "title": "" }, { "docid": "4abceedb1f6c735a8bc91bc811ce4438", "text": "The study of school bullying has recently assumed an international dimension, but is faced with difficulties in finding terms in different languages to correspond to the English word bullying. To investigate the meanings given to various terms, a set of 25 stick-figure cartoons was devised, covering a range of social situations between peers. These cartoons were shown to samples of 8- and 14-year-old pupils (N = 1,245; n = 604 at 8 years, n = 641 at 14 years) in schools in 14 different countries, who judged whether various native terms cognate to bullying, applied to them. Terms from 10 Indo-European languages and three Asian languages were sampled. Multidimensional scaling showed that 8-year-olds primarily discriminated nonaggressive and aggressive cartoon situations; however, 14-year-olds discriminated fighting from physical bullying, and also discriminated verbal bullying and social exclusion. Gender differences were less appreciable than age differences. Based on the 14-year-old data, profiles of 67 words were then constructed across the five major cartoon clusters. The main types of terms used fell into six groups: bullying (of all kinds), verbal plus physical bullying, solely verbal bullying, social exclusion, solely physical aggression, and mainly physical aggression. 
The findings are discussed in relation to developmental trends in how children understand bullying, the inferences that can be made from cross-national studies, and the design of such studies.", "title": "" }, { "docid": "ba3e4fb74d1912e95d05a01cbf92e3c9", "text": "The Collaborative Filtering is the most successful algorithm in the recommender systems' field. A recommender system is an intelligent system can help users to come across interesting items. It uses data mining and information filtering techniques. The collaborative filtering creates suggestions for users based on their neighbors' preferences. But it suffers from its poor accuracy and scalability. This paper considers the users are m (m is the number of users) points in n dimensional space (n is the number of items) and represents an approach based on user clustering to produce a recommendation for active user by a new method. It uses k-means clustering algorithm to categorize users based on their interests. Then it uses a new method called voting algorithm to develop a recommendation. We evaluate the traditional collaborative filtering and the new one to compare them. Our results show the proposed algorithm is more accurate than the traditional one, besides it is less time consuming than it.", "title": "" }, { "docid": "c55c339eb53de3a385df7d831cb4f24b", "text": "Massive Open Online Courses (MOOCs) have gained tremendous popularity in the last few years. Thanks to MOOCs, millions of learners from all over the world have taken thousands of high-quality courses for free. Putting together an excellent MOOC ecosystem is a multidisciplinary endeavour that requires contributions from many different fields. Artificial intelligence (AI) and data mining (DM) are two such fields that have played a significant role in making MOOCs what they are today. By exploiting the vast amount of data generated by learners engaging in MOOCs, DM improves our understanding of the MOOC ecosystem and enables MOOC practitioners to deliver better courses. Similarly, AI, supported by DM, can greatly improve student experience and learning outcomes. In this survey paper, we first review the state-of-the-art artificial intelligence and data mining research applied to MOOCs, emphasising the use of AI and DM tools and techniques to improve student engagement, learning outcomes, and our understanding of the MOOC ecosystem. We then offer an overview of key trends and important research to carry out in the fields of AI and DM so that MOOCs can reach their full potential.", "title": "" }, { "docid": "bab606f99e64c7fd5ce3c04376fbd632", "text": "Diagnostic reasoning is a key component of many professions. To improve students’ diagnostic reasoning skills, educational psychologists analyse and give feedback on epistemic activities used by these students while diagnosing, in particular, hypothesis generation, evidence generation, evidence evaluation, and drawing conclusions. However, this manual analysis is highly time-consuming. We aim to enable the large-scale adoption of diagnostic reasoning analysis and feedback by automating the epistemic activity identification. We create the first corpus for this task, comprising diagnostic reasoning selfexplanations of students from two domains annotated with epistemic activities. 
Based on insights from the corpus creation and the task’s characteristics, we discuss three challenges for the automatic identification of epistemic activities using AI methods: the correct identification of epistemic activity spans, the reliable distinction of similar epistemic activities, and the detection of overlapping epistemic activities. We propose a separate performance metric for each challenge and thus provide an evaluation framework for future research. Indeed, our evaluation of various state-of-the-art recurrent neural network architectures reveals that current techniques fail to address some of these challenges.", "title": "" }, { "docid": "6f8cc4d648f223840ca67550f1a3b6dd", "text": "Information interaction system plays an important role in establishing a real-time and high-efficient traffic management platform in Intelligent Transportation System (ITS) applications. However, the present transmission technology still exists some defects in satisfying with the real-time performance of users data demand in Vehicle-to-Vehicle (V2V) communication. In order to solve this problem, this paper puts forward a novel Node Operating System (NDOS) scheme to realize the real-time data exchange between vehicles with wireless communication chips of mobile devices, and creates a distributed information interaction system for the interoperability between devices from various manufacturers. In addition, optimized data forwarding scheme is discussed for NDOS to achieve better transmission property and channel resource utilization. Experiments have been carried out in Network Simulator 2 (NS2) evaluation environment, and the results suggest that the scheme can receive higher transmission efficiency and validity than existing communication skills.", "title": "" }, { "docid": "349f85e6ffd66d6a1dd9d9c6925d00bc", "text": "Wearable computers have the potential to act as intelligent agents in everyday life and assist the user in a variety of tasks, using context to determine how to act. Location is the most common form of context used by these agents to determine the user’s task. However, another potential use of location context is the creation of a predictive model of the user’s future movements. We present a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales. These locations are then incorporated into a Markov model that can be consulted for use with a variety of applications in both single–user and collaborative scenarios.", "title": "" }, { "docid": "7395053055da53b32adf2b28dba6de2d", "text": "The Discovery of Human Herpesvirus 6 (HHV-6) Initially designated HBLV, for human B-lymphotropic virus, HHV-6 was isolated fortuitously in 1986 from interleukin 2stimulated peripheral blood mononuclear cells (PBMCs) of patients with AIDS or lymphoproliferative disorders (1). The PBMC cultures exhibited an unusual cytopathic effect characterized by enlarged balloonlike cells. The causative agent was identified as a herpesvirus by electron microscopy and lack of crosshybridization to a number of human herpesviruses (2). The GS strain is the prototype of the first isolates. Two additional isolates of lymphotropic human herpesviruses, U1102 and Gambian, genetically similar to HBLV, were obtained 1 year later from PBMCs of African AIDS patients. All of the isolates could grow in T cells (CEM, H9, Jurkat), in monocytes (HL60, U937), in glial cells (HED), as well as in B-cell lines (Raji, RAMOS, L4, WHPT) (3,4). 
A new variant, Z29, subsequently shown to differ in restriction endonuclease pattern from GS-like strains, was isolated from PBMCs of patients with AIDS (5). The cells supporting virus growth were characterized as CD4+ T lymphocytes (6). The designation HHV-6 was proposed 1 year after discovery of the first isolate to comply with the rules established by the International Committee on Taxonomy of Viruses (7). More than 100 additional HHV-6 strains have been isolated from PBMCs of children with subitum or febrile syndromes (8), from cell-free saliva of healthy or HIV-infected patients (9,10), from PBMCs of patients with chronic fatigue syndrome (CFS) (11), and from PBMCs of healthy adults—these PBMCs were cultivated for human herpesvirus 7 (HHV-7) isolation (12).", "title": "" }, { "docid": "01d441a277e9f9cbf6af40d0d526d44f", "text": "On-orbit fabrication of spacecraft components can enable space programs to escape the volumetric limitations of launch shrouds and create systems with extremely large apertures and very long baselines in order to deliver higher resolution, higher bandwidth, and higher SNR data. This paper will present results of efforts to investigated the value proposition and technical feasibility of adapting several of the many rapidly-evolving additive manufacturing and robotics technologies to the purpose of enabling space systems to fabricate and integrate significant parts of themselves on-orbit. We will first discuss several case studies for the value proposition for on-orbit fabrication of space structures, including one for a starshade designed to enhance the capabilities for optical imaging of exoplanets by the proposed New World Observer mission, and a second for a long-baseline phased array radar system. We will then summarize recent work adapting and evolving additive manufacturing techniques and robotic assembly technologies to enable automated on-orbit fabrication of large, complex, three-dimensional structures such as trusses, antenna reflectors, and shrouds.", "title": "" }, { "docid": "7f47434e413230faf04849cf43a845fa", "text": "Although surgical resection remains the gold standard for treatment of liver cancer, there is a growing need for alternative therapies. Microwave ablation (MWA) is an experimental procedure that has shown great promise for the treatment of unresectable tumors and exhibits many advantages over other alternatives to resection, such as radiofrequency ablation and cryoablation. However, the antennas used to deliver microwave power largely govern the effectiveness of MWA. Research has focused on coaxial-based interstitial antennas that can be classified as one of three types (dipole, slot, or monopole). Choked versions of these antennas have also been developed, which can produce localized power deposition in tissue and are ideal for the treatment of deepseated hepatic tumors.", "title": "" }, { "docid": "9cebd0ff0e218d742e44ebe05fb2e394", "text": "Studies supporting the notion that physical activity and exercise can help alleviate the negative impact of age on the body and the mind abound. This literature review provides an overview of important findings in this fast growing research domain. Results from cross-sectional, longitudinal, and intervention studies with healthy older adults, frail patients, and persons suffering from mild cognitive impairment and dementia are reviewed and discussed. 
Together these finding suggest that physical exercise is a promising nonpharmaceutical intervention to prevent age-related cognitive decline and neurodegenerative diseases.", "title": "" }, { "docid": "0965f1390233e71da72fbc8f37394add", "text": "Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.", "title": "" }, { "docid": "3c98c5bd1d9a6916ce5f6257b16c8701", "text": "As financial time series are inherently noisy and non-stationary, it is regarded as one of the most challenging applications of time series forecasting. Due to the advantages of generalization capability in obtaining a unique solution, support vector regression (SVR) has also been successfully applied in financial time series forecasting. In the modeling of financial time series using SVR, one of the key problems is the inherent high noise. Thus, detecting and removing the noise are important but difficult tasks when building an SVR forecasting model. To alleviate the influence of noise, a two-stage modeling approach using independent component analysis (ICA) and support vector regression is proposed in financial time series forecasting. 
ICA is a novel statistical signal processing technique that was originally proposed to find the latent source signals from observed mixture signals without having any prior knowledge of the mixing mechanism. The proposed approach first uses ICA to the forecasting variables for generating the independent components (ICs). After identifying and removing the ICs containing the noise, the rest of the ICs are then used to reconstruct the forecasting variables which contain less noise and served as the input variables of the SVR forecasting model. In order to evaluate the performance of the proposed approach, the Nikkei 225 opening index and TAIEX closing index are used as illustrative examples. Experimental results show that the proposed model outperforms the SVR model with non-filtered forecasting variables and a random walk model.", "title": "" }, { "docid": "1f26cc778ae481c8c72413f721926e57", "text": "As improved versions of the successive cancellation (SC) decoding algorithm, the successive cancellation list (SCL) decoding and the successive cancellation stack (SCS) decoding are used to improve the finite-length performance of polar codes. In this paper, unified descriptions of the SC, SCL, and SCS decoding algorithms are given as path search procedures on the code tree of polar codes. Combining the principles of SCL and SCS, a new decoding algorithm called the successive cancellation hybrid (SCH) is proposed. This proposed algorithm can provide a flexible configuration when the time and space complexities are limited. Furthermore, a pruning technique is also proposed to lower the complexity by reducing unnecessary path searching operations. Performance and complexity analysis based on simulations shows that under proper configurations, all the three improved successive cancellation (ISC) decoding algorithms can approach the performance of the maximum likelihood (ML) decoding but with acceptable complexity. With the help of the proposed pruning technique, the time and space complexities of ISC decoders can be significantly reduced and be made very close to those of the SC decoder in the high signal-to-noise ratio regime.", "title": "" }, { "docid": "7c9be363cf760d03aab0b6bffd764676", "text": "Many children and youth in rural communities spend significant portions of their lives on school buses. This paper reviews the limited empirical research on the school bus experience, presents some new exploratory data, and offers some suggestions for future research on the impact of riding the school bus on children and youth.", "title": "" }, { "docid": "082b1c341435ce93cfab869475ed32bd", "text": "Given a graph where vertices are partitioned into k terminals and non-terminals, the goal is to compress the graph (i.e., reduce the number of non-terminals) using minor operations while preserving terminal distances approximately. The distortion of a compressed graph is the maximum multiplicative blow-up of distances between all pairs of terminals. We study the trade-off between the number of non-terminals and the distortion. This problem generalizes the Steiner Point Removal (SPR) problem, in which all non-terminals must be removed. We introduce a novel black-box reduction to convert any lower bound on distortion for the SPR problem into a super-linear lower bound on the number of non-terminals, with the same distortion, for our problem. 
This allows us to show that there exist graphs such that every minor with distortion less than 2 / 2.5 / 3 must have Ω(k2) / Ω(k5/4) / Ω(k6/5) non-terminals, plus more trade-offs in between. The black-box reduction has an interesting consequence: if the tight lower bound on distortion for the SPR problem is super-constant, then allowing any O(k) non-terminals will not help improving the lower bound to a constant. We also build on the existing results on spanners, distance oracles and connected 0-extensions to show a number of upper bounds for general graphs, planar graphs, graphs that exclude a fixed minor and bounded treewidth graphs. Among others, we show that any graph admits a minor with O(log k) distortion and O(k2) non-terminals, and any planar graph admits a minor with 1 + ε distortion and Õ((k/ε)2) non-terminals. 1998 ACM Subject Classification G.2.2 Graph Theory", "title": "" }, { "docid": "cd9632f63fc5e3acf0ebb1039048f671", "text": "The authors completed an 8-week practice placement at Thrive’s garden project in Battersea Park, London, as part of their occupational therapy degree programme. Thrive is a UK charity using social and therapeutic horticulture (STH) to enable disabled people to make positive changes to their own lives (Thrive 2008). STH is an emerging therapeutic movement, using horticulture-related activities to promote the health and wellbeing of disabled and vulnerable people (Sempik et al 2005, Fieldhouse and Sempik 2007). Within Battersea Park, Thrive has a main garden with available indoor facilities and two satellite gardens. All these gardens are publicly accessible. Thrive Battersea’s service users include people with learning disabilities, mental health challenges and physical disabilities. Thrive’s group facilitators (referred to as therapists) lead regular gardening groups, aiming to enable individual performance within the group and being mindful of health conditions and circumstances. The groups have three types of participant: Thrive’s therapists, service users (known as gardeners) and volunteers. The volunteers help Thrive’s therapists and gardeners to perform STH activities. The gardening groups comprise participants from various age groups and abilities. Thrive Battersea provides ongoing contact between the gardeners, volunteers and therapists. Integrating service users and non-service users is a method of tackling negative attitudes to disability and also promoting social inclusion (Sayce 2000). Thrive Battersea is an example of a ‘role-emerging’ practice placement, which is based outside either local authorities or the National Health Service (NHS) and does not have an on-site occupational therapist (College of Occupational Therapists 2006). The connection of occupational therapy theory to practice is essential on any placement (Alsop 2006). The roleemerging nature of this placement placed additional reflective onus on the authors to identify the links between theory and practice. The authors observed how Thrive’s gardeners connected to the spaces they worked and to the people they worked with. A sense of individual Gardening and belonging: reflections on how social and therapeutic horticulture may facilitate health, wellbeing and inclusion", "title": "" }, { "docid": "35404fbbf92e7a995cdd6de044f2ec0d", "text": "The ball on plate system is the extension of traditional ball on beam balancing problem in control theory. 
In this paper, the implementation of a proportional-integral-derivative (PID) controller to balance a ball on a plate is demonstrated. To improve the system's response time and accuracy, multiple controllers are piped through a simple custom serial protocol to boost the available processing power and overall performance. A single HD camera module is used as a sensor to detect the ball's position, and two RC servo motors are used to tilt the plate to balance the ball. The results show that, by implementing multiple PUs (Processing Units), redundancy and high resolution can be achieved in real-time control systems.", "title": "" } ]
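The discrete PID update at the heart of such a ball-on-plate balancer is easy to sketch. The gains, loop rate, camera interface and servo mapping below are made-up placeholders (the passage does not list them), so treat this as an illustration of the control law rather than the authors' firmware:

```python
import time

class PID:
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.out_limit, min(self.out_limit, out))   # clamp to servo range

# Hypothetical interfaces standing in for the camera and the two RC servos.
def read_ball_position():
    return 12.0, -7.5            # (x, y) in mm from plate centre; would come from vision

def command_servos(tilt_x, tilt_y):
    print(f"tilt: {tilt_x:+.2f}, {tilt_y:+.2f} deg")

pid_x = PID(kp=0.04, ki=0.001, kd=0.02, out_limit=10.0)   # assumed gains
pid_y = PID(kp=0.04, ki=0.001, kd=0.02, out_limit=10.0)
setpoint = (0.0, 0.0)            # keep the ball at the plate centre

for _ in range(5):               # one iteration per camera frame
    x, y = read_ball_position()
    dt = 1.0 / 30.0              # assumed 30 fps vision loop
    command_servos(pid_x.update(setpoint[0] - x, dt),
                   pid_y.update(setpoint[1] - y, dt))
    time.sleep(dt)
```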
scidocsrr
244f19e37a8cdaeba09b9581f772e37d
Workload Management in Dynamic IT Service Delivery Organizations
[ { "docid": "254a84aae5d06ae652996535027e282c", "text": "Change management is a process by which IT systems are modified to accommodate considerations such as software fixes, hardware upgrades and performance enhancements. This paper discusses the CHAMPS system, a prototype under development at IBM Research for Change Management with Planning and Scheduling. The CHAMPS system is able to achieve a very high degree of parallelism for a set of tasks by exploiting detailed factual knowledge about the structure of a distributed system from dependency information at runtime. In contrast, today's systems expect an administrator to provide such insights, which is often not the case. Furthermore, the optimization techniques we employ allow the CHAMPS system to come up with a very high quality solution for a mathematically intractable problem in a time which scales nicely with the problem size. We have implemented the CHAMPS system and have applied it in a TPC-W environment that implements an on-line book store application.", "title": "" }, { "docid": "b45f832faf2816d456afa25a3641ffe9", "text": "This book is about feedback control of computing systems. The main idea of feedback control is to use measurements of a system’s outputs, such as response times, throughputs, and utilizations, to achieve externally specified goals. This is done by adjusting the system control inputs, such as parameters that affect buffer sizes, scheduling policies, and concurrency levels. Since the measured outputs are used to determine the control inputs, and the inputs then affect the outputs, the architecture is called feedback or closed loop. Almost any system that is considered automatic has some element of feedback control. In this book we focus on the closed-loop control of computing systems and methods for their analysis and design.", "title": "" } ]
[ { "docid": "7ec12c0bf639c76393954baae196a941", "text": "Honeynets have now become a standard part of security measures within the organization. Their purpose is to protect critical information systems and information; this is complemented by acquisition of information about the network threats, attackers and attacks. It is very important to consider issues affecting the deployment and usage of the honeypots and honeynets. This paper discusses the legal issues of honeynets considering their generations. Paper focuses on legal issues of core elements of honeynets, especially data control, data capture and data collection. Paper also draws attention on the issues pertaining to privacy and liability. The analysis of legal issues is based on EU law and it is supplemented by a review of the research literature, related to legal aspects of honeypots and honeynets.", "title": "" }, { "docid": "376f28143deecc7b95fe45d54dd16bb6", "text": "We investigate the problem of lung nodule malignancy suspiciousness (the likelihood of nodule malignancy) classification using thoracic Computed Tomography (CT) images. Unlike traditional studies primarily relying on cautious nodule segmentation and time-consuming feature extraction, we tackle a more challenging task on directly modeling raw nodule patches and building an end-to-end machinelearning architecture for classifying lung nodule malignancy suspiciousness. We present a Multi-crop Convolutional Neural Network (MC-CNN) to automatically extract nodule salient information by employing a novel multi-crop pooling strategy which crops different regions from convolutional feature maps and then applies max-pooling different times. Extensive experimental results show that the proposed method not only achieves state-of-the-art nodule suspiciousness classification performance, but also effectively characterizes nodule semantic attributes (subtlety and margin) and nodule diameter which are potentially helpful in modeling nodule malignancy. & 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "05f3d2097efffb3e1adcbede16ec41d2", "text": "BACKGROUND\nDialysis patients with uraemic pruritus (UP) have significantly impaired quality of life. To assess the therapeutic effect of UP treatments, a well-validated comprehensive and multidimensional instrument needed to be established.\n\n\nOBJECTIVES\nTo develop and validate a multidimensional scale assessing UP in patients on dialysis: the Uraemic Pruritus in Dialysis Patients (UP-Dial).\n\n\nMETHODS\nThe development and validation of the UP-Dial instrument were conducted in four phases: (i) item generation, (ii) development of a pilot questionnaire, (iii) refinement of the questionnaire with patient recruitment and (iv) psychometric validation. Participants completed the UP-Dial, the visual analogue scale (VAS) of UP, the Dermatology Life Quality Index (DLQI), the Kidney Disease Quality of Life-36 (KDQOL-36), the Pittsburgh Sleep Quality Index (PSQI) and the Beck Depression Inventory (BDI) between 15 May 2012 and 30 November 2015.\n\n\nRESULTS\nThe 27-item pilot UP-Dial was generated, with 168 participants completing the pilot scale. After factor analysis was performed, the final 14-item UP-Dial encompassed three domains: signs and symptoms, psychosocial, and sleep. Face and content validity were satisfied through the item generation process and expert review. Psychometric analysis demonstrated that the UP-Dial had good convergent and discriminant validity. 
The UP-Dial was significantly correlated [Spearman rank coefficient, 95% confidence interval (CI)] with the VAS-UP (0·76, 0·69-0·83), DLQI (0·78, 0·71-0·85), KDQOL-36 (-0·86, -0·91 to -0·81), PSQI (0·85, 0·80-0·89) and BDI (0·70, 0·61-0·79). The UP-Dial revealed excellent internal consistency (Cronbach's α 0·90, 95% CI 0·87-0·92) and reproducibility (intraclass correlation 0·95, 95% CI 0·90-0·98).\n\n\nCONCLUSIONS\nThe UP-Dial is valid and reliable for assessing UP among patients on dialysis. Future research should focus on the cross-cultural adaptation and translation of the scale to other languages.", "title": "" }, { "docid": "305efd1823009fe79c9f8ff52ddb5724", "text": "We explore the problem of classifying images by the object categories they contain in the case of a large number of object categories. To this end we combine three ingredients: (i) shape and appearance representations that support spatial pyramid matching over a region of interest. This generalizes the representation of Lazebnik et al., (2006) from an image to a region of interest (ROI), and from appearance (visual words) alone to appearance and local shape (edge distributions); (ii) automatic selection of the regions of interest in training. This provides a method of inhibiting background clutter and adding invariance to the object instance 's position; and (iii) the use of random forests (and random ferns) as a multi-way classifier. The advantage of such classifiers (over multi-way SVM for example) is the ease of training and testing. Results are reported for classification of the Caltech-101 and Caltech-256 data sets. We compare the performance of the random forest/ferns classifier with a benchmark multi-way SVM classifier. It is shown that selecting the ROI adds about 5% to the performance and, together with the other improvements, the result is about a 10% improvement over the state of the art for Caltech-256.", "title": "" }, { "docid": "1fc965670f71d9870a4eea93d129e285", "text": "The present study investigates the impact of the experience of role playing a violent character in a video game on attitudes towards violent crimes and criminals. People who played the violent game were found to be more acceptable of crimes and criminals compared to people who did not play the violent game. More importantly, interaction effects were found such that people were more acceptable of crimes and criminals outside the game if the criminals were matched with the role they played in the game and the criminal actions were similar to the activities they perpetrated during the game. The results indicate that people’s virtual experience through role-playing games can influence their attitudes and judgments of similar real-life crimes, especially if the crimes are similar to what they conducted while playing games. Theoretical and practical implications are discussed. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "def650b2d565f88a6404997e9e93d34f", "text": "Quality uncertainty and high search costs for identifying relevant information from an ocean of information may prevent customers from making purchases. Recognizing potential negative impacts of this search cost for quality information and relevant information, firms began to invest in creating a virtual community that enables consumers to share their opinions and experiences to reduce quality uncertainty, and in developing recommendation systems that help customers identify goods in which they might have an interest. 
However, not much is known regarding the effectiveness of these efforts. In this paper, we empirically investigate the impacts of recommendations and consumer feedbacks on sales based on data gathered from Amazon.com. Our results indicate that more recommendations indeed improve sales at Amazon.com; however, consumer ratings are not found to be related to sales. On the other hand, number of consumer reviews is positively associated with sales. We also find that recommendations work better for less-popular books than for more-popular books. This is consistent with the search cost argument: a consumer’s search cost for less-popular books may be higher, and thus they may rely more on recommendations to locate a product of interest.", "title": "" }, { "docid": "e6d309d24e7773d7fc78c3ebeb926ba0", "text": "INTRODUCTION\nLiver disease is the third most common cause of premature mortality in the UK. Liver failure accelerates frailty, resulting in skeletal muscle atrophy, functional decline and an associated risk of liver transplant waiting list mortality. However, there is limited research investigating the impact of exercise on patient outcomes pre and post liver transplantation. The waitlist period for patients listed for liver transplantation provides a unique opportunity to provide and assess interventions such as prehabilitation.\n\n\nMETHODS AND ANALYSIS\nThis study is a phase I observational study evaluating the feasibility of conducting a randomised control trial (RCT) investigating the use of a home-based exercise programme (HBEP) in the management of patients awaiting liver transplantation. Twenty eligible patients will be randomly selected from the Queen Elizabeth University Hospital Birmingham liver transplant waiting list. Participants will be provided with an individually tailored 12-week HBEP, including step targets and resistance exercises. Activity trackers and patient diaries will be provided to support data collection. For the initial 6 weeks, telephone support will be given to discuss compliance with the study intervention, achievement of weekly targets, and to address any queries or concerns regarding the intervention. During weeks 6-12, participants will continue the intervention without telephone support to evaluate longer term adherence to the study intervention. On completing the intervention, all participants will be invited to engage in a focus group to discuss their experiences and the feasibility of an RCT.\n\n\nETHICS AND DISSEMINATION\nThe protocol is approved by the National Research Ethics Service Committee North West - Greater Manchester East and Health Research Authority (REC reference: 17/NW/0120). Recruitment into the study started in April 2017 and ended in July 2017. Follow-up of participants is ongoing and due to finish by the end of 2017. The findings of this study will be disseminated through peer-reviewed publications and international presentations. In addition, the protocol will be placed on the British Liver Trust website for public access.\n\n\nTRIAL REGISTRATION NUMBER\nNCT02949505; Pre-results.", "title": "" }, { "docid": "712a4bdb5b285f3ef52218096ec3a4bf", "text": "We describe the relations between active maintenance of the hand at various positions in a two-dimensional space and the frequency of single cell discharge in motor cortex (n = 185) and area 5 (n = 128) of the rhesus monkey. 
The steady-state discharge rate of 124/185 (67%) motor cortical and 105/128 (82%) area 5 cells varied with the position in which the hand was held in space (“static spatial effect”). The higher prevalence of this effect in area 5 was statistically significant. In both structures, static effects were observed at similar frequencies for cells that possessed as well as for those that lacked passive driving from the limb. The results obtained by a quantitative analysis were similar for neurons of the two cortical areas studied. It was found that of the neurons with a static effect, the steady-state discharge rate of 78/124 (63%) motor cortical and 63/105 (60%) area 5 cells was a linear function of the position of the hand across the two-dimensional space, so that the neuronal “response surface” was adequately described by a plane (R2 ≥ 0.7, p < 0.05, F-test in analysis of variance). The preferred orientations of these response planes differed for different cells. These results indicate that individual cells in these areas do not relate uniquely a particular position of the hand in space. Instead, they seem to encode spatial gradients at certain orientations. A unique relation to position in space could be signalled by the whole population of these neurons, considered as an ensemble. This remains to be elucidated. Finally, the similarity of the quantitative relations observed in motor cortex and area 5 suggests that these structures may process spatial information in a similar way.", "title": "" }, { "docid": "7c19a963cd3ad7119278744e73c1c27a", "text": "This work presents a study of three important issues of the color pixel classification approach to skin segmentation: color representation, color quantization, and classification algorithm. Our analysis of several representative color spaces using the Bayesian classifier with the histogram technique shows that skin segmentation based on color pixel classification is largely unaffected by the choice of the color space. However, segmentation performance degrades when only chrominance channels are used in classification. Furthermore, we find that color quantization can be as low as 64 bins per channel, although higher histogram sizes give better segmentation performance. The Bayesian classifier with the histogram technique and the multilayer perceptron classifier are found to perform better compared to other tested classifiers, including three piecewise linear classifiers, three unimodal Gaussian classifiers, and a Gaussian mixture classifier.", "title": "" }, { "docid": "cdcbbe1e40a36974ac333912940718a7", "text": "Plant growth promoting rhizobacteria (PGPR) are beneficial bacteria which have the ability to colonize the roots and either promote plant growth through direct action or via biological control of plant diseases (Kloepper and Schroth 1978). They are associated with many plant species and are commonly present in varied environments. Strains with PGPR activity, belonging to genera Azoarcus, Azospirillum, Azotobacter, Arthrobacter, Bacillus, Clostridium, Enterobacter, Gluconacetobacter, Pseudomonas, and Serratia, have been reported (Hurek and Reinhold-Hurek 2003). Among these, species of Pseudomonas and Bacillus are the most extensively studied. These bacteria competitively colonize the roots of plant and can act as biofertilizers and/or antagonists (biopesticides) or simultaneously both. 
Diversified populations of aerobic endospore forming bacteria (AEFB), viz., species of Bacillus, occur in agricultural fields and contribute to crop productivity directly or indirectly. Physiological traits, such as multilayered cell wall, stress resistant endospore formation, and secretion of peptide antibiotics, peptide signal molecules, and extracellular enzymes, are ubiquitous to these bacilli and contribute to their survival under adverse environmental conditions for extended periods of time. Multiple species of Bacillus and Paenibacillus are known to promote plant growth. The principal mechanisms of growth promotion include production of growth stimulating phytohormones, solubilization and mobilization of phosphate, siderophore production, antibiosis, i.e., production of antibiotics, inhibition of plant ethylene synthesis, and induction of plant systemic resistance to pathogens (Richardson et al. 2009; Idris et al. 2007; Gutierrez-Manero et al. 2001;", "title": "" }, { "docid": "d51a844fa1ec4a63868611d73c6acfad", "text": "Massive open online courses (MOOCs) attract a large number of student registrations, but recent studies have shown that only a small fraction of these students complete their courses. Student dropouts are thus a major deterrent for the growth and success of MOOCs. We believe that understanding student engagement as a course progresses is essential for minimizing dropout rates. Formally defining student engagement in an online setting is challenging. In this paper, we leverage activity (such as posting in discussion forums, timely submission of assignments, etc.), linguistic features from forum content and structural features from forum interaction to identify two different forms of student engagement (passive and active) in MOOCs. We use probabilistic soft logic (PSL) to model student engagement by capturing domain knowledge about student interactions and performance. We test our models on MOOC data from Coursera and demonstrate that modeling engagement is helpful in predicting student performance.", "title": "" }, { "docid": "bc05c9cafade197494b52cf3f2ff091b", "text": "Modern software systems are increasingly requested to be adaptive to changes in the environment in which they are embedded. Moreover, adaptation often needs to be performed automatically, through self-managed reactions enacted by the application at run time. Off-line, human-driven changes should be requested only if self-adaptation cannot be achieved successfully. To support this kind of autonomic behavior, software systems must be empowered by a rich run-time support that can monitor the relevant phenomena of the surrounding environment to detect changes, analyze the data collected to understand the possible consequences of changes, reason about the ability of the application to continue to provide the required service, and finally react if an adaptation is needed. This paper focuses on non-functional requirements, which constitute an essential component of the quality that modern software systems need to exhibit. Although the proposed approach is quite general, it is mainly exemplified in the paper in the context of service-oriented systems, where the quality of service (QoS) is regulated by contractual obligations between the application provider and its clients. We analyze the case where an application, exported as a service, is built as a composition of other services. 
Non-functional requirements—such as reliability and performance—heavily depend on the environment in which the application is embedded. Thus changes in the environment may ultimately adversely affect QoS satisfaction. We illustrate an approach and support tools that enable a holistic view of the design and run-time management of adaptive software systems. The approach is based on formal (probabilistic) models that are used at design time to reason about dependability of the application in quantitative terms. Models continue to exist at run time to enable continuous verification and detection of changes that require adaptation.", "title": "" }, { "docid": "1baaed4083a1a8315f8d5cd73730c81e", "text": "While perception tasks such as visual object recognition and text understanding play an important role in human intelligence, the subsequent tasks that involve inference, reasoning and planning require an even higher level of intelligence. The past few years have seen major advances in many perception tasks using deep learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. To achieve integrated intelligence that involves both perception and inference, it is naturally desirable to tightly integrate deep learning and Bayesian models within a principled probabilistic framework, which we call Bayesian deep learning. In this unified framework, the perception of text or images using deep learning can boost the performance of higher-level inference and in return, the feedback from the inference process is able to enhance the perception of text or images. This survey provides a general introduction to Bayesian deep learning and reviews its recent applications on recommender systems, topic models, and control. In this survey, we also discuss the relationship and differences between Bayesian deep learning and other related topics like Bayesian treatment of neural networks.", "title": "" }, { "docid": "85f5833628a4b50084fa50cbe45ebe4d", "text": "We introduce a functional gradient descent trajectory optimization algorithm for robot motion planning in Reproducing Kernel Hilbert Spaces (RKHSs). Functional gradient algorithms are a popular choice for motion planning in complex many-degree-of-freedom robots, since they (in theory) work by directly optimizing within a space of continuous trajectories to avoid obstacles while maintaining geometric properties such as smoothness. However, in practice, implementations such as CHOMP and TrajOpt typically commit to a fixed, finite parametrization of trajectories, often as a sequence of waypoints. Such a parameterization can lose much of the benefit of reasoning in a continuous trajectory space: e.g., it can require taking an inconveniently small step size and large number of iterations to maintain smoothness. Our work generalizes functional gradient trajectory optimization by formulating it as minimization of a cost functional in an RKHS. This generalization lets us represent trajectories as linear combinations of kernel functions. As a result, we are able to take larger steps and achieve a locally optimal trajectory in just a few iterations. Depending on the selection of kernel, we can directly optimize in spaces of trajectories that are inherently smooth in velocity, jerk, curvature, etc., and that have a low-dimensional, adaptively chosen parameterization. 
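The trajectory-optimization passage above sketches the key idea: a trajectory represented as a linear combination of kernel functions, improved by functional-gradient steps on an obstacle cost plus the RKHS norm. A toy 1-D version of that idea (RBF kernel, hinge-style obstacle penalty, fixed step size — all assumptions, with endpoint constraints omitted, and not the authors' implementation) could look like:

```python
import numpy as np

def rbf(a, b, ell=0.2):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

T = np.linspace(0.0, 1.0, 100)   # time samples along the trajectory
C = np.linspace(0.0, 1.0, 8)     # kernel centres; xi(t) = sum_i w_i k(t, c_i)
K, Kcc = rbf(T, C), rbf(C, C)
w = np.zeros(len(C))

x_obs, margin = 0.1, 0.3         # 1-D "obstacle" the path should clear (assumed)

def functional_gradient(w):
    x = K @ w                                    # evaluate the trajectory
    d = x - x_obs
    hinge = np.maximum(0.0, margin - np.abs(d))  # obstacle cost = sum(hinge**2)
    dcost_dx = -2.0 * hinge * np.sign(d)
    return K.T @ dcost_dx / len(T) + 0.1 * (Kcc @ w)   # obstacle term + RKHS-norm term

for _ in range(200):
    w -= 0.5 * functional_gradient(w)            # gradient step on the coefficients

print("closest approach to the obstacle:", np.abs(K @ w - x_obs).min().round(3))
```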
Our experiments illustrate the effectiveness of the planner for different kernels, including Gaussian RBFs with independent and coupled interactions among robot joints, Laplacian RBFs, and B-splines, as compared to the standard discretized waypoint representation.", "title": "" }, { "docid": "fda37e6103f816d4933a3a9c7dee3089", "text": "This paper introduces a novel approach to estimate the systolic and diastolic blood pressure ratios (SBPR and DBPR) based on the maximum amplitude algorithm (MAA) using a Gaussian mixture regression (GMR). The relevant features, which clearly discriminate the SBPR and DBPR according to the targeted groups, are selected in a feature vector. The selected feature vector is then represented by the Gaussian mixture model. The SBPR and DBPR are subsequently obtained with the help of the GMR and then mapped back to SBP and DBP values that are more accurate than those obtained with the conventional MAA method.", "title": "" }, { "docid": "2ee1f7a56eba17b75217cca609452f20", "text": "We describe the annotation of a new dataset for German Named Entity Recognition (NER). The need for this dataset is motivated by licensing issues and consistency issues of existing datasets. We describe our approach to creating annotation guidelines based on linguistic and semantic considerations, and how we iteratively refined and tested them in the early stages of annotation in order to arrive at the largest publicly available dataset for German NER, consisting of over 31,000 manually annotated sentences (over 591,000 tokens) from German Wikipedia and German online news. We provide a number of statistics on the dataset, which indicate its high quality, and discuss legal aspects of distributing the data as a compilation of citations. The data is released under the permissive CC-BY license, and will be fully available for download in September 2014 after it has been used for the GermEval 2014 shared task on NER. We further provide the full annotation guidelines and links to the annotation tool used for the creation of this resource.", "title": "" }, { "docid": "5fc9fe7bcc50aad948ebb32aefdb2689", "text": "This paper explores the use of set expansion (SE) to improve question answering (QA) when the expected answer is a list of entities belonging to a certain class. Given a small set of seeds, SE algorithms mine textual resources to produce an extended list including additional members of the class represented by the seeds. We explore the hypothesis that a noise-resistant SE algorithm can be used to extend candidate answers produced by a QA system and generate a new list of answers that is better than the original list produced by the QA system. We further introduce a hybrid approach which combines the original answers from the QA system with the output from the SE algorithm. Experimental results for several state-of-the-art QA systems show that the hybrid system performs better than the QA systems alone when tested on list question data from past TREC evaluations.", "title": "" }, { "docid": "ec5aac01866a1e4ca3f4e906990d5d8e", "text": "But, as we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or in management technique, that by itself promises even one orderof-magnitude improvement in productivity, in reliability, in simplicity. 
In this article, I shall try to show why, by examining both the nature of the software problem and the properties of the bullets proposed.", "title": "" }, { "docid": "960022742172d6d0e883a23c74d800ef", "text": "A novel algorithm to remove rain or snow streaks from a video sequence using temporal correlation and low-rank matrix completion is proposed in this paper. Based on the observation that rain streaks are too small and move too fast to affect the optical flow estimation between consecutive frames, we obtain an initial rain map by subtracting temporally warped frames from a current frame. Then, we decompose the initial rain map into basis vectors based on the sparse representation, and classify those basis vectors into rain streak ones and outliers with a support vector machine. We then refine the rain map by excluding the outliers. Finally, we remove the detected rain streaks by employing a low-rank matrix completion technique. Furthermore, we extend the proposed algorithm to stereo video deraining. Experimental results demonstrate that the proposed algorithm detects and removes rain or snow streaks efficiently, outperforming conventional algorithms.", "title": "" }, { "docid": "cfddb85a8c81cb5e370fe016ea8d4c5b", "text": "Negative (adverse or threatening) events evoke strong and rapid physiological, cognitive, emotional, and social responses. This mobilization of the organism is followed by physiological, cognitive, and behavioral responses that damp down, minimize, and even erase the impact of that event. This pattern of mobilization-minimization appears to be greater for negative events than for neutral or positive events. Theoretical accounts of this response pattern are reviewed. It is concluded that no single theoretical mechanism can explain the mobilization-minimization pattern, but that a family of integrated process models, encompassing different classes of responses, may account for this pattern of parallel but disparately caused effects.", "title": "" } ]
scidocsrr
ec9c10e81b972a103b15041f17c2c8e9
Individual Tree Delineation in Windbreaks Using Airborne-Laser-Scanning Data and Unmanned Aerial Vehicle Stereo Images
[ { "docid": "a0c37bb6608f51f7095d6e5392f3c2f9", "text": "The main study objective was to develop robust processing and analysis techniques to facilitate the use of small-footprint lidar data for estimating plot-level tree height by measuring individual trees identifiable on the three-dimensional lidar surface. Lidar processing techniques included data fusion with multispectral optical data and local filtering with both square and circular windows of variable size. The lidar system used for this study produced an average footprint of 0.65 m and an average distance between laser shots of 0.7 m. The lidar data set was acquired over deciduous and coniferous stands with settings typical of the southeastern United States. The lidar-derived tree measurements were used with regression models and cross-validation to estimate tree height on 0.017-ha plots. For the pine plots, lidar measurements explained 97 percent of the variance associated with the mean height of dominant trees. For deciduous plots, regression models explained 79 percent of the mean height variance for dominant trees. Filtering for local maximum with circular windows gave better fitting models for pines, while for deciduous trees, filtering with square windows provided a slightly better model fit. Using lidar and optical data fusion to differentiate between forest types provided better results for estimating average plot height for pines. Estimating tree height for deciduous plots gave superior results without calibrating the search window size based on forest type. Introduction Laser scanner systems currently available have experienced a remarkable evolution, driven by advances in the remote sensing and surveying industry. Lidar sensors offer impressive performance that challange physical barriers in the optical and electronic domain by offering a high density of points at scanning frequencies of 50,000 pulses/second, multiple echoes per laser pulse, intensity measurements for the returning signal, and centimeter accuracy for horizontal and vertical positioning. Given a high density of points, processing algorithms can identify single trees or groups of trees in order to extract various measurements on their three-dimensional representation (e.g., Hyyppä and Inkinen, 2002). Seeing the Trees in the Forest: Using Lidar and Multispectral Data Fusion with Local Filtering and Variable Window Size for Estimating Tree Height Sorin C. Popescu and Randolph H. Wynne The foundations of lidar forest measurements lie with the photogrammetric techniques developed to assess tree height, volume, and biomass. Lidar characteristics, such as high sampling intensity, extensive areal coverage, ability to penetrate beneath the top layer of the canopy, precise geolocation, and accurate ranging measurements, make airborne laser systems useful for directly assessing vegetation characteristics. Early lidar studies had been used to estimate forest vegetation characteristics, such as percent canopy cover, biomass (Nelson et al., 1984; Nelson et al., 1988a; Nelson et al., 1988b; Nelson et al., 1997), and gross-merchantable timber volume (Maclean and Krabill, 1986). 
Research efforts investigated the estimation of forest stand characteristics with scanning lasers that provided lidar data with either relatively large laser footprints, i.e., 5 to 25 m (Harding et al., 1994; Lefsky et al., 1997; Weishampel et al., 1997; Blair et al., 1999; Lefsky et al., 1999; Means et al., 1999) or small footprints, but with only one laser return (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Hyyppä et al., 2001). A small-footprint lidar with the potential to record the entire time-varying distribution of returned pulse energy or waveform was used by Nilsson (1996) for measuring tree heights and stand volume. As more systems operate with high performance, research efforts for forestry applications of lidar have become very intense and resulted in a series of studies that proved that lidar technology is well suited for providing estimates of forest biophysical parameters. Needs for timely and accurate estimates of forest biophysical parameters have arisen in response to increased demands on forest inventory and analysis. The height of a forest stand is a crucial forest inventory attribute for calculating timber volume, site potential, and silvicultural treatment scheduling. Measuring of stand height by current manual photogrammetric or field survey techniques is time consuming and rather expensive. Tree heights have been derived from scanning lidar data sets and have been compared with ground-based canopy height measurements (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Næsset and Bjerknes, 2001; Næsset and Økland, 2002; Persson et al., 2002; Popescu, 2002; Popescu et al., 2002; Holmgren et al., 2003; McCombs et al., 2003). Despite the intense research efforts, practical applications of", "title": "" } ]
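In the same spirit as the local-maximum filtering with a variable window size described in the passage above (and as the windbreak tree-delineation query heading this record), treetops can be picked as local maxima of a canopy height model, with the search window sized from the height at each cell. The synthetic raster, the 2 m ground threshold and the height-to-window relation below are assumptions, not the study's calibrated regression:

```python
import numpy as np

# Synthetic canopy height model (CHM) in metres; a real one would come from lidar.
rng = np.random.default_rng(0)
chm = rng.random((60, 60)) * 2.0
yy, xx = np.mgrid[0:60, 0:60]
for r, c, h in [(15, 20, 22.0), (40, 35, 17.0), (30, 50, 25.0)]:   # three fake crowns
    chm = np.maximum(chm, h * np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / 30.0))

cell = 1.0                      # raster resolution in metres (assumed)

def window_radius(height_m):
    """Assumed linear height-to-crown-width relation; a real study calibrates this per forest type."""
    return max(1, int(round((1.0 + 0.15 * height_m) / (2 * cell))))

treetops = []
for r in range(chm.shape[0]):
    for c in range(chm.shape[1]):
        h = chm[r, c]
        if h < 2.0:             # ignore ground and shrubs (assumed threshold)
            continue
        k = window_radius(h)
        win = chm[max(0, r - k):r + k + 1, max(0, c - k):c + k + 1]
        if h >= win.max():      # local maximum within its own variable window
            treetops.append((r, c, h))

print(f"{len(treetops)} candidate treetops found")
```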
[ { "docid": "24d77eb4ea6ecaa44e652216866ab8c8", "text": "In the development of smart cities across the world VANET plays a vital role for optimized route between source and destination. The VANETs is based on infra-structure less network. It facilitates vehicles to give information about safety through vehicle to vehicle communication (V2V) or vehicle to infrastructure communication (V2I). In VANETs wireless communication between vehicles so attackers violate authenticity, confidentiality and privacy properties which further effect security. The VANET technology is encircled with security challenges these days. This paper presents overview on VANETs architecture, a related survey on VANET with major concern of the security issues. Further, prevention measures of those issues, and comparative analysis is done. From the survey, found out that encryption and authentication plays an important role in VANETS also some research direction defined for future work.", "title": "" }, { "docid": "faf25bfda6d078195b15f5a36a32673a", "text": "In high performance VLSI circuits, the power consumption is mainly related to signal transition, charging and discharging of parasitic capacitance in transistor during switching activity. Adiabatic switching is a reversible logic to conserve energy instead of dissipating power reuses it. In this paper, low power multipliers and compressor are designed using adiabatic logic. Compressors are the basic components in many applications like partial product summation in multipliers. The Vedic multiplier is designed using the compressor and the power result is analysed. The designs are implemented and the power results are obtained using TANNER EDA 12.0 tool. This paper presents a novel scheme for analysis of low power multipliers using adiabatic logic in inverter and in the compressor. The scheme is optimized for low power as well as high speed implementation over reported scheme. K e y w o r d s : A d i a b a t i c l o g i c , C o m p r e s s o r , M u l t i p l i e r s .", "title": "" }, { "docid": "adf69030a68ed3bf6fc4d008c50ac5b5", "text": "Many patients with low back and/or pelvic girdle pain feel relief after application of a pelvic belt. External compression might unload painful ligaments and joints, but the exact mechanical effect on pelvic structures, especially in (active) upright position, is still unknown. In the present study, a static three-dimensional (3-D) pelvic model was used to simulate compression at the level of anterior superior iliac spine and the greater trochanter. The model optimised forces in 100 muscles, 8 ligaments and 8 joints in upright trunk, pelvis and upper legs using a criterion of minimising maximum muscle stress. Initially, abdominal muscles, sacrotuberal ligaments and vertical sacroiliac joints (SIJ) shear forces mainly balanced a trunk weight of 500N in upright position. Application of 50N medial compression force at the anterior superior iliac spine (equivalent to 25N belt tension force) deactivated some dorsal hip muscles and reduced the maximum muscle stress by 37%. Increasing the compression up to 100N reduced the vertical SIJ shear force by 10% and increased SIJ compression force with 52%. Shifting the medial compression force of 100N in steps of 10N to the greater trochanter did not change the muscle activation pattern but further increased SIJ compression force by 40% compared to coxal compression. Moreover, the passive ligament forces were distributed over the sacrotuberal, the sacrospinal and the posterior ligaments. 
The findings support the cause-related designing of new pelvic belts to unload painful pelvic ligaments or muscles in upright posture.", "title": "" }, { "docid": "d3ec3eeb5e56bdf862f12fe0d9ffe71c", "text": "This paper will communicate preliminary findings from applied research exploring how to ensure that serious games are cost effective and engaging components of future training solutions. The applied research is part of a multimillion pound program for the Department of Trade and Industry, and involves a partnership between UK industry and academia to determine how bespoke serious games should be used to best satisfy learning needs in a range of contexts. The main objective of this project is to produce a minimum of three serious games prototypes for clients from different sectors (e.g., military, medical and business) each prototype addressing a learning need or learning outcome that helps solve a priority business problem or fulfill a specific training need. This paper will describe a development process that aims to encompass learner specifics and targeted learning outcomes in order to ensure that the serious game is successful. A framework for describing game-based learning scenarios is introduced, and an approach to the analysis that effectively profiles the learner within the learner group with respect to game-based learning is outlined. The proposed solution also takes account of relevant findings from serious games research on particular learner groups that might support the selection and specification of a game. A case study on infection control will be used to show how this approach to the analysis is being applied for a healthcare issue.", "title": "" }, { "docid": "9e5cd32f56abf7ff9d98847970394236", "text": "This paper presents the results of a detailed study of the singular configurations of 3planar parallel mechanisms with three identical legs. Only prismatic and revolute jo are considered. From the point of view of singularity analysis, there are ten diffe architectures. All of them are examined in a compact and systematic manner using p screw theory. The nature of each possible singular configuration is discussed an singularity loci for a constant orientation of the mobile platform are obtained. For so architectures, simplified designs with easy to determine singularities are identified. @DOI: 10.1115/1.1582878 #", "title": "" }, { "docid": "b269bb721ca2a75fd6291295493b7af8", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.", "title": "" }, { "docid": "8d8e7c9777f02c6a4a131f21a66ee870", "text": "Teaching agile practices is becoming a priority in Software engineering curricula as a result of the increasing use of agile methods (AMs) such as Scrum in the software industry. Limitations in time, scope, and facilities within academic contexts hinder students’ hands-on experience in the use of professional AMs. To enhance students’ exposure to Scrum, we have developed Virtual Scrum, an educational virtual world that simulates a Scrum-based team room through virtual elements such as blackboards, a Web browser, document viewers, charts, and a calendar. A preliminary version of Virtual Scrum was tested with a group of 45 students running a capstone project with and without Virtual Scrum support. 
Students’ feedback showed that Virtual Scrum is a viable and effective tool to implement the different elements in a Scrum team room and to perform activities throughout the Scrum process. 2013 Wiley Periodicals, Inc. Comput Appl Eng Educ 23:147–156, 2015; View this article online at wileyonlinelibrary.com/journal/cae; DOI 10.1002/cae.21588", "title": "" }, { "docid": "68470cd075d9c475b5ff93578ff7e86d", "text": "Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling. One challenge for dialogue agents is being able to recognize feelings in the conversation partner and reply accordingly, a key communicative skill that is trivial for humans. Research in this area is made difficult by the paucity of large-scale publicly available datasets both for emotion and relevant dialogues. This work proposes a new task for empathetic dialogue generation and EMPATHETICDIALOGUES, a dataset of 25k conversations grounded in emotional contexts to facilitate training and evaluating dialogue systems. Our experiments indicate that models explicitly leveraging emotion predictions from previous utterances are perceived to be more empathetic by human evaluators, while improving on other metrics as well (e.g. perceived relevance of responses, BLEU scores).", "title": "" }, { "docid": "0b50ec58f82b7ac4ad50eb90425b3aea", "text": "OBJECTIVES\nThe study aimed (1) to examine if there are equivalent results in terms of union, alignment and elbow functionally comparing single- to dual-column plating of AO/OTA 13A2 and A3 distal humeral fractures and (2) if there are more implant-related complications in patients managed with bicolumnar plating compared to single-column plate fixation.\n\n\nDESIGN\nThis was a multi-centred retrospective comparative study.\n\n\nSETTING\nThe study was conducted at two academic level 1 trauma centres.\n\n\nPATIENTS/PARTICIPANTS\nA total of 105 patients were identified to have surgical management of extra-articular distal humeral fractures Arbeitsgemeinschaft für Osteosynthesefragen/Orthopaedic Trauma Association (AO/OTA) 13A2 and AO/OTA 13A3).\n\n\nINTERVENTION\nPatients were treated with traditional dual-column plating or a single-column posterolateral small-fragment pre-contoured locking plate used as a neutralisation device with at least five screws in the short distal segment.\n\n\nMAIN OUTCOME MEASUREMENTS\nThe patients' elbow functionality was assessed in terms of range of motion, union and alignment. In addition, the rate of complications between the groups including radial nerve palsy, implant-related complications (painful prominence and/or ulnar nerve neuritis) and elbow stiffness were compared.\n\n\nRESULTS\nPatients treated with single-column plating had similar union rates and alignment. However, single-column plating resulted in a significantly better range of motion with less complications.\n\n\nCONCLUSIONS\nThe current study suggests that exposure/instrumentation of only the lateral column is a reliable and preferred technique. This technique allows for comparable union rates and alignment with increased elbow functionality and decreased number of complications.", "title": "" }, { "docid": "0db28b5ec56259c8f92f6cc04d4c2601", "text": "The application of neuroscience to marketing, and in particular to the consumer psychology of brands, has gained popularity over the past decade in the academic and the corporate world. 
In this paper, we provide an overview of the current and previous research in this area and explainwhy researchers and practitioners alike are excited about applying neuroscience to the consumer psychology of brands. We identify critical issues of past research and discuss how to address these issues in future research. We conclude with our vision of the future potential of research at the intersection of neuroscience and consumer psychology. © 2011 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "9da6883a9fe700aeb84208efbf0a56a3", "text": "With the increasing demand for more energy efficient buildings, the construction industry is faced with the challenge to ensure that the energy efficiency predicted during the design is realised once a building is in use. There is, however, significant evidence to suggest that buildings are not performing as well as expected and initiatives such as PROBE and CarbonBuzz aim to illustrate the extent of this so called „Performance Gap‟. This paper discusses the underlying causes of discrepancies between detailed energy modelling predictions and in-use performance of occupied buildings (after the twelve month liability period). Many of the causal factors relate to the use of unrealistic input parameters regarding occupancy behaviour and facilities management in building energy models. In turn, this is associated with the lack of feedback to designers once a building has been constructed and occupied. This paper aims to demonstrate how knowledge acquired from Post-Occupancy Evaluation (POE) can be used to produce more accurate energy performance models. A case study focused specifically on lighting, small power and catering equipment in a high density office building is presented. Results show that by combining monitored data with predictive energy modelling, it was possible to increase the accuracy of the model to within 3% of actual electricity consumption values. Future work will seek to use detailed POE data to develop a set of evidence based benchmarks for energy consumption in office buildings. It is envisioned that these benchmarks will inform designers on the impact of occupancy and management on the actual energy consumption of buildings. Moreover, it should enable the use of more realistic input parameters in energy models, bringing the predicted figures closer to reality.", "title": "" }, { "docid": "ced98c32f887001d40e783ab7b294e1a", "text": "This paper proposes a two-layer High Dynamic Range (HDR) coding scheme using a new tone mapping. Our tone mapping method transforms an HDR image onto a Low Dynamic Range (LDR) image by using a base map that is a smoothed version of the HDR luminance. In our scheme, the HDR image can be reconstructed from the tone mapped LDR image. Our method makes use of this property to realize a two-layer HDR coding by encoding both of the tone mapped LDR image and the base map. This paper validates its effectiveness of our approach through some experiments.", "title": "" }, { "docid": "f1fe8a9d2e4886f040b494d76bc4bb78", "text": "The benefits of enhanced condition monitoring in the asset management of the electricity transmission infrastructure are increasingly being exploited by the grid operators. Adding more sensors helps to track the plant health more accurately. However, the installation or operating costs of any additional sensors could outweigh the benefits they bring due to the requirement for new cabling or battery maintenance. 
Energy harvesting devices are therefore being proposed to power a new generation of wireless sensors. The harvesting devices could enable the sensors to be maintenance free over their lifetime and substantially reduce the cost of installing and operating a condition monitoring system.", "title": "" }, { "docid": "02d518721f8ab3c4b2abb854c9111267", "text": "BACKGROUND\nDue to the excessive and pathologic effects of depression and anxiety, it is important to identify the role of protective factors, such as effective coping and social support. This study examined the associations between perceived social support and coping styles with depression and anxiety levels.\n\n\nMATERIALS AND METHODS\nThis cross sectional study was part of the Study on the Epidemiology of Psychological, Alimentary Health and Nutrition project. A total 4658 individuals aged ≥20 years was selected by cluster random sampling. Subjects completed questionnaires, which were used to describe perceived social support, coping styles, depression and anxiety. t-test, Chi-square test, pearson's correlation and Logistic regression analysis were used in data analyses.\n\n\nRESULTS\nThe results of Logistic regression analysis showed after adjusting demographic characteristics for odd ratio of anxiety, active copings such as positive re-interpretation and growth with odds ratios; 95% confidence interval: 0.82 (0.76, 0.89), problem engagement (0.92 [0.87, 0.97]), acceptance (0.82 [0.74, 0.92]) and also among perceived social supports, family (0.77 [0.71, 0.84]) and others (0.84 [0.76, 0.91]) were protective. In addition to, for odd ratio of depression, active copings such as positive re-interpretation and growth (0.74 [0.69, 0.79]), problem engagement (0.89 [0.86, 0.93]), and support seeking (0.96 [0.93, 0.99]) and all of social support types (family [0.75 (0.70, 0.80)], friends [0.90 (0.85, 0.95)] and others [0.80 (0.75, 0.86)]) were protective. Avoidance was risk factor for both of anxiety (1.19 [1.12, 1.27]) and depression (1.22 [1.16, 1.29]).\n\n\nCONCLUSION\nThis study shows active coping styles and perceived social supports particularly positive re-interpretation and family social support are protective factors for depression and anxiety.", "title": "" }, { "docid": "eb8087d0f30945d45a0deb02b7f7bb53", "text": "The use of teams, especially virtual teams, is growing significantly in corporations, branches of the government and nonprofit organizations. However, despite this prevalence, little is understood in terms of how to best train these teams for optimal performance. Team training is commonly cited as a factor for increasing team performance, yet, team training is often applied in a haphazard and brash manner, if it is even applied at all. Therefore, this paper attempts to identify the flow of a training model for virtual teams. Rooted in transactive memory systems, this theoretical model combines the science of encoding, storing and retrieving information with the science of team training.", "title": "" }, { "docid": "210e9bc5f2312ca49438e6209ecac62e", "text": "Image classification has become one of the main tasks in the field of computer vision technologies. In this context, a recent algorithm called CapsNet that implements an approach based on activity vectors and dynamic routing between capsules may overcome some of the limitations of the current state of the art artificial neural networks (ANN) classifiers, such as convolutional neural networks (CNN). 
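The "dynamic routing between capsules" named in the CapsNet passage above can be written compactly. The snippet follows the routing-by-agreement procedure commonly associated with CapsNet (squash nonlinearity, softmax-normalised coupling coefficients, agreement-based logit updates); the tensor sizes are made up and this is not the specific architecture evaluated in the passage:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    """Squash nonlinearity: keeps direction, maps length into [0, 1)."""
    n2 = np.sum(v ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def route(u_hat, iters=3):
    """Routing by agreement.  u_hat: predictions from each of the `num_in`
    lower capsules for each of the `num_out` upper capsules,
    shape (num_in, num_out, dim_out)."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))              # routing logits
    for _ in range(iters):
        c = softmax(b, axis=1)                   # coupling coefficients per lower capsule
        s = np.einsum("ij,ijk->jk", c, u_hat)    # weighted sum into upper capsules
        v = squash(s)                            # upper-capsule activity vectors
        b += np.einsum("ijk,jk->ij", u_hat, v)   # agreement updates the logits
    return v

# Toy example: 6 lower capsules routing into 3 upper capsules of dimension 8.
u_hat = np.random.default_rng(0).normal(size=(6, 3, 8))
print(route(u_hat).shape)        # (3, 8)
```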
In this paper, we evaluated the performance of the CapsNet algorithm in comparison with three well-known classifiers (Fisherfaces, LeNet, and ResNet). We tested the classification accuracy on four datasets with a different number of instances and classes, including images of faces, traffic signs, and everyday objects. The evaluation results show that even for simple architectures, training the CapsNet algorithm requires significant computational resources and its classification performance falls below the average accuracy values of the other three classifiers. However, we argue that CapsNet seems to be a promising new technique for image classification, and further experiments using more robust computation resources and refined CapsNet architectures may produce better outcomes.", "title": "" }, { "docid": "19ea89fc23e7c4d564e4a164cfc4947a", "text": "OBJECTIVES\nThe purpose of this study was to evaluate the proximity of the mandibular molar apex to the buccal bone surface in order to provide anatomic information for apical surgery.\n\n\nMATERIALS AND METHODS\nCone-beam computed tomography (CBCT) images of 127 mandibular first molars and 153 mandibular second molars were analyzed from 160 patients' records. The distance was measured from the buccal bone surface to the root apex and the apical 3.0 mm on the cross-sectional view of CBCT.\n\n\nRESULTS\nThe second molar apex and apical 3 mm were located significantly deeper relative to the buccal bone surface compared with the first molar (p < 0.01). For the mandibular second molars, the distance from the buccal bone surface to the root apex was significantly shorter in patients over 70 years of age (p < 0.05). Furthermore, this distance was significantly shorter when the first molar was missing compared to nonmissing cases (p < 0.05). For the mandibular first molars, the distance to the distal root apex of one distal-rooted tooth was significantly greater than the distance to the disto-buccal root apex (p < 0.01). In mandibular second molar, the distance to the apex of C-shaped roots was significantly greater than the distance to the mesial root apex of non-C-shaped roots (p < 0.01).\n\n\nCONCLUSIONS\nFor apical surgery in mandibular molars, the distance from the buccal bone surface to the apex and apical 3 mm is significantly affected by the location, patient age, an adjacent missing anterior tooth, and root configuration.", "title": "" }, { "docid": "941df83e65700bc2e5ee7226b96e4f54", "text": "This paper presents design and analysis of a three phase induction motor drive using IGBT‟s at the inverter power stage with volts hertz control (V/F) in closed loop using dsPIC30F2010 as a controller. It is a 16 bit high-performance digital signal controller (DSC). DSC is a single chip embedded controller that integrates the controller attributes of a microcontroller with the computation and throughput capabilities of a DSP in a single core. A 1HP, 3-phase, 415V, 50Hz induction motor is used as load for the inverter. Digital Storage Oscilloscope Textronix TDS2024B is used to record and analyze the various waveforms. The experimental results for V/F control of 3Phase induction motor using dsPIC30F2010 chip clearly shows constant volts per hertz and stable inverter line to line output voltage. Keywords--DSC, constant volts per hertz, PWM inverter, ACIM.", "title": "" }, { "docid": "91f36db08fdc766d5dc86007dc7a02ad", "text": "In the last few years communication technology has been improved, which increase the need of secure data communication. 
For this, many researchers have devoted much of their time and effort to finding suitable ways of hiding data. Steganography is a technique for hiding important information imperceptibly: it is the art of hiding information in such a way that the detection of hidden messages is prevented. The process of using steganography in conjunction with cryptography is called Dual Steganography. This paper tries to elucidate the basic concepts of steganography, its various types and techniques, and dual steganography. It also reviews some of the research work done in the steganography field in the past few years.", "title": "" } ]
scidocsrr
b9e98124971c2fd8d827fdfa00b51993
Do Less and Achieve More: Training CNNs for Action Recognition Utilizing Action Images from the Web
[ { "docid": "c439a5c8405d8ba7f831a5ac4b1576a7", "text": "1. Cao, L., Liu, Z., Huang, T.S.: Cross-dataset action detection. In: CVPR (2010). 2. Yang, Y., Ramanan, D.: Articulated pose estimation with flexible mixtures-of-parts. In: CVPR (2011) 3. Lan, T., etc.: Discriminative figure-centric models for joint action localization and recognition. In: ICCV (2011). 4. Tian, Y., Sukthankar, R., Shah, M.: Spatiotemporal deformable part models for action detection. In: CVPR (2013). 5. Wang, H., Schmid, C.: Action recognition with improved trajectories. In: ICCV (2013). Experiments", "title": "" } ]
[ { "docid": "790d30535edadb8e6318b6907b8553f3", "text": "Learning to anticipate future events on the basis of past experience with the consequences of one's own behavior (operant conditioning) is a simple form of learning that humans share with most other animals, including invertebrates. Three model organisms have recently made significant contributions towards a mechanistic model of operant conditioning, because of their special technical advantages. Research using the fruit fly Drosophila melanogaster implicated the ignorant gene in operant conditioning in the heat-box, research on the sea slug Aplysia californica contributed a cellular mechanism of behavior selection at a convergence point of operant behavior and reward, and research on the pond snail Lymnaea stagnalis elucidated the role of a behavior-initiating neuron in operant conditioning. These insights demonstrate the usefulness of a variety of invertebrate model systems to complement and stimulate research in vertebrates.", "title": "" }, { "docid": "581ec70f1a056cb344825e66ad203c69", "text": "A new approach to achieve coalescence and sintering of metallic nanoparticles at room temperature is presented. It was discovered that silver nanoparticles behave as soft particles when they come into contact with oppositely charged polyelectrolytes and undergo a spontaneous coalescence process, even without heating. Utilizing this finding in printing conductive patterns, which are composed of silver nanoparticles, enables achieving high conductivities even at room temperature. Due to the sintering of nanoparticles at room temperature, the formation of conductive patterns on plastic substrates and even on paper is made possible. The resulting high conductivity, 20% of that for bulk silver, enabled fabrication of various devices as demonstrated by inkjet printing of a plastic electroluminescent device.", "title": "" }, { "docid": "ab148ea69cf884b2653823b350ed5cfc", "text": "The application of information retrieval techniques to search tasks in software engineering is made difficult by the lexical gap between search queries, usually expressed in natural language (e.g. English), and retrieved documents, usually expressed in code (e.g. programming languages). This is often the case in bug and feature location, community question answering, or more generally the communication between technical personnel and non-technical stake holders in a software project. In this paper, we propose bridging the lexical gap by projecting natural language statements and code snippets as meaning vectors in a shared representation space. In the proposed architecture, word embeddings are first trained on API documents, tutorials, and reference documents, and then aggregated in order to estimate semantic similarities between documents. Empirical evaluations show that the learned vector space embeddings lead to improvements in a previously explored bug localization task and a newly defined task of linking API documents to computer programming questions.", "title": "" }, { "docid": "287572e1c394ec6959853f62b7707233", "text": "This paper presents a method for state estimation on a ballbot; i.e., a robot balancing on a single sphere. Within the framework of an extended Kalman filter and by utilizing a complete kinematic model of the robot, sensory information from different sources is combined and fused to obtain accurate estimates of the robot's attitude, velocity, and position. 
This information is to be used for state feedback control of the dynamically unstable system. Three incremental encoders (attached to the omniwheels that drive the ball of the robot) as well as three rate gyroscopes and accelerometers (attached to the robot's main body) are used as sensors. For the presented method, observability is proven analytically for all essential states in the system, and the algorithm is experimentally evaluated on the Ballbot Rezero.", "title": "" }, { "docid": "ff81d8b7bdc5abbd9ada376881722c02", "text": "Along with the progress of miniaturization and energy saving technologies of sensors, biological information in our daily life can be monitored by installing the sensors to a lavatory bowl. Lavatory is usually shared among several people, therefore biological information need to be identified. Using camera, microphone, or scales is not appropriate considering privacy in a lavatory. In this paper, we focus on the difference in the way of pulling a toilet paper roll and propose a system that identifies individuals based on features of rotation of a toilet paper roll with a gyroscope. The evaluation results confirmed that 85.8% accuracy was achieved for a five-people group in a laboratory environment.", "title": "" }, { "docid": "3d8a102c53c6e594e01afc7ad685c7ab", "text": "As register allocation is one of the most important phases in optimizing compilers, much work has been done to improve its quality and speed. We present a novel register allocation architecture for programs in SSA-form which simplifies register allocation significantly. We investigate certain properties of SSA-programs and their interference graphs, showing that they belong to the class of chordal graphs. This leads to a quadratic-time optimal coloring algorithm and allows for decoupling the tasks of coloring, spilling and coalescing completely. After presenting heuristic methods for spilling and coalescing, we compare our coalescing heuristic to an optimal method based on integer linear programming.", "title": "" }, { "docid": "e0223a5563e107308c88a43df5b1c8ba", "text": "One question central to Reinforcement Learning is how to learn a feature representation that supports algorithm scaling and re-use of learned information from different tasks. Successor Features approach this problem by learning a feature representation that satisfies a temporal constraint. We present an implementation of an approach that decouples the feature representation from the reward function, making it suitable for transferring knowledge between domains. We then assess the advantages and limitations of using Successor Features for transfer.", "title": "" }, { "docid": "2f8a07428a5ba3b51f4c990d0de18370", "text": "Pain is a common and distressing symptom in critically ill patients. Uncontrolled pain places patients at risk for numerous adverse psychological and physiological consequences, some of which may be life-threatening. A systematic assessment of pain is difficult in intensive care units because of the high percentage of patients who are noncommunicative and unable to self-report pain. Several tools have been developed to identify objective measures of pain, but the best tool has yet to be identified. 
A comprehensive search on the reliability and validity of observational pain scales indicated that although the Critical-Care Pain Observation Tool was superior to other tools in reliably detecting pain, pain assessment in individuals incapable of spontaneous neuromuscular movements or in patients with concurrent conditions, such as chronic pain or delirium, remains an enigma.", "title": "" }, { "docid": "69566105ef6c731e410e21e8ad6d5749", "text": "Despite advances in fingerprint matching, partial/incomplete/fragmentary fingerprint recognition remains a challenging task. While miniaturization of fingerprint scanners limits the capture of only part of the fingerprint, there is also special interest in processing latent fingerprints which are likely to be partial and of low quality. Partial fingerprints do not include all the structures available in a full fingerprint, hence a suitable matching technique which is independent of specific fingerprint features is required. Common fingerprint recognition methods are based on fingerprint minutiae which do not perform well when applied to low quality images and might not even be suitable for partial fingerprint recognition. To overcome this drawback, in this research, a region-based fingerprint recognition method is proposed in which the fingerprints are compared in a pixel- wise manner by computing their correlation coefficient. Therefore, all the attributes of the fingerprint contribute in the matching decision. Such a technique is promising to accurately recognise a partial fingerprint as well as a full fingerprint compared to the minutiae-based fingerprint recognition methods.The proposed method is based on simple but effective metrics that has been defined to compute local similarities which is then combined into a global score such that it is less affected by distribution skew of the local similarities. Extensive experiments over Fingerprint Verification Competition (FVC) data set proves the superiority of the proposed method compared to other techniques in literature.", "title": "" }, { "docid": "1196ab65ddfcedb8775835f2e176576f", "text": "Faster R-CNN achieves state-of-the-art performance on generic object detection. However, a simple application of this method to a large vehicle dataset performs unimpressively. In this paper, we take a closer look at this approach as it applies to vehicle detection. We conduct a wide range of experiments and provide a comprehensive analysis of the underlying structure of this model. We show that through suitable parameter tuning and algorithmic modification, we can significantly improve the performance of Faster R-CNN on vehicle detection and achieve competitive results on the KITTI vehicle dataset. We believe our studies are instructive for other researchers investigating the application of Faster R-CNN to their problems and datasets.", "title": "" }, { "docid": "dfa611e19a3827c66ea863041a3ef1e2", "text": "We study the problem of malleability of Bitcoin transactions. Our first two contributions can be summarized as follows: (i) we perform practical experiments on Bitcoin that show that it is very easy to maul Bitcoin transactions with high probability, and (ii) we analyze the behavior of the popular Bitcoin wallets in the situation when their transactions are mauled; we conclude that most of them are to some extend not able to handle this situation correctly. The contributions in points (i) and (ii) are experimental. 
We also address a more theoretical problem of protecting the Bitcoin distributed contracts against the “malleability” attacks. It is well-known that malleability can pose serious problems in some of those contracts. It concerns mostly the protocols which use a “refund” transaction to withdraw a financial deposit in case the other party interrupts the protocol. Our third contribution is as follows: (iii) we show a general method for dealing with the transaction malleability in Bitcoin contracts. In short: this is achieved by creating a malleability-resilient “refund” transaction which does not require any modification of the Bitcoin protocol.", "title": "" }, { "docid": "cd587b4f35290bf779b0c7ee0214ab72", "text": "Time series data is perhaps the most frequently encountered type of data examined by the data mining community. Clustering is perhaps the most frequently used data mining algorithm, being useful in it's own right as an exploratory technique, and also as a subroutine in more complex data mining algorithms such as rule discovery, indexing, summarization, anomaly detection, and classification. Given these two facts, it is hardly surprising that time series clustering has attracted much attention. The data to be clustered can be in one of two formats: many individual time series, or a single time series, from which individual time series are extracted with a sliding window. Given the recent explosion of interest in streaming data and online algorithms, the latter case has received much attention.In this work we make a surprising claim. Clustering of streaming time series is completely meaningless. More concretely, clusters extracted from streaming time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature.We can justify calling our claim surprising, since it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work. Although the primary contribution of our work is to draw attention to the fact that an apparent solution to an important problem is incorrect and should no longer be used, we also introduce a novel method which, based on the concept of time series motifs, is able to meaningfully cluster some streaming time series datasets.", "title": "" }, { "docid": "c2fb2e46eea33dcf9ec1872de5d57272", "text": "Computational Drug Discovery, which uses computational techniques to facilitate and improve the drug discovery process, has aroused considerable interests in recent years. Drug Repositioning (DR) and DrugDrug Interaction (DDI) prediction are two key problems in drug discovery and many computational techniques have been proposed for them in the last decade. Although these two problems have mostly been researched separately in the past, both DR and DDI can be formulated as the problem of detecting positive interactions between data entities (DR is between drug and disease, and DDI is between pairwise drugs). The challenge in both problems is that we can only observe a very small portion of positive interactions. 
In this paper, we propose a novel framework called Dyadic PositiveUnlabeled learning (DyPU) to solve the problem of detecting positive interactions. DyPU forces positive data pairs to rank higher than the average score of unlabeled data pairs. Moreover, we also derive the dual formulation of the proposed method with the rectifier scoring function and we show that the associated non-trivial proximal operator admits a closed form solution. Extensive experiments are conducted on real drug data sets and the results show that our method achieves superior performance comparing with the state-of-the-art.", "title": "" }, { "docid": "64baa8b11855ad6333ae67f18c6b56b0", "text": "The covariance matrix adaptation evolution strategy (CMA-ES) rates among the most successful evolutionary algorithms for continuous parameter optimization. Nevertheless, it is plagued with some drawbacks like the complexity of the adaptation process and the reliance on a number of sophisticatedly constructed strategy parameter formulae for which no or little theoretical substantiation is available. Furthermore, the CMA-ES does not work well for large population sizes. In this paper, we propose an alternative – simpler – adaptation step of the covariance matrix which is closer to the ”traditional” mutative self-adaptation. We compare the newly proposed algorithm, which we term the CMSA-ES, with the CMA-ES on a number of different test functions and are able to demonstrate its superiority in particular for large population sizes.", "title": "" }, { "docid": "a6fd8b8506a933a7cc0530c6ccda03a8", "text": "Native ecosystems are continuously being transformed mostly into agricultural lands. Simultaneously, a large proportion of fields are abandoned after some years of use. Without any intervention, altered landscapes usually show a slow reversion to native ecosystems, or to novel ecosystems. One of the main barriers to vegetation regeneration is poor propagule supply. Many restoration programs have already implemented the use of artificial perches in order to increase seed availability in open areas where bird dispersal is limited by the lack of trees. To evaluate the effectiveness of this practice, we performed a series of meta-analyses comparing the use of artificial perches versus control sites without perches. We found that setting-up artificial perches increases the abundance and richness of seeds that arrive in altered areas surrounding native ecosystems. Moreover, density of seedlings is also higher in open areas with artificial perches than in control sites without perches. Taken together, our results support the use of artificial perches to overcome the problem of poor seed availability in degraded fields, promoting and/or accelerating the restoration of vegetation in concordance with the surrounding landscape.", "title": "" }, { "docid": "b6376259827dfc04f7c7c037631443f3", "text": "In this brief, a low-power flip-flop (FF) design featuring an explicit type pulse-triggered structure and a modified true single phase clock latch based on a signal feed-through scheme is presented. The proposed design successfully solves the long discharging path problem in conventional explicit type pulse-triggered FF (P-FF) designs and achieves better speed and power performance. Based on post-layout simulation results using TSMC CMOS 90-nm technology, the proposed design outperforms the conventional P-FF design data-close-to-output (ep-DCO) by 8.2% in data-to-Q delay. 
In the meantime, the performance edges on power and power-delay-product metrics are 22.7% and 29.7%, respectively.", "title": "" }, { "docid": "5a7568e877d5e1c2f2c50f98e95c5471", "text": "This paper presents an efficient method for finding matches to a given regular expression in given text using FPGAs. To match a regular expression of length n, a serial machine requires O(2^n) memory and takes O(1) time per text character. The proposed approach requires only O(n^2) space and still processes a text character in O(1) time (one clock cycle). The improvement is due to the Nondeterministic Finite Automaton (NFA) used to perform the matching. As far as the authors are aware, this is the first practical use of a nondeterministic state machine on programmable logic. Furthermore, the paper presents a simple, fast algorithm that quickly constructs the NFA for the given regular expression. Fast NFA construction is crucial because the NFA structure depends on the regular expression, which is known only at runtime. Implementations of the algorithm for conventional FPGAs and the Self-Reconfigurable Gate Array (SRGA) are described. To evaluate performance, the NFA logic was mapped onto the Virtex XCV100 FPGA and the SRGA. Also, the performance of GNU grep for matching regular expressions was evaluated on an 800 MHz Pentium III machine. The proposed approach was faster than best-case grep performance in most cases. It was orders of magnitude faster than worst-case grep performance. Logic for the largest NFA considered fit in fewer than 1000 CLBs, while DFA storage for grep in the worst case consumed a few hundred megabytes.", "title": "" }, { "docid": "109644763e3a5ee5f59ec8e83719cc8d", "text": "The field of Natural Language Processing (NLP) is growing rapidly, with new research published daily along with an abundance of tutorials, codebases and other online resources. In order to learn this dynamic field or stay up-to-date on the latest research, students as well as educators and researchers must constantly sift through multiple sources to find valuable, relevant information. To address this situation, we introduce TutorialBank, a new, publicly available dataset which aims to facilitate NLP education and research. We have manually collected and categorized over 6,300 resources on NLP as well as the related fields of Artificial Intelligence (AI), Machine Learning (ML) and Information Retrieval (IR). Our dataset is notably the largest manually-picked corpus of resources intended for NLP education which does not include only academic papers. Additionally, we have created both a search engine and a command-line tool for the resources and have annotated the corpus to include lists of research topics, relevant resources for each topic, prerequisite relations among topics, and relevant subparts of individual resources, among other annotations. We are releasing the dataset and present several avenues for further research.", "title": "" }, { "docid": "353bfff6127e57660a918d4120ccf3d3", "text": "Deep learning techniques have demonstrated significant capacity in modeling some of the most challenging real world problems of high complexity. Despite the popularity of deep models, we still strive to better understand the underlying mechanism that drives their success.
Motivated by observations that neurons in trained deep nets predict variation explaining factors indirectly related to the training tasks, we recognize that a deep network learns representations more general than the task at hand in order to disentangle impacts of multiple confounding factors governing the data, isolate the effects of the concerning factors, and optimize the given objective. Consequently, we propose to augment training of deep models with auxiliary information on explanatory factors of the data, in an effort to boost this disentanglement. Such deep networks, trained to comprehend data interactions and distributions more accurately, possess improved generalizability and compute better feature representations. Since pose is one of the most dominant confounding factors for object recognition, we adopt this principle to train a pose-aware deep convolutional neural network to learn both the class and pose of an object, so that it can make more informed classification decisions taking into account image variations induced by the object pose. We demonstrate that auxiliary pose information improves the classification accuracy in our experiments on Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) tasks. This general principle is readily applicable to improve the recognition and classification performance in various deep-learning applications.", "title": "" } ]
scidocsrr
2a6609f28ccd04f9de7c4e9b02837b33
A Tale of Two Kernels: Towards Ending Kernel Hardening Wars with Split Kernel
[ { "docid": "7c05ef9ac0123a99dd5d47c585be391c", "text": "Memory access bugs, including buffer overflows and uses of freed heap memory, remain a serious problem for programming languages like C and C++. Many memory error detectors exist, but most of them are either slow or detect a limited set of bugs, or both. This paper presents AddressSanitizer, a new memory error detector. Our tool finds out-of-bounds accesses to heap, stack, and global objects, as well as use-after-free bugs. It employs a specialized memory allocator and code instrumentation that is simple enough to be implemented in any compiler, binary translation system, or even in hardware. AddressSanitizer achieves efficiency without sacrificing comprehensiveness. Its average slowdown is just 73% yet it accurately detects bugs at the point of occurrence. It has found over 300 previously unknown bugs in the Chromium browser and many bugs in other software.", "title": "" }, { "docid": "16186ff81d241ecaea28dcf5e78eb106", "text": "Different kinds of people use computers now than several decades ago, but operating systems have not fully kept pace with this change. It is true that we have point-and-click GUIs now instead of command line interfaces, but the expectation of the average user is different from what it used to be, because the user is different. Thirty or 40 years ago, when operating systems began to solidify into their current form, almost all computer users were programmers, scientists, engineers, or similar professionals doing heavy-duty computation, and they cared a great deal about speed. Few teenagers and even fewer grandmothers spent hours a day behind their terminal. Early users expected the computer to crash often; reboots came as naturally as waiting for the neighborhood TV repairman to come replace the picture tube on their home TVs. All that has changed and operating systems need to change with the times.", "title": "" } ]
[ { "docid": "3475d98ae13c4bab3424103f009f3fb1", "text": "According to a small, lightweight, low-cost high performance inertial Measurement Units(IMU), an effective calibration method is implemented to evaluate the performance of Micro-Electro-Mechanical Systems(MEMS) sensors suffering from various errors to get acceptable navigation results. A prototype development board based on FPGA, dual core processor's configuration for INS/GPS integrated navigation system is designed for experimental testing. The significant error sources of IMU such as bias, scale factor, and misalignment are estimated in virtue of static tests, rate tests, thermal tests. Moreover, an effective intelligent calibration method combining with Kalman Filter is proposed to estimate parameters and compensate errors. The proposed approach has been developed and its efficiency is demonstrated by various experimental scenarios with real MEMS data.", "title": "" }, { "docid": "41c317b0e275592ea9009f3035d11a64", "text": "We introduce a distribution based model to learn bilingual word embeddings from monolingual data. It is simple, effective and does not require any parallel data or any seed lexicon. We take advantage of the fact that word embeddings are usually in form of dense real-valued lowdimensional vector and therefore the distribution of them can be accurately estimated. A novel cross-lingual learning objective is proposed which directly matches the distributions of word embeddings in one language with that in the other language. During the joint learning process, we dynamically estimate the distributions of word embeddings in two languages respectively and minimize the dissimilarity between them through standard back propagation algorithm. Our learned bilingual word embeddings allow to group each word and its translations together in the shared vector space. We demonstrate the utility of the learned embeddings on the task of finding word-to-word translations from monolingual corpora. Our model achieved encouraging performance on data in both related languages and substantially different languages.", "title": "" }, { "docid": "363cc184a6cae8b7a81744676e339a80", "text": "Dismissing-avoidant adults are characterized by expressing relatively low levels of attachment-related distress. However, it is unclear whether this reflects a relative absence of covert distress or an attempt to conceal covert distress. Two experiments were conducted to distinguish between these competing explanations. In Experiment 1, participants were instructed to suppression resulted in a decrease in the accessibility of abandonment-related thoughts for dismissing-avoidant adults. Experiment 2 demonstrated that attempts to suppress the attachment system resulted in decreases in physiological arousal for dismissing-avoidant adults. These experiments indicate that dismissing-avoidant adults are capable of suppressing the latent activation of their attachment system and are not simply concealing latent distress. The discussion focuses on development, cognitive, and social factors that may promote detachment.", "title": "" }, { "docid": "329ab44195e7c20e696e5d7edc8b65a8", "text": "In this work, we consider challenges relating to security for Industrial Control Systems (ICS) in the context of ICS security education and research targeted both to academia and industry. We propose to address those challenges through gamified attack training and countermeasure evaluation. 
We tested our proposed ICS security gamification idea in the context of the (to the best of our knowledge) first Capture-The-Flag (CTF) event targeted to ICS security called SWaT Security Showdown (S3). Six teams acted as attackers in a security competition leveraging an ICS testbed, with several academic defense systems attempting to detect the ongoing attacks. The event was conducted in two phases. The online phase (a jeopardy-style CTF) served as a training session. The live phase was structured as an attack-defense CTF. We acted as judges and we assigned points to the attacker teams according to a scoring system that we developed internally based on multiple factors, including realistic attacker models. We conclude the paper with an evaluation and discussion of the S3, including statistics derived from the data collected in each phase of S3.", "title": "" }, { "docid": "6825c5294da2dfe7a26b6ac89ba8f515", "text": "Restoring natural walking for amputees has been increasingly investigated because of demographic evolution, leading to increased number of amputations, and increasing demand for independence. The energetic disadvantages of passive pros-theses are clear, and active prostheses are limited in autonomy. This paper presents the simulation, design and development of an actuated knee-ankle prosthesis based on a variable stiffness actuator with energy transfer from the knee to the ankle. This approach allows a good approximation of the joint torques and the kinematics of the human gait cycle while maintaining compliant joints and reducing energy consumption during level walking. This first prototype consists of a passive knee and an active ankle, which are energetically coupled to reduce the power consumption.", "title": "" }, { "docid": "fed23432144a6929c4f3442b10157771", "text": "Knowledge has widely been acknowledged as one of the most important factors for corporate competitiveness, and we have witnessed an explosion of IS/IT solutions claiming to provide support for knowledge management (KM). A relevant question to ask, though, is how systems and technology intended for information such as the intranet can be able to assist in the managing of knowledge. To understand this, we must examine the relationship between information and knowledge. Building on Polanyi’s theories, I argue that all knowledge is tacit, and what can be articulated and made tangible outside the human mind is merely information. However, information and knowledge affect one another. By adopting a multi-perspective of the intranet where information, awareness, and communication are all considered, this interaction can best be supported and the intranet can become a useful and people-inclusive KM environment. 1. From philosophy to IT Ever since the ancient Greek period, philosophers have discussed what knowledge is. Early thinkers such as Plato and Aristotle where followed by Hobbes and Locke, Kant and Hegel, and into the 20th century by the likes of Wittgenstein, Popper, and Kuhn, to name but a few of the more prominent western philosophers. In recent years, we have witnessed a booming interest in knowledge also from other disciplines; organisation theorists, information system developers, and economists have all been swept away by the knowledge management avalanche. It seems, though, that the interest is particularly strong within the IS/IT community, where new opportunities to develop computer systems are welcomed. A plausible question to ask then is how knowledge relates to information technology (IT). 
Can IT at all be used to handle knowledge, and if so, what sort of knowledge? What sorts of knowledge are there? What is knowledge? It seems we have little choice but to return to these eternal questions, but belonging to the IS/IT community, we should not approach knowledge from a philosophical perspective. As observed by Alavi and Leidner, the knowledge-based theory of the firm was never built on a universal truth of what knowledge really is but on a pragmatic interest in being able to manage organisational knowledge [2]. The discussion in this paper shall therefore be aimed at addressing knowledge from an IS/IT perspective, trying to answer two overarching questions: “What does the relationship between information and knowledge look like?” and “What role does an intranet have in this relationship?” The purpose is to critically review the contemporary KM literature in order to clarify the relationships between information and knowledge that commonly and implicitly are assumed within the IS/IT community. Epistemologically, this paper shall address the difference between tacit and explicit knowledge by accounting for some of the views more commonly found in the KM literature. Some of these views shall also be questioned, and the prevailing assumption that tacit and explicit are two forms of knowledge shall be criticised by returning to Polanyi’s original work. My interest in the tacit side of knowledge, i.e. the aspects of knowledge that are omnipresent, taken for granted, and affecting our understanding without us being aware of it, has strongly influenced the content of this paper. Ontology-wise, knowledge may be seen to exist on different levels, i.e. individual, group, organisation and inter-organisational [23]. Here, my primary interest is on the group and organisational levels. However, these two levels are obviously made up of individuals and we are thus bound to examine the personal aspects of knowledge as well, albeit from a macro perspective. 2. Opposite traditions – and a middle way? When examining the knowledge literature, two separate tracks can be identified: the commodity view and the community view [35]. The commodity view, or the objective approach to knowledge as some absolute and universal truth, has long been the dominating view within science. Rooted in the positivism of the mid-19th century, the commodity view is still especially strong in the natural sciences. Disciples of this tradition understand knowledge as an artefact that can be handled in discrete units and that people may possess. Knowledge is a thing for which we can gain evidence, and knowledge as such is separated from the knower [33]. Metaphors such as drilling, mining, and harvesting are used to describe how knowledge is being managed. There is also another tradition that can be labelled the community view or the constructivist approach. This tradition can be traced back to Locke and Hume but is in its modern form rooted in the critique of the established quantitative approach to science that emerged primarily amongst social scientists during the 1960’s, and resulted in the publication of books by Garfinkel, Bourdieu, Habermas, Berger and Luckmann, and Glaser and Strauss. These authors argued that reality (and hence also knowledge) should be understood as socially constructed.
According to this tradition, it is impossible to define knowledge universally; it can only be defined in practice, in the activities of and interactions between individuals. Thus, some understand knowledge to be universal and context-independent while others conceive it as situated and based on individual experiences. Maybe it is a little bit Author(s) Data Informa", "title": "" }, { "docid": "85c4c0ffb224606af6bc3af5411d31ca", "text": "Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization. We propose a novel coarse-to-fine attention model that hierarchically reads a document, using coarse attention to select top-level chunks of text and fine attention to read the words of the chosen chunks. While the computation for training standard attention models scales linearly with source sequence length, our method scales with the number of top-level chunks and can handle much longer sequences. Empirically, we find that while coarse-tofine attention models lag behind state-ofthe-art baselines, our method achieves the desired behavior of sparsely attending to subsets of the document for generation.", "title": "" }, { "docid": "404fce3f101d0a1d22bc9afdf854b1e0", "text": "The intimate connection between the brain and the heart was enunciated by Claude Bernard over 150 years ago. In our neurovisceral integration model we have tried to build on this pioneering work. In the present paper we further elaborate our model. Specifically we review recent neuroanatomical studies that implicate inhibitory GABAergic pathways from the prefrontal cortex to the amygdala and additional inhibitory pathways between the amygdala and the sympathetic and parasympathetic medullary output neurons that modulate heart rate and thus heart rate variability. We propose that the default response to uncertainty is the threat response and may be related to the well known negativity bias. We next review the evidence on the role of vagally mediated heart rate variability (HRV) in the regulation of physiological, affective, and cognitive processes. Low HRV is a risk factor for pathophysiology and psychopathology. Finally we review recent work on the genetics of HRV and suggest that low HRV may be an endophenotype for a broad range of dysfunctions.", "title": "" }, { "docid": "6ce3156307df03190737ee7c0ae24c75", "text": "Current methods for knowledge graph (KG) representation learning focus solely on the structure of the KG and do not exploit any kind of external information, such as visual and linguistic information corresponding to the KG entities. In this paper, we propose a multimodal translation-based approach that defines the energy of a KG triple as the sum of sub-energy functions that leverage both multimodal (visual and linguistic) and structural KG representations. Next, a ranking-based loss is minimized using a simple neural network architecture. Moreover, we introduce a new large-scale dataset for multimodal KG representation learning. 
We compared the performance of our approach to other baselines on two standard tasks, namely knowledge graph completion and triple classification, using our as well as the WN9-IMG dataset.1 The results demonstrate that our approach outperforms all baselines on both tasks and datasets.", "title": "" }, { "docid": "f153ee3853f40018ed0ae8b289b1efcf", "text": "In this paper, the common mode (CM) EMI noise characteristic of three popular topologies of resonant converter (LLC, CLL and LCL) is analyzed. The comparison of their EMI performance is provided. A state-of-art LLC resonant converter with matrix transformer is used as an example to further illustrate the CM noise problem of resonant converters. The CM noise model of LLC resonant converter is provided. A novel method of shielding is provided for matrix transformer to reduce common mode noise. The CM noise of LLC converter has a significantly reduction with shielding. The loss of shielding is analyzed by finite element analysis (FEA) tool. Then the method to reduce the loss of shielding is discussed. There is very little efficiency sacrifice for LLC converter with shielding according to the experiment result.", "title": "" }, { "docid": "308622daf5f4005045f3d002f5251f8c", "text": "The design of multiple human activity recognition applications in areas such as healthcare, sports and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) and a wavelet filter. A series of statistical parameters are extracted from the kinematical features, including the principal components and singular value decomposition of roll, pitch, yaw and the norm of the axial components. The novel interactive learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train an SVM multi-class classifier. The latter will produce the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data, while only using a much lower number of training samples and therefore a much shorter training time, which is an important consideration given the large size of the dataset.", "title": "" }, { "docid": "9d2f569d1105bdac64071541eb01c591", "text": "1. Outline the principles of the diagnostic tests used to confirm brain death. . 2. The patient has been certified brain dead and her relatives agree with her previously stated wishes to donate her organs for transplantation. 
Outline the supportive measures which should be instituted to maintain this patient’s organs in an optimal state for subsequent transplantation of the heart, lungs, liver and kidneys.", "title": "" }, { "docid": "01a649c8115810c8318e572742d9bd00", "text": "In this effort we propose a data-driven learning framework for reduced order modeling of fluid dynamics. Designing accurate and efficient reduced order models for nonlinear fluid dynamic problems is challenging for many practical engineering applications. Classical projection-based model reduction methods generate reduced systems by projecting full-order differential operators into low-dimensional subspaces. However, these techniques usually lead to severe instabilities in the presence of highly nonlinear dynamics, which dramatically deteriorates the accuracy of the reduced-order models. In contrast, our new framework exploits linear multistep networks, based on implicit Adams-Moulton schemes, to construct the reduced system. The advantage is that the method optimally approximates the full order model in the low-dimensional space with a given supervised learning task. Moreover, our approach is non-intrusive, such that it can be applied to other complex nonlinear dynamical systems with sophisticated legacy codes. We demonstrate the performance of our method through the numerical simulation of a twodimensional flow past a circular cylinder with Reynolds number Re = 100. The results reveal that the new data-driven model is significantly more accurate than standard projectionbased approaches.", "title": "" }, { "docid": "1f20204533ade658723cc56b429d5792", "text": "ILQUA first participated in TREC QA main task in 2003. This year we have made modifications to the system by removing some components with poor performance and enhanced the system with new methods and new components. The newly built ILQUA is an IE-driven QA system. To answer “Factoid” and “List” questions, we apply our answer extraction methods on NE-tagged passages. The answer extraction methods adopted here are surface text pattern matching, n-gram proximity search and syntactic dependency matching. Surface text pattern matching has been applied in some previous TREC QA systems. However, the patterns used in ILQUA are automatically generated by a supervised learning system and represented in a format of regular expressions which can handle up to 4 question terms. N-gram proximity search and syntactic dependency matching are two steps of one component. N-grams of question terms are matched around every named entity in the candidate passages and a list of named entities are generated as answer candidate. These named entities go through a multi-level syntactic dependency matching until a final answer is generated. To answer “Other” questions, we parse the answer sentences of “Other” questions in 2004 main task and built syntactic patterns combined with semantic features. These patterns are applied to the parsed candidate sentences to extract answers of “Other” questions. The evaluation results showed ILQUA has reached an accuracy of 30.9% for factoid questions. ILQUA is an IE-driven QA system without any pre-compiled knowledge base of facts and it doesn’t get reference from any other external search engine such as Google. The disadvantage of an IE-driven QA system is that there are some types of questions that can’t be answered because the answer in the passages can’t be tagged as appropriate NE types. 
Figure 1 shows the diagram of the ILQUA architecture.", "title": "" }, { "docid": "73333ad599c6bbe353e46d7fd4f51768", "text": "The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the 'better than the Beatles' problem; the 'cautious regulator' problem; the 'throw money at it' tendency; and the 'basic research–brute force' bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.", "title": "" }, { "docid": "0c9bbeaa783b2d6270c735f004ecc47f", "text": "This paper pulls together existing theory and evidence to assess whether international financial liberalization, by improving the functioning of domestic financial markets and banks, accelerates economic growth. The analysis suggests that the answer is yes. First, liberalizing restrictions on international portfolio flows tends to enhance stock market liquidity. In turn, enhanced stock market liquidity accelerates economic growth primarily by boosting productivity growth. Second, allowing greater foreign bank presence tends to enhance the efficiency of the domestic banking system. In turn, better-developed banks spur economic growth primarily by accelerating productivity growth. Thus, international financial integration can promote economic development by encouraging improvements in the domestic financial system. *Levine: Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: rlevine@csom.umn.edu. I thank, without implicating, Maria Carkovic and two anonymous referees for very helpful comments. JEL Classification Numbers: F3, G2, O4 Abbreviations: GDP, TFP Number of Figures: 0 Number of Tables: 2 Date: September 5, 2000 Address of Contact Author: Ross Levine, Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: rlevine@csom.umn.edu.", "title": "" }, { "docid": "f4edb4f6bc0d0e9b31242cf860f6692d", "text": "Search on the web is a delay process and it can be hard task especially for beginners when they attempt to use a keyword query language. Beginner (inexpert) searchers commonly attempt to find information with ambiguous queries. These ambiguous queries make the search engine returns irrelevant results. This work aims to get more relevant pages to query through query reformulation and expanding search space. The proposed system has three basic parts WordNet, Google search engine and Genetic Algorithm. Every part has a special task. The system uses WordNet to remove ambiguity from queries by displaying the meaning of every keyword in user query and selecting the proper meaning for keywords. The system obtains synonym for every keyword from WordNet and generates query list. 
Genetic algorithm is used to create generation for every query in query list. Every query in system is navigated using Google search engine to obtain results from group of documents on the Web. The system has been tested on number of ambiguous queries and it has obtained more relevant URL to user query especially when the query has one keyword. The results are promising and therefore open further research directions.", "title": "" }, { "docid": "29d2a613f7da6b99e35eb890d590f4ca", "text": "Recent work has focused on generating synthetic imagery and augmenting real imagery to increase the size and variability of training data for learning visual tasks in urban scenes. This includes increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling the variation in the sensor domain. Unfortunately, varying sensor effects can degrade performance and generalizability of results for visual tasks trained on human annotated datasets. This paper proposes an efficient, automated physicallybased augmentation pipeline to vary sensor effects – specifically, chromatic aberration, blur, exposure, noise, and color cast – across both real and synthetic imagery. In particular, this paper illustrates that augmenting training datasets with the proposed pipeline improves the robustness and generalizability of object detection on a variety of benchmark vehicle datasets.", "title": "" }, { "docid": "5873204bba0bd16262274d4961d3d5f9", "text": "The analysis of the adaptive behaviour of many different kinds of systems such as humans, animals and machines, requires more general ways of assessing their cognitive abilities. This need is strengthened by increasingly more tasks being analysed for and completed by a wider diversity of systems, including swarms and hybrids. The notion of universal test has recently emerged in the context of machine intelligence evaluation as a way to define and use the same cognitive test for a variety of systems, using some principled tasks and adapting the interface to each particular subject. However, how far can universal tests be taken? This paper analyses this question in terms of subjects, environments, space-time resolution, rewards and interfaces. This leads to a number of findings, insights and caveats, according to several levels where universal tests may be progressively more difficult to conceive, implement and administer. One of the most significant contributions is given by the realisation that more universal tests are defined as maximisations of less universal tests for a variety of configurations. This means that universal tests must be necessarily adaptive.", "title": "" } ]
scidocsrr
7a6c13536dd2b138cdfdf822f28d8869
A lightweight active service migration framework for computational offloading in mobile cloud computing
[ { "docid": "0e55e64ddc463d0ea151de8efe40183f", "text": "Vehicular networking has become a significant research area due to its specific features and applications such as standardization, efficient traffic management, road safety and infotainment. Vehicles are expected to carry relatively more communication systems, on board computing facilities, storage and increased sensing power. Hence, several technologies have been deployed to maintain and promote Intelligent Transportation Systems (ITS). Recently, a number of solutions were proposed to address the challenges and issues of vehicular networks. Vehicular Cloud Computing (VCC) is one of the solutions. VCC is a new hybrid technology that has a remarkable impact on traffic management and road safety by instantly using vehicular resources, such as computing, storage and internet for decision making. This paper presents the state-of-the-art survey of vehicular cloud computing. Moreover, we present a taxonomy for vehicular cloud in which special attention has been devoted to the extensive applications, cloud formations, key management, inter cloud communication systems, and broad aspects of privacy and security issues. Through an extensive review of the literature, we design an architecture for VCC, itemize the properties required in vehicular cloud that support this model. We compare this mechanism with normal Cloud Computing (CC) and discuss open research issues and future directions. By reviewing and analyzing literature, we found that VCC is a technologically feasible and economically viable technological shifting paradigm for converging intelligent vehicular networks towards autonomous traffic, vehicle control and perception systems. & 2013 Published by Elsevier Ltd.", "title": "" }, { "docid": "aa18c10c90af93f38c8fca4eff2aab09", "text": "The unabated flurry of research activities to augment various mobile devices by leveraging heterogeneous cloud resources has created a new research domain called Mobile Cloud Computing (MCC). In the core of such a non-uniform environment, facilitating interoperability, portability, and integration among heterogeneous platforms is nontrivial. Building such facilitators in MCC requires investigations to understand heterogeneity and its challenges over the roots. Although there are many research studies in mobile computing and cloud computing, convergence of these two areas grants further academic efforts towards flourishing MCC. In this paper, we define MCC, explain its major challenges, discuss heterogeneity in convergent computing (i.e. mobile computing and cloud computing) and networking (wired and wireless networks), and divide it into two dimensions, namely vertical and horizontal. Heterogeneity roots are analyzed and taxonomized as hardware, platform, feature, API, and network. Multidimensional heterogeneity in MCC results in application and code fragmentation problems that impede development of cross-platform mobile applications which is mathematically described. The impacts of heterogeneity in MCC are investigated, related opportunities and challenges are identified, and predominant heterogeneity handling approaches like virtualization, middleware, and service oriented architecture (SOA) are discussed. We outline open issues that help in identifying new research directions in MCC.", "title": "" } ]
[ { "docid": "7f799fbe03849971cb3272e35e7b13db", "text": "Text often expresses the writer's emotional state or evokes emotions in the reader. The nature of emotional phenomena like reading and writing can be interpreted in different ways and represented with different computational models. Affective computing (AC) researchers often use a categorical model in which text data is associated with emotional labels. We introduce a new way of using normative databases as a way of processing text with a dimensional model and compare it with different categorical approaches. The approach is evaluated using four data sets of texts reflecting different emotional phenomena. An emotional thesaurus and a bag-­‐of-­‐words model are used to generate vectors for each pseudo-­‐ document, then for the categorical models three dimensionality reduction techniques are evaluated: Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Non-­‐negative Matrix Factorization (NMF). For the dimensional model a normative database is used to produce three-­‐dimensional vectors (valence, arousal, dominance) for each pseudo-­‐document. This 3-­‐dimensional model can be used to generate psychologically driven visualizations. Both models can be used for affect detection based on distances amongst categories and pseudo-­‐documents. Experiments show that the categorical model using NMF and the dimensional model tend to perform best. 1. INTRODUCTION Emotions and affective states are pervasive in all forms of communication, including text based, and increasingly recognized as important to understanding the full meaning that a message conveys, or the impact it will have on readers. Given the increasing amounts of textual communication being produced (e.g. emails, user created content, published content) researchers are seeking automated language processing techniques that include models of emotions. Emotions and other affective states (e.g. moods) have been studied by many disciplines. Affect scientists have studied emotions since Darwin (Darwin, 1872), and different schools within psychology have produced different theories representing different ways of interpreting affective phenomena (comprehensively reviewed in Davidson, Scherer and Goldsmith, 2003). In the last decade technologists have also started contributing to this research. Affective Computing (AC) in particular is contributing new ways to improve communication between the sensitive human and the unemotionally computer. AC researchers have developed computational systems that recognize and respond to the affective states of the user (Calvo and D'Mello, 2010). Affect-­‐sensitive user interfaces are being developed in a number of domains including gaming, mental health, and learning technologies. The basic tenet behind most AC systems is that automatically recognizing and responding to a user's affective states during interactions with a computer, …", "title": "" }, { "docid": "74dead8ad89ae4a55105fb7ae95d3e20", "text": "Improved health is one of the many reasons people choose to adopt a vegetarian diet, and there is now a wealth of evidence to support the health benefi ts of a vegetarian diet. Abstract: There is now a significant amount of research that demonstrates the health benefits of vegetarian and plant-based diets, which have been associated with a reduced risk of obesity, diabetes, heart disease, and some types of cancer as well as increased longevity. 
Vegetarian diets are typically lower in fat, particularly saturated fat, and higher in dietary fiber. They are also likely to include more whole grains, legumes, nuts, and soy protein, and together with the absence of red meat, this type of eating plan may provide many benefits for the prevention and treatment of obesity and chronic health problems, including diabetes and cardiovascular disease. Although a well-planned vegetarian or vegan diet can meet all the nutritional needs of an individual, it may be necessary to pay particular attention to some nutrients to ensure an adequate intake, particularly if the person is on a vegan diet. This article will review the evidence for the health benefits of a vegetarian diet and also discuss strategies for meeting the nutritional needs of those following a vegetarian or plant-based eating pattern.", "title": "" }, { "docid": "84d8058c67870f8606b485e7ad430c58", "text": "Stanford typed dependencies are a widely desired representation of natural language sentences, but parsing is one of the major computational bottlenecks in text analysis systems. In light of the evolving definition of the Stanford dependencies and developments in statistical dependency parsing algorithms, this paper revisits the question of Cer et al. (2010): what is the tradeoff between accuracy and speed in obtaining Stanford dependencies in particular? We also explore the effects of input representations on this tradeoff: part-of-speech tags, the novel use of an alternative dependency representation as input, and distributional representaions of words. We find that direct dependency parsing is a more viable solution than it was found to be in the past. An accompanying software release can be found at: http://www.ark.cs.cmu.edu/TBSD", "title": "" }, { "docid": "a4a5c6cbec237c2cd6fb3abcf6b4a184", "text": "Developing automatic diagnostic tools for the early detection of skin cancer lesions in dermoscopic images can help to reduce melanoma-induced mortality. Image segmentation is a key step in the automated skin lesion diagnosis pipeline. In this paper, a fast and fully-automatic algorithm for skin lesion segmentation in dermoscopic images is presented. Delaunay Triangulation is used to extract a binary mask of the lesion region, without the need of any training stage. A quantitative experimental evaluation has been conducted on a publicly available database, by taking into account six well-known state-of-the-art segmentation methods for comparison. The results of the experimental analysis demonstrate that the proposed approach is highly accurate when dealing with benign lesions, while the segmentation accuracy significantly decreases when melanoma images are processed. This behavior led us to consider geometrical and color features extracted from the binary masks generated by our algorithm for classification, achieving promising results for melanoma detection.", "title": "" }, { "docid": "ced3a56c5469528e8fa5784dc0fff5d4", "text": "This paper explores the relation between a set of behavioural information security governance factors and employees’ information security awareness. To enable statistical analysis between proposed relations, data was collected from two different samples in 24 organisations: 24 information security executives and 240 employees. 
The results reveal that having a formal unit with explicit responsibility for information security, utilizing coordinating committees, and sharing security knowledge through an intranet site significantly correlates with dimensions of employees’ information security awareness. However, regular identification of vulnerabilities in information systems and related processes is significantly negatively correlated with employees’ information security awareness, in particular managing passwords. The effect of behavioural information security governance on employee information security awareness is an understudied topic. Therefore, this study is explorative in nature and the results are preliminary. Nevertheless, the paper provides implications for both research and practice.", "title": "" }, { "docid": "6e923a586a457521e9de9d4a9cab77ad", "text": "We present a new approach to the matting problem which splits the task into two steps: interactive trimap extraction followed by trimap-based alpha matting. By doing so we gain considerably in terms of speed and quality and are able to deal with high resolution images. This paper has three contributions: (i) a new trimap segmentation method using parametric max-flow; (ii) an alpha matting technique for high resolution images with a new gradient preserving prior on alpha; (iii) a database of 27 ground truth alpha mattes of still objects, which is considerably larger than previous databases and also of higher quality. The database is used to train our system and to validate that both our trimap extraction and our matting method improve on state-of-the-art techniques.", "title": "" }, { "docid": "0ad68f20acf338f4051a93ba5e273187", "text": "FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor that can enable a thin system. We employ a separable mask to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity. We demonstrate the potential of the FlatCam design using two prototypes: one at visible wavelengths and one at infrared wavelengths.", "title": "" }, { "docid": "105f34c3fa2d4edbe83d184b7cf039aa", "text": "Software development methodologies are constantly evolving due to changing technologies and new demands from users. Today's dynamic business environment has given rise to emergent organizations that continuously adapt their structures, strategies, and policies to suit the new environment [12]. Such organizations need information systems that constantly evolve to meet their changing requirements---but the traditional, plan-driven software development methodologies lack the flexibility to dynamically adjust the development process.", "title": "" }, { "docid": "b7eb2c65c459c9d5776c1e2cba84706c", "text": "Observers, searching for targets among distractor items, guide attention with a mix of top-down information--based on observers' knowledge--and bottom-up information--stimulus-based and largely independent of that knowledge. 
There are 2 types of top-down guidance: explicit information (e.g., verbal description) and implicit priming by preceding targets (top-down because it implies knowledge of previous searches). Experiments 1 and 2 separate bottom-up and top-down contributions to singleton search. Experiment 3 shows that priming effects are based more strongly on target than on distractor identity. Experiments 4 and 5 show that more difficult search for one type of target (color) can impair search for other types (size, orientation). Experiment 6 shows that priming guides attention and does not just modulate response.", "title": "" }, { "docid": "220acd23ebb9c69cfb9ee00b063468c6", "text": "This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of any of approximation algorithms, while also being slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.", "title": "" }, { "docid": "7b25d1c4d20379a8a0fabc7398ea2c28", "text": "In this paper we introduce an efficient and stable implicit SPH method for the physically-based simulation of incompressible fluids. In the area of computer graphics the most efficient SPH approaches focus solely on the correction of the density error to prevent volume compression. However, the continuity equation for incompressible flow also demands a divergence-free velocity field which is neglected by most methods. Although a few methods consider velocity divergence, they are either slow or have a perceivable density fluctuation.\n Our novel method uses an efficient combination of two pressure solvers which enforce low volume compression (below 0.01%) and a divergence-free velocity field. This can be seen as enforcing incompressibility both on position level and velocity level. The first part is essential for realistic physical behavior while the divergence-free state increases the stability significantly and reduces the number of solver iterations. Moreover, it allows larger time steps which yields a considerable performance gain since particle neighborhoods have to be updated less frequently. Therefore, our divergence-free SPH (DFSPH) approach is significantly faster and more stable than current state-of-the-art SPH methods for incompressible fluids. We demonstrate this in simulations with millions of fast moving particles.", "title": "" }, { "docid": "b8700283c7fb65ba2e814adffdbd84f8", "text": "Human immunoglobulin preparations for intravenous or subcutaneous administration are the cornerstone of treatment in patients with primary immunodeficiency diseases affecting the humoral immune system. Intravenous preparations have a number of important uses in the treatment of other diseases in humans as well, some for which acceptable treatment alternatives do not exist. We provide an update of the evidence-based guideline on immunoglobulin therapy, last published in 2006. 
Given the potential risks and inherent scarcity of human immunoglobulin, careful consideration of its indications and administration is warranted.", "title": "" }, { "docid": "c7e3fc9562a02818bba80d250241511d", "text": "Convolutional networks trained on large supervised dataset produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weaklylabeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and captions, and show that these networks produce features that perform well in a range of vision problems. We also show that the networks appropriately capture word similarity, and learn correspondences between different languages.", "title": "" }, { "docid": "5bf9aeb37fc1a82420b2ff4136f547d0", "text": "Visual Question Answering (VQA) is a popular research problem that involves inferring answers to natural language questions about a given visual scene. Recent neural network approaches to VQA use attention to select relevant image features based on the question. In this paper, we propose a novel Dual Attention Network (DAN) that not only attends to image features, but also to question features. The selected linguistic and visual features are combined by a recurrent model to infer the final answer. We experiment with different question representations and do several ablation studies to evaluate the model on the challenging VQA dataset.", "title": "" }, { "docid": "fc3c4f6c413719bbcf7d13add8c3d214", "text": "Disentangling the effects of selection and influence is one of social science's greatest unsolved puzzles: Do people befriend others who are similar to them, or do they become more similar to their friends over time? Recent advances in stochastic actor-based modeling, combined with self-reported data on a popular online social network site, allow us to address this question with a greater degree of precision than has heretofore been possible. Using data on the Facebook activity of a cohort of college students over 4 years, we find that students who share certain tastes in music and in movies, but not in books, are significantly likely to befriend one another. Meanwhile, we find little evidence for the diffusion of tastes among Facebook friends-except for tastes in classical/jazz music. These findings shed light on the mechanisms responsible for observed network homogeneity; provide a statistically rigorous assessment of the coevolution of cultural tastes and social relationships; and suggest important qualifications to our understanding of both homophily and contagion as generic social processes.", "title": "" }, { "docid": "f489e2c0d6d733c9e2dbbdb1d7355091", "text": "In many signal processing applications, the signals provided by the sensors are mixtures of many sources. The problem of separation of sources is to extract the original signals from these mixtures. A new algorithm, based on ideas of backpropagation learning, is proposed for source separation. No a priori information on the sources themselves is required, and the algorithm can deal even with non-linear mixtures. After a short overview of previous works in that eld, we will describe the proposed algorithm. 
Then, some experimental results will be discussed.", "title": "" }, { "docid": "e5261ee5ea2df8bae7cc82cb4841dea0", "text": "Automatic generation of video summarization is one of the key techniques in video management and browsing. In this paper, we present a generic framework of video summarization based on the modeling of viewer's attention. Without fully semantic understanding of video content, this framework takes advantage of understanding of video content, this framework takes advantage of computational attention models and eliminates the needs of complex heuristic rules in video summarization. A set of methods of audio-visual attention model features are proposed and presented. The experimental evaluations indicate that the computational attention based approach is an effective alternative to video semantic analysis for video summarization.", "title": "" }, { "docid": "22c72f94040cd65dde8e00a7221d2432", "text": "Research on “How to create a fair, convenient attendance management system”, is being pursued by academics and government departments fervently. This study is based on the biometric recognition technology. The hand geometry machine captures the personal hand geometry data as the biometric code and applies this data in the attendance management system as the attendance record. The attendance records that use this technology is difficult to replicate by others. It can improve the reliability of the attendance records and avoid fraudulent issues that happen when you use a register. This research uses the social survey method-questionnaire to evaluate the theory and practice of introducing biometric recognition technology-hand geometry capturing into the attendance management system.", "title": "" }, { "docid": "ca655b741316e8c65b6b7590833396e1", "text": "• A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.", "title": "" }, { "docid": "69b3275cb4cae53b3a8888e4fe7f85f7", "text": "In this paper we propose a way to improve the K-SVD image denoising algorithm. The suggested method aims to reduce the gap that exists between the local processing (sparse-coding of overlapping patches) and the global image recovery (obtained by averaging the overlapping patches). Inspired by game-theory ideas, we define a disagreement-patch as the difference between the intermediate locally denoised patch and its corresponding part in the final outcome. Our algorithm iterates the denoising process several times, applied on modified patches. Those are obtained by subtracting the disagreement-patches from their corresponding input noisy ones, thus pushing the overlapping patches towards an agreement. Experimental results demonstrate the improvement this algorithm leads to.", "title": "" } ]
scidocsrr
aab4d0acc19c2e8c86480233f7bc7d40
Unmanned aerial vehicle smart device ground control station cyber security threat model
[ { "docid": "3d78d929b1e11b918119abba4ef8348d", "text": "Recent developments in mobile technologies have produced a new kind of device, a programmable mobile phone, the smartphone. Generally, smartphone users can program any application which is customized for needs. Furthermore, they can share these applications in online market. Therefore, smartphone and its application are now most popular keywords in mobile technology. However, to provide these customized services, smartphone needs more private information and this can cause security vulnerabilities. Therefore, in this work, we analyze security of smartphone based on its environments and describe countermeasures.", "title": "" } ]
[ { "docid": "f6472cbb2beb8f36a3473759951a1cfa", "text": "Hair highlighting procedures are very common throughout the world. While rarely reported, potential adverse events to such procedures include allergic and irritant contact dermatitis, thermal burns, and chemical burns. Herein, we report two cases of female adolescents who underwent a hair highlighting procedure at local salons and sustained a chemical burn to the scalp. The burn etiology, clinical and histologic features, the expected sequelae, and a review of the literature are described.", "title": "" }, { "docid": "e28feb56ebc33a54d13452a2ea3a49f7", "text": "Ping Yan, Hsinchun Chen, and Daniel Zeng Department of Management Information Systems University of Arizona, Tucson, Arizona pyan@email.arizona.edu; {hchen, zeng}@eller.arizona.edu", "title": "" }, { "docid": "83cfa05fc29b4eb4eb7b954ba53498f5", "text": "Smartphones, the devices we carry everywhere with us, are being heavily tracked and have undoubtedly become a major threat to our privacy. As “Tracking the trackers” has become a necessity, various static and dynamic analysis tools have been developed in the past. However, today, we still lack suitable tools to detect, measure and compare the ongoing tracking across mobile OSs. To this end, we propose MobileAppScrutinator, based on a simple yet efficient dynamic analysis approach, that works on both Android and iOS (the two most popular OSs today). To demonstrate the current trend in tracking, we select 140 most representative Apps available on both Android and iOS AppStores and test them with MobileAppScrutinator. In fact, choosing the same set of apps on both Android and iOS also enables us to compare the ongoing tracking on these two OSs. Finally, we also discuss the effectiveness of privacy safeguards available on Android and iOS. We show that neither Android nor iOS privacy safeguards in their present state are completely satisfying.", "title": "" }, { "docid": "d43dc521d3f0f17ccd4840d6081dcbfe", "text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. 
We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.", "title": "" }, { "docid": "613f0bf05fb9467facd2e58b70d2b09e", "text": "The gold standard for improving sensory, motor and or cognitive abilities is long-term training and practicing. Recent work, however, suggests that intensive training may not be necessary. Improved performance can be effectively acquired by a complementary approach in which the learning occurs in response to mere exposure to repetitive sensory stimulation. Such training-independent sensory learning (TISL), which has been intensively studied in the somatosensory system, induces in humans lasting changes in perception and neural processing, without any explicit task training. It has been suggested that the effectiveness of this form of learning stems from the fact that the stimulation protocols used are optimized to alter synaptic transmission and efficacy. TISL provides novel ways to investigate in humans the relation between learning processes and underlying cellular and molecular mechanisms, and to explore alternative strategies for intervention and therapy.", "title": "" }, { "docid": "f93b332ba576d1095ba33e976db5cab0", "text": "Recent publications have argued that the welfare state is an important determinant of population health, and that social democracy in office and higher levels of health expenditure promote health progress. In the period 1950-2000, Greece, Portugal, and Spain were the poorest market economies in Europe, with a fragmented system of welfare provision, and many years of military or authoritarian right-wing regimes. In contrast, the five Nordic countries were the richest market economies in Europe, governed mostly by center or center-left coalitions often including the social democratic parties, and having a generous and universal welfare state. In spite of the socioeconomic and political differences, and a large gap between the five Nordic and the three southern nations in levels of health in 1950, population health indicators converged among these eight countries. Mean decadal gains in longevity of Portugal and Spain between 1950 and 2000 were almost three times greater than gains in Denmark, and about twice as great as those in Iceland, Norway and Sweden during the same period. All this raises serious doubts regarding the hypothesis that the political regime, the political party in office, the level of health care spending, and the type of welfare state exert major influences on population health. Either these factors are not major determinants of mortality decline, or their impact on population health in Nordic countries was more than offset by other health-promoting factors present in Southern Europe.", "title": "" }, { "docid": "e7ce1d8ecab61d0a414223426e114a46", "text": "Sentence ordering is a general and critical task for natural language generation applications. Previous works have focused on improving its performance in an external, downstream task, such as multi-document summarization. Given its importance, we propose to study it as an isolated task. We collect a large corpus of academic texts, and derive a data driven approach to learn pairwise ordering of sentences, and validate the efficacy with extensive experiments. 
Source codes1 and dataset2 of this paper will be made publicly available.", "title": "" }, { "docid": "6140255e69aa292bf8c97c9ef200def7", "text": "Food production requires application of fertilizers containing phosphorus, nitrogen and potassium on agricultural fields in order to sustain crop yields. However modern agriculture is dependent on phosphorus derived from phosphate rock, which is a non-renewable resource and current global reserves may be depleted in 50–100 years. While phosphorus demand is projected to increase, the expected global peak in phosphorus production is predicted to occur around 2030. The exact timing of peak phosphorus production might be disputed, however it is widely acknowledged within the fertilizer industry that the quality of remaining phosphate rock is decreasing and production costs are increasing. Yet future access to phosphorus receives little or no international attention. This paper puts forward the case for including long-term phosphorus scarcity on the priority agenda for global food security. Opportunities for recovering phosphorus and reducing demand are also addressed together with institutional challenges. 2009 Published by Elsevier Ltd.", "title": "" }, { "docid": "3b9e33ca0f2e479c58e3290f5c3ee2d5", "text": "BACKGROUND\nCardiac complications due to iron overload are the most common cause of death in patients with thalassemia major. The aim of this study was to compare iron chelation effects of deferoxamine, deferasirox, and combination of deferoxamine and deferiprone on cardiac and liver iron load measured by T2* MRI.\n\n\nMETHODS\nIn this study, 108 patients with thalassemia major aged over 10 years who had iron overload in cardiac T2* MRI were studied in terms of iron chelators efficacy on the reduction of myocardial siderosis. The first group received deferoxamine, the second group only deferasirox, and the third group, a combination of deferoxamine and deferiprone. Myocardial iron was measured at baseline and 12 months later through T2* MRI technique.\n\n\nRESULTS\nThe three groups were similar in terms of age, gender, ferritin level, and mean myocardial T2* at baseline. In the deferoxamine group, myocardial T2* was increased from 12.0±4.1 ms at baseline to 13.5±8.4 ms at 12 months (p=0.10). Significant improvement was observed in myocardial T2* of the deferasirox group (p<0.001). In the combined treatment group, myocardial T2* was significantly increased (p<0.001). These differences among the three groups were not significant at the 12 months. A significant improvement was observed in liver T2* at 12 months compared to baseline in the deferasirox and the combination group.\n\n\nCONCLUSION\nIn comparison to deferoxamine monotherapy, combination therapy and deferasirox monotherapy have a significant impact on reducing iron overload and improvement of myocardial and liver T2* MRI.", "title": "" }, { "docid": "18a317b8470b4006ccea0e436f54cfcd", "text": "Device-to-device communications enable two proximity users to transmit signal directly without going through the base station. It can increase network spectral efficiency and energy efficiency, reduce transmission delay, offload traffic for the BS, and alleviate congestion in the cellular core networks. However, many technical challenges need to be addressed for D2D communications to harvest the potential benefits, including device discovery and D2D session setup, D2D resource allocation to guarantee QoS, D2D MIMO transmission, as well as D2D-aided BS deployment in heterogeneous networks. 
In this article, the basic concepts of D2D communications are first introduced, and then existing fundamental works on D2D communications are discussed. In addition, some potential research topics and challenges are also identified.", "title": "" }, { "docid": "08bef09a01414bafcbc778fea85a7c0a", "text": "The use of energy-minimizing curves, known as “snakes,” to extract features of interest in images has been introduced by Kass, Witkin & Terzopoulos (Int. J. Comput. Vision 1, 1987, 321-331). We present a model of deformation which solves some of the problems encountered with the original method. The external forces that push the curve to the edges are modified to give more stable results. The original snake, when it is not close enough to contours, is not attracted by them and straightens to a line. Our model makes the curve behave like a balloon which is inflated by an additional force. The initial curve need no longer be close to the solution to converge. The curve passes over weak edges and is stopped only if the edge is strong. We give examples of extracting a ventricle in medical images. We have also made a first step toward 3D object reconstruction, by tracking the extracted contour on a series of successive cross sections.", "title": "" }, { "docid": "71734f09f053ede7b565047a55cca132", "text": "Researchers have paid considerable attention to natural user interfaces, especially sensing gestures and touches upon an un-instrumented surface from an overhead camera. We present a system that combines depth sensing from a Microsoft Kinect and temperature sensing from a thermal imaging camera to infer a variety of gestures and touches for controlling a natural user interface. The system, coined Dante, is capable of (1) inferring multiple touch points from multiple users (92.6% accuracy), (2) detecting and classifying each user using their depth and thermal footprint (87.7% accuracy), and (3) detecting touches on objects placed upon the table top (91.7% accuracy). The system can also classify the pressure of chording motions. The system is real time, with an average processing delay of 40 ms.", "title": "" }, { "docid": "6dd1df4e520f5858d48db9860efb63a7", "text": "This paper proposes single-phase direct pulsewidth modulation (PWM) buck-, boost-, and buck-boost-type ac-ac converters. The proposed converters are implemented with a series-connected freewheeling diode and MOSFET pair, which allows to minimize the switching and conduction losses of the semiconductor devices and resolves the reverse-recovery problem of body diode of MOSFET. The proposed converters are highly reliable because they can solve the shoot-through and dead-time problems of traditional ac-ac converters without voltage/current sensing module, lossy resistor-capacitor (RC) snubbers, or bulky coupled inductors. In addition, they can achieve high obtainable voltage gain and also produce output voltage waveforms of good quality because they do not use lossy snubbers. Unlike the recently developed switching cell (SC) ac-ac converters, the proposed ac-ac converters have no circulating current and do not require bulky coupled inductors; therefore, the total losses, current stresses, and magnetic volume are reduced and efficiency is improved.
Detailed analysis and experimental results are provided to validate the novelty and merit of the proposed converters.", "title": "" }, { "docid": "67fd6424fc1aebe250b0fbf638a196b7", "text": "The World Health Organization's Ottawa Charter for Health Promotion has been influential in guiding the development of 'settings' based health promotion. Over the past decade, settings such as schools have flourished and there has been a considerable amount of academic literature produced, including theoretical papers, descriptive studies and evaluations. However, despite its central importance, the health-promoting general practice has received little attention. This paper discusses: the significance of this setting for health promotion; how a health promoting general practice can be created; effective health promotion approaches; the nursing contribution; and some challenges that need to be resolved. In order to become a health promoting general practice, the staff must undertake a commitment to fulfil the following conditions: create a healthy working environment; integrate health promotion into practice activities; and establish alliances with other relevant institutions and groups within the community. The health promoting general practice is the gold standard for health promotion. Settings that have developed have had the support of local, national and European networks. Similar assistance and advocacy will be needed in general practice. This paper recommends that a series of rigorously evaluated, high-quality pilot sites need to be established to identify and address potential difficulties, and to ensure that this innovative approach yields tangible health benefits for local communities. It also suggests that government support is critical to the future development of health promoting general practices. This will be needed both directly and in relation to the capacity and resourcing of public health in general.", "title": "" }, { "docid": "e81d3f48d7213720f489f52852cfbfa3", "text": "HE BRITISH ROCK GROUP Radiohead has carved out a unique place in the post-millennial rock milieu by tempering their highly experimental idiolect with structures more commonly heard in Top Forty rock styles. 1 In what I describe as a Goldilocks principle, much of their music after OK Computer (1997) inhabits a space between banal convention and sheer experimentation—a dichotomy which I have elsewhere dubbed the 'Spears–Stockhausen Continuum.' 2 In the timbral domain, the band often introduces sounds rather foreign to rock music such as the ondes Martenot and highly processed lead vocals within textures otherwise dominated by guitar, bass, and drums (e.g., 'The National Anthem,' 2000), and song forms that begin with paradigmatic verse–chorus structures often end with new material instead of a recapitulated chorus (e.g., 'All I Need,' 2007). In this T", "title": "" }, { "docid": "ebe5630a0fb36452e2c9e94a53ef073a", "text": "Imperforate hymen is uncommon, occurring in 0.1 % of newborn females. Non-syndromic familial occurrence of imperforate hymen is extremely rare and has been reported only three times in the English literature. The authors describe two cases in a family across two generations, one presenting with chronic cyclical abdominal pain and the other acutely. There were no other significant reproductive or systemic abnormalities in either case. Imperforate hymen occurs mostly in a sporadic manner, although rare familial cases do occur. 
Both the recessive and the dominant modes of transmission have been suggested. However, no genetic markers or mutations have been proven as etiological factors. Evaluating all female relatives of the affected patients at an early age can lead to early diagnosis and treatment in an asymptomatic case.", "title": "" }, { "docid": "abbb210122d470215c5a1d0420d9db06", "text": "Ensemble clustering, also known as consensus clustering, is emerging as a promising solution for multi-source and/or heterogeneous data clustering. The co-association matrix based method, which redefines the ensemble clustering problem as a classical graph partition problem, is a landmark method in this area. Nevertheless, the relatively high time and space complexity preclude it from real-life large-scale data clustering. We therefore propose SEC, an efficient Spectral Ensemble Clustering method based on co-association matrix. We show that SEC has theoretical equivalence to weighted K-means clustering and results in vastly reduced algorithmic complexity. We then derive the latent consensus function of SEC, which to our best knowledge is among the first to bridge co-association matrix based method to the methods with explicit object functions. The robustness and generalizability of SEC are then investigated to prove the superiority of SEC in theory. We finally extend SEC to meet the challenge rising from incomplete basic partitions, based on which a scheme for big data clustering can be formed. Experimental results on various real-world data sets demonstrate that SEC is an effective and efficient competitor to some state-of-the-art ensemble clustering methods and is also suitable for big data clustering.", "title": "" }, { "docid": "3d8be6d4478154bc711d9cf241e7edb5", "text": "The use of multimedia technology to teach language in its authentic cultural context represents a double challenge for language learners and teachers. On the one hand, the computer gives learners access to authentic video footage and other cultural materials that can help them get a sense of the sociocultural context in which the language is used. On the other hand, CD-ROM multimedia textualizes this context in ways that need to be \"read\" and interpreted. Learners are thus faced with the double task of (a) observing and choosing culturally relevant features of the context and (b) putting linguistic features in relation to other features to arrive at some understanding of language in use. This paper analyzes the interaction of text and context in a multimedia Quechua language program, and makes suggestions for teaching foreign languages through multimedia technology.", "title": "" }, { "docid": "95296a02831a1f8fb50288503bea75ad", "text": "The Residual Network (ResNet), proposed in He et al. (2015a), utilized shortcut connections to significantly reduce the difficulty of training, which resulted in great performance boosts in terms of both training and generalization error. It was empirically observed in He et al. (2015a) that stacking more layers of residual blocks with shortcut 2 results in smaller training error, while it is not true for shortcut of length 1 or 3. We provide a theoretical explanation for the uniqueness of shortcut 2. We show that with or without nonlinearities, by adding shortcuts that have depth two, the condition number of the Hessian of the loss function at the zero initial point is depth-invariant, which makes training very deep models no more difficult than shallow ones. 
Shortcuts of higher depth result in an extremely flat (high-order) stationary point initially, from which the optimization algorithm is hard to escape. The shortcut 1, however, is essentially equivalent to no shortcuts, which has a condition number exploding to infinity as the number of layers grows. We further argue that as the number of layers tends to infinity, it suffices to only look at the loss function at the zero initial point. Extensive experiments are provided accompanying our theoretical results. We show that initializing the network to small weights with shortcut 2 achieves significantly better results than random Gaussian (Xavier) initialization, orthogonal initialization, and shortcuts of deeper depth, from various perspectives ranging from final loss, learning dynamics and stability, to the behavior of the Hessian along the learning process.", "title": "" }, { "docid": "13beac4518bcbce5c0d68eb63e754474", "text": "Alternating direction methods are a common tool for general mathematical programming and optimization. These methods have become particularly important in the field of variational image processing, which frequently requires the minimization of non-differentiable objectives. This paper considers accelerated (i.e., fast) variants of two common alternating direction methods: the Alternating Direction Method of Multipliers (ADMM) and the Alternating Minimization Algorithm (AMA). The proposed acceleration is of the form first proposed by Nesterov for gradient descent methods. In the case that the objective function is strongly convex, global convergence bounds are provided for both classical and accelerated variants of the methods. Numerical examples are presented to demonstrate the superior performance of the fast methods for a wide variety of problems.", "title": "" } ]
scidocsrr
d7bf8a79235036e6858e9e8354089a9c
From Abstraction to Implementation: Can Computational Thinking Improve Complex Real-World Problem Solving? A Computational Thinking-Based Approach to the SDGs
[ { "docid": "b64a91ca7cdeb3dfbe5678eee8962aa7", "text": "Computational thinking is gaining recognition as an important skill set for students, both in computer science and other disciplines. Although there has been much focus on this field in recent years, it is rarely taught as a formal course within the curriculum, and there is little consensus on what exactly computational thinking entails and how to teach and evaluate it. To address these concerns, we have developed a computational thinking framework to be used as a planning and evaluative tool. Within this framework, we aim to unify the differing opinions about what computational thinking should involve. As a case study, we have applied the framework to Light-Bot, an educational game with a strong focus on programming, and found that the framework provides us with insight into the usefulness of the game to reinforce computer science concepts.", "title": "" } ]
[ { "docid": "c4b1615bbd32f99fa59ca2d7b8c40b10", "text": "Practical face recognition systems are sometimes confronted with low-resolution face images. Traditional two-step methods solve this problem through employing super-resolution (SR). However, these methods usually have limited performance because the target of SR is not absolutely consistent with that of face recognition. Moreover, time-consuming sophisticated SR algorithms are not suitable for real-time applications. To avoid these limitations, we propose a novel approach for LR face recognition without any SR preprocessing. Our method based on coupled mappings (CMs), projects the face images with different resolutions into a unified feature space which favors the task of classification. These CMs are learned through optimizing the objective function to minimize the difference between the correspondences (i.e., low-resolution image and its high-resolution counterpart). Inspired by locality preserving methods for dimensionality reduction, we introduce a penalty weighting matrix into our objective function. Our method significantly improves the recognition performance. Finally, we conduct experiments on publicly available databases to verify the efficacy of our algorithm.", "title": "" }, { "docid": "4798cb0bcd147e6a49135b845d7f2624", "text": "There is an upsurging interest in designing succinct data structures for basic searching problems (see [23] and references therein). The motivation has to be found in the exponential increase of electronic data nowadays available which is even surpassing the significant increase in memory and disk storage capacities of current computers. Space reduction is an attractive issue because it is also intimately related to performance improvements as noted by several authors (e.g. Knuth [15], Bentley [5]). In designing these implicit data structures the goal is to reduce as much as possible the auxiliary information kept together with the input data without introducing a significant slowdown in the final query performance. Yet input data are represented in their entirety thus taking no advantage of possible repetitiveness into them. The importance of those issues is well known to programmers who typically use various tricks to squeeze data as much as possible and still achieve good query performance. Their approaches, though, boil down to heuristics whose effectiveness is witnessed only by experimentation. In this paper, we address the issue of compressing and indexing data by studying it in a theoretical framework. We devise a novel data structure for indexing and searching whose space occupancy is a function of the entropy of the underlying data set. The novelty resides in the careful combination of a compression algorithm, proposed by Burrows and Wheeler [7], with the structural properties of a well known indexing tool, the Suffix Array [17]. We call the data structure opportunistic since its space occupancy is decreased when the input is compressible at no significant slowdown in the query performance. More precisely, its space occupancy is optimal in an information-content sense because a text T [1, u] is stored using O(Hk(T )) + o(1) bits per input symbol, where Hk(T ) is the kth order entropy of T (the bound holds for any fixed k). Given an arbitrary string P [1, p], the opportunistic data structure allows to search for the occ occurrences of P in T requiring O(p+occ log u) time complexity (for any fixed > 0). 
If data are uncompressible we achieve the best space bound currently known [11]; on compressible data our solution improves the succinct suffix array of [11] and the classical suffix tree and suffix array data structures either in space or in query time complexity or both. It is a belief [27] that some space overhead should be paid to use full-text indices (like suffix trees or suffix arrays) with respect to word-based indices (like inverted lists). The results in this paper show that a full-text index may achieve sublinear space overhead on compressible texts. As an application we devise a variant of the well-known Glimpse tool [18] which achieves sublinear space and sublinear query time complexity. Conversely, inverted lists achieve only the second goal [27], and classical Glimpse achieves both goals but under some restrictive conditions [4]. Finally, we investigate the modifiability of our opportunistic data structure by studying how to choreograph its basic ideas with a dynamic setting thus achieving effective searching and updating time bounds.", "title": "" }, { "docid": "67af0ebeebec40efa792a010ce205890", "text": "We present a near-optimal polynomial-time approximation algorithm for the asymmetric traveling salesman problem for graphs of bounded orientable or non-orientable genus. Given any algorithm that achieves an approximation ratio of f(n) on arbitrary n-vertex graphs as a black box, our algorithm achieves an approximation factor of O(f(g)) on graphs with genus g. In particular, the O(log n/loglog n)-approximation algorithm for general graphs by Asadpour et al. [SODA 2010] immediately implies an O(log g/loglog g)-approximation algorithm for genus-g graphs. Moreover, recent results on approximating the genus of graphs imply that our O(log g/loglog g)-approximation algorithm can be applied to bounded-degree graphs even if no genus-g embedding of the graph is given. Our result improves and generalizes the o(√g log g)-approximation algorithm of Oveis Gharan and Saberi [SODA 2011], which applies only to graphs with orientable genus g and requires a genus-g embedding as part of the input, even for bounded-degree graphs. Finally, our techniques yield an O(1)-approximation algorithm for ATSP on graphs of genus g with running time 2^O(g) · n^O(1).", "title": "" }, { "docid": "113b8cfda23cf7e8b3d7b4821d549bf7", "text": "A load-dependent zero-current detector is proposed in this paper for speeding up the transient response when load current changes from heavy to light loads. The fast transient control signal determines how long the reversed inductor current flows in response to sudden load variations. At the beginning of a load variation from heavy to light loads, the sensed voltage is compared with a higher voltage to discharge the overshoot output voltage for achieving fast transient response. Besides, for an adaptive reversed current period, the fast transient mechanism is turned off since the output voltage is rapidly regulated back to the acceptable level.
The settling time is decreased to about 35 μs when load current suddenly changes from 500 mA to 10 mA.", "title": "" }, { "docid": "62af709fd559596f6d3d7a52902d5da5", "text": "This paper presents the results of several large-scale studies of face recognition employing visible light and infra-red (IR) imagery in the context of principal component analysis. We find that in a scenario involving time lapse between gallery and probe, and relatively controlled lighting, (1) PCA-based recognition using visible light images outperforms PCA-based recognition using infra-red images, (2) the combination of PCA-based recognition using visible light and infra-red imagery substantially outperforms either one individually. In a same session scenario (i.e. near-simultaneous acquisition of gallery and probe images) neither modality is significantly better than the other. These experimental results reinforce prior research that employed a smaller data set, presenting a convincing argument that, even across a broad experimental spectrum, the behaviors enumerated above are valid and consistent.", "title": "" }, { "docid": "82ca6a400bf287dc287df9fa751ddac2", "text": "Research on ontology is becoming increasingly widespread in the computer science community, and its importance is being recognized in a multiplicity of research fields and application areas, including knowledge engineering, database design and integration, information retrieval and extraction. We shall use the generic term "information systems", in its broadest sense, to collectively refer to these application perspectives. We argue in this paper that so-called ontologies present their own methodological and architectural peculiarities: on the methodological side, their main peculiarity is the adoption of a highly interdisciplinary approach, while on the architectural side the most interesting aspect is the centrality of the role they can play in an information system, leading to the perspective of ontology-driven information systems.", "title": "" }, { "docid": "715de052c6a603e3c8a572531920ecfa", "text": "Muscle samples were obtained from the gastrocnemius of 17 female and 23 male track athletes, 10 untrained women, and 11 untrained men. Portions of the specimen were analyzed for total phosphorylase, lactic dehydrogenase (LDH), and succinate dehydrogenase (SDH) activities. Sections of the muscle were stained for myosin adenosine triphosphatase, NADH2 tetrazolium reductase, and alpha-glycerophosphate dehydrogenase. Maximal oxygen uptake (VO2max) was measured on a treadmill for 23 of the volunteers (6 female athletes, 11 male athletes, 10 untrained women, and 6 untrained men). These measurements confirm earlier reports which suggest that the athlete's preference for strength, speed, and/or endurance events is in part a matter of genetic endowment. Aside from differences in fiber composition and enzymes among middle-distance runners, the only distinction between the sexes was the larger fiber areas of the male athletes. SDH activity was found to correlate 0.79 with VO2max, while muscle LDH appeared to be a function of muscle fiber composition. While sprint- and endurance-trained athletes are characterized by distinct fiber compositions and enzyme activities, participants in strength events (e.g., shot-put) have relatively low muscle enzyme activities and a variety of fiber compositions.", "title": "" }, { "docid": "903b68096d2559f0e50c38387260b9c8", "text": "Vitamin C in humans must be ingested for survival.
Vitamin C is an electron donor, and this property accounts for all its known functions. As an electron donor, vitamin C is a potent water-soluble antioxidant in humans. Antioxidant effects of vitamin C have been demonstrated in many experiments in vitro. Human diseases such as atherosclerosis and cancer might occur in part from oxidant damage to tissues. Oxidation of lipids, proteins and DNA results in specific oxidation products that can be measured in the laboratory. While these biomarkers of oxidation have been measured in humans, such assays have not yet been validated or standardized, and the relationship of oxidant markers to human disease conditions is not clear. Epidemiological studies show that diets high in fruits and vegetables are associated with lower risk of cardiovascular disease, stroke and cancer, and with increased longevity. Whether these protective effects are directly attributable to vitamin C is not known. Intervention studies with vitamin C have shown no change in markers of oxidation or clinical benefit. Dose concentration studies of vitamin C in healthy people showed a sigmoidal relationship between oral dose and plasma and tissue vitamin C concentrations. Hence, optimal dosing is critical to intervention studies using vitamin C. Ideally, future studies of antioxidant actions of vitamin C should target selected patient groups. These groups should be known to have increased oxidative damage as assessed by a reliable biomarker or should have high morbidity and mortality due to diseases thought to be caused or exacerbated by oxidant damage.", "title": "" }, { "docid": "154c40c2fab63ad15ded9b341ff60469", "text": "ICU mortality risk prediction may help clinicians take effective interventions to improve patient outcome. Existing machine learning approaches often face challenges in integrating a comprehensive panel of physiologic variables and presenting to clinicians interpretable models. We aim to improve both accuracy and interpretability of prediction models by introducing Subgraph Augmented Non-negative Matrix Factorization (SANMF) on ICU physiologic time series. SANMF converts time series into a graph representation and applies frequent subgraph mining to automatically extract temporal trends. We then apply non-negative matrix factorization to group trends in a way that approximates patient pathophysiologic states. Trend groups are then used as features in training a logistic regression model for mortality risk prediction, and are also ranked according to their contribution to mortality risk. We evaluated SANMF against four empirical models on the task of predicting mortality or survival 30 days after discharge from ICU using the observed physiologic measurements between 12 and 24 hours after admission. SANMF outperforms all comparison models, and in particular, demonstrates an improvement in AUC (0.848 vs. 0.827, p<0.002) compared to a state-of-the-art machine learning method that uses manual feature engineering. Feature analysis was performed to illuminate insights and benefits of subgraph groups in mortality risk prediction.", "title": "" }, { "docid": "b456ef31418fbe2a82bac60045a57fc2", "text": "Continuous blood pressure (BP) monitoring in a noninvasive and unobtrusive way can significantly improve the awareness, control and treatment rate of prevalent hypertension. Pulse transit time (PTT) has become increasingly popular in recent years for continuous BP measurement without a cuff. 
However, the accuracy issue of PTT-based method remains to be solved for clinical application. Some previous studies have attempted to estimate BP with only PTT by using linear regression, which is susceptible to arterial regulation and may not reflect the actual relationship between PTT and BP. Furthermore, PTT does not contain all the information of BP variation, thereby resulting in unsatisfactory accuracy. In this paper we establish a cuffless BP estimation model from a physiological perspective by utilizing PTT and photoplethysmogram (PPG) intensity ratio (PIR), an indicator we have recently proposed for evaluation of the change in arterial diameter and the low frequency variation of BP, with the consideration that PIR can track changes in mean BP (MBP) and arterial diameter change. The performance of the proposed BP model was evaluated by comparing the estimated BP with Finapres BP as reference on 10 healthy subjects. The results showed that the mean ± standard deviation (SD) of the estimation error for systolic and diastolic BP were -0.41 ± 5.15 and -0.84 ± 4.05 mmHg, and mean absolute difference (MAD) were 4.18 and 3.43 mmHg, respectively. Furthermore, the proposed modeling method was superior to one contrast PTT-based method, demonstrating the proposed model would be promising for reliable continuous cuffless BP measurement.", "title": "" }, { "docid": "9876e4298f674a617f065f348417982a", "text": "On the basis of medical officers diagnosis, thirty three (N = 33) hypertensives, aged 35-65 years, from Govt. General Hospital, Pondicherry, were examined with four variables viz, systolic and diastolic blood pressure, pulse rate and body weight. The subjects were randomly assigned into three groups. The exp. group-I underwent selected yoga practices, exp. group-II received medical treatment by the physician of the said hospital and the control group did not participate in any of the treatment stimuli. Yoga imparted in the morning and in the evening with 1 hr/session. day-1 for a total period of 11-weeks. Medical treatment comprised drug intake every day for the whole experimental period. The result of pre-post test with ANCOVA revealed that both the treatment stimuli (i.e., yoga and drug) were effective in controlling the variables of hypertension.", "title": "" }, { "docid": "bbb06abacfd8f4eb01fac6b11a4447bf", "text": "In this paper, we present a novel tightly-coupled monocular visual-inertial Simultaneous Localization and Mapping algorithm following an inertial assisted Kalman Filter and reusing the estimated 3D map. By leveraging an inertial assisted Kalman Filter, we achieve an efficient motion tracking bearing fast dynamic movement in the front-end. To enable place recognition and reduce the trajectory estimation drift, we construct a factor graph based non-linear optimization in the back-end. We carefully design a feedback mechanism to balance the front/back ends ensuring the estimation accuracy. We also propose a novel initialization method that accurately estimate the scale factor, the gravity, the velocity, and gyroscope and accelerometer biases in a very robust way. We evaluated the algorithm on a public dataset, when compared to other state-of-the-art monocular Visual-Inertial SLAM approaches, our algorithm achieves better accuracy and robustness in an efficient way. 
By the way, we also evaluate our algorithm in a MonocularInertial setup with a low cost IMU to achieve a robust and lowdrift realtime SLAM system.", "title": "" }, { "docid": "85ccad436c7e7eed128825e3946ae0ef", "text": "Recent research has made great strides in the field of detecting botnets. However, botnets of all kinds continue to plague the Internet, as many ISPs and organizations do not deploy these techniques. We aim to mitigate this state by creating a very low-cost method of detecting infected bot host. Our approach is to leverage the botnet detection work carried out by some organizations to easily locate collaborating bots elsewhere. We created BotMosaic as a countermeasure to IRC-based botnets. BotMosaic relies on captured bot instances controlled by a watermarker, who inserts a particular pattern into their network traffic. This pattern can then be detected at a very low cost by client organizations and the watermark can be tuned to provide acceptable false-positive rates. A novel feature of the watermark is that it is inserted collaboratively into the flows of multiple captured bots at once, in order to ensure the signal is strong enough to be detected. BotMosaic can also be used to detect stepping stones and to help trace back to the botmaster. It is content agnostic and can operate on encrypted traffic. We evaluate BotMosaic using simulations and a testbed deployment.", "title": "" }, { "docid": "6573629e918822c0928e8cf49f20752c", "text": "The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks, and generative moment matching networks. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. The evaluation code is provided at https:// github.com/tonywu95/eval_gen. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.", "title": "" }, { "docid": "ef9437b03a95fc2de438fe32bd2e32b9", "text": "and Creative Modeling Modeling is not simply a process of response mimicry as commonly believed. Modeled judgments and actions may differ in specific content but embody the same rule. For example, a model may deal with moral dilemmas that differ widely in the nature of the activity but apply the same moral standard to them. Modeled activities thus convey rules for generative and innovative behavior. This higher level learning is achieved through abstract modeling. Once observers extract the rules underlying the modeled activities they can generate new behaviors that go beyond what they have seen or heard. Creativeness rarely springs entirely from individual inventiveness. A lot of modeling goes on in creativity. By refining preexisting innovations, synthesizing them into new ways and adding novel elements to them something new is created. 
When exposed to models of differing styles of thinking and behaving, observers vary in what they adopt from the different sources and thereby create new blends of personal characteristics that differ from the individual models (Bandura, Ross & Ross, 1963). Modeling influences that exemplify new perspectives and innovative styles of thinking also foster creativity by weakening conventional mind sets (Belcher, 1975; Harris & Evans, 1973).", "title": "" }, { "docid": "b2aec3f88af47e47b4ca60493895cb8e", "text": "In this paper, a simple but efficient approach for blind image splicing detection is proposed. Image splicing is a common and fundamental operation used for image forgery. The detection of image splicing is a preliminary but desirable study for image forensics. Passive detection approaches of image splicing are usually regarded as pattern recognition problems based on features which are sensitive to splicing. In the proposed approach, we analyze the discontinuity of image pixel correlation and coherency caused by splicing in terms of image run-length representation and sharp image characteristics. The statistical features extracted from image run-length representation and image edge statistics are used for splicing detection. The support vector machine (SVM) is used as the classifier. Our experimental results demonstrate that the two proposed features outperform existing ones both in detection accuracy and computational complexity.", "title": "" }, { "docid": "525ddfaae4403392e8817986f2680a68", "text": "Documentation errors increase healthcare costs and cause unnecessary patient deaths. As the standard language for diagnoses and billing, ICD codes serve as the foundation for medical documentation worldwide. Despite the prevalence of electronic medical records, hospitals still witness high levels of ICD miscoding. In this paper, we propose to automatically document ICD codes with far-field speech recognition. Far-field speech occurs when the microphone is located several meters from the source, as is common with smart homes and security systems. Our method combines acoustic signal processing with recurrent neural networks to recognize and document ICD codes in real time. To evaluate our model, we collected a far-field speech dataset of ICD-10 codes and found our model to achieve 87% accuracy with a BLEU score of 85%. By sampling from an unsupervised medical language model, our method is able to outperform existing methods. Overall, this work shows the potential of automatic speech recognition to provide efficient, accurate, and cost-effective healthcare documentation.", "title": "" }, { "docid": "9c008dc2f3da4453317ce92666184da0", "text": "In embedded system design, there is an increasing demand for modeling techniques that can provide both accurate measurements of delay and fast simulation speed. Modeling latency effects of a cache can greatly increase accuracy of the simulation and assist developers to optimize their software. Current solutions have not succeeded in balancing three important factors: speed, accuracy and usability. In this research, we created a cache simulation module inside a well-known instruction set simulator QEMU. Our implementation can simulate various cases of cache configuration and obtain every memory access. In full system simulation, speed is kept at around 73 MIPS on a personal host computer which is close to native execution of ARM Cortex-M3(125 MIPS at 100 MHz). 
Compared to the widely used cache simulation tool, Valgrind, our simulator is three times faster.", "title": "" }, { "docid": "e3051e92e84c69f999c09fe751c936f0", "text": "Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be “compressed” to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on the compressed size. Combined with off-the-shelf compression algorithms, the bound leads to state of the art generalization guarantees; in particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. As additional evidence connecting compression and generalization, we show that compressibility of models that tend to overfit is limited: We establish an absolute limit on expected compressibility as a function of expected generalization error, where the expectations are over the random choice of training examples. The bounds are complemented by empirical results that show an increase in overfitting implies an increase in the number of bits required to describe a trained network.", "title": "" }, { "docid": "19a538b6a49be54b153b0a41b6226d1f", "text": "This paper presents a robot aimed to assist the shoulder movements of stroke patients during their rehabilitation process. This robot has the general form of an exoskeleton, but is characterized by an action principle on the patient no longer requiring a tedious and accurate alignment of the robot and patient's joints. It is constituted of a poly-articulated structure whose actuation is deported and transmission is ensured by Bowden cables. It manages two of the three rotational degrees of freedom (DOFs) of the shoulder. Quite light and compact, its proximal end can be rigidly fixed to the patient's back on a rucksack structure. As for its distal end, it is connected to the arm through passive joints and a splint guaranteeing the robot action principle, i.e. exert a force perpendicular to the patient's arm, whatever its configuration. This paper also presents a first prototype of this robot and some experimental results such as the arm angular excursions reached with the robot in the three joint planes.", "title": "" } ]
scidocsrr
50e30807cc5bac0a89ecac10859ef6c9
Metamorphic Testing and Testing with Special Values
[ { "docid": "421cb7fb80371c835a5d314455fb077c", "text": "This paper explains, in an introductory fashion, the method of specifying the correct behavior of a program by the use of input/output assertions and describes one method for showing that the program is correct with respect to those assertions. An initial assertion characterizes conditions expected to be true upon entry to the program and a final assertion characterizes conditions expected to be true upon exit from the program. When a program contains no branches, a technique known as symbolic execution can be used to show that the truth of the initial assertion upon entry guarantees the truth of the final assertion upon exit. More generally, for a program with branches one can define a symbolic execution tree. If there is an upper bound on the number of times each loop in such a program may be executed, a proof of correctness can be given by a simple traversal of the (finite) symbolic execution tree. However, for most programs, no fixed bound on the number of times each loop is executed exists and the corresponding symbolic execution trees are infinite. In order to prove the correctness of such programs, a more general assertion structure must be provided. The symbolic execution tree of such programs must be traversed inductively rather than explicitly. This leads naturally to the use of additional assertions which are called \"inductive assertions.\"", "title": "" } ]
[ { "docid": "f79e5a2b19bb51e8dc0017342a153fee", "text": "Decentralized ledger-based cryptocurrencies like Bitcoin present a way to construct payment systems without trusted banks. However, the anonymity of Bitcoin is fragile. Many altcoins and protocols are designed to improve Bitcoin on this issue, among which Zerocash is the first fullfledged anonymous ledger-based currency, using zero-knowledge proof, specifically zk-SNARK, to protect privacy. However, Zerocash suffers two problems: poor scalability and low efficiency. In this paper, we address the above issues by constructing a micropayment system in Zerocash called Z-Channel. First, we improve Zerocash to support multisignature and time lock functionalities, and prove that the reconstructed scheme is secure. Then we construct Z-Channel based on the improved Zerocash scheme. Our experiments demonstrate that Z-Channel significantly improves the scalability and reduces the confirmation time for Zerocash payments.", "title": "" }, { "docid": "28ab07763d682ae367b5c9ebd9c9ef13", "text": "Nowadays, the teaching-learning processes are constantly changing, one of the latest modifications promises to strengthen the development of digital skills and thinking in the participants, from an early age. In this sense, the present article shows the advances of a study oriented to the formation of programming abilities, computational thinking and collaborative learning in an initial education context. As part of the study it was initially proposed to conduct a training day for teachers who will participate in the experimental phase of the research, considering this human resource as a link of great importance to achieve maximum use of students in the development of curricular themes of the level, using ICT resources and programmable educational robots. The criterion and the positive acceptance expressed by the teaching group after the evaluation applied at the end of the session, constitute a good starting point for the development of the following activities that make up the research in progress.", "title": "" }, { "docid": "4e847c4acec420ef833a08a17964cb28", "text": "Machine learning models are vulnerable to adversarial examples, inputs maliciously perturbed to mislead the model. These inputs transfer between models, thus enabling black-box attacks against deployed models. Adversarial training increases robustness to attacks by injecting adversarial examples into training data. Surprisingly, we find that although adversarially trained models exhibit strong robustness to some white-box attacks (i.e., with knowledge of the model parameters), they remain highly vulnerable to transferred adversarial examples crafted on other models. We show that the reason for this vulnerability is the model’s decision surface exhibiting sharp curvature in the vicinity of the data points, thus hindering attacks based on first-order approximations of the model’s loss, but permitting black-box attacks that use adversarial examples transferred from another model. We harness this observation in two ways: First, we propose a simple yet powerful novel attack that first applies a small random perturbation to an input, before finding the optimal perturbation under a first-order approximation. Our attack outperforms prior “single-step” attacks on models trained with or without adversarial training. 
Second, we propose Ensemble Adversarial Training, an extension of adversarial training that additionally augments training data with perturbed inputs transferred from a number of fixed pre-trained models. On MNIST and ImageNet, ensemble adversarial training vastly improves robustness to black-box attacks.", "title": "" }, { "docid": "b429b37623a690cd4b224a334985f7dd", "text": "Data centers play a key role in the expansion of cloud computing. However, the efficiency of data center networks is limited by oversubscription. The typical unbalanced traffic distributions of a DCN further aggravate the problem. Wireless networking, as a complementary technology to Ethernet, has the flexibility and capability to provide feasible approaches to handle the problem. In this article, we analyze the challenges of DCNs and articulate the motivations of employing wireless in DCNs. We also propose a hybrid Ethernet/wireless DCN architecture and a mechanism to dynamically schedule wireless transmissions based on traffic demands. Our simulation study demonstrates the effectiveness of the proposed wireless DCN.", "title": "" }, { "docid": "17db3273504bba730c9e43c8ea585250", "text": "In this paper, License plate localization and recognition (LPLR) is presented. It uses image processing and character recognition technology in order to identify the license number plates of the vehicles automatically. This system is considerable interest because of its good application in traffic monitoring systems, surveillance devices and all kind of intelligent transport system. The objective of this work is to design algorithm for License Plate Localization and Recognition (LPLR) of Tanzanian License Plates. The plate numbers used are standard ones with black and yellow or black and white colors. Also, the letters and numbers are placed in the same row (identical vertical levels), resulting in frequent changes in the horizontal intensity. Due to that, the horizontal changes of the intensity have been easily detected, since the rows that contain the number plates are expected to exhibit many sharp variations. Hence, the edge finding method is exploited to find the location of the plate. To increase readability of the plate number, part of the image was enhanced, noise removal and smoothing median filter is used due to easy development. The algorithm described in this paper is implemented using MATLAB 7.11.0(R2010b).", "title": "" }, { "docid": "080f29a336c0188eeec82d27aa80092c", "text": "Do physically attractive individuals truly possess a multitude of better characteristics? The current study aimed to answer the age old question, “Do looks matter?” within the context of online dating and framed itself using cursory research performed by Brand and colleagues (2012). Good Genes Theory, Halo Effect, Physical Attractiveness Stereotype, and Social Information Procession theory were also used to explore what function appearance truly plays in online dating and how it influences a user’s written text. 83 men were surveyed and asked to rate 84 women’s online dating profiles (photos and texts) independently of one another to determine if those who were perceived as physically attractive also wrote more attractive texts as well. Results indicated that physical attractiveness was correlated with text attractiveness but not with text confidence. Findings also indicated the more attractive a woman’s photo, the less discrepancy there was between her photo attractiveness and text attractiveness scores. 
Finally, photo attractiveness did not differ significantly for men’s ratings of women in this study and women’s ratings of men in the Brand et al. (2012) study.", "title": "" }, { "docid": "ce0cfd1dd69e235f942b2e7583b8323b", "text": "Increasing use of the World Wide Web as a B2C commercial tool raises interest in understanding the key issues in building relationships with customers on the Internet. Trust is believed to be the key to these relationships. Given the differences between a virtual and a conventional marketplace, antecedents and consequences of trust merit re-examination. This research identifies a number of key factors related to trust in the B2C context and proposes a framework based on a series of underpinning relationships among these factors. The findings in this research suggest that people are more likely to purchase from the web if they perceive a higher degree of trust in e-commerce and have more experience in using the web. Customer’s trust levels are likely to be influenced by the level of perceived market orientation, site quality, technical trustworthiness, and user’s web experience. People with a higher level of perceived site quality seem to have a higher level of perceived market orientation and trustworthiness towards e-commerce. Furthermore, people with a higher level of trust in e-commerce are more likely to participate in e-commerce. Positive ‘word of mouth’, money back warranty and partnerships with well-known business partners, rank as the top three effective risk reduction tactics. These findings complement the previous findings on e-commerce and shed light on how to establish a trust relationship on the World Wide Web.  2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "09581c79829599090d8f838416058c05", "text": "This paper proposes to tackle the AMR parsing bottleneck by improving two components of an AMR parser: concept identification and alignment. We first build a Bidirectional LSTM based concept identifier that is able to incorporate richer contextual information to learn sparse AMR concept labels. We then extend an HMM-based word-to-concept alignment model with graph distance distortion and a rescoring method during decoding to incorporate the structural information in the AMR graph. We show integrating the two components into an existing AMR parser results in consistently better performance over the state of the art on various datasets.", "title": "" }, { "docid": "112b9294f4d606a0112fe80742698184", "text": "Peer-to-peer systems are typically designed around the assumption that all peers will willingly contribute resources to a global pool. They thus suffer from freeloaders, that is, participants who consume many more resources than they contribute. In this paper, we propose a general economic framework for avoiding freeloaders in peer-to-peer systems. Our system works by keeping track of the resource consumption and resource contribution of each participant. The overall standing of each participant in the system is represented by a single scalar value, called their ka ma. A set of nodes, called a bank-set , keeps track of each node’s karma, increasing it as resources are contributed, and decreasing it as they are consumed. Our framework is resistant to malicious attempts by the resource provider, consumer, and a fraction of the members of the bank set. 
We illustrate the application of this framework to a peer-to-peer filesharing application.", "title": "" }, { "docid": "945553f360d7f569f15d249dbc5fa8cd", "text": "One of the main issues in service collaborations among business partners is the possible lack of trust among them. A promising approach to cope with this issue is leveraging on blockchain technology by encoding with smart contracts the business process workflow. This brings the benefits of trust decentralization, transparency, and accountability of the service composition process. However, data in the blockchain are public, implying thus serious consequences on confidentiality and privacy. Moreover, smart contracts can access data outside the blockchain only through Oracles, which might pose new confidentiality risks if no assumptions are made on their trustworthiness. For these reasons, in this paper, we are interested in investigating how to ensure data confidentiality during business process execution on blockchain even in the presence of an untrusted Oracle.", "title": "" }, { "docid": "518cb733bfbb746315498c1409d118c5", "text": "BACKGROUND\nAndrogenetic alopecia (AGA) is a common form of scalp hair loss that affects up to 50% of males between 18 and 40 years old. Several molecules are commonly used for the treatment of AGA, acting on different steps of its pathogenesis (Minoxidil, Finasteride, Serenoa repens) and show some side effects. In literature, on the basis of hypertrichosis observed in patients treated with analogues of prostaglandin PGF2a, it was supposed that prostaglandins would have an important role in the hair growth: PGE and PGF2a play a positive role, while PGD2 a negative one.\n\n\nOBJECTIVE\nWe carried out a pilot study to evaluate the efficacy of topical cetirizine versus placebo in patients with AGA.\n\n\nPATIENTS AND METHODS\nA sample of 85 patients was recruited, of which 67 were used to assess the effectiveness of the treatment with topical cetirizine, while 18 were control patients.\n\n\nRESULTS\nWe found that the main effect of cetirizine was an increase in total hair density, terminal hair density and diameter variation from T0 to T1, while the vellus hair density shows an evident decrease. The use of a molecule as cetirizine, with no notable side effects, makes possible a good compliance by patients.\n\n\nCONCLUSION\nOur results have shown that topical cetirizine 1% is responsible for a significant improvement of the initial framework of AGA.", "title": "" }, { "docid": "b3fce50260d7f77e8ca294db9c6666f6", "text": "Nanotechnology is enabling the development of devices in a scale ranging from one to a few hundred nanometers. Coordination and information sharing among these nano-devices will lead towards the development of future nanonetworks, boosting the range of applications of nanotechnology in the biomédical, environmental and military fields. Despite the major progress in nano-device design and fabrication, it is still not clear how these atomically precise machines will communicate. Recently, the advancements in graphene-based electronics have opened the door to electromagnetic communications in the nano-scale. In this paper, a new quantum mechanical framework is used to analyze the properties of Carbon Nanotubes (CNTs) as nano-dipole antennas. For this, first the transmission line properties of CNTs are obtained using the tight-binding model as functions of the CNT length, diameter, and edge geometry. 
Then, relevant antenna parameters such as the fundamental resonant frequency and the input impedance are calculated and compared to those of a nano-patch antenna based on a Graphene Nanoribbon (GNR) with similar dimensions. The results show that for a maximum antenna size in the order of several hundred nanometers (the expected maximum size for a nano-device), both a nano-dipole and a nano-patch antenna will be able to radiate electromagnetic waves in the terahertz band (0.1–10.0 THz).", "title": "" }, { "docid": "85d31f3940ee258589615661e596211d", "text": "Bulk Synchronous Parallelism (BSP) provides a good model for parallel processing of many large-scale graph applications, however it is unsuitable/inefficient for graph applications that require coordination, such as graph-coloring, subcoloring, and clustering. To address this problem, we present an efficient modification to the BSP model to implement serializability (sequential consistency) without reducing the highlyparallel nature of BSP. Our modification bypasses the message queues in BSP and reads directly from the worker’s memory for the internal vertex executions. To ensure serializability, coordination is performed— implemented via dining philosophers or token ring— only for border vertices partitioned across workers. We implement our modifications to BSP on Giraph, an open-source clone of Google’s Pregel. We show through a graph-coloring application that our modified framework, Giraphx, provides much better performance than implementing the application using dining-philosophers over Giraph. In fact, Giraphx outperforms Giraph even for embarrassingly parallel applications that do not require coordination, e.g., PageRank.", "title": "" }, { "docid": "4db8a0d39ef31b49f2b6d542a14b03a2", "text": "Climate-smart agriculture is one of the techniques that maximizes agricultural outputs through proper management of inputs based on climatological conditions. Real-time weather monitoring system is an important tool to monitor the climatic conditions of a farm because many of the farms related problems can be solved by better understanding of the surrounding weather conditions. There are various designs of weather monitoring stations based on different technological modules. However, different monitoring technologies provide different data sets, thus creating vagueness in accuracy of the weather parameters measured. In this paper, a weather station was designed and deployed in an Edamame farm, and its meteorological data are compared with the commercial Davis Vantage Pro2 installed at the same farm. The results show that the lab-made weather monitoring system is equivalently efficient to measure various weather parameters. Therefore, the designed system welcomes low-income farmers to integrate it into their climate-smart farming practice.", "title": "" }, { "docid": "074de6f0c250f5c811b69598551612e4", "text": "In this paper we present a novel GPU-friendly real-time voxelization technique for rendering homogeneous media that is defined by particles, e.g. fluids obtained from particle-based simulations such as Smoothed Particle Hydrodynamics (SPH). Our method computes view-adaptive binary voxelizations with on-the-fly compression of a tiled perspective voxel grid, achieving higher resolutions than previous approaches. It allows for interactive generation of realistic images, enabling advanced rendering techniques such as ray casting-based refraction and reflection, light scattering and absorption, and ambient occlusion. 
In contrast to previous methods, it does not rely on preprocessing such as expensive, and often coarse, scalar field conversion or mesh generation steps. Our method directly takes unsorted particle data as input. It can be further accelerated by identifying fully populated simulation cells during simulation. The extracted surface can be filtered to achieve smooth surface appearance.", "title": "" }, { "docid": "099dbf8d4c0b401cd3389583eb4495f3", "text": "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 437 15-minute video clips, where actions are localized in space and time, resulting in 1.59M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.8% mAP, underscoring the need for developing new approaches for video understanding.", "title": "" }, { "docid": "848dd074e4615ea5ecb164c96fac6c63", "text": "A simultaneous analytical method for etizolam and its main metabolites (alpha-hydroxyetizolam and 8-hydroxyetizolam) in whole blood was developed using solid-phase extraction, TMS derivatization and ion trap gas chromatography tandem mass spectrometry (GC-MS/MS). Separation of etizolam, TMS derivatives of alpha-hydroxyetizolam and 8-hydroxyetizolam and fludiazepam as internal standard was performed within about 17 min. The inter-day precision evaluated at the concentration of 50 ng/mL etizolam, alpha-hydroxyetizolam and 8-hydroxyetizolam was evaluated 8.6, 6.4 and 8.0% respectively. Linearity occurred over the range in 5-50 ng/mL. This method is satisfactory for clinical and forensic purposes. This method was applied to two unnatural death cases suspected to involve etizolam. Etizolam and its two metabolites were detected in these cases.", "title": "" }, { "docid": "5a805b6f9e821b7505bccc7b70fdd557", "text": "There are many factors that influence the translators while translating a text. Amongst these factors is the notion of ideology transmission through the translated texts. This paper is located within the framework of Descriptive Translation Studies (DTS) and Critical Discourse Analysis (CDA). It investigates the notion of ideology with particular use of critical discourse analysis. The purpose is to highlight the relationship between language and ideology in translated texts. It also aims at discovering whether the translator’s socio-cultural and ideology constraints influence the production of his/her translations. 
As a mixed research method study, the corpus consists of two different Arabic translated versions of the English book “Media Control” by Noam Chomsky. The micro-level contains the qualitative stage, where a detailed description and comparison (contrastive and comparative analysis) will be provided. The micro-level analysis should include the lexical items along with the grammatical items (passive vs. active, nominalisation vs. de-nominalisation, moralisation and omission vs. addition). In order to have more reliable and objective data, computed frequencies of the ideological significance occurrences along with percentage and Chi-square formula were conducted throughout the data analysis stage, which then forms the quantitative part of the current study. The main objective of the mentioned data analysis methodologies is to find out the dissimilarity between the proportions of the information obtained from the target texts (TTs) and their equivalent at the source text (ST). The findings indicate that there are significant differences amongst the two TTs in relation to the word choices including the lexical items and the other syntactic structure compared with the ST. These significant differences indicate some ideological transmission through the translation process of the two TTs. Therefore, and to some extent, it can be stated that the differences were also influenced by the translators’ socio-cultural and ideological constraints.", "title": "" }, { "docid": "dc3de555216f10d84890ecb1165774ff", "text": "Research into the visual perception of human emotion has traditionally focused on the facial expression of emotions. Recently researchers have turned to the more challenging field of emotional body language, i.e. emotion expression through body pose and motion. In this work, we approach recognition of basic emotional categories from a computational perspective. In keeping with recent computational models of the visual cortex, we construct a biologically plausible hierarchy of neural detectors, which can discriminate seven basic emotional states from static views of associated body poses. The model is evaluated against human test subjects on a recent set of stimuli manufactured for research on emotional body language.", "title": "" }, { "docid": "93c84b6abfe30ff7355e4efc310b440b", "text": "Parallel file systems (PFS) are widely-used in modern computing systems to mask the ever-increasing performance gap between computing and data access. PFSs favor large requests, and do not work well for small requests, especially small random requests. Newer Solid State Drives (SSD) have excellent performance on small random data accesses, but also incur a high monetary cost. In this study, we propose a hybrid architecture named the Smart Selective SSD Cache (S4D-Cache), which employs a small set of SSD-based file servers as a selective cache of conventional HDD-based file servers. A novel scheme is introduced to identify performance-critical data, and conduct selective cache admission to fully utilize the hybrid architecture in terms of data-access parallelism and randomness. We have implemented an S4D-Cache under the MPI-IO and PVFS2 parallel file system. Our experiments show that S4D-Cache can significantly improve I/O throughput, and is a promising approach for parallel applications.", "title": "" } ]
scidocsrr
b049d9544a7cee820b8df4f4b4fe1adc
Compact CPW-Fed Tri-Band Printed Antenna With Meandering Split-Ring Slot for WLAN/WiMAX Applications
[ { "docid": "237a88ea092d56c6511bb84604e6a7c7", "text": "A simple, low-cost, and compact printed dual-band fork-shaped monopole antenna for Bluetooth and ultrawideband (UWB) applications is proposed. Dual-band operation covering 2.4-2.484 GHz (Bluetooth) and 3.1-10.6 GHz (UWB) frequency bands are obtained by using a fork-shaped radiating patch and a rectangular ground patch. The proposed antenna is fed by a 50-Ω microstrip line and fabricated on a low-cost FR4 substrate having dimensions 42 (<i>L</i><sub>sub</sub>) × 24 (<i>W</i><sub>sub</sub>) × 1.6 (<i>H</i>) mm<sup>3</sup>. The antenna structure is fabricated and tested. Measured <i>S</i><sub>11</sub> is ≤ -10 dB over 2.3-2.5 and 3.1-12 GHz. The antenna shows acceptable gain flatness with nearly omnidirectional radiation patterns over both Bluetooth and UWB bands.", "title": "" }, { "docid": "7bc8be5766eeb11b15ea0aa1d91f4969", "text": "A coplanar waveguide (CPW)-fed planar monopole antenna with triple-band operation for WiMAX and WLAN applications is presented. The antenna, which occupies a small size of 25(L) × 25(W) × 0.8(H) mm3, is simply composed of a pentagonal radiating patch with two bent slots. By carefully selecting the positions and lengths of these slots, good dual stopband rejection characteristic of the antenna can be obtained so that three operating bands covering 2.14-2.85, 3.29-4.08, and 5.02-6.09 GHz can be achieved. The measured results also demonstrate that the proposed antenna has good omnidirectional radiation patterns with appreciable gain across the operating bands and is thus suitable to be integrated within the portable devices for WiMAX/WLAN applications.", "title": "" } ]
[ { "docid": "0b6f3498022abdf0407221faba72dcf1", "text": "A broadband coplanar waveguide (CPW) to coplanar strip (CPS) transmission line transition directly integrated with an RF microelectromechanical systems reconfigurable multiband antenna is presented in this paper. This transition design exhibits very good performance up to 55 GHz, and uses a minimum number of dissimilar transmission line sections and wire bonds, achieving a low-loss and low-cost balancing solution to feed planar antenna designs. The transition design methodology that was followed is described and measurement results are presented.", "title": "" }, { "docid": "c31dddbca92e13e84e08cca310329151", "text": "For the first time, automated Hex solvers have surpassed humans in their ability to solve Hex positions: they can now solve many 9×9 Hex openings. We summarize the methods that attained this milestone, and examine the future of Hex solvers.", "title": "" }, { "docid": "65ed76ddd6f7fd0aea717d2e2643dd16", "text": "In semi-supervised learning, a number of labeled examples are usually required for training an initial weakly useful predictor which is in turn used for exploiting the unlabeled examples. However, in many real-world applications there may exist very few labeled training examples, which makes the weakly useful predictor difficult to generate, and therefore these semisupervised learning methods cannot be applied. This paper proposes a method working under a two-view setting. By taking advantages of the correlations between the views using canonical component analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.", "title": "" }, { "docid": "26b67fe7ee89c941d313187672b1d514", "text": "Since permanent magnet linear synchronous motor (PMLSM) has a bright future in electromagnetic launch (EML), moving-magnet PMLSM with multisegment primary is a potential choice. To overcome the end effect in the junctions of armature units, three different ring windings are proposed for the multisegment primary of PMLSM: slotted ring windings, slotless ring windings, and quasi-sinusoidal ring windings. They are designed for various demands of EML, regarding the load levels and force fluctuations. Auxiliary iron yokes are designed to reduce the mover weights, and also help restrain the end effect. PMLSM with slotted ring windings has a higher thrust for heavy load EML. PMLSM with slotless ring windings eliminates the cogging effect, while PMLSM with quasi-sinusoidal ring windings has very low thrust ripple; they aim to launch the light aircraft and run smooth. Structure designs of these motors are introduced; motor models and parameter optimizations are accomplished by finite-element method (FEM). Then, performance advantages of the proposed motors are investigated by comparisons of common PMLSMs. At last, the prototypes are manufactured and tested to validate the feasibilities of ring winding motors with auxiliary iron yokes. The results prove that the proposed motors can effectively satisfy the requirements of EML.", "title": "" }, { "docid": "1336b193e4884a024f21a384b265eac6", "text": "In this proposal, we introduce Bayesian Abductive Logic Programs (BALP), a probabilistic logic that adapts Bayesian Logic Programs (BLPs) for abductive reasoning. Like BLPs, BALPs also combine first-order logic and Bayes nets. 
However, unlike BLPs, which use deduction to construct Bayes nets, BALPs employ logical abduction. As a result, BALPs are more suited for problems like plan/activity recognition that require abductive reasoning. In order to demonstrate the efficacy of BALPs, we apply it to two abductive reasoning tasks – plan recognition and natural language understanding.", "title": "" }, { "docid": "529929af902100d25e08fe00d17e8c1a", "text": "Engagement is the holy grail of learning whether it is in a classroom setting or an online learning platform. Studies have shown that engagement of the student while learning can benefit students as well as the teacher if the engagement level of the student is known. It is difficult to keep track of the engagement of each student in a face-to-face learning happening in a large classroom. It is even more difficult in an online learning platform where, the user is accessing the material at different instances. Automatic analysis of the engagement of students can help to better understand the state of the student in a classroom setting as well as online learning platforms and is more scalable. In this paper we propose a framework that uses Temporal Convolutional Network (TCN) to understand the intensity of engagement of students attending video material from Massive Open Online Courses (MOOCs). The input to the TCN network is the statistical features computed on 10 second segments of the video from the gaze, head pose and action unit intensities available in OpenFace library. The ability of the TCN architecture to capture long term dependencies gives it the ability to outperform other sequential models like LSTMs. On the given test set in the EmotiW 2018 sub challenge-\"Engagement in the Wild\", the proposed approach with Dilated-TCN achieved an average mean square error of 0.079.", "title": "" }, { "docid": "ee61181cb9625868526eb608db0c58b4", "text": "The primary focus of machine learning has traditionally been on learning from data assumed to be sufficient and representative of the underlying fixed, yet unknown, distribution. Such restrictions on the problem domain paved the way for development of elegant algorithms with theoretically provable performance guarantees. As is often the case, however, real-world problems rarely fit neatly into such restricted models. For instance class distributions are often skewed, resulting in the “class imbalance” problem. Data drawn from non-stationary distributions is also common in real-world applications, resulting in the “concept drift” or “non-stationary learning” problem which is often associated with streaming data scenarios. Recently, these problems have independently experienced increased research attention, however, the combined problem of addressing all of the above mentioned issues has enjoyed relatively little research. If the ultimate goal of intelligent machine learning algorithms is to be able to address a wide spectrum of real-world scenarios, then the need for a general framework for learning from, and adapting to, a non-stationary environment that may introduce imbalanced data can be hardly overstated. In this paper, we first present an overview of each of these challenging areas, followed by a comprehensive review of recent research for developing such a general framework.", "title": "" }, { "docid": "54a1257346f9a1ead514bb8077b0e7ca", "text": "Recent years has witnessed growing interest in hyperspectral image (HSI) processing. 
In practice, however, HSIs always suffer from huge data size and mass of redundant information, which hinder their application in many cases. HSI compression is a straightforward way of relieving these problems. However, most of the conventional image encoding algorithms mainly focus on the spatial dimensions, and they need not consider the redundancy in the spectral dimension. In this paper, we propose a novel HSI compression and reconstruction algorithm via patch-based low-rank tensor decomposition (PLTD). Instead of processing the HSI separately by spectral channel or by pixel, we represent each local patch of the HSI as a third-order tensor. Then, the similar tensor patches are grouped by clustering to form a fourth-order tensor per cluster. Since the grouped tensor is assumed to be redundant, each cluster can be approximately decomposed to a coefficient tensor and three dictionary matrices, which leads to a low-rank tensor representation of both the spatial and spectral modes. The reconstructed HSI can then be simply obtained by the product of the coefficient tensor and dictionary matrices per cluster. In this way, the proposed PLTD algorithm simultaneously removes the redundancy in both the spatial and spectral domains in a unified framework. The extensive experimental results on various public HSI datasets demonstrate that the proposed method outperforms the traditional image compression approaches and other tensor-based methods.", "title": "" }, { "docid": "5785108e48e62ce2758a7b18559a697e", "text": "The objective of this article is to create a better understanding of the intersection of the academic fields of entrepreneurship and strategic management, based on an aggregation of the extant literature in these two fields. The article structures and synthesizes the existing scholarly works in the two fields, thereby generating new knowledge. The results can be used to further enhance fruitful integration of these two overlapping but separate academic fields. The article attempts to integrate the two fields by first identifying apparent interrelations, and then by concentrating in more detail on some important intersections, including strategic management in small and medium-sized enterprises and start-ups, acknowledging the central role of the entrepreneur. The content and process sides of strategic management are discussed as well as their important connecting link, the business plan. To conclude, implications and future research directions for the two fields are proposed.", "title": "" }, { "docid": "efde28bc545de68dbb44f85b198d85ff", "text": "Blockchain technology is regarded as highly disruptive, but there is a lack of formalization and standardization of terminology. Not only because there are several (sometimes propriety) implementation platforms, but also because the academic literature so far is predominantly written from either a purely technical or an economic application perspective. The result of the confusion is an offspring of blockchain solutions, types, roadmaps and interpretations. For blockchain to be accepted as a technology standard in established industries, it is pivotal that ordinary internet users and business executives have a basic yet fundamental understanding of the workings and impact of blockchain. This conceptual paper provides a theoretical contribution and guidance on what blockchain actually is by taking an ontological approach. 
Enterprise Ontology is used to make a clear distinction between the datalogical, infological and essential level of blockchain transactions and smart contracts.", "title": "" }, { "docid": "5275184686a8453a1922cec7a236b66d", "text": "Children’s sense of relatedness is vital to their academic motivation from 3rd to 6th grade. Children’s (n 641) reports of relatedness predicted changes in classroom engagement over the school year and contributed over and above the effects of perceived control. Regression and cumulative risk analyses revealed that relatedness to parents, teachers, and peers each uniquely contributed to students’ engagement, especially emotional engagement. Girls reported higher relatedness than boys, but relatedness to teachers was a more salient predictor of engagement for boys. Feelings of relatedness to teachers dropped from 5th to 6th grade, but the effects of relatedness on engagement were stronger for 6th graders. Discussion examines theoretical, empirical, and practical implications of relatedness as a key predictor of children’s academic motivation and performance.", "title": "" }, { "docid": "c75967795041ef900236d71328dd7936", "text": "In order to investigate the strategies used to plan and control multijoint arm trajectories, two-degrees-of-freedom arm movements performed by normal adult humans were recorded. Only the shoulder and elbow joints were active. When a subject was told simply to move his hand from one visual target to another, the path of the hand was roughly straight, and the hand speed profile of their straight trajectories was bell-shaped. When the subject was required to produce curved hand trajectories, the path usually had a segmented appearance, as if the subject was trying to approximate a curve with low curvature elements. Hand speed profiles associated with curved trajectories contained speed valleys or inflections which were temporally associated with the local maxima in the trajectory curvature. The mean duration of curved movements was longer than the mean for straight movements. These results are discussed in terms of trajectory control theories which have originated in the fields of mechanical manipulator control and biological motor control. Three explanations for the results are offered.", "title": "" }, { "docid": "06c4281aad5e95cac1f4525cbb90e5c7", "text": "Offering training programs to their employees is one of the necessary tasks that managers must comply with. Training is done mainly to provide upto-date knowledge or to convey to staff the objectives, history, corporate name, functions of the organization’s areas, processes, laws, norms or policies that must be fulfilled. Although there are a lot of methods, models or tools that are useful for this purpose, many companies face with some common problems like employee’s motivation and high costs in terms of money and time. In an effort to solve this problem, new trends have emerged in the last few years, in particular strategies related to games, such as serious games and gamification, whose success has been demonstrated by numerous researchers. According to the above, we present a systematic literature review of the different approaches that have used games or their elements, using the procedure suggested by Cooper, on this matter, ending with about the positive and negative findings.", "title": "" }, { "docid": "24d55c65807e4a90fb0dffb23fc2f7bc", "text": "This paper presents a comprehensive study of deep correlation features on image style classification. 
Inspired by the observation that correlation between feature maps can effectively describe image texture, we design various correlations and transform them into style vectors, and investigate classification performance brought by different variants. In addition to intralayer correlation, interlayer correlation is proposed as well, and its effectiveness is verified. After showing the effectiveness of deep correlation features, we further propose a learning framework to automatically learn correlations between feature maps. Through extensive experiments on image style classification and artist classification, we demonstrate that the proposed learnt deep correlation features outperform several variants of convolutional neural network features by a large margin, and achieve the state-of-the-art performance.", "title": "" }, { "docid": "283d3f1ff0ca4f9c0a2a6f4beb1f7771", "text": "As a proof-of-concept for the vision “SSD as SQL Engine” (SaS in short), we demonstrate that SQLite [4], a popular mobile database engine, in its entirety can run inside a real SSD development platform. By turning storage device into database engine, SaS allows applications to directly interact with full SQL database server running inside storage device. In SaS, the SQL language itself, not the traditional dummy block interface, will be provided as new interface between applications and storage device. In addition, since SaS plays the role of the unified platform of database computing node and storage node, the host and the storage need not be segregated any more as separate physical computing components.", "title": "" }, { "docid": "62d39d41523bca97939fa6a2cf736b55", "text": "We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We establish a general equivalence among convex bounding methods, evidence based methods, and ensemble learning/Variational Bayes methods, which has previously been demonstrated only for particular cases.", "title": "" }, { "docid": "328aad76b94b34bf49719b98ae391cfe", "text": "We discuss methods for statistically analyzing the output from stochastic discrete-event or Monte Carlo simulations. Terminating and steady-state simulations are considered.", "title": "" }, { "docid": "a9a22c9c57e9ba8c3deefbea689258d5", "text": "Functional neuroimaging studies have shown that romantic love and maternal love are mediated by regions specific to each, as well as overlapping regions in the brain's reward system. Nothing is known yet regarding the neural underpinnings of unconditional love. The main goal of this functional magnetic resonance imaging study was to identify the brain regions supporting this form of love. Participants were scanned during a control condition and an experimental condition. In the control condition, participants were instructed to simply look at a series of pictures depicting individuals with intellectual disabilities. In the experimental condition, participants were instructed to feel unconditional love towards the individuals depicted in a series of similar pictures. Significant loci of activation were found, in the experimental condition compared with the control condition, in the middle insula, superior parietal lobule, right periaqueductal gray, right globus pallidus (medial), right caudate nucleus (dorsal head), left ventral tegmental area and left rostro-dorsal anterior cingulate cortex. 
These results suggest that unconditional love is mediated by a distinct neural network relative to that mediating other emotions. This network contains cerebral structures known to be involved in romantic love or maternal love. Some of these structures represent key components of the brain's reward system.", "title": "" }, { "docid": "c5f749c36b3d8af93c96bee59f78efe5", "text": "INTRODUCTION\nMolecular diagnostics is a key component of laboratory medicine. Here, the authors review key triggers of ever-increasing automation in nucleic acid amplification testing (NAAT) with a focus on specific automated Polymerase Chain Reaction (PCR) testing and platforms such as the recently launched cobas® 6800 and cobas® 8800 Systems. The benefits of such automation for different stakeholders including patients, clinicians, laboratory personnel, hospital administrators, payers, and manufacturers are described. Areas Covered: The authors describe how molecular diagnostics has achieved total laboratory automation over time, rivaling clinical chemistry to significantly improve testing efficiency. Finally, the authors discuss how advances in automation decrease the development time for new tests enabling clinicians to more readily provide test results. Expert Commentary: The advancements described enable complete diagnostic solutions whereby specific test results can be combined with relevant patient data sets to allow healthcare providers to deliver comprehensive clinical recommendations in multiple fields ranging from infectious disease to outbreak management and blood safety solutions.", "title": "" }, { "docid": "87eb54a981fca96475b73b3dfa99b224", "text": "Cost-Sensitive Learning is a type of learning in data mining that takes the misclassification costs (and possibly other types of cost) into consideration. The goal of this type of learning is to minimize the total cost. The key difference between cost-sensitive learning and cost-insensitive learning is that cost-sensitive learning treats the different misclassifications differently. Costinsensitive learning does not take the misclassification costs into consideration. The goal of this type of learning is to pursue a high accuracy of classifying examples into a set of known classes.", "title": "" } ]
scidocsrr
d0c05a044c6125d249b7c4de875fe40c
Energy efficient IoT-based smart home
[ { "docid": "a8dbb16b9a0de0dcae7780ffe4c0b7cf", "text": "Increased demands on implementation of wireless sensor networks in automation praxis result in relatively new wireless standard – ZigBee. The new workplace was established on the Department of Electronics and Multimedia Communications (DEMC) in order to keep up with ZigBee modern trend. This paper presents the first results and experiences associated with ZigBee based wireless sensor networking. The accent was put on suitable chipset platform selection for Home Automation wireless network purposes. Four popular microcontrollers was selected to investigate memory requirements and power consumption such as ARM, x51, HCS08, and Coldfire. Next objective was to test interoperability between various manufacturers’ platforms, what is important feature of ZigBee standard. A simple network based on ZigBee physical layer as well as ZigBee compliant network were made to confirm the basic ZigBee interoperability.", "title": "" }, { "docid": "72ac5e1ec4cfdcd2e7b0591adce56091", "text": "Th is paper presents a low cost and flexib le home control and monitoring system using an embedded micro -web server, with IP connectivity for accessing and controlling devices and appliances remotely using Android based Smart phone app. The proposed system does not require a dedicated server PC with respect to similar systems and offers a novel communicat ion protocol to monitor and control the home environment with more than just the switching functionality. To demonstrate the feasibility and effectiveness of this system, devices such as light switches, power p lug, temperature sensor and current sensor have been integrated with the proposed home control system.", "title": "" } ]
[ { "docid": "7575e468e2ee37c9120efb5e73e4308a", "text": "In this demo, we present Cleanix, a prototype system for cleaning relational Big Data. Cleanix takes data integrated from multiple data sources and cleans them on a shared-nothing machine cluster. The backend system is built on-top-of an extensible and flexible data-parallel substrate - the Hyracks framework. Cleanix supports various data cleaning tasks such as abnormal value detection and correction, incomplete data filling, de-duplication, and conflict resolution. We demonstrate that Cleanix is a practical tool that supports effective and efficient data cleaning at the large scale.", "title": "" }, { "docid": "833c110e040311909aa38b05e457b2af", "text": "The scyphozoan Aurelia aurita (Linnaeus) s. l., is a cosmopolitan species-complex which blooms seasonally in a variety of coastal and shelf sea environments around the world. We hypothesized that ephyrae of Aurelia sp.1 are released from the inner part of the Jiaozhou Bay, China when water temperature is below 15°C in late autumn and winter. The seasonal occurrence, growth, and variation of the scyphomedusa Aurelia sp.1 were investigated in Jiaozhou Bay from January 2011 to December 2011. Ephyrae occurred from May through June with a peak abundance of 2.38 ± 0.56 ind/m3 in May, while the temperature during this period ranged from 12 to 18°C. The distribution of ephyrae was mainly restricted to the coastal area of the bay, and the abundance was higher in the dock of the bay than at the other inner bay stations. Young medusae derived from ephyrae with a median diameter of 9.74 ± 1.7 mm were present from May 22. Growth was rapid from May 22 to July 2 with a maximum daily growth rate of 39%. Median diameter of the medusae was 161.80 ± 18.39 mm at the beginning of July. In August, a high proportion of deteriorated specimens was observed and the median diameter decreased. The highest average abundance is 0.62 ± 1.06 ind/km2 in Jiaozhou Bay in August. The abundance of Aurelia sp.1 medusae was low from September and then decreased to zero. It is concluded that water temperature is the main driver regulating the life cycle of Aurelia sp.1 in Jiaozhou Bay.", "title": "" }, { "docid": "ecd4dd9d8807df6c8194f7b4c7897572", "text": "Nitric oxide (NO) mediates activation of satellite precursor cells to enter the cell cycle. This provides new precursor cells for skeletal muscle growth and muscle repair from injury or disease. Targeting a new drug that specifically delivers NO to muscle has the potential to promote normal function and treat neuromuscular disease, and would also help to avoid side effects of NO from other treatment modalities. In this research, we examined the effectiveness of the NO donor, iosorbide dinitrate (ISDN), and a muscle relaxant, methocarbamol, in promoting satellite cell activation assayed by muscle cell DNA synthesis in normal adult mice. The work led to the development of guaifenesin dinitrate (GDN) as a new NO donor for delivering nitric oxide to muscle. The results revealed that there was a strong increase in muscle satellite cell activation and proliferation, demonstrated by a significant 38% rise in DNA synthesis after a single transdermal treatment with the new compound for 24 h. Western blot and immunohistochemistry analyses showed that the markers of satellite cell myogenesis, expression of myf5, myogenin, and follistatin, were increased after 24 h oral administration of the compound in adult mice. 
This research extends our understanding of the outcomes of NO-based treatments aimed at promoting muscle regeneration in normal tissue. The potential use of such treatment for conditions such as muscle atrophy in disuse and aging, and for the promotion of muscle tissue repair as required after injury or in neuromuscular diseases such as muscular dystrophy, is highlighted.", "title": "" }, { "docid": "339de1d21bfce2e9a8848d6fbc2792d4", "text": "The extraction of local tempo and beat information from audio recordings constitutes a challenging task, particularly for music that reveals significant tempo variations. Furthermore, the existence of various pulse levels such as measure, tactus, and tatum often makes the determination of absolute tempo problematic. In this paper, we present a robust mid-level representation that encodes local tempo information. Similar to the well-known concept of cyclic chroma features, where pitches differing by octaves are identified, we introduce the concept of cyclic tempograms, where tempi differing by a power of two are identified. Furthermore, we describe how to derive cyclic tempograms from music signals using two different methods for periodicity analysis and finally sketch some applications to tempo-based audio segmentation.", "title": "" }, { "docid": "fdbad1d98044bf6494bfd211e6116db8", "text": "This work addresses the problem of underwater archaeological surveys from the point of view of knowledge. We propose an approach based on underwater photogrammetry guided by a representation of the knowledge used, as structured by ontologies. Survey data feed into ontologies and photogrammetry in order to produce graphical results. This paper focuses on the use of ontologies during the exploitation of 3D results. JAVA software dedicated to photogrammetry and archaeological survey has been mapped onto an OWL formalism. The use of procedural attachment in a dual representation (JAVA OWL) of the involved concepts allows us to access computational facilities directly from OWL. The use of SWRL rules illustrates very well such a ‘double formalism’, as well as the use of the computational capabilities of rules’ logical expressions. We present an application that is able to read the ontology populated with photogrammetric survey data. Once the ontology is read, it is possible to produce a 3D representation of the individuals and to graphically observe the results of logical spatial queries on the ontology. This work is done on a very important underwater archaeological site in Malta named Xlendi, probably the most ancient shipwreck of the central Mediterranean Sea.", "title": "" }, { "docid": "912c213d76bed8d90f636ea5a6220cf1", "text": "Across the world, organizations have teams gathering threat data to protect themselves from incoming cyber attacks and maintain a strong cyber security posture. Teams are also sharing information, because along with the data collected internally, organizations need external information to have a comprehensive view of the threat landscape. The information about cyber threats comes from a variety of sources, including sharing communities, open-source and commercial sources, and it spans many different levels and timescales. Immediately actionable information often consists of low-level indicators of compromise, such as known malware hash values or command-and-control IP addresses, where an actionable response can be executed automatically by a system.
Threat intelligence refers to more complex cyber threat information that has been acquired or inferred through the analysis of existing information. Information such as the different malware families used over time with an attack or the network of threat actors involved in an attack, is valuable information and can be vital to understanding and predicting attacks, threat developments, as well as informing law enforcement investigations. This information is also actionable, but on a longer time scale. Moreover, it requires action and decision-making at the human level. There is a need for effective intelligence management platforms to facilitate the generation, refinement, and vetting of data, post sharing. In designing such a system, some of the key challenges that exist include: working with multiple intelligence sources, combining and enriching data for greater intelligence, determining intelligence relevance based on technical constructs, and organizational input, delivery into organizational workflows and into technological products. This paper discusses these challenges encountered and summarizes the community requirements and expectations for an all-encompassing Threat Intelligence Management Platform. The requirements expressed in this paper, when implemented, will serve as building blocks to create systems that can maximize value out of a set of collected intelligence and translate those findings into action for a broad range of stakeholders.", "title": "" }, { "docid": "81ddc594cb4b7f3ed05908ce779aa4f4", "text": "Since the length of microblog texts, such as tweets, is strictly limited to 140 characters, traditional Information Retrieval techniques suffer from the vocabulary mismatch problem severely and cannot yield good performance in the context of microblogosphere. To address this critical challenge, in this paper, we propose a new language modeling approach for microblog retrieval by inferring various types of context information. In particular, we expand the query using knowledge terms derived from Freebase so that the expanded one can better reflect users’ search intent. Besides, in order to further satisfy users’ real-time information need, we incorporate temporal evidences into the expansion method, which can boost recent tweets in the retrieval results with respect to a given topic. Experimental results on two official TREC Twitter corpora demonstrate the significant superiority of our approach over baseline methods.", "title": "" }, { "docid": "8abbd5e2ab4f419a4ca05277a8b1b6a5", "text": "This paper presents an innovative broadband millimeter-wave single balanced diode mixer that makes use of a substrate integrated waveguide (SIW)-based 180 hybrid. It has low conversion loss of less than 10 dB, excellent linearity, and high port-to-port isolations over a wide frequency range of 20 to 26 GHz. The proposed mixer has advantages over previously reported millimeter-wave mixer structures judging from a series of aspects such as cost, ease of fabrication, planar construction, and broadband performance. Furthermore, a receiver front-end that integrates a high-performance SIW slot-array antenna and our proposed mixer is introduced. Based on our proposed receiver front-end structure, a K-band wireless communication system with M-ary quadrature amplitude modulation is developed and demonstrated for line-of-sight channels. 
Excellent overall error vector magnitude performance has been obtained.", "title": "" }, { "docid": "c2816721fa6ccb0d676f7fdce3b880d4", "text": "Due to the achievements in the Internet of Things (IoT) field, Smart Objects are often involved in business processes. However, the integration of IoT with Business Process Management (BPM) is far from mature: problems related to process compliance and Smart Objects configuration with respect to the process requirements have not been fully addressed yet; also, the interaction of Smart Objects with multiple business processes that belong to different stakeholders is still under investigation. My PhD thesis aims to fill this gap by extending the BPM lifecycle, with particular focus on the design and analysis phase, in order to explicitly support IoT and its requirements.", "title": "" }, { "docid": "bcf27c4f750ab74031b8638a9b38fd87", "text": "δ opioid receptor (DOR) was the first opioid receptor of the G protein‑coupled receptor family to be cloned. Our previous studies demonstrated that DOR is involved in regulating the development and progression of human hepatocellular carcinoma (HCC), and is involved in the regulation of the processes of invasion and metastasis of HCC cells. However, whether DOR is involved in the development and progression of drug resistance in HCC has not been reported and requires further elucidation. The aim of the present study was to investigate the expression levels of DOR in the drug‑resistant HCC BEL‑7402/5‑fluorouracil (BEL/FU) cell line, and its effects on drug resistance, in order to preliminarily elucidate the effects of DOR in HCC drug resistance. The results of the present study demonstrated that DOR was expressed at high levels in the BEL/FU cells, and the expression levels were higher, compared with those in normal liver cells. When the expression of DOR was silenced, the proliferation of the drug‑resistant HCC cells were unaffected. However, when the cells were co‑treated with a therapeutic dose of 5‑FU, the proliferation rate of the BEL/FU cells was significantly inhibited, a large number of cells underwent apoptosis, cell cycle progression was arrested and changes in the expression levels of drug‑resistant proteins were observed. Overall, the expression of DOR was upregulated in the drug‑resistant HCC cells, and its functional status was closely associated with drug resistance in HCC. Therefore, DOR may become a recognized target molecule with important roles in the clinical treatment of drug‑resistant HCC.", "title": "" }, { "docid": "f1a36f7fd6b3cf42415c483f6ade768e", "text": "The current paradigm of genomic studies of complex diseases is association and correlation analysis. Despite significant progress in dissecting the genetic architecture of complex diseases by genome-wide association studies (GWAS), the identified genetic variants by GWAS can only explain a small proportion of the heritability of complex diseases. A large fraction of genetic variants is still hidden. Association analysis has limited power to unravel mechanisms of complex diseases. It is time to shift the paradigm of genomic analysis from association analysis to causal inference. Causal inference is an essential component for the discovery of mechanism of diseases. This paper will review the major platforms of the genomic analysis in the past and discuss the perspectives of causal inference as a general framework of genomic analysis. 
In genomic data analysis, we usually consider four types of associations: association of discrete variables (DNA variation) with continuous variables (phenotypes and gene expressions), association of continuous variables (expressions, methylations, and imaging signals) with continuous variables (gene expressions, imaging signals, phenotypes, and physiological traits), association of discrete variables (DNA variation) with binary trait (disease status) and association of continuous variables (gene expressions, methylations, phenotypes, and imaging signals) with binary trait (disease status). In this paper, we will review algorithmic information theory as a general framework for causal discovery and the recent development of statistical methods for causal inference on discrete data, and discuss the possibility of extending the association analysis of discrete variable with disease to the causal analysis for discrete variable and disease.", "title": "" }, { "docid": "b374975ae9690f96ed750a888713dbc9", "text": "We present a method for densely computing local spherical histograms of oriented gradients (SHOG) in volumetric images. The descriptors are based on the continuous representation of the orientation histograms in the harmonic domain, which we compute very efficiently via spherical tensor products and the fast Fourier transformation. Building upon these local spherical histogram representations, we utilize the Harmonic Filter to create a generic rotation invariant object detection system that benefits from both the highly discriminative representation of local image patches in terms of histograms of oriented gradients and an adaptable trainable voting scheme that forms the filter. We exemplarily demonstrate the effectiveness of such dense spherical 3D descriptors in a detection task on biological 3D images. In a direct comparison to existing approaches, our new filter reveals superior performance.", "title": "" }, { "docid": "5df529aca774edb0eb5ac93c9a0ce3b7", "text": "The GRASP (Graphical Representations of Algorithms, Structures, and Processes) project, which has successfully prototyped a new algorithmic-level graphical representation for software—the control structure diagram (CSD)—is currently focused on the generation of a new fine-grained complexity metric called the complexity profile graph (CPG). The primary impetus for creation and refinement of the CSD and the CPG is to improve the comprehension efficiency of software and, as a result, improve reliability and reduce costs. The current GRASP release provides automatic CSD generation for Ada 95, C, C++, Java, and Very High-Speed Integrated Circuit Hardware Description Language (VHDL) source code, and CPG generation for Ada 95 source code. The examples and discussion in this article are based on using GRASP with Ada 95.", "title": "" }, { "docid": "ef771fa11d9f597f94cee5e64fcf9fd6", "text": "The principle of artificial curiosity directs active exploration towards the most informative or most interesting data. We show its usefulness for global black box optimization when data point evaluations are expensive. Gaussian process regression is used to model the fitness function based on all available observations so far. For each candidate point this model estimates expected fitness reduction, and yields a novel closed-form expression of expected information gain. 
A new type of Pareto-front algorithm continually pushes the boundary of candidates not dominated by any other known data according to both criteria, using multi-objective evolutionary search. This makes the exploration-exploitation trade-off explicit, and permits maximally informed data selection. We illustrate the robustness of our approach in a number of experimental scenarios.", "title": "" }, { "docid": "7df626465d52dfe5859e682c685c62bc", "text": "This thesis addresses the task of error detection in the choice of content words focusing on adjective–noun and verb–object combinations. We show that error detection in content words is an under-explored area in research on learner language since (i) most previous approaches to error detection and correction have focused on other error types, and (ii) the approaches that have previously addressed errors in content words have not performed error detection proper. We show why this task is challenging for the existing algorithms and propose a novel approach to error detection in content words. We note that since content words express meaning, an error detection algorithm should take the semantic properties of the words into account. We use a compositional distribu-tional semantic framework in which we represent content words using their distributions in native English, while the meaning of the combinations is represented using models of com-positional semantics. We present a number of measures that describe different properties of the modelled representations and can reliably distinguish between the representations of the correct and incorrect content word combinations. Finally, we cast the task of error detection as a binary classification problem and implement a machine learning classifier that uses the output of the semantic measures as features. The results of our experiments confirm that an error detection algorithm that uses semantically motivated features achieves good accuracy and precision and outperforms the state-of-the-art approaches. We conclude that the features derived from the semantic representations encode important properties of the combinations that help distinguish the correct combinations from the incorrect ones. The approach presented in this work can naturally be extended to other types of content word combinations. Future research should also investigate how the error correction component for content word combinations could be implemented. 3 4 Acknowledgements First and foremost, I would like to express my profound gratitude to my supervisor, Ted Briscoe, for his constant support and encouragement throughout the course of my research. This work would not have been possible without his invaluable guidance and advice. I am immensely grateful to my examiners, Ann Copestake and Stephen Pulman, for providing their advice and constructive feedback on the final version of the dissertation. I am also thankful to my colleagues at the Natural Language and Information Processing research group for the insightful and inspiring discussions over these years. In particular, I would like to express my gratitude to would like to thank …", "title": "" }, { "docid": "becd45d50ead03dd5af399d5618f1ea3", "text": "This paper presents a new paradigm of cryptography, quantum public-key cryptosystems. In quantum public-key cryptosystems, all parties including senders, receivers and adversaries are modeled as quantum (probabilistic) poly-time Turing (QPT) machines and only classical channels (i.e., no quantum channels) are employed. 
A quantum trapdoor one-way function, f , plays an essential role in our system, in which a QPT machine can compute f with high probability, any QPT machine can invert f with negligible probability, and a QPT machine with trapdoor data can invert f . This paper proposes a concrete scheme for quantum public-key cryptosystems: a quantum public-key encryption scheme or quantum trapdoor one-way function. The security of our schemes is based on the computational assumption (over QPT machines) that a class of subset-sum problems is intractable against any QPT machine. Our scheme is very efficient and practical if Shor’s discrete logarithm algorithm is efficiently realized on a quantum machine.", "title": "" }, { "docid": "b19ba18dbce648ca584d5c41b406d1be", "text": "Communication experiments using normal lab setup, which includes more hardware and less software raises the cost of the total system. The method proposed here provides a new approach through which all the analog and digital experiments can be performed using a single hardware-USRP (Universal Software Radio Peripheral) and software-GNU Radio Companion (GRC). Initially, networking setup is formulated using SDR technology. Later on, one of the analog communication experiments is demonstrated in real time using the GNU Radio Companion, RTL-SDR and USRP. The entire communication system is less expensive as the system uses a single reprogrammable hardware and most of the focus of the system deals with the software part.", "title": "" }, { "docid": "41de353ad7e48d5f354893c6045394e2", "text": "This paper proposes a long short-term memory recurrent neural network (LSTM-RNN) for extracting melody and simultaneously detecting regions of melody from polyphonic audio using the proposed harmonic sum loss. The previous state-of-the-art algorithms have not been based on machine learning techniques and certainly not on deep architectures. The harmonics structure in melody is incorporated in the loss function to attain robustness against both octave mismatch and interference from background music. Experimental results show that the performance of the proposed method is better than or comparable to other state-of-the-art algorithms.", "title": "" }, { "docid": "4b886b3ee8774a1e3110c12bdbdcbcdf", "text": "To engage in cooperative activities with human partners, robots have to possess basic interactive abilities and skills. However, programming such interactive skills is a challenging task, as each interaction partner can have different timing or an alternative way of executing movements. In this paper, we propose to learn interaction skills by observing how two humans engage in a similar task. To this end, we introduce a new representation called Interaction Primitives. Interaction primitives build on the framework of dynamic motor primitives (DMPs) by maintaining a distribution over the parameters of the DMP. With this distribution, we can learn the inherent correlations of cooperative activities which allow us to infer the behavior of the partner and to participate in the cooperation. We will provide algorithms for synchronizing and adapting the behavior of humans and robots during joint physical activities.", "title": "" } ]
scidocsrr
c3c305f1b0114c46ec4ca620701ce52b
Organizational change and development.
[ { "docid": "4a536c1186a1d1d1717ec1e0186b262c", "text": "In this paper, I outline a perspective on organizational transformation which proposes change as endemic to the practice of organizing and hence as enacted through the situated practices of organizational actors as they improvise, innovate, and adjust their work routines over time. I ground this perspective in an empirical study which examined the use of a new information technology within one organization over a two year period. In this organization, a series of subtle but nonetheless significant changes were enacted over time as organizational actors appropriated the new technology into their work practices, and then experimented with local innovations, responded to unanticipated breakdowns and contingencies, initiated opportunistic shifts in structure and coordination mechanisms, and improvised various procedural, cognitive, and normative variations to accommodate their evolving use of the technology. These findings provide the empirical basis for a practice-based perspective on organizational transformation. Because it is grounded in the micro-level changes that actors enact over time as they make sense of and act in the world, a practice lens can avoid the strong assumptions of rationality, determinism, or discontinuity characterizing existing change perspectives. A situated change perspective may offer a particularly useful strategy for analyzing change in organizations turning increasingly away from patterns of stability, bureaucracy, and control to those of flexibility, selforganizing, and learning.", "title": "" } ]
[ { "docid": "51c82ab631167a61e553e1ab8e34a385", "text": "The social and political context of sexual identity development in the United States has changed dramatically since the mid twentieth century. Same-sex attracted individuals have long needed to reconcile their desire with policies of exclusion, ranging from explicit outlaws on same-sex activity to exclusion from major social institutions such as marriage. This paper focuses on the implications of political exclusion for the life course of individuals with same-sex desire through the analytic lens of narrative. Using illustrative evidence from a study of autobiographies of gay men spanning a 60-year period and a study of the life stories of contemporary same-sex attracted youth, we detail the implications of historic silence, exclusion, and subordination for the life course.", "title": "" }, { "docid": "a5bd062a1ed914fb2effc924e41a4f73", "text": "With the developments and applications of the new information technologies, such as cloud computing, Internet of Things, big data, and artificial intelligence, a smart manufacturing era is coming. At the same time, various national manufacturing development strategies have been put forward, such as Industry 4.0, Industrial Internet, manufacturing based on Cyber-Physical System, and Made in China 2025. However, one of specific challenges to achieve smart manufacturing with these strategies is how to converge the manufacturing physical world and the virtual world, so as to realize a series of smart operations in the manufacturing process, including smart interconnection, smart interaction, smart control and management, etc. In this context, as a basic unit of manufacturing, shop-floor is required to reach the interaction and convergence between physical and virtual spaces, which is not only the imperative demand of smart manufacturing, but also the evolving trend of itself. Accordingly, a novel concept of digital twin shop-floor (DTS) based on digital twin is explored and its four key components are discussed, including physical shop-floor, virtual shop-floor, shop-floor service system, and shop-floor digital twin data. What is more, the operation mechanisms and implementing methods for DTS are studied and key technologies as well as challenges ahead are investigated, respectively.", "title": "" }, { "docid": "cc6161fd350ac32537dc704cbfef2155", "text": "The contribution of cloud computing and mobile computing technologies lead to the newly emerging mobile cloud computing paradigm. Three major approaches have been proposed for mobile cloud applications: 1) extending the access to cloud services to mobile devices; 2) enabling mobile devices to work collaboratively as cloud resource providers; 3) augmenting the execution of mobile applications on portable devices using cloud resources. In this paper, we focus on the third approach in supporting mobile data stream applications. More specifically, we study how to optimize the computation partitioning of a data stream application between mobile and cloud to achieve maximum speed/throughput in processing the streaming data.\n To the best of our knowledge, it is the first work to study the partitioning problem for mobile data stream applications, where the optimization is placed on achieving high throughput of processing the streaming data rather than minimizing the makespan of executions as in other applications. We first propose a framework to provide runtime support for the dynamic computation partitioning and execution of the application. 
Different from existing works, the framework not only allows the dynamic partitioning for a single user but also supports the sharing of computation instances among multiple users in the cloud to achieve efficient utilization of the underlying cloud resources. Meanwhile, the framework has better scalability because it is designed on the elastic cloud fabrics. Based on the framework, we design a genetic algorithm for optimal computation partition. Both numerical evaluation and real world experiment have been performed, and the results show that the partitioned application can achieve at least two times better performance in terms of throughput than the application without partitioning.", "title": "" }, { "docid": "c0559cebfad123a67777868990d40c7e", "text": "One of the attractive methods for providing natural human-computer interaction is the use of the hand as an input device rather than the cumbersome devices such as keyboards and mice, which need the user to be located in a specific location to use these devices. Since human hand is an articulated object, it is an open issue to discuss. The most important thing in hand gesture recognition system is the input features, and the selection of good features representation. This paper presents a review study on the hand postures and gesture recognition methods, which is considered to be a challenging problem in the human-computer interaction context and promising as well. Many applications and techniques were discussed here with the explanation of system recognition framework and its main phases.", "title": "" }, { "docid": "e3db1429e8821649f35270609459cb0d", "text": "Novelty detection is the task of recognising events the differ from a model of normality. This paper proposes an acoustic novelty detector based on neural networks trained with an adversarial training strategy. The proposed approach is composed of a feature extraction stage that calculates Log-Mel spectral features from the input signal. Then, an autoencoder network, trained on a corpus of “normal” acoustic signals, is employed to detect whether a segment contains an abnormal event or not. A novelty is detected if the Euclidean distance between the input and the output of the autoencoder exceeds a certain threshold. The innovative contribution of the proposed approach resides in the training procedure of the autoencoder network: instead of using the conventional training procedure that minimises only the Minimum Mean Squared Error loss function, here we adopt an adversarial strategy, where a discriminator network is trained to distinguish between the output of the autoencoder and data sampled from the training corpus. The autoencoder, then, is trained also by using the binary cross-entropy loss calculated at the output of the discriminator network. The performance of the algorithm has been assessed on a corpus derived from the PASCAL CHiME dataset. The results showed that the proposed approach provides a relative performance improvement equal to 0.26% compared to the standard autoencoder. The significance of the improvement has been evaluated with a one-tailed z-test and resulted significant with p < 0.001. 
The presented approach thus showed promising results on this task and it could be extended as a general training strategy for autoencoders if confirmed by additional experiments.", "title": "" }, { "docid": "6b0e2a151fd9aa53a97884d3f6b34c33", "text": "Building systems that possess the sensitivity and intelligence to identify and describe high-level attributes in music audio signals continues to be an elusive goal but one that surely has broad and deep implications for a wide variety of applications. Hundreds of articles have so far been published toward this goal, and great progress appears to have been made. Some systems produce remarkable accuracies at recognizing high-level semantic concepts, such as music style, genre, and mood. However, it might be that these numbers do not mean what they seem. In this article, we take a state-of-the-art music content analysis system and investigate what causes it to achieve exceptionally high performance in a benchmark music audio dataset. We dissect the system to understand its operation, determine its sensitivities and limitations, and predict the kinds of knowledge it could and could not possess about music. We perform a series of experiments to illuminate what the system has actually learned to do and to what extent it is performing the intended music listening task. Our results demonstrate how the initial manifestation of music intelligence in this state of the art can be deceptive. Our work provides constructive directions toward developing music content analysis systems that can address the music information and creation needs of real-world users.", "title": "" }, { "docid": "69049d1f5a3b14bb00d57d16a93ec47f", "text": "The porphyrias are disorders of haem biosynthesis which present with acute neurovisceral attacks or disorders of sun-exposed skin. Acute attacks occur mainly in adults and comprise severe abdominal pain, nausea, vomiting, autonomic disturbance, central nervous system involvement and peripheral motor neuropathy. Cutaneous porphyrias can be acute or chronic presenting at various ages. Timely diagnosis depends on clinical suspicion leading to referral of appropriate samples for screening by reliable biochemical methods. All samples should be protected from light. Investigation for an acute attack: • Porphobilinogen (PBG) quantitation in a random urine sample collected during symptoms. Urine concentration must be assessed by measuring creatinine, and a repeat requested if urine creatinine <2 mmol/L. • Urgent porphobilinogen testing should be available within 24 h of sample receipt at the local laboratory. Urine porphyrin excretion (TUP) should subsequently be measured on this urine. • Urine porphobilinogen should be measured using a validated quantitative ion-exchange resin-based method or LC-MS. • Increased urine porphobilinogen excretion requires confirmatory testing and clinical advice from the National Acute Porphyria Service. • Identification of individual acute porphyrias requires analysis of urine, plasma and faecal porphyrins. Investigation for cutaneous porphyria: • An EDTA blood sample for plasma porphyrin fluorescence emission spectroscopy and random urine sample for TUP. • Whole blood for porphyrin analysis is essential to identify protoporphyria. • Faeces need only be collected, if first-line tests are positive or if clinical symptoms persist. Investigation for latent porphyria or family history: • Contact a specialist porphyria laboratory for advice. 
Clinical, family details are usually required.", "title": "" }, { "docid": "296ce1f0dd7bf02c8236fa858bb1957c", "text": "As many as one in 20 people in Europe and North America have some form of autoimmune disease. These diseases arise in genetically predisposed individuals but require an environmental trigger. Of the many potential environmental factors, infections are the most likely cause. Microbial antigens can induce cross-reactive immune responses against self-antigens, whereas infections can non-specifically enhance their presentation to the immune system. The immune system uses fail-safe mechanisms to suppress infection-associated tissue damage and thus limits autoimmune responses. The association between infection and autoimmune disease has, however, stimulated a debate as to whether such diseases might also be triggered by vaccines. Indeed there are numerous claims and counter claims relating to such a risk. Here we review the mechanisms involved in the induction of autoimmunity and assess the implications for vaccination in human beings.", "title": "" }, { "docid": "617d1d0900ddebb431ae8fe37ad2e23b", "text": "We used cDNA microarrays to assess gene expression profiles in 60 human cancer cell lines used in a drug discovery screen by the National Cancer Institute. Using these data, we linked bioinformatics and chemoinformatics by correlating gene expression and drug activity patterns in the NCI60 lines. Clustering the cell lines on the basis of gene expression yielded relationships very different from those obtained by clustering the cell lines on the basis of their response to drugs. Gene-drug relationships for the clinical agents 5-fluorouracil and L-asparaginase exemplify how variations in the transcript levels of particular genes relate to mechanisms of drug sensitivity and resistance. This is the first study to integrate large databases on gene expression and molecular pharmacology.", "title": "" }, { "docid": "40c4175be1573d9542f6f9f859fafb01", "text": "BACKGROUND\nFalls are a major threat to the health and independence of seniors. Regular physical activity (PA) can prevent 40% of all fall injuries. The challenge is to motivate and support seniors to be physically active. Persuasive systems can constitute valuable support for persons aiming at establishing and maintaining healthy habits. However, these systems need to support effective behavior change techniques (BCTs) for increasing older adults' PA and meet the senior users' requirements and preferences. Therefore, involving users as codesigners of new systems can be fruitful. Prestudies of the user's experience with similar solutions can facilitate future user-centered design of novel persuasive systems.\n\n\nOBJECTIVE\nThe aim of this study was to investigate how seniors experience using activity monitors (AMs) as support for PA in daily life. The addressed research questions are as follows: (1) What are the overall experiences of senior persons, of different age and balance function, in using wearable AMs in daily life?; (2) Which aspects did the users perceive relevant to make the measurements as meaningful and useful in the long-term perspective?; and (3) What needs and requirements did the users perceive as more relevant for the activity monitors to be useful in a long-term perspective?\n\n\nMETHODS\nThis qualitative interview study included 8 community-dwelling older adults (median age: 83 years). The participants' experiences in using two commercial AMs together with tablet-based apps for 9 days were investigated. 
Activity diaries during the usage and interviews after the usage were exploited to gather user experience. Comments in diaries were summarized, and interviews were analyzed by inductive content analysis.\n\n\nRESULTS\nThe users (n=8) perceived that, by using the AMs, their awareness of own PA had increased. However, the AMs' impact on the users' motivation for PA and activity behavior varied between participants. The diaries showed that self-estimated physical effort varied between participants and varied for each individual over time. Additionally, participants reported different types of accomplished activities; talking walks was most frequently reported. To be meaningful, measurements need to provide the user with a reliable receipt of whether his or her current activity behavior is sufficient for reaching an activity goal. Moreover, praise when reaching a goal was described as motivating feedback. To be useful, the devices must be easy to handle. In this study, the users perceived wearables as easy to handle, whereas tablets were perceived difficult to maneuver. Users reported in the diaries that the devices had been functional 78% (58/74) of the total test days.\n\n\nCONCLUSIONS\nActivity monitors can be valuable for supporting seniors' PA. However, the potential of the solutions for a broader group of seniors can significantly be increased. Areas of improvement include reliability, usability, and content supporting effective BCTs with respect to increasing older adults' PA.", "title": "" }, { "docid": "8d197bf27af825b9972a490d3cc9934c", "text": "The past decade has witnessed an increasing adoption of cloud database technology, which provides better scalability, availability, and fault-tolerance via transparent partitioning and replication, and automatic load balancing and fail-over. However, only a small number of cloud databases provide strong consistency guarantees for distributed transactions, despite decades of research on distributed transaction processing, due to practical challenges that arise in the cloud setting, where failures are the norm, and human administration is minimal. For example, dealing with locks left by transactions initiated by failed machines, and determining a multi-programming level that avoids thrashing without under-utilizing available resources, are some of the challenges that arise when using lock-based transaction processing mechanisms in the cloud context. Even in the case of optimistic concurrency control, most proposals in the literature deal with distributed validation but still require the database to acquire locks during two-phase commit when installing updates of a single transaction on multiple machines. Very little theoretical work has been done to entirely eliminate the need for locking in distributed transactions, including locks acquired during two-phase commit. In this paper, we re-design optimistic concurrency control to eliminate any need for locking even for atomic commitment, while handling the practical issues in earlier theoretical work related to this problem. 
We conduct an extensive experimental study to evaluate our approach against lock-based methods under various setups and workloads, and demonstrate that our approach provides many practical advantages in the cloud context.", "title": "" }, { "docid": "b1ba519ffe5321d9ab92ebed8d9264bb", "text": "OBJECTIVES\nThe purpose of this study was to establish reference charts of fetal biometric parameters measured by 2-dimensional sonography in a large Brazilian population.\n\n\nMETHODS\nA cross-sectional retrospective study was conducted including 31,476 low-risk singleton pregnancies between 18 and 38 weeks' gestation. The following fetal parameters were measured: biparietal diameter, head circumference, abdominal circumference, femur length, and estimated fetal weight. To assess the correlation between the fetal biometric parameters and gestational age, polynomial regression models were created, with adjustments made by the determination coefficient (R(2)).\n\n\nRESULTS\nThe means ± SDs of the biparietal diameter, head circumference, abdominal circumference, femur length, and estimated fetal weight measurements at 18 and 38 weeks were 4.2 ± 2.34 and 9.1 ± 4.0 cm, 15.3 ± 7.56 and 32.3 ± 11.75 cm, 13.3 ± 10.42 and 33.4 ± 20.06 cm, 2.8 ± 2.17 and 7.2 ± 3.58 cm, and 256.34 ± 34.03 and 3169.55 ± 416.93 g, respectively. Strong correlations were observed between all fetal biometric parameters and gestational age, best represented by second-degree equations, with R(2) values of 0.95, 0.96, 0.95, 0.95, and 0.95 for biparietal diameter, head circumference, abdominal circumference, femur length, and estimated fetal weight.\n\n\nCONCLUSIONS\nFetal biometric parameters were determined for a large Brazilian population, and they may serve as reference values in cases with a high risk of intrauterine growth disorders.", "title": "" }, { "docid": "b1eff907bd8b227275f094d57b627ac8", "text": "BACKGROUND\nPilonidal sinus is a chronic inflammatory disorder of the intergluteal sulcus. The disorder often negatively affects patients' quality of life, and there are numerous possible methods of operative treatment for pilonidal sinus. The aim of our study was to compare the results of 3 different operative procedures (tension-free primary closure, Limberg flap, and Karydakis technique) used in the treatment of pilonidal disease.\n\n\nMETHODS\nThe study was conducted via a prospective randomized design. The patients were randomized into 3 groups via a closed envelope method. Patients were included in the study after admission to our clinic with pilonidal sinus disease and operative treatment already were planned. The 2 main outcomes of the study were early complications from the methods used and later recurrences of the disease.\n\n\nRESULTS\nA total of 150 patients were included in the study, and the groups were similar in terms of age, sex, and American Society of Anesthesiologists scores. The median follow-up time of the study was 24.2 months (range, 18.5-34.27) postsurgery. The recurrence rates were 6% for both the Limberg and Karydakis groups and 4% for the tension-free primary closure group. Therefore, there was no substantial difference in the recurrence rates.\n\n\nCONCLUSION\nThe search for an ideal treatment modality for pilonidal sinus disease is still ongoing. The main conclusion of our study is that a tension-free healing side is much more important than a midline suture line. 
Also, tension-free primary closure is as effective as a flap procedure, and it is also easier to perform.", "title": "" }, { "docid": "d79a1a6398e98855ddd1181c141d7b00", "text": "In this paper we describe a new binarisation method designed specifically for OCR of low quality camera images: Background Surface Thresholding or BST. This method is robust to lighting variations and produces images with very little noise and consistent stroke width. BST computes a ”surface” of background intensities at every point in the image and performs adaptive thresholding based on this result. The surface is estimated by identifying regions of lowresolution text and interpolating neighbouring background intensities into these regions. The final threshold is a combination of this surface and a global offset. According to our evaluation BST produces considerably fewer OCR errors than Niblack’s local average method while also being more runtime efficient.", "title": "" }, { "docid": "3e0d88a135e7d7daff538eea1a6f2c9d", "text": "The first step in an image retrieval pipeline consists of comparing global descriptors from a large database to find a short list of candidate matching images. The more compact the global descriptor, the faster the descriptors can be compared for matching. State-of-the-art global descriptors based on Fisher Vectors are represented with tens of thousands of floating point numbers. While there is significant work on compression of local descriptors, there is relatively little work on compression of high dimensional Fisher Vectors. We study the problem of global descriptor compression in the context of image retrieval, focusing on extremely compact binary representations: 64-1024 bits. Motivated by the remarkable success of deep neural networks in recent literature, we propose a compression scheme based on deeply stacked Restricted Boltzmann Machines (SRBM), which learn lower dimensional non-linear subspaces on which the data lie. We provide a thorough evaluation of several state-of-the-art compression schemes based on PCA, Locality Sensitive Hashing, Product Quantization and greedy bit selection, and show that the proposed compression scheme outperforms all existing schemes.", "title": "" }, { "docid": "7e26a6ccd587ae420b9d2b83f6b54350", "text": "Because of the SARS epidemic in Asia, people chose to the Internet shopping instead of going shopping on streets. In other words, SARS actually gave the Internet an opportunity to revive from its earlier bubbles. The purpose of this research is to provide managers of shopping Websites regarding consumer purchasing decisions based on the CSI (Consumer Styles Inventory) which was proposed by Sproles (1985) and Sproles & Kendall (1986). According to the CSI, one can capture the decision-making styles of online shoppers. Furthermore, this research also discusses the gender differences among online shoppers. Exploratory factor analysis (EFA) was used to understand the decision-making styles and discriminant analysis was used to distinguish the differences between female and male shoppers. Managers of Internet shopping Websites can design a proper marketing mix with the findings that there are differences in purchasing decisions between genders.", "title": "" }, { "docid": "7f49cb5934130fb04c02db03bd40e83d", "text": "BACKGROUND\nResearch literature on problematic smartphone use, or smartphone addiction, has proliferated. However, relationships with existing categories of psychopathology are not well defined. 
We discuss the concept of problematic smartphone use, including possible causal pathways to such use.\n\n\nMETHOD\nWe conducted a systematic review of the relationship between problematic use with psychopathology. Using scholarly bibliographic databases, we screened 117 total citations, resulting in 23 peer-reviewer papers examining statistical relations between standardized measures of problematic smartphone use/use severity and the severity of psychopathology.\n\n\nRESULTS\nMost papers examined problematic use in relation to depression, anxiety, chronic stress and/or low self-esteem. Across this literature, without statistically adjusting for other relevant variables, depression severity was consistently related to problematic smartphone use, demonstrating at least medium effect sizes. Anxiety was also consistently related to problem use, but with small effect sizes. Stress was somewhat consistently related, with small to medium effects. Self-esteem was inconsistently related, with small to medium effects when found. Statistically adjusting for other relevant variables yielded similar but somewhat smaller effects.\n\n\nLIMITATIONS\nWe only included correlational studies in our systematic review, but address the few relevant experimental studies also.\n\n\nCONCLUSIONS\nWe discuss causal explanations for relationships between problem smartphone use and psychopathology.", "title": "" }, { "docid": "a48278ee8a21a33ff87b66248c6b0b8a", "text": "We describe a unified multi-turn multi-task spoken language understanding (SLU) solution capable of handling multiple context sensitive classification (intent determination) and sequence labeling (slot filling) tasks simultaneously. The proposed architecture is based on recurrent convolutional neural networks (RCNN) with shared feature layers and globally normalized sequence modeling components. The temporal dependencies within and across different tasks are encoded succinctly as recurrent connections. The dialog system responses beyond SLU component are also exploited as effective external features. We show with extensive experiments on a number of datasets that the proposed joint learning framework generates state-of-the-art results for both classification and tagging, and the contextual modeling based on recurrent and external features significantly improves the context sensitivity of SLU models.", "title": "" }, { "docid": "0f8bf207201692ad4905e28a2993ef29", "text": "Bluespec System Verilog is an EDL toolset for ASIC and FPGA design offering significantly higher productivity via a radically different approach to high-level synthesis. Many other attempts at high-level synthesis have tried to move the design language towards a more software-like specification of the behavior of the intended hardware. By means of code samples, demonstrations and measured results, we illustrate how Bluespec System Verilog, in an environment familiar to hardware designers, can significantly improve productivity without compromising generated hardware quality.", "title": "" } ]
scidocsrr
0642923b608cd6d9e2d8f3455cbc443b
Continuous Path Smoothing for Car-Like Robots Using B-Spline Curves
[ { "docid": "38382c04e7dc46f5db7f2383dcae11fb", "text": "Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological, and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. Simulation results and actual mobile robot experiments demonstrate the feasibility of this approach.", "title": "" } ]
[ { "docid": "a7be4f9177e6790756b7ede4a2d9ca79", "text": "Metabolomics, or the comprehensive profiling of small molecule metabolites in cells, tissues, or whole organisms, has undergone a rapid technological evolution in the past two decades. These advances have led to the application of metabolomics to defining predictive biomarkers for incident cardiometabolic diseases and, increasingly, as a blueprint for understanding those diseases' pathophysiologic mechanisms. Progress in this area and challenges for the future are reviewed here.", "title": "" }, { "docid": "f4bc0b7aa15de139ddb09e406fc1ce0b", "text": "This paper reviews the problem of catastrophic forgetting (the loss or disruption of previously learned information when new information is learned) in neural networks, and explores rehearsal mechanisms (the retraining of some of the previously learned information as the new information is added) as a potential solution. We replicate some of the experiments described by Ratcliff (1990), including those relating to a simple “recency” based rehearsal regime. We then develop further rehearsal regimes which are more effective than recency rehearsal. In particular “sweep rehearsal” is very successful at minimising catastrophic forgetting. One possible limitation of rehearsal in general, however, is that previously learned information may not be available for retraining. We describe a solution to this problem, “pseudorehearsal”, a method which provides the advantages of rehearsal without actually requiring any access to the previously learned information (the original training population) itself. We then suggest an interpretation of these rehearsal mechanisms in the context of a function approximation based account of neural network learning. Both rehearsal and pseudorehearsal may have practical applications, allowing new information to be integrated into an existing network with minimum disruption of old information.", "title": "" }, { "docid": "712636d3a1dfe2650c0568c8f7cf124c", "text": "Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’t change the network architecture or incur any inference overhead. 
The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD.", "title": "" }, { "docid": "42b9f909251aeb850a1bfcdf7ec3ace4", "text": "Kidney stones are one of the most common chronic disorders in industrialized countries. In patients with kidney stones, the goal of medical therapy is to prevent the formation of new kidney stones and to reduce growth of existing stones. The evaluation of the patient with kidney stones should identify dietary, environmental, and genetic factors that contribute to stone risk. Radiologic studies are required to identify the stone burden at the time of the initial evaluation and to follow up the patient over time to monitor success of the treatment program. For patients with a single stone an abbreviated laboratory evaluation to identify systemic disorders usually is sufficient. For patients with multiple kidney stones 24-hour urine chemistries need to be measured to identify abnormalities that predispose to kidney stones, which guides dietary and pharmacologic therapy to prevent future stone events.", "title": "" }, { "docid": "52315f23e419ba27e6fd058fe8b7aa9d", "text": "Detected obstacles overlaid on the original image Polar map: The agent is at the center of the map, facing 00. The blue points correspond to polar positions of the obstacle points around the agent. 1. Talukder, A., et al. \"Fast and reliable obstacle detection and segmentation for cross-country navigation.\" Intelligent Vehicle Symposium, 2002. IEEE. Vol. 2. IEEE, 2002. 2. Sun, Deqing, Stefan Roth, and Michael J. Black. \"Secrets of optical flow estimation and their principles.\" Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010. 3. Bernini, Nicola, et al. \"Real-time obstacle detection using stereo vision for autonomous ground vehicles: A survey.\" Intelligent Transportation Systems (ITSC), 2014 IEEE 17th International Conference on. IEEE, 2014. 4. Broggi, Alberto, et al. \"Stereo obstacle detection in challenging environments: the VIAC experience.\" Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on. IEEE, 2011.", "title": "" }, { "docid": "e56accce9d4ae911e85f5fd2b92a614a", "text": "This paper introduces and documents a novel image database specifically built for the purpose of development and bench-marking of camera-based digital forensic techniques. More than 14,000 images of various indoor and outdoor scenes have been acquired under controlled and thus widely comparable conditions from altogether 73 digital cameras. The cameras were drawn from only 25 different models to ensure that device-specific and model-specific characteristics can be disentangled and studied separately, as validated with results in this paper. In addition, auxiliary images for the estimation of device-specific sensor noise pattern were collected for each camera. Another subset of images to study model-specific JPEG compression algorithms has been compiled for each model. The 'Dresden Image Database' will be made freely available for scientific purposes when this accompanying paper is presented. 
The database is intended to become a useful resource for researchers and forensic investigators. Using a standard database as a benchmark not only makes results more comparable and reproducible, but it is also more economical and avoids potential copyright and privacy issues that go along with self-sampled benchmark sets from public photo communities on the Internet.", "title": "" }, { "docid": "ac0875c0f01d32315f4ea63049d3a1e1", "text": "Point clouds provide a flexible and scalable geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. Hence, the design of intelligent computational models that act directly on point clouds is critical, especially when efficiency considerations or noise preclude the possibility of expensive denoising and meshing procedures. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv is differentiable and can be plugged into existing architectures. Compared to existing modules operating largely in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked or recurrently applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. Beyond proposing this module, we provide extensive evaluation and analysis revealing that EdgeConv captures and exploits fine-grained geometric properties of point clouds. The proposed approach achieves state-of-the-art performance on standard benchmarks including ModelNet40 and S3DIS. ∗Equal Contribution", "title": "" }, { "docid": "de1f680fd80b20f005dab2ef8067f773", "text": "This paper describes a convolutional neural network based deep learning approach for bird song classification that was used in an audio record-based bird identification challenge, called BirdCLEF 2016. The training and test set contained about 24k and 8.5k recordings, belonging to 999 bird species. The recorded waveforms were very diverse in terms of length and content. We converted the waveforms into frequency domain and splitted into equal segments. The segments were fed into a convolutional neural network for feature learning, which was followed by fully connected layers for classification. In the official scores our solution reached a MAP score of over 40% for main species, and MAP score of over 33% for main species mixed with background species.", "title": "" }, { "docid": "731d9faffc834156d5218a09fbb82e27", "text": "With this paper we take a first step to understand the appropriation of social media by the police. For this purpose we analyzed the Twitter communication by the London Metropolitan Police (MET) and the Greater Manchester Police (GMP) during the riots in August 2011. The systematic comparison of tweets demonstrates that the two forces developed very different practices for using Twitter. 
While MET followed an instrumental approach in their communication, in which the police aimed to remain in a controlled position and keep a distance to the general public, GMP developed an expressive approach, in which the police actively decreased the distance to the citizens. In workshops and interviews, we asked the police officers about their perspectives, which confirmed the identified practices. Our study discusses benefits and risks of the two approaches and the potential impact of social media on the evolution of the role of police in society.", "title": "" }, { "docid": "a2b9c5f2b6299d0de91d80f9316a02e7", "text": "In this paper, with the help of knowledge base, we build and formulate a semantic space to connect the source and target languages, and apply it to the sequence-to-sequence framework to propose a Knowledge-Based Semantic Embedding (KBSE) method. In our KBSE method, the source sentence is firstly mapped into a knowledge based semantic space, and the target sentence is generated using a recurrent neural network with the internal meaning preserved. Experiments are conducted on two translation tasks, the electric business data and movie data, and the results show that our proposed method can achieve outstanding performance, compared with both the traditional SMT methods and the existing encoder-decoder models.", "title": "" }, { "docid": "288f831e93e83b86d28624e31bb2f16c", "text": "Deep learning has made significant improvements at many image processing tasks in recent years, such as image classification, object recognition and object detection. Convolutional neural networks (CNN), which is a popular deep learning architecture designed to process data in multiple array form, show great success to almost all detection & recognition problems and computer vision tasks. However, the number of parameters in a CNN is too high such that the computers require more energy and larger memory size. In order to solve this problem, we propose a novel energy efficient model Binary Weight and Hadamard-transformed Image Network (BWHIN), which is a combination of Binary Weight Network (BWN) and Hadamard-transformed Image Network (HIN). It is observed that energy efficiency is achieved with a slight sacrifice at classification accuracy. Among all energy efficient networks, our novel ensemble model outperforms other energy efficient models.", "title": "" }, { "docid": "ff4c069ab63ced5979cf6718eec30654", "text": "Dowser is a ‘guided’ fuzzer that combines taint tracking, program analysis and symbolic execution to find buffer overflow and underflow vulnerabilities buried deep in a program’s logic. The key idea is that analysis of a program lets us pinpoint the right areas in the program code to probe and the appropriate inputs to do so. Intuitively, for typical buffer overflows, we need consider only the code that accesses an array in a loop, rather than all possible instructions in the program. After finding all such candidate sets of instructions, we rank them according to an estimation of how likely they are to contain interesting vulnerabilities. We then subject the most promising sets to further testing. Specifically, we first use taint analysis to determine which input bytes influence the array index and then execute the program symbolically, making only this set of inputs symbolic. 
By constantly steering the symbolic execution along branch outcomes most likely to lead to overflows, we were able to detect deep bugs in real programs (like the nginx webserver, the inspircd IRC server, and the ffmpeg videoplayer). Two of the bugs we found were previously undocumented buffer overflows in ffmpeg and the poppler PDF rendering library.", "title": "" }, { "docid": "875e12852dabbcabe24cc59b764a4226", "text": "As more and more marketers incorporate social media as an integral part of the promotional mix, rigorous investigation of the determinants that impact consumers’ engagement in eWOM via social networks is becoming critical. Given the social and communal characteristics of social networking sites (SNSs) such as Facebook, MySpace and Friendster, this study examines how social relationship factors relate to eWOM transmitted via online social websites. Specifically, a conceptual model that identifies tie strength, homophily, trust, normative and informational interpersonal influence as an important antecedent to eWOM behaviour in SNSs was developed and tested. The results confirm that tie strength, trust, normative and informational influence are positively associated with users’ overall eWOM behaviour, whereas a negative relationship was found with regard to homophily. This study suggests that product-focused eWOM in SNSs is a unique phenomenon with important social implications. The implications for researchers, practitioners and policy makers of social media regulation are discussed.", "title": "" }, { "docid": "4e2bed31e5406e30ae59981fa8395d5b", "text": "Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-monthlong ALN academic university courses: a formal, structured, closed forum and an informal, nonstructured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, and role and power structures lead the knowledge construction process to high phases of critical thinking.", "title": "" }, { "docid": "6f410e93fa7ab9e9c4a7a5710fea88e2", "text": "We propose a fast, scalable locality-sensitive hashing method for the problem of retrieving similar physiological waveform time series. 
When compared to the naive k-nearest neighbor search, the method vastly speeds up the retrieval time of similar physiological waveforms without sacrificing significant accuracy. Our result shows that we can achieve 95% retrieval accuracy or better with up to an order of magnitude of speed-up. The extra time required in advance to create the optimal data structure is recovered when query quantity equals 15% of the repository, while the method incurs a trivial additional memory cost. We demonstrate the effectiveness of this method on an arterial blood pressure time series dataset extracted from the ICU physiological waveform repository of the MIMIC-II database.", "title": "" }, { "docid": "cd0bd7ac3aead17068c7f223fc19da60", "text": "In this letter, a class of wideband impedance transformers based on multisection quarter-wave transmission lines and short-circuited stubs are proposed to be incorporated with good passband frequency selectivity. A synthesis approach is then presented to design this two-port asymmetrical transformer with Chebyshev frequency response. For the specified impedance transformation ratio, bandwidth, and in-band return loss, the required impedance parameters can be directly determined. Next, a transformer with two section transmission lines in the middle is characterized, where a set of design curves are given for practical design. Theoretically, the proposed multisection transformer has attained good passband frequency selectivity against the reported counterparts. Finally, a 50-110 Ω impedance transformer with a fractional bandwidth of 77.8% and 15 dB in-band return loss is designed, fabricated and measured to verify the prediction.", "title": "" }, { "docid": "b1a538752056e91fd5800911f36e6eb0", "text": "BACKGROUND\nThe current, so-called \"Millennial\" generation of learners is frequently characterized as having deep understanding of, and appreciation for, technology and social connectedness. This generation of learners has also been molded by a unique set of cultural influences that are essential for medical educators to consider in all aspects of their teaching, including curriculum design, student assessment, and interactions between faculty and learners.\n\n\nAIM\n The following tips outline an approach to facilitating learning of our current generation of medical trainees.\n\n\nMETHOD\n The method is based on the available literature and the authors' experiences with Millennial Learners in medical training.\n\n\nRESULTS\n The 12 tips provide detailed approaches and specific strategies for understanding and engaging Millennial Learners and enhancing their learning.\n\n\nCONCLUSION\n With an increased understanding of the characteristics of the current generation of medical trainees, faculty will be better able to facilitate learning and optimize interactions with Millennial Learners.", "title": "" }, { "docid": "1e4f13016c846039f7bbed47810b8b3d", "text": "This paper characterizes general properties of useful, or Effective, explanations of recommendations. It describes a methodology based on focus groups, in which we elicit what helps moviegoers decide whether or not they would like a movie. 
Our results highlight the importance of personalizing explanations to the individual user, as well as considering the source of recommendations, user mood, the effects of group viewing, and the effect of explanations on user expectations.", "title": "" }, { "docid": "7f83aa38f6f715285b757e235da04257", "text": "In recent researches on inverter-based distributed generators, disadvantages of traditional grid-connected current control, such as no grid-forming ability and lack of inertia, have been pointed out. As a result, novel control methods like droop control and virtual synchronous generator (VSG) have been proposed. In both methods, droop characteristics are used to control active and reactive power, and the only difference between them is that VSG has virtual inertia with the emulation of swing equation, whereas droop control has no inertia. In this paper, dynamic characteristics of both control methods are studied, in both stand-alone mode and synchronous-generator-connected mode, to understand the differences caused by swing equation. Small-signal models are built to compare transient responses of frequency during a small loading transition, and state-space models are built to analyze oscillation of output active power. Effects of delays in both controls are also studied, and an inertial droop control method is proposed based on the comparison. The results are verified by simulations and experiments. It is suggested that VSG control and proposed inertial droop control inherits the advantages of droop control, and in addition, provides inertia support for the system.", "title": "" }, { "docid": "5467003778aa2c120c36ac023f0df704", "text": "We consider the task of automated estimation of facial expression intensity. This involves estimation of multiple output variables (facial action units — AUs) that are structurally dependent. Their structure arises from statistically induced co-occurrence patterns of AU intensity levels. Modeling this structure is critical for improving the estimation performance; however, this performance is bounded by the quality of the input features extracted from face images. The goal of this paper is to model these structures and estimate complex feature representations simultaneously by combining conditional random field (CRF) encoded AU dependencies with deep learning. To this end, we propose a novel Copula CNN deep learning approach for modeling multivariate ordinal variables. Our model accounts for ordinal structure in output variables and their non-linear dependencies via copula functions modeled as cliques of a CRF. These are jointly optimized with deep CNN feature encoding layers using a newly introduced balanced batch iterative training algorithm. We demonstrate the effectiveness of our approach on the task of AU intensity estimation on two benchmark datasets. We show that joint learning of the deep features and the target output structure results in significant performance gains compared to existing deep structured models for analysis of facial expressions.", "title": "" } ]
scidocsrr
c452c6a4553d343cefe3fd686b2c8692
Analyzing Argumentative Discourse Units in Online Interactions
[ { "docid": "d7a348b092064acf2d6a4fd7d6ef8ee2", "text": "Argumentation theory involves the analysis of naturally occurring argument, and one key tool employed to this end both in the academic community and in teaching critical thinking skills to undergraduates is argument diagramming. By identifying the structure of an argument in terms of its constituents and the relationships between them, it becomes easier to critically evaluate each part of an argument in turn. The task of analysis and diagramming, however, is labor intensive and often idiosyncratic, which can make academic exchange difficult. The Araucaria system provides an interface which supports the diagramming process, and then saves the result using AML, an open standard, designed in XML, for describing argument structure. Araucaria aims to be of use not only in pedagogical situations, but also in support of research activity. As a result, it has been designed from the outset to handle more advanced argumentation theoretic concepts such as schemes, which capture stereotypical patterns of reasoning. The software is also designed to be compatible with a number of applications under development, including dialogic interaction and online corpus provision. Together, these features, combined with its platform independence and ease of use, have the potential to make Araucaria a valuable resource for the academic community.", "title": "" }, { "docid": "5f7adc28fab008d93a968b6a1e5ad061", "text": "This paper describes recent approaches using text-mining to automatically profile and extract arguments from legal cases. We outline some of the background context and motivations. We then turn to consider issues related to the construction and composition of a corpora of legal cases. We show how a Context-Free Grammar can be used to extract arguments, and how ontologies and Natural Language Processing can identify complex information such as case factors and participant roles. Together the results bring us closer to automatic identification of legal arguments.", "title": "" } ]
[ { "docid": "a3ad2be5b2b44277026ee9f84c0d416b", "text": "In order to attain a useful balanced scorecard (BSC), appropriate performance perspectives and indicators are crucial to reflect all strategies of the organisation. The objectives of this survey were to give an insight regarding the situation of the BSC in the health sector over the past decade, and to afford a generic approach of the BSC development for health settings with specific focus on performance perspectives, performance indicators and BSC generation. After an extensive search based on publication date and research content, 29 articles published since 2002 were identified, categorised and analysed. Four critical attributes of each article were analysed, including BSC generation, performance perspectives, performance indicators and auxiliary tools. The results showed that 'internal business process' was the most notable BSC perspective as it was included in all reviewed articles. After investigating the literature, it was concluded that its comprehensiveness is the reason for the importance and high usage of this perspective. The findings showed that 12 cases out of 29 reviewed articles (41%) exceeded the maximum number of key performance indicators (KPI) suggested in a previous study. It was found that all 12 cases were large organisations with numerous departments (e.g. national health organisations). Such organisations require numerous KPI to cover all of their strategic objectives. It was recommended to utilise the cascaded BSC within such organisations to avoid complexity and difficulty in gathering, analysing and interpreting performance data. Meanwhile it requires more medical staff to contribute in BSC development, which will result in greater reliability of the BSC.", "title": "" }, { "docid": "7e0b9941d5019927fce0a1223a88d6b5", "text": "Representation and recognition of events in a video is important for a number of tasks such as video surveillance, video browsing and content based video indexing. This paper describes the results of a \"Challenge Project on Video Event Taxonomy\" sponsored by the Advanced Research and Development Activity (ARDA) of the U.S. Government in the summer and fall of 2003. The project brought together more than 30 researchers in computer vision and knowledge representation and representatives of the user community. It resulted in the development of a formal language for describing an ontology of events, which we call VERL (Video Event Representation Language) and a companion language called VEML (Video Event Markup Language) to annotate instances of the events described in VERL. This paper provides a summary of VERL and VEML as well as the considerations associated with the specific design choices.", "title": "" }, { "docid": "799ccd75d6781e38cf5e2faee5784cae", "text": "Recurrent neural networks (RNNs) form an important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. 
In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets – Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.", "title": "" }, { "docid": "d3f97e0de15ab18296e161e287890e18", "text": "Nosocomial or hospital acquired infections threaten the survival and neurodevelopmental outcomes of infants admitted to the neonatal intensive care unit, and increase cost of care. Premature infants are particularly vulnerable since they often undergo invasive procedures and are dependent on central catheters to deliver nutrition and on ventilators for respiratory support. Prevention of nosocomial infection is a critical patient safety imperative, and invariably requires a multidisciplinary approach. There are no short cuts. Hand hygiene before and after patient contact is the most important measure, and yet, compliance with this simple measure can be unsatisfactory. Alcohol based hand sanitizer is effective against many microorganisms and is efficient, compared to plain or antiseptic containing soaps. The use of maternal breast milk is another inexpensive and simple measure to reduce infection rates. Efforts to replicate the anti-infectious properties of maternal breast milk by the use of probiotics, prebiotics, and synbiotics have met with variable success, and there are ongoing trials of lactoferrin, an iron binding whey protein present in large quantities in colostrum. Attempts to boost the immunoglobulin levels of preterm infants with exogenous immunoglobulins have not been shown to reduce nosocomial infections significantly. Over the last decade, improvements in the incidence of catheter-related infections have been achieved, with meticulous attention to every detail from insertion to maintenance, with some centers reporting zero rates for such infections. Other nosocomial infections like ventilator acquired pneumonia and staphylococcus aureus infection remain problematic, and outbreaks with multidrug resistant organisms continue to have disastrous consequences. Management of infections is based on the profile of microorganisms in the neonatal unit and community and targeted therapy is required to control the disease without leading to the development of more resistant strains.", "title": "" }, { "docid": "3dd8c177ae928f7ccad2aa980bd8c747", "text": "The quality and nature of knowledge that can be found by an automated knowledge-extraction system depends on its inputs. For systems that learn by reading text, the Web offers a breadth of topics and currency, but it also presents the problems of dealing with casual, unedited writing, non-textual inputs, and the mingling of languages. 
The results of extraction using the KNEXT system on two Web corpora – Wikipedia and a collection of weblog entries – indicate that, with automatic filtering of the output, even ungrammatical writing on arbitrary topics can yield an extensive knowledge base, which human judges find to be of good quality, with propositions receiving an average score across both corpora of 2.34 (where the range is 1 to 5 and lower is better) versus 3.00 for unfiltered output from the same sources.", "title": "" }, { "docid": "03dc797bafa51245791de2b7c663a305", "text": "In many applications of computational geometry to modeling objects and processes in the physical world, the participating objects are in a state of continuous change. Motion is the most ubiquitous kind of continuous transformation but others, such as shape deformation, are also possible. In a recent paper, Baech, Guibas, and Hershberger [BGH97] proposed the framework of kinetic data structures (KDSs) as a way to maintain, in a completely on-line fashion, desirable information about the state of a geometric system in continuous motion or change. They gave examples of kinetic data structures for the maximum of a set of (changing) numbers, and for the convex hull and closest pair of a set of (moving) points in the plane. The KDS framework allows each object to change its motion at will according to interactions with other moving objects, the environment, etc. We implemented the KDSs described in [BGH97], as well as some alternative methods serving the same purpose, as a way to validate the kinetic data structures framework in practice. In this note, we report some preliminary results on the maintenance of the convex hull, describe the experimental setup, compare three alternative methods, discuss the value of the measures of quality for KDSs proposed by [BGH97], and highlight some important numerical issues.", "title": "" }, { "docid": "d8143c0b083defa15182e079b23bdfe8", "text": "OBJECTIVES\nThe purpose of this study was to compare the incidence of genital injury following penile-vaginal penetration with and without consent.\n\n\nDESIGN\nThis study compared observations of genital injuries from two cohorts.\n\n\nSETTING\nParticipants were drawn from St. Mary's Sexual Assault Referral Centre and a general practice surgery in Manchester, and a general practice surgery in Buckinghamshire.\n\n\nPARTICIPANTS\nTwo cohorts were recruited: a retrospective cohort of 500 complainants referred to a specialist Sexual Assault Referral Centre (the Cases) and 68 women recruited at the time of their routine cervical smear test who had recently had sexual intercourse (the Comparison group).\n\n\nMAIN OUTCOME MEASURES\nPresence of genital injuries.\n\n\nRESULTS\n22.8% (n=00, 95% CI 19.2-26.7) of adult complainants of penile-vaginal rape by a single assailant sustained an injury to the genitalia that was visible within 48h of the incident. This was approximately three times more than the 5.9% (n=68, 95% CI 1.6-14.4) of women who sustained a genital injury during consensual sex. This was a statistically significant difference (α<0.05, p=0.0007). 
Factors such as hormonal status, position during intercourse, criminal justice outcome, relationship to assailant, and the locations, sizes and types of injuries were also considered but the only factor associated with injury was the relationship with the complainant, with an increased risk of injury if the assailant was known to the complainant (p=0.019).\n\n\nCONCLUSIONS\nMost complainants of rape (n=500, 77%, 95% CI 73-81%) will not sustain any genital injury, although women are three times more likely to sustain a genital injury from an assault than consensual intercourse.", "title": "" }, { "docid": "1add7dcbe4f7c666e0453d5fa6661b31", "text": "Convolutive blind source separation (CBSS) that exploits the sparsity of source signals in the frequency domain is addressed in this paper. We assume the sources follow complex Laplacian-like distribution for complex random variable, in which the real part and imaginary part of complex-valued source signals are not necessarily independent. Based on the maximum a posteriori (MAP) criterion, we propose a novel natural gradient method for complex sparse representation. Moreover, a new CBSS method is further developed based on complex sparse representation. The developed CBSS algorithm works in the frequency domain. Here, we assume that the source signals are sufficiently sparse in the frequency domain. If the sources are sufficiently sparse in the frequency domain and the filter length of mixing channels is relatively small and can be estimated, we can even achieve underdetermined CBSS. We illustrate the validity and performance of the proposed learning algorithm by several simulation examples.", "title": "" }, { "docid": "890a2092f3f55799e9c0216dac3d9e2f", "text": "The rise in popularity of permissioned blockchain platforms in recent time is significant. Hyperledger Fabric is one such permissioned blockchain platform and one of the Hyperledger projects hosted by the Linux Foundation. The Fabric comprises various components such as smart-contracts, endorsers, committers, validators, and orderers. As the performance of blockchain platform is a major concern for enterprise applications, in this work, we perform a comprehensive empirical study to characterize the performance of Hyperledger Fabric and identify potential performance bottlenecks to gain a better understanding of the system. We follow a two-phased approach. In the first phase, our goal is to understand the impact of various configuration parameters such as block size, endorsement policy, channels, resource allocation, state database choice on the transaction throughput & latency to provide various guidelines on configuring these parameters. In addition, we also aim to identify performance bottlenecks and hotspots. We observed that (1) endorsement policy verification, (2) sequential policy validation of transactions in a block, and (3) state validation and commit (with CouchDB) were the three major bottlenecks. In the second phase, we focus on optimizing Hyperledger Fabric v1.0 based on our observations. We introduced and studied various simple optimizations such as aggressive caching for endorsement policy verification in the cryptography component (3x improvement in the performance) and parallelizing endorsement policy verification (7x improvement). Further, we enhanced and measured the effect of an existing bulk read/write optimization for CouchDB during state validation & commit phase (2.5x improvement). 
By combining all three optimizations1, we improved the overall throughput by 16x (i.e., from 140 tps to 2250 tps).", "title": "" }, { "docid": "fe903498e0c3345d7e5ebc8bf3407c2f", "text": "This paper describes a general continuous-time framework for visual-inertial simultaneous localization and mapping and calibration. We show how to use a spline parameterization that closely matches the torque-minimal motion of the sensor. Compared to traditional discrete-time solutions, the continuous-time formulation is particularly useful for solving problems with high-frame rate sensors and multiple unsynchronized devices. We demonstrate the applicability of the method for multi-sensor visual-inertial SLAM and calibration by accurately establishing the relative pose and internal parameters of multiple unsynchronized devices. We also show the advantages of the approach through evaluation and uniform treatment of both global and rolling shutter cameras within visual and visual-inertial SLAM systems.", "title": "" }, { "docid": "de0761b7a43cafe7f30d6f8e518dd031", "text": "The Internet of Things (IOT) has been denoted as a new wave of information and communication technology (ICT) advancements. The IOT is a multidisciplinary concept that encompasses a wide range of several technologies, application domains, device capabilities, and operational strategies, etc. The ongoing IOT research activities are directed towards the definition and design of standards and open architectures which is still have the issues requiring a global consensus before the final deployment. This paper gives over view about IOT technologies and applications related to agriculture with comparison of other survey papers and proposed a novel irrigation management system. Our main objective of this work is to for Farming where various new technologies to yield higher growth of the crops and their water supply. Automated control features with latest electronic technology using microcontroller which turns the pumping motor ON and OFF on detecting the dampness content of the earth and GSM phone line is proposed after measuring the temperature, humidity, and soil moisture.", "title": "" }, { "docid": "ef08ef786fd759b33a7d323c69be19db", "text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.", "title": "" }, { "docid": "d9950f75380758d0a0f4fd9d6e885dfd", "text": "In recent decades, the interactive whiteboard (IWB) has become a relatively common educational tool in Western schools. The IWB is essentially a large touch screen, that enables the user to interact with digital content in ways that are not possible with an ordinary computer-projector-canvas setup. 
However, the unique possibilities of IWBs are rarely leveraged to enhance teaching and learning beyond the primary school level. This is particularly noticeable in high school physics. We describe how a high school physics teacher learned to use an IWB in a new way, how she planned and implemented a lesson on the topic of orbital motion of planets, and what tensions arose in the process. We used an ethnographic approach to account for the teacher’s and involved students’ perspectives throughout the process of teacher preparation, lesson planning, and the implementation of the lesson. To interpret the data, we used the conceptual framework of activity theory. We found that an entrenched culture of traditional white/blackboard use in physics instruction interferes with more technologically innovative and more student-centered instructional approaches that leverage the IWB’s unique instructional potential. Furthermore, we found that the teacher’s confidence in the mastery of the IWB plays a crucial role in the teacher’s willingness to transfer agency within the lesson to the students.", "title": "" }, { "docid": "4c3d8c30223ef63b54f8c7ba3bd061ed", "text": "There is much recent work on using the digital footprints left by people on social media to predict personal traits and gain a deeper understanding of individuals. Due to the veracity of social media, imperfections in prediction algorithms, and the sensitive nature of one's personal traits, much research is still needed to better understand the effectiveness of this line of work, including users' preferences of sharing their computationally derived traits. In this paper, we report a two- part study involving 256 participants, which (1) examines the feasibility and effectiveness of automatically deriving three types of personality traits from Twitter, including Big 5 personality, basic human values, and fundamental needs, and (2) investigates users' opinions of using and sharing these traits. Our findings show there is a potential feasibility of automatically deriving one's personality traits from social media with various factors impacting the accuracy of models. The results also indicate over 61.5% users are willing to share their derived traits in the workplace and that a number of factors significantly influence their sharing preferences. Since our findings demonstrate the feasibility of automatically inferring a user's personal traits from social media, we discuss their implications for designing a new generation of privacy-preserving, hyper-personalized systems.", "title": "" }, { "docid": "b5214fd5f8f8849a57d453b47f1d73f0", "text": "The development of Graphical User Interface (GUI) is meant to significantly increase the ease of usability of software applications so that the can be used by users from different backgrounds and knowledge level. Such a development becomes even more important and challenging when the users are those that have limited literacy capabilities. Although the progress of development for standard software interface has increased significantly, similar progress has not been available in interface for illiterate people. To fill this gap, this paper presents our research on developing interface of software application devoted to illiterate people. In particular, the proposed interface was designed for mobile application and combines graphic design and linguistic approaches. 
With such a feature, the developed interface is expected to provide an easy-to-use application for illiterate people.", "title": "" }, { "docid": "6c9d84ced9dd23cdb7542a50f1459fef", "text": "This article outlines a framework for the analysis of economic integration and its relation to the asymmetries of economic and social development. Consciously breaking with state-centric forms of social science, it argues for a research agenda that is more adequate to the exigencies and consequences of globalisation than has traditionally been the case in 'development studies'. Drawing on earlier attempts to analyse the crossborder activities of firms, their spatial configurations and developmental consequences, the article moves beyond these by proposing the framework of the 'global production network' (GPN). It explores the conceptual elements involved in this framework in some detail and then turns to sketch a stylised example of a GPN. The article concludes with a brief indication of the benefits that could be delivered by research informed by GPN analysis.", "title": "" }, { "docid": "98cd53e6bf758a382653cb7252169d22", "text": "We introduce a novel malware detection algorithm based on the analysis of graphs constructed from dynamically collected instruction traces of the target executable. These graphs represent Markov chains, where the vertices are the instructions and the transition probabilities are estimated by the data contained in the trace. We use a combination of graph kernels to create a similarity matrix between the instruction trace graphs. The resulting graph kernel measures similarity between graphs on both local and global levels. Finally, the similarity matrix is sent to a support vector machine to perform classification. Our method is particularly appealing because we do not base our classifications on the raw n-gram data, but rather use our data representation to perform classification in graph space. We demonstrate the performance of our algorithm on two classification problems: benign software versus malware, and the Netbull virus with different packers versus other classes of viruses. Our results show a statistically significant improvement over signature-based and other machine learning-based detection methods.", "title": "" }, { "docid": "6927647b1e1f6bf9bcf65db50e9f8d6e", "text": "Six of the ten leading causes of death in the United States can be directly linked to diet. Measuring accurate dietary intake, the process of determining what someone eats is considered to be an open research problem in the nutrition and health fields. We are developing image-based tools in order to automatically obtain accurate estimates of what foods a user consumes. We have developed a novel food record application using the embedded camera in a mobile device. This paper describes the current status of food image analysis and overviews problems that still need to be addressed.", "title": "" }, { "docid": "81b5379abf3849e1ae4e233fd4955062", "text": "Three-phase dc/dc converters have the superior characteristics including lower current rating of switches, the reduced output filter requirement, and effective utilization of transformers. To further reduce the voltage stress on switches, three-phase three-level (TPTL) dc/dc converters have been investigated recently; however, numerous active power switches result in a complicated configuration in the available topologies. Therefore, a novel TPTL dc/dc converter adopting a symmetrical duty cycle control is proposed in this paper. 
Compared with the available TPTL converters, the proposed converter has fewer switches and simpler configuration. The voltage stress on all switches can be reduced to the half of the input voltage. Meanwhile, the ripple frequency of output current can be increased significantly, resulting in a reduced filter requirement. Experimental results from a 540-660-V input and 48-V/20-A output are presented to verify the theoretical analysis and the performance of the proposed converter.", "title": "" }, { "docid": "c11b77f1392c79f4a03f9633c8f97f4d", "text": "The paper introduces and discusses a concept of syntactic n-grams (sn-grams) that can be applied instead of traditional n-grams in many NLP tasks. Sn-grams are constructed by following paths in syntactic trees, so sngrams allow bringing syntactic knowledge into machine learning methods. Still, previous parsing is necessary for their construction. We applied sn-grams in the task of authorship attribution for corpora of three and seven authors with very promising results.", "title": "" } ]
scidocsrr
40304cb4069dcd4e8e12cb2d1d782d2e
Classification using Machine Learning Techniques
[ { "docid": "1c89b9927bd5e81c53a9896cd3122b92", "text": "The whole world is changed rapidly and using the current technologies Internet becomes an essential need for everyone. Web is used in every field. Most of the people use web for a common purpose like online shopping, chatting etc. During an online shopping large number of reviews/opinions are given by the users that reflect whether the product is good or bad. These reviews need to be explored, analyse and organized for better decision making. Opinion Mining is a natural language processing task that deals with finding orientation of opinion in a piece of text with respect to a topic. In this paper a document based opinion mining system is proposed that classify the documents as positive, negative and neutral. Negation is also handled in the proposed system. Experimental results using reviews of movies show the effectiveness of the system.", "title": "" } ]
[ { "docid": "014ff12b51ce9f4399bca09e0dedabed", "text": "The crystallographic preferred orientation (CPO) of olivine produced during dislocation creep is considered to be the primary cause of elastic anisotropy in Earth’s upper mantle and is often used to determine the direction of mantle flow. A fundamental question remains, however, as to whether the alignment of olivine crystals is uniquely produced by dislocation creep. Here we report the development of CPO in iron-free olivine (that is, forsterite) during diffusion creep; the intensity and pattern of CPO depend on temperature and the presence of melt, which control the appearance of crystallographic planes on grain boundaries. Grain boundary sliding on these crystallography-controlled boundaries accommodated by diffusion contributes to grain rotation, resulting in a CPO. We show that strong radial anisotropy is anticipated at temperatures corresponding to depths where melting initiates to depths where strongly anisotropic and low seismic velocities are detected. Conversely, weak anisotropy is anticipated at temperatures corresponding to depths where almost isotropic mantle is found. We propose diffusion creep to be the primary means of mantle flow.", "title": "" }, { "docid": "aa0e52963f4fab6db73df79a16fb40aa", "text": "GENTNER, DEDRE. Metaphor as Structure Mapping: The Relational Shift. CHILD DEVELOPMENT, 1988, 59, 47-59. The goal of this research is to clarify the development of metaphor by using structure-mapping theory to make distinctions among kinds of metaphors. In particular, it is proposed that children can understand metaphors based on shared object attributes before those based on shared relational structure. This predicts (1) early ability to interpret metaphors based on shared attributes, (2 ) a developmental increase in ability to interpret metaphors based on shared relational structure, and (3) a shift from primarily attributional to primarily relational interpretations for metaphors that can be understood in either way. 2 experiments were performed to test these claims. There were 3 kinds of metaphors, varying in whether the shared information forming the basis for the interpretation was attributional, relational, or both. In Experiment 1, children aged 5-6 and 910 and adults produced interpretations of the 3 types of metaphors. The attributionality and relationality of their interpretations were scored by independent judges. In Experiment 2, children aged 45 and 7-8 and adults chose which of 2 interpretations-relational or attributional-of a metaphor they preferred. In both experiments, relational responding increased significantly with age, but attributional responding did not. These results indicate a developmental shift toward a focus on relational structure in metaphor interpretation.", "title": "" }, { "docid": "fb6068d738c7865d07999052750ff6a8", "text": "Malware detection and prevention methods are increasingly becoming necessary for computer systems connected to the Internet. The traditional signature based detection of malware fails for metamorphic malware which changes its code structurally while maintaining functionality at time of propagation. This category of malware is called metamorphic malware. In this paper we dynamically analyze the executables produced from various metamorphic generators through an emulator by tracing API calls. 
A signature is generated for an entire malware class (each class representing a family of viruses generated from one metamorphic generator) instead of for individual malware sample. We show that most of the metamorphic viruses of same family are detected by the same base signature. Once a base signature for a particular metamorphic generator is generated, all the metamorphic viruses created from that tool are easily detected by the proposed method. A Proximity Index between the various Metamorphic generators has been proposed to determine how similar two or more generators are.", "title": "" }, { "docid": "b18f98cfad913ebf3ce1780b666277cb", "text": "Deep convolutional neural network (DCNN) has achieved remarkable performance on object detection and speech recognition in recent years. However, the excellent performance of a DCNN incurs high computational complexity and large memory requirement In this paper, an equal distance nonuniform quantization (ENQ) scheme and a K-means clustering nonuniform quantization (KNQ) scheme are proposed to reduce the required memory storage when low complexity hardware or software implementations are considered. For the VGG-16 and the AlexNet, the proposed nonuniform quantization schemes reduce the number of required memory storage by approximately 50% while achieving almost the same or even better classification accuracy compared to the state-of-the-art quantization method. Compared to the ENQ scheme, the proposed KNQ scheme provides a better tradeoff when higher accuracy is required.", "title": "" }, { "docid": "999f30cbd208bc7d262de954d29dcd39", "text": "Purpose\nThe purpose of the study was to determine the sensitivity and specificity, and to establish cutoff points for the severity index Percentage of Consonants Correct - Revised (PCC-R) in Brazilian Portuguese-speaking children with and without speech sound disorders.\n\n\nMethods\n72 children between 5:00 and 7:11 years old - 36 children without speech and language complaints and 36 children with speech sound disorders. The PCC-R was applied to the figure naming and word imitation tasks that are part of the ABFW Child Language Test. Results were statistically analyzed. The ROC curve was performed and sensitivity and specificity values ​​of the index were verified.\n\n\nResults\nThe group of children without speech sound disorders presented greater PCC-R values in both tasks, regardless of the gender of the participants. The cutoff value observed for the picture naming task was 93.4%, with a sensitivity value of 0.89 and specificity of 0.94 (age independent). For the word imitation task, results were age-dependent: for age group ≤6:5 years old, the cutoff value was 91.0% (sensitivity of 0.77 and specificity of 0.94) and for age group >6:5 years-old, the cutoff value was 93.9% (sensitivity of 0.93 and specificity of 0.94).\n\n\nConclusion\nGiven the high sensitivity and specificity of PCC-R, we can conclude that the index was effective in discriminating and identifying children with and without speech sound disorders.", "title": "" }, { "docid": "d2f929806163b2be07c57f0b34fdb3da", "text": "This article reviews the use of robotic technology for otolaryngologic surgery. The authors discuss the development of the technology and its current uses in the operating room. 
They address procedures such as oropharyngeal transoral robotic surgery (TORS), laryngeal TORS, and thyroidectomy, and also note the role of robotics in teaching.", "title": "" }, { "docid": "7f83946dd7d9869aa49bed57107c2870", "text": "A study of wireless technologies for IoT applications in terms of power consumption has been presented in this paper. The study focuses on the importance of using low power wireless techniques and modules in IoT applications by introducing a comparative between different low power wireless communication techniques such as ZigBee, Low Power Wi-Fi, 6LowPAN, LPWA and their modules to conserve power and longing the life for the IoT network sensors. The approach of the study is in term of protocol used and the particular module that achieve that protocol. The candidate protocols are classified according to the range of connectivity between sensor nodes. For short ranges connectivity the candidate protocols are ZigBee, 6LoWPAN and low power Wi-Fi. For long connectivity the candidate is LoRaWAN protocol. The results of the study demonstrate that the choice of module for each protocol plays a vital role in battery life due to the difference of power consumption for each module/protocol. So, the evaluation of protocols with each other depends on the module used.", "title": "" }, { "docid": "342e7faa2f5b71b9bde287f05f6118c7", "text": "Skyline queries have wide-ranging applications in fields that involve multi-criteria decision making, including tourism, retail industry, and human resources. By automatically removing incompetent candidates, skyline queries allow users to focus on a subset of superior data items (i.e., the skyline), thus reducing the decision-making overhead. However, users are still required to interpret and compare these superior items manually before making a successful choice. This task is challenging because of two issues. First, people usually have fuzzy, unstable, and inconsistent preferences when presented with multiple candidates. Second, skyline queries do not reveal the reasons for the superiority of certain skyline points in a multi-dimensional space. To address these issues, we propose SkyLens, a visual analytic system aiming at revealing the superiority of skyline points from different perspectives and at different scales to aid users in their decision making. Two scenarios demonstrate the usefulness of SkyLens on two datasets with a dozen of attributes. A qualitative study is also conducted to show that users can efficiently accomplish skyline understanding and comparison tasks with SkyLens.", "title": "" }, { "docid": "6a3bb84e7b8486692611aaa790609099", "text": "As ubiquitous commerce using IT convergence technologies is coming, it is important for the strategy of cosmetic sales to investigate the sensibility and the degree of preference in the environment for which the makeup style has changed focusing on being consumer centric. The users caused the diversification of the facial makeup styles, because they seek makeup and individuality to satisfy their needs. In this paper, we proposed the effect of the facial makeup style recommendation on visual sensibility. Development of the facial makeup style recommendation system used a user interface, sensibility analysis, weather forecast, and collaborative filtering for the facial makeup styles to satisfy the user’s needs in the cosmetic industry. 
Collaborative filtering was adopted to recommend facial makeup style of interest for users based on the predictive relationship discovered between the current user and other previous users. We used makeup styles in the survey questionnaire. The pictures of makeup style details, such as foundation, color lens, eye shadow, blusher, eyelash, lipstick, hairstyle, hairpin, necklace, earring, and hair length were evaluated in terms of sensibility. The data were analyzed by SPSS using ANOVA and factor analysis to discover the most effective types of details from the consumer’s sensibility viewpoint. Sensibility was composed of three concepts: contemporary, mature, and individual. The details of facial makeup styles were positioned in 3D-concept space to relate each type of detail to the makeup concept regarding a woman’s cosmetics. Ultimately, this paper suggests empirical applications to verify the adequacy and the validity of this system.", "title": "" }, { "docid": "b99207292a098761d1bb5cc220cf0790", "text": "Many researchers have attempted to predict the Enron corporate hierarchy from the data. This work, however, has been hampered by a lack of data. We present a new, large, and freely available gold-standard hierarchy. Using our new gold standard, we show that a simple lower bound for social network-based systems outperforms an upper bound on the approach taken by current NLP systems.", "title": "" }, { "docid": "efe8cf69a4666151603393032af22d8a", "text": "In this paper we present and discuss the findings of a study that investigated how people manage their collections of digital photographs. The six-month, 13-participant study included interviews, questionnaires, and analysis of usage statistics gathered from an instrumented digital photograph management tool called Shoebox. Alongside simple browsing features such as folders, thumbnails and timelines, Shoebox has some advanced multimedia features: content-based image retrieval and speech recognition applied to voice annotations. Our results suggest that participants found their digital photos much easier to manage than their non-digital ones, but that this advantage was almost entirely due to the simple browsing features. The advanced features were not used very often and their perceived utility was low. These results should help to inform the design of improved tools for managing personal digital photographs.", "title": "" }, { "docid": "b70f852bb89e67decf07554a02ee977a", "text": "The advances in information technology have witnessed great progress on healthcare technologies in various domains nowadays. However, these new technologies have also made healthcare data not only much bigger but also much more difficult to handle and process. Moreover, because the data are created from a variety of devices within a short time span, the characteristics of these data are that they are stored in different formats and created quickly, which can, to a large extent, be regarded as a big data problem. To provide a more convenient service and environment of healthcare, this paper proposes a cyber-physical system for patient-centric healthcare applications and services, called Health-CPS, built on cloud and big data analytics technologies. This system consists of a data collection layer with a unified standard, a data management layer for distributed storage and parallel computing, and a data-oriented service layer. 
The results of this study show that the technologies of cloud and big data can be used to enhance the performance of the healthcare system so that humans can then enjoy various smart healthcare applications and services.", "title": "" }, { "docid": "42440fb81f45c470d591c3bc57e7875b", "text": "We develop a framework to incorporate unlabeled data in the Error-Correcting Output Coding (ECOC) setup by decomposing multiclass problems into multiple binary problems and then use Co-Training to learn the individual binary classification problems. We show that our method is especially useful for classification tasks involving a large number of categories where Co-training doesn’t perform very well by itself and when combined with ECOC, outperforms several other algorithms that combine labeled and unlabeled data for text classification in terms of accuracy, precision-recall tradeoff, and efficiency.", "title": "" }, { "docid": "61d506905286fc3297622d1ac39534f0", "text": "In this paper we present the setup of an extensive Wizard-of-Oz environment used for the data collection and the development of a dialogue system. The envisioned Perception and Interaction Assistant will act as an independent dialogue partner. Passively observing the dialogue between the two human users with respect to a limited domain, the system should take the initiative and get meaningfully involved in the communication process when required by the conversational situation. The data collection described here involves audio and video data. We aim at building a rich multi-media data corpus to be used as a basis for our research which includes, inter alia, speech and gaze direction recognition, dialogue modelling and proactivity of the system. We further aspire to obtain data with emotional content to perfom research on emotion recognition, psychopysiological and usability analysis.", "title": "" }, { "docid": "8a128a099087c3dee5bbca7b2a8d8dc4", "text": "A large class of computational problems involve the determination of properties of graphs, digraphs, integers, arrays of integers, finite families of finite sets, boolean formulas and elements of other countable domains. Through simple encodings from such domains into the set of words over a finite alphabet these problems can be converted into language recognition problems, and we can inquire into their computational complexity. It is reasonable to consider such a problem satisfactorily solved when an algorithm for its solution is found which terminates within a number of steps bounded by a polynomial in the length of the input. We show that a large number of classic unsolved problems of covering, matching, packing, routing, assignment and sequencing are equivalent, in the sense that either each of them possesses a polynomial-bounded algorithm or none of them does.", "title": "" }, { "docid": "c0b22c68ee02c2adffa7fa9cdfd15812", "text": "In this paper the design issues of input electromagnetic interference (EMI) filters for inverter-fed motor drives including motor Common Mode (CM) voltage active compensation are studied. A coordinated design of motor CM-voltage active compensator and input EMI filter allows the drive system to comply with EMC standards and to yield an increased reliability at the same time. Two CM input EMI filters are built and compared. They are, designed, respectively, according to the conventional design procedure and considering the actual impedance mismatching between EMI source and receiver. 
In both design procedures, the presence of the active compensator is taken into account. The experimental evaluation of both filters' performance is given in terms of compliance of the system to standard limits.", "title": "" }, { "docid": "ebb941fe8b0807a4dcfe02ff898cf99f", "text": "Using “Analyze Results” at the Web of Science, one can directly generate overlays onto global journal maps of science. The maps are based on the 10,000+ journals contained in the Journal Citation Reports (JCR) of the Science and Social Science Citation Indices (2011). The disciplinary diversity of the retrieval is measured in terms of Rao-Stirling’s “quadratic entropy.” Since this indicator of interdisciplinarity is normalized between zero and one, the interdisciplinarity can be compared among document sets and across years, cited or citing. The colors used for the overlays are based on Blondel et al.’s (2008) community-finding algorithms operating on the relations journals included in JCRs. The results can be exported from VOSViewer with different options such as proportional labels, heat maps, or cluster density maps. The maps can also be web-started and/or animated (e.g., using PowerPoint). The “citing” dimension of the aggregated journal-journal citation matrix was found to provide a more comprehensive description than the matrix based on the cited archive. The relations between local and global maps and their different functions in studying the sciences in terms of journal literatures are further discussed: local and global maps are based on different assumptions and can be expected to serve different purposes for the explanation.", "title": "" }, { "docid": "ed7826f37cf45f56ba6e7abf98c509e7", "text": "The progressive ability of a six-strains L. monocytogenes cocktail to form biofilm on stainless steel (SS), under fish-processing simulated conditions, was investigated, together with the biocide tolerance of the developed sessile communities. To do this, the pathogenic bacteria were left to form biofilms on SS coupons incubated at 15°C, for up to 240h, in periodically renewable model fish juice substrate, prepared by aquatic extraction of sea bream flesh, under both mono-species and mixed-culture conditions. In the latter case, L. monocytogenes cells were left to produce biofilms together with either a five-strains cocktail of four Pseudomonas species (fragi, savastanoi, putida and fluorescens), or whole fish indigenous microflora. The biofilm populations of L. monocytogenes, Pseudomonas spp., Enterobacteriaceae, H2S producing and aerobic plate count (APC) bacteria, both before and after disinfection, were enumerated by selective agar plating, following their removal from surfaces through bead vortexing. Scanning electron microscopy was also applied to monitor biofilm formation dynamics and anti-biofilm biocidal actions. Results revealed the clear dominance of Pseudomonas spp. bacteria in all the mixed-culture sessile communities throughout the whole incubation period, with the in parallel sole presence of L. monocytogenes cells to further increase (ca. 10-fold) their sessile growth. With respect to L. monocytogenes and under mono-species conditions, its maximum biofilm population (ca. 6logCFU/cm2) was reached at 192h of incubation, whereas when solely Pseudomonas spp. cells were also present, its biofilm formation was either slightly hindered or favored, depending on the incubation day. 
However, when all the fish indigenous microflora was present, biofilm formation by the pathogen was greatly hampered and never exceeded 3logCFU/cm2, while under the same conditions, APC biofilm counts had already surpassed 7logCFU/cm2 by the end of the first 96h of incubation. All here tested disinfection treatments, composed of two common food industry biocides gradually applied for 15 to 30min, were insufficient against L. monocytogenes mono-species biofilm communities, with the resistance of the latter to significantly increase from the 3rd to 7th day of incubation. However, all these treatments resulted in no detectable L. monocytogenes cells upon their application against the mixed-culture sessile communities also containing the fish indigenous microflora, something probably associated with the low attached population level of these pathogenic cells before disinfection (<102CFU/cm2) under such mixed-culture conditions. Taken together, all these results expand our knowledge on both the population dynamics and resistance of L. monocytogenes biofilm cells under conditions resembling those encountered within the seafood industry and should be considered upon designing and applying effective anti-biofilm strategies.", "title": "" }, { "docid": "16488fc65794a318e06777189edc3e4b", "text": "This work details Sighthound's fully automated license plate detection and recognition system. The core technology of the system is built using a sequence of deep Convolutional Neural Networks (CNNs) interlaced with accurate and efficient algorithms. The CNNs are trained and fine-tuned so that they are robust under different conditions (e.g. variations in pose, lighting, occlusion, etc.) and can work across a variety of license plate templates (e.g. sizes, backgrounds, fonts, etc). For quantitative analysis, we show that our system outperforms the leading license plate detection and recognition technology i.e. ALPR on several benchmarks. Our system is available to developers through the Sighthound Cloud API at https://www.sighthound.com/products/cloud", "title": "" } ]
scidocsrr
948ced35e7164c1092d9069e0b3efa85
Life cycle assessment of building materials: Comparative analysis of energy and environmental impacts and evaluation of the eco-efficiency improvement potential
[ { "docid": "85d4ac147a4517092b9f81f89af8b875", "text": "This article is an update of an article five of us published in 1992. The areas of Multiple Criteria Decision Making (MCDM) and Multiattribute Utility Theory (MAUT) continue to be active areas of management science research and application. This paper extends the history of these areas and discusses topics we believe to be important for the future of these fields. as well as two anonymous reviewers for valuable comments.", "title": "" } ]
[ { "docid": "4cb49a91b5a30909c99138a8e36badcd", "text": "The main goal of Business Process Management (BPM) is conceptualising, operationalizing and controlling workflows in organisations based on process models. In this paper we discuss several limitations of the workflow paradigm and suggest that process models can also play an important role in analysing how organisations think about themselves through storytelling. We contrast the workflow paradigm with storytelling through a comparative analysis. We also report a case study where storytelling has been used to elicit and document the practices of an IT maintenance team. This research contributes towards the development of better process modelling languages and tools.", "title": "" }, { "docid": "ae3e9bf485d4945af625fca31eaedb76", "text": "This document describes concisely the ubiquitous class of exponential family distributions met in statistics. The first part recalls definitions and summarizes main properties and duality with Bregman divergences (all proofs are skipped). The second part lists decompositions and related formula of common exponential family distributions. We recall the Fisher-Rao-Riemannian geometries and the dual affine connection information geometries of statistical manifolds. It is intended to maintain and update this document and catalog by adding new distribution items. See the jMEF library, a Java package for processing mixture of exponential families. Available for download at http://www.lix.polytechnique.fr/~nielsen/MEF/ École Polytechnique (France) and Sony Computer Science Laboratories Inc. (Japan). École Polytechnique (France).", "title": "" }, { "docid": "c6e0843498747096ebdafd51d4b5cca6", "text": "The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems; here, some quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis devoted to the computational algorithms employed for this purpose. In particular, we motivate our current interest for classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series.", "title": "" }, { "docid": "bde4436370b1d5e1423d1b9c710a47ad", "text": "This paper provides a review of the literature addressing sensorless operation methods of PM brushless machines. The methods explained are state-of-the-art of open and closed loop control strategies. The closed loop review includes those methods based on voltage and current measurements, those methods based on back emf measurements, and those methods based on novel techniques not included in the previous categories. The paper concludes with a comparison table including all main features for all control strategies", "title": "" }, { "docid": "525a819d97e84862d4190b1e0aa4acc0", "text": "HELIOS2014 is a 2D soccer simulation team which has been participating in the RoboCup competition since 2000. We recently focus on an online multiagent planning using tree search methodology. 
This paper describes the overview of our search framework and an evaluation method to select the best action sequence.", "title": "" }, { "docid": "71e6994bf56ed193a3a04728c7022a45", "text": "To evaluate timing and duration differences in airway protection and esophageal opening after oral intubation and mechanical ventilation for acute respiratory distress syndrome (ARDS) survivors versus age-matched healthy volunteers. Orally intubated adult (≥ 18 years old) patients receiving mechanical ventilation for ARDS were evaluated for swallowing impairments via a videofluoroscopic swallow study (VFSS) during usual care. Exclusion criteria were tracheostomy, neurological impairment, and head and neck cancer. Previously recruited healthy volunteers (n = 56) served as age-matched controls. All subjects were evaluated using 5-ml thin liquid barium boluses. VFSS recordings were reviewed frame-by-frame for the onsets of 9 pharyngeal and laryngeal events during swallowing. Eleven patients met inclusion criteria, with a median (interquartile range [IQR]) intubation duration of 14 (9, 16) days, and VFSSs completed a median of 5 (4, 13) days post-extubation. After arrival of the bolus in the pharynx, ARDS patients achieved maximum laryngeal closure a median (IQR) of 184 (158, 351) ms later than age-matched, healthy volunteers (p < 0.001) and it took longer to achieve laryngeal closure with a median (IQR) difference of 151 (103, 217) ms (p < 0.001), although there was no significant difference in duration of laryngeal closure. Pharyngoesophageal segment opening was a median (IQR) of − 116 (− 183, 1) ms (p = 0.004) shorter than in age-matched, healthy controls. Evaluation of swallowing physiology after oral endotracheal intubation in ARDS patients demonstrates slowed pharyngeal and laryngeal swallowing timing, suggesting swallow-related muscle weakness. These findings may highlight specific areas for further evaluation and potential therapeutic intervention to reduce post-extubation aspiration.", "title": "" }, { "docid": "9fba167ef82aa8c153986ea498683ff6", "text": "Purpose – The purpose of this conceptual paper is to identify important elements of brand building based on a literature review and case studies of successful brands in India. Design/methodology/approach – This paper is based on a review of the literature and takes a case study approach. The paper suggests the framework for building brand identity in sequential order, namely, positioning the brand, communicating the brand message, delivering the brand performance, and leveraging the brand equity. Findings – Brand-building effort has to be aligned with organizational processes that help deliver the promises to customers through all company departments, intermediaries, suppliers, etc., as all these play an important role in the experience customers have with the brand. Originality/value – The paper uses case studies of leading Indian brands to illustrate the importance of action elements in building brands in competitive markets.", "title": "" }, { "docid": "80ee585d49685a24a2011a1ddc27bb55", "text": "A developmental model of antisocial behavior is outlined. Recent findings are reviewed that concern the etiology and course of antisocial behavior from early childhood through adolescence. Evidence is presented in support of the hypothesis that the route to chronic delinquency is marked by a reliable developmental sequence of experiences. As a first step, ineffective parenting practices are viewed as determinants for childhood conduct disorders. 
The general model also takes into account the contextual variables that influence the family interaction process. As a second step, the conduct-disordered behaviors lead to academic failure and peer rejection. These dual failures lead, in turn, to increased risk for depressed mood and involvement in a deviant peer group. This third step usually occurs during later childhood and early adolescence. It is assumed that children following this developmental sequence are at high risk for engaging in chronic delinquent behavior. Finally, implications for prevention and intervention are discussed.", "title": "" }, { "docid": "37af8daa32affcdedb0b4820651a0b62", "text": "Bag of words (BoW) model, which was originally used for document processing field, has been introduced to computer vision field recently and used in object recognition successfully. However, in face recognition, the order less collection of local patches in BoW model cannot provide strong distinctive information since the objects (face images) belong to the same category. A new framework for extracting facial features based on BoW model is proposed in this paper, which can maintain holistic spatial information. Experimental results show that the improved method can obtain better face recognition performance on face images of AR database with extreme expressions, variant illuminations, and partial occlusions.", "title": "" }, { "docid": "833ec45dfe660377eb7367e179070322", "text": "It was predicted that high self-esteem Ss (HSEs) would rationalize an esteem-threatening decision less than low self-esteem Ss (LSEs), because HSEs presumably had more favorable self-concepts with which to affirm, and thus repair, their overall sense of self-integrity. This prediction was supported in 2 experiments within the \"free-choice\" dissonance paradigm--one that manipulated self-esteem through personality feedback and the other that varied it through selection of HSEs and LSEs, but only when Ss were made to focus on their self-concepts. A 3rd experiment countered an alternative explanation of the results in terms of mood effects that may have accompanied the experimental manipulations. The results were discussed in terms of the following: (a) their support for a resources theory of individual differences in resilience to self-image threats--an extension of self-affirmation theory, (b) their implications for self-esteem functioning, and (c) their implications for the continuing debate over self-enhancement versus self-consistency motivation.", "title": "" }, { "docid": "10e6b505ba74b1c8aea1417a4eb36c30", "text": "This meta-analysis summarizes teaching effectiveness studies of the past decade and investigates the role of theory and research design in disentangling results. Compared to past analyses based on the process–product model, a framework based on cognitive models of teaching and learning proved useful in analyzing studies and accounting for variations in effect sizes. Although the effects of teaching on student learning were diverse and complex, they were fairly systematic. The authors found the largest effects for domainspecific components of teaching—teaching most proximal to executive processes of learning. By taking into account research design, the authors further disentangled meta-analytic findings. For example, domain-specific teaching components were mainly studied with quasi-experimental or experimental designs. 
Finally, correlational survey studies dominated teaching effectiveness studies in the past decade but proved to be more distal from the teaching–learning process.", "title": "" }, { "docid": "9a38b18bd69d17604b6e05b9da450c2d", "text": "New invention of advanced technology, enhanced capacity of storage media, maturity of information technology and popularity of social media, business intelligence and Scientific invention, produces huge amount of data which made ample set of information that is responsible for birth of new concept well known as big data. Big data analytics is the process of examining large amounts of data. The analysis is done on huge amount of data which is structure, semi structure and unstructured. In big data, data is generated at exponentially for reason of increase use of social media, email, document and sensor data. The growth of data has affected all fields, whether it is business sector or the world of science. In this paper, the process of system is reviewed for managing "Big Data" and today's activities on big data tools and techniques.", "title": "" }, { "docid": "9bf99d48bc201147a9a9ad5af547a002", "text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.", "title": "" }, { "docid": "a36e43f03735d7610677465bd78e9b6f", "text": "Existing Poisson mesh editing techniques mainly focus on designing schemes to propagate deformation from a given boundary condition to a region of interest. Although solving the Poisson system in the least-squares sense distributes the distortion errors over the entire region of interest, large deformation in the boundary condition might still lead to severely distorted results. We propose to optimize the boundary condition (the merging boundary) for Poisson mesh merging. The user needs only to casually mark a source region and a target region. Our algorithm automatically searches for an optimal boundary condition within the marked regions such that the change of the found boundary during merging is minimal in terms of similarity transformation. Experimental results demonstrate that our merging tool is easy to use and produces visually better merging results than unoptimized techniques.", "title": "" }, { "docid": "3c848d254ae907a75dcbf502ed94aa84", "text": "We study the problem of computing routes for electric vehicles (EVs) in road networks.
Since their battery capacity is limited, and consumed energy per distance increases with velocity, driving the fastest route is often not desirable and may even be infeasible. On the other hand, the energy-optimal route may be too conservative in that it contains unnecessary detours or simply takes too long. In this work, we propose to use multicriteria optimization to obtain Pareto sets of routes that trade energy consumption for speed. In particular, we exploit the fact that the same road segment can be driven at different speeds within reasonable intervals. As a result, we are able to provide routes with low energy consumption that still follow major roads, such as freeways. Unfortunately, the size of the resulting Pareto sets can be too large to be practical. We therefore also propose several nontrivial techniques that can be applied on-line at query time in order to speed up computation and filter insignificant solutions from the Pareto sets. Our extensive experimental study, which uses a real-world energy consumption model, reveals that we are able to compute diverse sets of alternative routes on continental networks that closely resemble the exact Pareto set in just under a second—several orders of magnitude faster than the exhaustive algorithm. 1998 ACM Subject Classification G.2.2 Graph Theory, G.2.3 Applications", "title": "" }, { "docid": "abb45e408cb37a0ad89f0b810b7f583b", "text": "In a mobile computing environment, a user carrying a portable computer can execute a mobile transaction by submitting the operations of the transaction to distributed data servers from different locations. As a result of this mobility, the operations of the transaction may be executed at different servers. The distribution of operations implies that the transmission of messages (such as those involved in a two phase commit protocol) may be required among these data servers in order to coordinate the execution of these operations. In this paper, we will address the distribution of operations that update partitioned data in mobile environments. We show that, for operations pertaining to resource allocation, the message overhead (e.g., for a 2PC protocol) introduced by the distribution of operations is undesirable and unnecessary. We introduce a new algorithm, the Reservation Algorithm (RA), that does not necessitate the incurring of message overheads for the commitment of mobile transactions. We address two issues related to the RA algorithm: a termination protocol and a protocol for non-partition-commutative operations. We perform a comparison between the proposed RA algorithm and existing solutions that use a 2PC protocol.", "title": "" }, { "docid": "ed3b8bfdd6048e4a07ee988f1e35fd21", "text": "Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach-pancreas localization and pancreas segmentation.
For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean  ±  std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.", "title": "" }, { "docid": "ad7a5bccf168ac3b13e13ccf12a94f7d", "text": "As one of the most popular social media platforms today, Twitter provides people with an effective way to communicate and interact with each other. Through these interactions, influence among users gradually emerges and changes people's opinions. Although previous work has studied interpersonal influence as the probability of activating others during information diffusion, they ignore an important fact that information diffusion is the result of influence, while dynamic interactions among users produce influence. In this article, the authors propose a novel temporal influence model to learn users' opinion behaviors regarding a specific topic by exploring how influence emerges during communications. The experiments show that their model performs better than other influence models with different influence assumptions when predicting users' future opinions, especially for the users with high opinion diversity.", "title": "" }, { "docid": "c86aad62e950d7c10f93699d421492d5", "text": "Carotid intima-media thickness (CIMT) is a good surrogate for atherosclerosis. Hyperhomocysteinemia is an independent risk factor for cardiovascular diseases. We aim to investigate the relationships between homocysteine (Hcy) related biochemical indexes and CIMT, the associations between Hcy related SNPs and CIMT, as well as the potential gene–gene interactions. The present study recruited full siblings (186 eligible families with 424 individuals) with no history of cardiovascular events from a rural area of Beijing. We examined CIMT, intima-media thickness for common carotid artery (CCA-IMT) and carotid bifurcation, tested plasma levels for Hcy, vitamin B6 (VB6), vitamin B12 (VB12) and folic acid (FA), and genotyped 9 SNPs on MTHFR, MTR, MTRR, BHMT, SHMT1, CBS genes. Associations between SNPs and biochemical indexes and CIMT indexes were analyzed using family-based association test analysis. 
We used multi-level mixed-effects regression model to verify SNP-CIMT associations and to explore the potential gene–gene interactions. VB6, VB12 and FA were negatively correlated with CIMT indexes (p < 0.05). rs2851391 T allele was associated with decreased plasma VB12 levels (p = 0.036). In FABT, CBS rs2851391 was significantly associated with CCA-IMT (p = 0.021) and CIMT (p = 0.019). In multi-level mixed-effects regression model, CBS rs2851391 was positively significantly associated with CCA-IMT (Coef = 0.032, se = 0.009, raw p < 0.001) after Bonferoni correction (corrected α = 0.0056). Gene–gene interactions were found between CBS rs2851391 and BHMT rs10037045 for CCA-IMT (p = 0.011), as well as between CBS rs2851391 and MTR rs1805087 for CCA-IMT (p = 0.007) and CIMT (p = 0.022). Significant associations are found between Hcy metabolism related genetic polymorphisms, biochemical indexes and CIMT indexes. There are complex interactions between genetic polymorphisms for CCA-IMT and CIMT.", "title": "" } ]
scidocsrr
7dc33ca0df883f80793682ba14baff7a
Three-level neutral-point-clamped inverters in transformerless PV systems — State of the art
[ { "docid": "a0e7cdeefc33d4078702e5368dd9f5b9", "text": "This paper presents a single-phase five-level photovoltaic (PV) inverter topology for grid-connected PV systems with a novel pulsewidth-modulated (PWM) control scheme. Two reference signals identical to each other with an offset equivalent to the amplitude of the triangular carrier signal were used to generate PWM signals for the switches. A digital proportional-integral current control algorithm is implemented in DSP TMS320F2812 to keep the current injected into the grid sinusoidal and to have high dynamic performance with rapidly changing atmospheric conditions. The inverter offers much less total harmonic distortion and can operate at near-unity power factor. The proposed system is verified through simulation and is implemented in a prototype, and the experimental results are compared with that with the conventional single-phase three-level grid-connected PWM inverter.", "title": "" } ]
[ { "docid": "2220633d6343df0ebb2d292358ce182b", "text": "This paper presents a system for fully automatic recognition and reconstruction of 3D objects in image databases. We pose the object recognition problem as one of finding consistent matches between all images, subject to the constraint that the images were taken from a perspective camera. We assume that the objects or scenes are rigid. For each image, we associate a camera matrix, which is parameterised by rotation, translation and focal length. We use invariant local features to find matches between all images, and the RANSAC algorithm to find those that are consistent with the fundamental matrix. Objects are recognised as subsets of matching images. We then solve for the structure and motion of each object, using a sparse bundle adjustment algorithm. Our results demonstrate that it is possible to recognise and reconstruct 3D objects from an unordered image database with no user input at all.", "title": "" }, { "docid": "752e6d6f34ffc638e9a0d984a62db184", "text": "Defect prediction models are classifiers that are trained to identify defect-prone software modules. Such classifiers have configurable parameters that control their characteristics (e.g., the number of trees in a random forest classifier). Recent studies show that these classifiers may underperform due to the use of suboptimal default parameter settings. However, it is impractical to assess all of the possible settings in the parameter spaces. In this paper, we investigate the performance of defect prediction models where Caret --- an automated parameter optimization technique --- has been applied. Through a case study of 18 datasets from systems that span both proprietary and open source domains, we find that (1) Caret improves the AUC performance of defect prediction models by as much as 40 percentage points; (2) Caret-optimized classifiers are at least as stable as (with 35% of them being more stable than) classifiers that are trained using the default settings; and (3) Caret increases the likelihood of producing a top-performing classifier by as much as 83%. Hence, we conclude that parameter settings can indeed have a large impact on the performance of defect prediction models, suggesting that researchers should experiment with the parameters of the classification techniques. Since automated parameter optimization techniques like Caret yield substantially benefits in terms of performance improvement and stability, while incurring a manageable additional computational cost, they should be included in future defect prediction studies.", "title": "" }, { "docid": "667a457dcb1f379abd4e355e429dc40d", "text": "BACKGROUND\nViolent death is a serious problem in the United States. Previous research showing US rates of violent death compared with other high-income countries used data that are more than a decade old.\n\n\nMETHODS\nWe examined 2010 mortality data obtained from the World Health Organization for populous, high-income countries (n = 23). Death rates per 100,000 population were calculated for each country and for the aggregation of all non-US countries overall and by age and sex. Tests of significance were performed using Poisson and negative binomial regressions.\n\n\nRESULTS\nUS homicide rates were 7.0 times higher than in other high-income countries, driven by a gun homicide rate that was 25.2 times higher. For 15- to 24-year-olds, the gun homicide rate in the United States was 49.0 times higher. 
Firearm-related suicide rates were 8.0 times higher in the United States, but the overall suicide rates were average. Unintentional firearm deaths were 6.2 times higher in the United States. The overall firearm death rate in the United States from all causes was 10.0 times higher. Ninety percent of women, 91% of children aged 0 to 14 years, 92% of youth aged 15 to 24 years, and 82% of all people killed by firearms were from the United States.\n\n\nCONCLUSIONS\nThe United States has an enormous firearm problem compared with other high-income countries, with higher rates of homicide and firearm-related suicide. Compared with 2003 estimates, the US firearm death rate remains unchanged while firearm death rates in other countries decreased. Thus, the already high relative rates of firearm homicide, firearm suicide, and unintentional firearm death in the United States compared with other high-income countries increased between 2003 and 2010.", "title": "" }, { "docid": "b42e92aba32ff037362ecc40b816d063", "text": "In this paper we discuss security issues for cloud computing including storage security, data security, and network security and secure virtualization. Then we select some topics and describe them in more detail. In particular, we discuss a scheme for secure third party publications of documents in a cloud. Next we discuss secure federated query processing with map Reduce and Hadoop. Next we discuss the use of secure coprocessors for cloud computing. Third we discuss XACML implementation for Hadoop. We believe that building trusted applications from untrusted components will be a major aspect of secure cloud computing.", "title": "" }, { "docid": "2ecd815af00b9961259fa9b2a9185483", "text": "This paper describes the current development status of a mobile robot designed to inspect the outer surface of large oil ship hulls and floating production storage and offloading platforms. These vessels require a detailed inspection program, using several nondestructive testing techniques. A robotic crawler designed to perform such inspections is presented here. Locomotion over the hull is provided through magnetic tracks, and the system is controlled by two networked PCs and a set of custom hardware devices to drive motors, video cameras, ultrasound, inertial platform, and other devices. Navigation algorithm uses an extended-Kalman-filter (EKF) sensor-fusion formulation, integrating odometry and inertial sensors. It was shown that the inertial navigation errors can be decreased by selecting appropriate Q and R matrices in the EKF formulation.", "title": "" }, { "docid": "5343db8a8bc5e300b9ad488d0eda56d4", "text": "The paper analyzes some forms of linguistic ambiguity in English in a specific register, i.e. newspaper headlines. In particular, the focus of the research is on examples of lexical and syntactic ambiguity that result in sources of voluntary or involuntary humor. The study is based on a corpus of 135 verbally ambiguous headlines found on web sites presenting humorous bits of information. The linguistic phenomena that contribute to create this kind of semantic confusion in headlines will be analyzed and divided into the three main categories of lexical, syntactic, and phonological ambiguity, and examples from the corpus will be discussed for each category. The main results of the study were that, firstly, contrary to the findings of previous research on jokes, syntactically ambiguous headlines were found in good percentage in the corpus and that this might point to differences in genre. 
Secondly, two new configurations for the processing of the disjunctor/connector order were found. In the first of these configurations the disjunctor appears before the connector, instead of being placed after or coinciding with the ambiguous element, while in the second, two ambiguous elements are present, each of which functions both as a connector and a disjunctor.", "title": "" }, { "docid": "9cb13d599da25991d11d276aaa76a005", "text": "We propose a quasi real-time method for discrimination of ventricular ectopic beats from both supraventricular and paced beats in the electrocardiogram (ECG). The heartbeat waveforms were evaluated within a fixed-length window around the fiducial points (100 ms before, 450 ms after). Our algorithm was designed to operate with minimal expert intervention and we define that the operator is required only to initially select up to three ‘normal’ heartbeats (the most frequently seen supraventricular or paced complexes). These were named original QRS templates and their copies were substituted continuously throughout the ECG analysis to capture slight variations in the heartbeat waveforms of the patient’s sustained rhythm. The method is based on matching of the evaluated heartbeat with the QRS templates by a complex set of ECG descriptors, including maximal cross-correlation, area difference and frequency spectrum difference. Temporal features were added by analyzing the R-R intervals. The classification criteria were trained by statistical assessment of the ECG descriptors calculated for all heartbeats in MIT-BIH Supraventricular Arrhythmia Database. The performance of the classifiers was tested on the independent MIT-BIH Arrhythmia Database. The achieved unbiased accuracy is represented by sensitivity of 98.4% and specificity of 98.86%, both being competitive to other published studies. The provided computationally efficient techniques enable the fast post-recording analysis of lengthy Holter-monitor ECG recordings, as well as they can serve as a quasi real-time detection method embedded into surface ECG monitors.", "title": "" }, { "docid": "3a852aa880c564a85cc8741ce7427ced", "text": "INTRODUCTION\nTumeric is a spice that comes from the root Curcuma longa, a member of the ginger family, Zingaberaceae. In Ayurveda (Indian traditional medicine), tumeric has been used for its medicinal properties for various indications and through different routes of administration, including topically, orally, and by inhalation. Curcuminoids are components of tumeric, which include mainly curcumin (diferuloyl methane), demethoxycurcumin, and bisdemethoxycurcmin.\n\n\nOBJECTIVES\nThe goal of this systematic review of the literature was to summarize the literature on the safety and anti-inflammatory activity of curcumin.\n\n\nMETHODS\nA search of the computerized database MEDLINE (1966 to January 2002), a manual search of bibliographies of papers identified through MEDLINE, and an Internet search using multiple search engines for references on this topic was conducted. The PDR for Herbal Medicines, and four textbooks on herbal medicine and their bibliographies were also searched.\n\n\nRESULTS\nA large number of studies on curcumin were identified. These included studies on the antioxidant, anti-inflammatory, antiviral, and antifungal properties of curcuminoids. Studies on the toxicity and anti-inflammatory properties of curcumin have included in vitro, animal, and human studies. 
A phase 1 human trial with 25 subjects using up to 8000 mg of curcumin per day for 3 months found no toxicity from curcumin. Five other human trials using 1125-2500 mg of curcumin per day have also found it to be safe. These human studies have found some evidence of anti-inflammatory activity of curcumin. The laboratory studies have identified a number of different molecules involved in inflammation that are inhibited by curcumin including phospholipase, lipooxygenase, cyclooxygenase 2, leukotrienes, thromboxane, prostaglandins, nitric oxide, collagenase, elastase, hyaluronidase, monocyte chemoattractant protein-1 (MCP-1), interferon-inducible protein, tumor necrosis factor (TNF), and interleukin-12 (IL-12).\n\n\nCONCLUSIONS\nCurcumin has been demonstrated to be safe in six human trials and has demonstrated anti-inflammatory activity. It may exert its anti-inflammatory activity by inhibition of a number of different molecules that play a role in inflammation.", "title": "" }, { "docid": "64c156ee4171b5b84fd4eedb1d922f55", "text": "We introduce a large computational subcategorization lexicon which includes subcategorization frame (SCF) and frequency information for 6,397 English verbs. This extensive lexicon was acquired automatically from five corpora and the Web using the current version of the comprehensive subcategorization acquisition system of Briscoe and Carroll (1997). The lexicon is provided freely for research use, along with a script which can be used to filter and build sub-lexicons suited for different natural language processing (NLP) purposes. Documentation is also provided which explains each sub-lexicon option and evaluates its accuracy.", "title": "" }, { "docid": "c6029c95b8a6b2c6dfb688ac049427dc", "text": "This paper presents development of a two-fingered robotic device for amputees whose hands are partially impaired. In this research, we focused on developing a compact and lightweight robotic finger system, so the target amputee would be able to execute simple activities in daily living (ADL), such as grasping a bottle or a cup for a long time. The robotic finger module was designed by considering the impaired shape and physical specifications of the target patient's hand. The proposed prosthetic finger was designed using a linkage mechanism which was able to create underactuated finger motion. This underactuated mechanism contributes to minimizing the number of required actuators for finger motion. In addition, the robotic finger was not driven by an electro-magnetic rotary motor, but a shape-memory alloy (SMA) actuator. Having a driving method using SMA wire contributed to reducing the total weight of the prosthetic robot finger as it has higher energy density than that offered by the method using the electrical DC motor. In this paper, we confirmed the performance of the proposed robotic finger by fundamental driving tests and the characterization of the SMA actuator.", "title": "" }, { "docid": "17d1439650efccf83390834ba933db1a", "text": "The arterial vascularization of the pineal gland (PG) remains a debatable subject. This study aims to provide detailed information about the arterial vascularization of the PG. Thirty adult human brains were obtained from routine autopsies. Cerebral arteries were separately cannulated and injected with colored latex. The dissections were carried out using a surgical microscope. The diameters of the branches supplying the PG at their origin and vascularization areas of the branches of the arteries were investigated. 
The main artery of the PG was the lateral pineal artery, and it originated from the posterior circulation. The other arteries included the medial pineal artery from the posterior circulation and the rostral pineal artery mainly from the anterior circulation. Posteromedial choroidal artery was an important artery that branched to the PG. The arterial supply to the PG was studied comprehensively considering the debate and inadequacy of previously published studies on this issue available in the literature. This anatomical knowledge may be helpful for surgical treatment of pathologies of the PG, especially in children who develop more pathology in this region than adults.", "title": "" }, { "docid": "1ddfbf702c35a689367cd2b27dc1c6c6", "text": "In this paper, we propose a simple but powerful prior, color attenuation prior, for haze removal from a single input hazy image. By creating a linear model for modelling the scene depth of the hazy image under this novel prior and learning the parameters of the model by using a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily remove haze from a single image. Experimental results show that the proposed approach is highly efficient and it outperforms state-of-the-art haze removal algorithms in terms of the dehazing effect as well.", "title": "" }, { "docid": "333fd7802029f38bda35cd2077e7de59", "text": "Human shape estimation is an important task for video editing, animation and fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric bodypart segmentation.", "title": "" }, { "docid": "3bd2bfd1c7652f8655d009c085d6ed5c", "text": "The past decade has witnessed the boom of human-machine interactions, particularly via dialog systems. In this paper, we study the task of response generation in open-domain multi-turn dialog systems. Many research efforts have been dedicated to building intelligent dialog systems, yet few shed light on deepening or widening the chatting topics in a conversational session, which would attract users to talk more. To this end, this paper presents a novel deep scheme consisting of three channels, namely global, wide, and deep ones. The global channel encodes the complete historical information within the given context, the wide one employs an attention-based recurrent neural network model to predict the keywords that may not appear in the historical context, and the deep one trains a Multi-layer Perceptron model to select some keywords for an in-depth discussion. 
Thereafter, our scheme integrates the outputs of these three channels to generate desired responses. To justify our model, we conducted extensive experiments to compare our model with several state-of-the-art baselines on two datasets: one is constructed by ourselves and the other is a public benchmark dataset. Experimental results demonstrate that our model yields promising performance by widening or deepening the topics of interest.", "title": "" }, { "docid": "d473619f76f81eced041df5bc012c246", "text": "Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness, and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects, such as photometric calibration, motion bias, and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based, and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and developing new algorithms of VO and SLAM. Conclusions (some of which are counterintuitive) are drawn with both technical and empirical analyses to all of our experiments. Possible improvements on existing methods are directed or proposed, such as a subpixel accuracy refinement of oriented fast and rotated brief (ORB)-SLAM, which boosts its performance.", "title": "" }, { "docid": "17676785398d4ed24cc04cb3363a7596", "text": "Generative models (GMs) such as Generative Adversary Network (GAN) and Variational Auto-Encoder (VAE) have thrived these years and achieved high quality results in generating new samples. Especially in Computer Vision, GMs have been used in image inpainting, denoising and completion, which can be treated as the inference from observed pixels to corrupted pixels. However, images are hierarchically structured which are quite different from many real-world inference scenarios with non-hierarchical features. These inference scenarios contain heterogeneous stochastic variables and irregular mutual dependences. Traditionally they are modeled by Bayesian Network (BN). However, the learning and inference of BN model are NP-hard thus the number of stochastic variables in BN is highly constrained. In this paper, we adapt typical GMs to enable heterogeneous learning and inference in polynomial time. We also propose an extended autoregressive (EAR) model and an EAR with adversary loss (EARA) model and give theoretical results on their effectiveness. Experiments on several BN datasets show that our proposed EAR model achieves the best performance in most cases compared to other GMs. Except for black box analysis, we’ve also done a serial of experiments on Markov border inference of GMs for white box analysis and give theoretical results.", "title": "" }, { "docid": "4b74b9d4c4b38082f9f667e363f093b2", "text": "We have developed Textpresso, a new text-mining system for scientific literature whose capabilities go far beyond those of a simple keyword search engine. Textpresso's two major elements are a collection of the full text of scientific articles split into individual sentences, and the implementation of categories of terms for which a database of articles and individual sentences can be searched. The categories are classes of biological concepts (e.g., gene, allele, cell or cell group, phenotype, etc.) 
and classes that relate two objects (e.g., association, regulation, etc.) or describe one (e.g., biological process, etc.). Together they form a catalog of types of objects and concepts called an ontology. After this ontology is populated with terms, the whole corpus of articles and abstracts is marked up to identify terms of these categories. The current ontology comprises 33 categories of terms. A search engine enables the user to search for one or a combination of these tags and/or keywords within a sentence or document, and as the ontology allows word meaning to be queried, it is possible to formulate semantic queries. Full text access increases recall of biological data types from 45% to 95%. Extraction of particular biological facts, such as gene-gene interactions, can be accelerated significantly by ontologies, with Textpresso automatically performing nearly as well as expert curators to identify sentences; in searches for two uniquely named genes and an interaction term, the ontology confers a 3-fold increase of search efficiency. Textpresso currently focuses on Caenorhabditis elegans literature, with 3,800 full text articles and 16,000 abstracts. The lexicon of the ontology contains 14,500 entries, each of which includes all versions of a specific word or phrase, and it includes all categories of the Gene Ontology database. Textpresso is a useful curation tool, as well as search engine for researchers, and can readily be extended to other organism-specific corpora of text. Textpresso can be accessed at http://www.textpresso.org or via WormBase at http://www.wormbase.org.", "title": "" }, { "docid": "b885526ab7db7d7ed502698758117c80", "text": "Cancer, more than any other human disease, now has a surfeit of potential molecular targets poised for therapeutic exploitation. Currently, a number of attractive and validated cancer targets remain outside of the reach of pharmacological regulation. Some have been described as undruggable, at least by traditional strategies. In this article, we outline the basis for the undruggable moniker, propose a reclassification of these targets as undrugged, and highlight three general classes of this imposing group as exemplars with some attendant strategies currently being explored to reclassify them. Expanding the spectrum of disease-relevant targets to pharmacological manipulation is central to reducing cancer morbidity and mortality.", "title": "" }, { "docid": "ec0733962301d6024da773ad9d0f636d", "text": "This paper focuses on the design, fabrication and characterization of unimorph actuators for a microaerial flapping mechanism. PZT-5H and PZN-PT are investigated as piezoelectric layers in the unimorph actuators. Design issues for microaerial flapping actuators are discussed, and criteria for the optimal dimensions of actuators are determined. For low power consumption actuation, a square wave based electronic driving circuit is proposed. Fabricated piezoelectric unimorphs are characterized by an optical measurement system in quasi-static and dynamic mode. Experimental performance of PZT5H and PZN-PT based unimorphs is compared with desired design specifications. A 1 d.o.f. flapping mechanism with a PZT-5H unimorph is constructed, and 180◦ stroke motion at 95 Hz is achieved. Thus, it is shown that unimorphs could be promising flapping mechanism actuators.", "title": "" }, { "docid": "21c7cbcf02141c60443f912ae5f1208b", "text": "A novel driving scheme based on simultaneous emission is reported for 2D/3D AMOLED TVs. 
The new method reduces leftright crosstalk without sacrificing luminance. The new scheme greatly simplifies the pixel circuit as the number of transistors for Vth compensation is reduced from 6 to 3. The capacitive load of scan lines is reduced by 48%, enabling very high refresh rate (240 Hz).", "title": "" } ]
scidocsrr
58a83c37bf4e499e68fdc64b63f2f55c
Online travel reviews as persuasive communication: The effects of content type, source, and certification logos on consumer behavior
[ { "docid": "032f5b66ae4ede7e26a911c9d4885b98", "text": "Are trust and risk important in consumers' electronic commerce purchasing decisions? What are the antecedents of trust and risk in this context? How do trust and risk affect an Internet consumer's purchasing decision? To answer these questions, we i) develop a theoretical framework describing the trust-based decision-making process a consumer uses when making a purchase from a given site, ii) test the proposed model using a Structural Equation Modeling technique on Internet consumer purchasing behavior data collected via a Web survey, and iii) consider the implications of the model. The results of the study show that Internet consumers' trust and perceived risk have strong impacts on their purchasing decisions. Consumer disposition to trust, reputation, privacy concerns, security concerns, the information quality of the Website, and the company's reputation, have strong effects on Internet consumers' trust in the Website. Interestingly, the presence of a third-party seal did not strongly influence consumers' trust. © 2007 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "99ea14010fe3acd37952fb355a25b71c", "text": "Today, as the increasing the amount of using internet, there are so most information interchanges are performed in that internet. So, the methods used as intrusion detective tools for protecting network systems against diverse attacks are became too important. The available of IDS are getting more powerful. Support Vector Machine was used as the classical pattern reorganization tools have been widely used for Intruder detections. There have some different characteristic of features in building an Intrusion Detection System. Conventional SVM do not concern about that. Our enhanced SVM Model proposed with an Recursive Feature Elimination (RFE) and kNearest Neighbor (KNN) method to perform a feature ranking and selection task of the new model. RFE can reduce redundant & recursive features and KNN can select more precisely than the conventional SVM. Experiments and comparisons are conducted through intrusion dataset: the KDD Cup 1999 dataset.", "title": "" }, { "docid": "3332bf8d62c1176b8f5f0aa2bb045d24", "text": "BACKGROUND\nInfectious mononucleosis caused by the Epstein-Barr virus has been associated with increased risk of multiple sclerosis. However, little is known about the characteristics of this association.\n\n\nOBJECTIVE\nTo assess the significance of sex, age at and time since infectious mononucleosis, and attained age to the risk of developing multiple sclerosis after infectious mononucleosis.\n\n\nDESIGN\nCohort study using persons tested serologically for infectious mononucleosis at Statens Serum Institut, the Danish Civil Registration System, the Danish National Hospital Discharge Register, and the Danish Multiple Sclerosis Registry.\n\n\nSETTING\nStatens Serum Institut.\n\n\nPATIENTS\nA cohort of 25 234 Danish patients with mononucleosis was followed up for the occurrence of multiple sclerosis beginning on April 1, 1968, or January 1 of the year after the diagnosis of mononucleosis or after a negative Paul-Bunnell test result, respectively, whichever came later and ending on the date of multiple sclerosis diagnosis, death, emigration, or December 31, 1996, whichever came first.\n\n\nMAIN OUTCOME MEASURE\nThe ratio of observed to expected multiple sclerosis cases in the cohort (standardized incidence ratio).\n\n\nRESULTS\nA total of 104 cases of multiple sclerosis were observed during 556,703 person-years of follow-up, corresponding to a standardized incidence ratio of 2.27 (95% confidence interval, 1.87-2.75). The risk of multiple sclerosis was persistently increased for more than 30 years after infectious mononucleosis and uniformly distributed across all investigated strata of sex and age. The relative risk of multiple sclerosis did not vary by presumed severity of infectious mononucleosis.\n\n\nCONCLUSIONS\nThe risk of multiple sclerosis is increased in persons with prior infectious mononucleosis, regardless of sex, age, and time since infectious mononucleosis or severity of infection. The risk of multiple sclerosis may be increased soon after infectious mononucleosis and persists for at least 30 years after the infection.", "title": "" }, { "docid": "9609d87c2e75b452495e7fb779a94027", "text": "Cyclophosphamide (CYC) has been the backbone immunosuppressive drug to achieve sustained remission in lupus nephritis (LN). The aim was to evaluate the efficacy and compare adverse effects of low and high dose intravenous CYC therapy in Indian patients with proliferative lupus nephritis. 
An open-label, parallel group, randomized controlled trial involving 75 patients with class III/IV LN was conducted after obtaining informed consent. The low dose group (n = 38) received 6 × 500 mg CYC fortnightly and high dose group (n = 37) received 6 × 750 mg/m2 CYC four-weekly followed by azathioprine. The primary outcome was complete/partial/no response at 52 weeks. The secondary outcomes were renal and non-renal flares and adverse events. Intention-to-treat analyses were performed. At 52 weeks, 27 (73%) in high dose group achieved complete/partial response (CR/PR) vs 19 (50%) in low dose (p = 0.04). CR was higher in the high dose vs low dose [24 (65%) vs 17 (44%)], although not statistically significant. Non-responders (NR) in the high dose group were also significantly lower 10 (27%) vs low dose 19 (50%) (p = 0.04). The change in the SLEDAI (Median, IQR) was also higher in the high dose 16 (7–20) in contrast to the low dose 10 (5.5–14) (p = 0.04). There was significant alopecia and CYC-induced leucopenia in high dose group. Renal relapses were significantly higher in the low dose group vs high dose [9 (24%) vs 1(3%), (p = 0.01)]. At 52 weeks, high dose CYC was more effective in inducing remission with decreased renal relapses in our population. Trial Registration: The study was registered at http://www.clintrials.gov. NCT02645565.", "title": "" }, { "docid": "a18e6f80284a96f680fb00cb3f0cc692", "text": "We demonstrate an 8-layer 3D Vertical Gate NAND Flash with WL half pitch =37.5nm, BL half pitch=75nm, 64-WL NAND string with 63% array core efficiency. This is the first time that a 3D NAND Flash can be successfully scaled to below 3Xnm half pitch in one lateral dimension, thus an 8-layer stack device already provides a very cost effective technology with lower cost than the conventional sub-20nm 2D NAND. Our new VG architecture has two key features: (1) To improve the manufacturability a new layout that twists the even/odd BL's (and pages) in the opposite direction (split-page BL) is adopted. This allows the island-gate SSL devices [1] and metal interconnections be laid out in double pitch, creating much larger process window for BL pitch scaling; (2) A novel staircase BL contact formation method using binary sum of only M lithography and etching steps to achieve 2M contacts. This not only allows precise landing of the tight-pitch staircase contacts, but also minimizes the process steps and cost. We have successfully fabricated an 8-layer array using TFT BE-SONOS charge-trapping device. The array characteristics including reading, programming, inhibit, and block erase are demonstrated.", "title": "" }, { "docid": "c1713b817c4b2ce6e134b6e0510a961f", "text": "BACKGROUND\nEntity recognition is one of the most primary steps for text analysis and has long attracted considerable attention from researchers. In the clinical domain, various types of entities, such as clinical entities and protected health information (PHI), widely exist in clinical texts. Recognizing these entities has become a hot topic in clinical natural language processing (NLP), and a large number of traditional machine learning methods, such as support vector machine and conditional random field, have been deployed to recognize entities from clinical texts in the past few years. 
In recent years, recurrent neural network (RNN), one of deep learning methods that has shown great potential on many problems including named entity recognition, also has been gradually used for entity recognition from clinical texts.\n\n\nMETHODS\nIn this paper, we comprehensively investigate the performance of LSTM (long-short term memory), a representative variant of RNN, on clinical entity recognition and protected health information recognition. The LSTM model consists of three layers: input layer - generates representation of each word of a sentence; LSTM layer - outputs another word representation sequence that captures the context information of each word in this sentence; Inference layer - makes tagging decisions according to the output of LSTM layer, that is, outputting a label sequence.\n\n\nRESULTS\nExperiments conducted on corpora of the 2010, 2012 and 2014 i2b2 NLP challenges show that LSTM achieves highest micro-average F1-scores of 85.81% on the 2010 i2b2 medical concept extraction, 92.29% on the 2012 i2b2 clinical event detection, and 94.37% on the 2014 i2b2 de-identification, which is considerably competitive with other state-of-the-art systems.\n\n\nCONCLUSIONS\nLSTM that requires no hand-crafted feature has great potential on entity recognition from clinical texts. It outperforms traditional machine learning methods that suffer from fussy feature engineering. A possible future direction is how to integrate knowledge bases widely existing in the clinical domain into LSTM, which is a case of our future work. Moreover, how to use LSTM to recognize entities in specific formats is also another possible future direction.", "title": "" }, { "docid": "64bd2fc0d1b41574046340833144dabe", "text": "Probe-based confocal laser endomicroscopy (pCLE) provides high-resolution in vivo imaging for intraoperative tissue characterization. Maintaining a desired contact force between target tissue and the pCLE probe is important for image consistency, allowing large area surveillance to be performed. A hand-held instrument that can provide a predetermined contact force to obtain consistent images has been developed. The main components of the instrument include a linear voice coil actuator, a donut load-cell, and a pCLE probe. In this paper, detailed mechanical design of the instrument is presented and system level modeling of closed-loop force control of the actuator is provided. The performance of the instrument has been evaluated in bench tests as well as in hand-held experiments. Results demonstrate that the instrument ensures a consistent predetermined contact force between pCLE probe tip and tissue. Furthermore, it compensates for both simulated physiological movement of the tissue and involuntary movements of the operator's hand. Using pCLE video feature tracking of large colonic crypts within the mucosal surface, the steadiness of the tissue images obtained using the instrument force control is demonstrated by confirming minimal crypt translation.", "title": "" }, { "docid": "8318d49318f442749bfe3a33a3394f42", "text": "Driving Scene understanding is a key ingredient for intelligent transportation systems. To achieve systems that can operate in a complex physical and social environment, they need to understand and learn how humans drive and interact with traffic scenes. We present the Honda Research Institute Driving Dataset (HDD), a challenging dataset to enable research on learning driver behavior in real-life environments. 
The dataset includes 104 hours of real human driving in the San Francisco Bay Area collected using an instrumented vehicle equipped with different sensors. We provide a detailed analysis of HDD with a comparison to other driving datasets. A novel annotation methodology is introduced to enable research on driver behavior understanding from untrimmed data sequences. As the first step, baseline algorithms for driver behavior detection are trained and tested to demonstrate the feasibility of the proposed task.", "title": "" }, { "docid": "a11ed66e5368060be9585022db65c2ad", "text": "This article provides a historical context of evolutionary psychology and feminism, and evaluates the contributions to this special issue of Sex Roles within that context. We briefly outline the basic tenets of evolutionary psychology and articulate its meta-theory of the origins of gender similarities and differences. The article then evaluates the specific contributions: Sexual Strategies Theory and the desire for sexual variety; evolved standards of beauty; hypothesized adaptations to ovulation; the appeal of risk taking in human mating; understanding the causes of sexual victimization; and the role of studies of lesbian mate preferences in evaluating the framework of evolutionary psychology. Discussion focuses on the importance of social and cultural context, human behavioral flexibility, and the evidentiary status of specific evolutionary psychological hypotheses. We conclude by examining the potential role of evolutionary psychology in addressing social problems identified by feminist agendas.", "title": "" }, { "docid": "066fdb2deeca1d13218f16ad35fe5f86", "text": "As manga (Japanese comics) have become common content in many countries, it is necessary to search manga by text query or translate them automatically. For these applications, we must first extract texts from manga. In this paper, we develop a method to detect text regions in manga. Taking motivation from methods used in scene text detection, we propose an approach using classifiers for both connected components and regions. We have also developed a text region dataset of manga, which enables learning and detailed evaluations of methods used to detect text regions. Experiments using the dataset showed that our text detection method performs more effectively than existing methods.", "title": "" }, { "docid": "bd06f693359bba90de59454f32581c9c", "text": "Digital business ecosystems are becoming an increasingly popular concept as an open environment for modeling and building interoperable system integration. Business organizations have realized the importance of using standards as a cost-effective method for accelerating business process integration. Small and medium size enterprise (SME) participation in global trade is increasing, however, digital transactions are still at a low level. Cloud integration is expected to offer a cost-effective business model to form an interoperable digital supply chain. By observing the integration models, we can identify the large potential of cloud services to accelerate integration. An industrial case study is conducted. This paper investigates and contributes new knowledge on a how top-down approach by using a digital business ecosystem framework enables business managers to define new user requirements and functionalities for system integration. Through analysis, we identify the current cap of integration design. 
Using the cloud clustering framework, we identify how the design affects cloud integration services.", "title": "" }, { "docid": "59786d8ea951639b8b9a4e60c9d43a06", "text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.", "title": "" }, { "docid": "20d754528009ebce458eaa748312b2fe", "text": "This poster provides a comparative study between Inverse Reinforcement Learning (IRL) and Apprenticeship Learning (AL). IRL and AL are two frameworks, using Markov Decision Processes (MDP), which are used for the imitation learning problem where an agent tries to learn from demonstrations of an expert. In the AL framework, the agent tries to learn the expert policy whereas in the IRL framework, the agent tries to learn a reward which can explain the behavior of the expert. This reward is then optimized to imitate the expert. One can wonder if it is worth estimating such a reward, or if estimating a policy is sufficient. This quite natural question has not really been addressed in the literature right now. We provide partial answers, both from a theoretical and empirical point of view.", "title": "" }, { "docid": "2adde1812974f2d5d35d4c7e31ca7247", "text": "All currently available network intrusion detection (ID) systems rely upon a mechanism of data collection---passive protocol analysis---which is fundamentally flawed. In passive protocol analysis, the intrusion detection system (IDS) unobtrusively watches all traffic on the network, and scrutinizes it for patterns of suspicious activity. We outline in this paper two basic problems with the reliability of passive protocol analysis: (1) there isn't enough information on the wire on which to base conclusions about what is actually happening on networked machines, and (2) the fact that the system is passive makes it inherently \"fail-open,\" meaning that a compromise in the availability of the IDS doesn't compromise the availability of the network. We define three classes of attacks which exploit these fundamental problems---insertion, evasion, and denial of service attacks --and describe how to apply these three types of attacks to IP and TCP protocol analysis. We present the results of tests of the efficacy of our attacks against four of the most popular network intrusion detection systems on the market. All of the ID systems tested were found to be vulnerable to each of our attacks. This indicates that network ID systems cannot be fully trusted until they are fundamentally redesigned. 
Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection http://www.robertgraham.com/mirror/Ptacek-Newsham-Evasion-98.html (1 of 55) [17/01/2002 08:32:46 p.m.]", "title": "" }, { "docid": "8caaea6ffb668c019977809773a6d8c5", "text": "In the past several years, a number of different language modeling improvements over simple trigram models have been found, including caching, higher-order n-grams, skipping, interpolated Kneser–Ney smoothing, and clustering. We present explorations of variations on, or of the limits of, each of these techniques, including showing that sentence mixture models may have more potential. While all of these techniques have been studied separately, they have rarely been studied in combination. We compare a combination of all techniques together to a Katz smoothed trigram model with no count cutoffs. We achieve perplexity reductions between 38 and 50% (1 bit of entropy), depending on training data size, as well as a word error rate reduction of 8 .9%. Our perplexity reductions are perhaps the highest reported compared to a fair baseline. c © 2001 Academic Press", "title": "" }, { "docid": "23a5d1aebe5e2f7dd5ed8dfde17ce374", "text": "Today's workplace often includes workers from 4 distinct generations, and each generation brings a unique set of core values and characteristics to an organization. These generational differences can produce benefits, such as improved patient care, as well as challenges, such as conflict among employees. This article reviews current research on generational differences in educational settings and the workplace and discusses the implications of these findings for medical imaging and radiation therapy departments.", "title": "" }, { "docid": "b317f33d159bddce908df4aa9ba82cf9", "text": "Point cloud source data for surface reconstruction is usually contaminated with noise and outliers. To overcome this deficiency, a density-based point cloud denoising method is presented to remove outliers and noisy points. First, particle-swam optimization technique is employed for automatically approximating optimal bandwidth of multivariate kernel density estimation to ensure the robust performance of density estimation. Then, mean-shift based clustering technique is used to remove outliers through a thresholding scheme. After removing outliers from the point cloud, bilateral mesh filtering is applied to smooth the remaining points. The experimental results show that this approach, comparably, is robust and efficient.", "title": "" }, { "docid": "b6ff96922a0b8e32236ba8fb44bf4888", "text": "Most people acknowledge that personal computers have enormously enhanced the autonomy and communication capacity of people with special needs. The key factor for accessibility to these opportunities is the adequate design of the user interface which, consequently, has a high impact on the social lives of users with disabilities. The design of universally accessible interfaces has a positive effect over the socialisation of people with disabilities. People with sensory disabilities can profit from computers as a way of personal direct and remote communication. Personal computers can also assist people with severe motor impairments to manipulate their environment and to enhance their mobility by means of, for example, smart wheelchairs. In this way they can become more socially active and productive. 
Accessible interfaces have become so indispensable for personal autonomy and social inclusion that in several countries special legislation protects people from ‘digital exclusion’. To apply this legislation, inexperienced HCI designers can experience difficulties. They would greatly benefit from inclusive design guidelines in order to be able to implement the ‘design for all’ philosophy. In addition, they need clear criteria to avoid negative social and ethical impact on users. This paper analyses the benefits of the use of inclusive design guidelines in order to facilitate a universal design focus so that social exclusion is avoided. In addition, the need for ethical and social guidelines in order to avoid undesirable side effects for users is discussed. Finally, some preliminary examples of socially and ethically aware guidelines are proposed. q 2005 Elsevier B.V. All rights reserved. Interacting with Computers 17 (2005) 484–505 www.elsevier.com/locate/intcom 0953-5438/$ see front matter q 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.intcom.2005.03.002 * Corresponding author. E-mail address: julio.abascal@ehu.es (J. Abascal). J. Abascal, C. Nicolle / Interacting with Computers 17 (2005) 484–505 485 1. HCI and people with disabilities Most people living in developed countries have direct or indirect relationships with computers in diverse ways. In addition, there exist many tasks that could hardly be performed without computers, leading to a dependence on Information Technology. Moreover, people not having access to computers can suffer the effects of the so-called digital divide (Fitch, 2002), a new type of social exclusion. People with disabilities are one of the user groups with higher computer dependence because, for many of them, the computer is the only way to perform several vital tasks, such as personal and remote communication, control of the environment, assisted mobility, access to telematic networks and services, etc. Digital exclusion for disabled people means not having full access to a socially active and independent lifestyle. In this way, Human-Computer Interaction (HCI) is playing an important role in the provision of social opportunities to people with disabilities (Abascal and Civit, 2002). 2. HCI and social integration 2.1. Gaining access to computers Computers provide very effective solutions to help people with disabilities to enhance their social integration. For instance, people with severe speech and motor impairments have serious difficulties to communicate with other people and to perform common operations in their close environment (e.g. to handle objects). For them, computers are incredibly useful as alternative communication devices. Messages can be composed using special keyboards (Lesher et al., 1998), scanning with one or two switches, by means of eye tracking (Sibert and Jacob, 2000), etc. Current software techniques also allow the design of methods to enhance the message composition speed. For instance, Artificial Intelligence methods are frequently used to design word prediction aids to assist in the typing of text with minimum effort (Garay et al., 1997). Computers can also assist the disabled user to autonomously control the environment through wireless communication, to drive smart electric powered wheelchairs, to control assistive robotic arms, etc. What is more, the integration of all of these services allows people with disabilities using the same interface to perform all tasks in a similar way (Abascal and Civit, 2001a). 
This is possible because assistive technologists have devoted much effort to providing disabled people with devices and procedures to enhance or substitute their physical and cognitive functions in order to be able to gain access to computers (Cook and Hussey, 2002). 2.2. Using commercial software When the need of gaining access to a PC is solved, the user faces another problem due to difficulties in using commercial software. Many applications have been designed without taking into account that they can be used by people using Assistive Technology devices, and therefore they may have unnecessary barriers which impede the use of alternative interaction devices. J. Abascal, C. Nicolle / Interacting with Computers 17 (2005) 484–505 486 This is the case for one of the most promising application fields nowadays: the internet. A PC linked to a telematic network opens the door to new remote services that can be crucial for people with disabilities. Services such us tele-teaching, tele-care, tele-working, tele-shopping, etc., may enormously enhance their quality of life. These are just examples of the great interest of gaining access to services provided by means of computers for people with disabilities. However, if these services are not accessible, they are useless for people with disabilities. In addition, even if the services are accessible, that is, the users can actually perform the tasks they wish to, it is also important that users can perform those tasks easily, effectively and efficiently. Usability, therefore, is also a key requirement. 2.3. Social demand for accessibility and usability Two factors, among others, have greatly influenced the social demand for accessible computing. The first factor was the technological revolution produced by the availability of personal computers that became smaller, cheaper, lower in consumption, and easier to use than previous computing machines. In parallel, a social revolution has evolved as a result of the battle against social exclusion ever since disabled people became conscious of their rights and needs. The conjunction of computer technology in the form of inexpensive and powerful personal computers, with the struggle of people with disabilities towards autonomous life and social integration, produced the starting point of a new technological challenge. This trend has been also supported in some countries by laws that prevent technological exclusion of people with disabilities and favour the inclusive use of technology (e.g. the Americans with Disabilities Act in the United States and the Disability Discrimination Act in the United Kingdom). The next sections discuss how this situation influenced the design of user interfaces for people with disabilities. 3. User interfaces for people with disabilities With the popularity of personal computers many technicians realised that they could become an indispensable tool to assist people with disabilities for most necessary tasks. They soon discovered that a key issue was the availability of suitable user interfaces, due to the special requirements of these users. But the variety of needs and the wide diversity of physical, sensory and cognitive characteristics make the design of interfaces very complex. An interesting process has occurred whereby we have moved from a computer ‘patchwork’ situation to the adoption of more structured HCI methodologies. 
In the next sections, this process is briefly described, highlighting issues that can and should lead to inclusive design guidelines for socially and ethically aware HCI. 1 Americans with Disabilities Act (ADA). Available at http://www.usdoj.gov/crt/ada/adahom1.htm, last accessed January 15, 2005. 2 Disabilty Discrimination Act (DDA). Available at http://www.disability.gov.uk/dda/index.html, last accessed January 15, 2005. J. Abascal, C. Nicolle / Interacting with Computers 17 (2005) 484–505 487 3.1. First approach: adaptation of existing systems For years, the main activity of people working in Assistive Technology was the adaptation of commercially available computers to the capabilities of users with disabilities. Existing computer interaction style was mainly based on a standard keyboard and mouse for input, and output was based on a screen for data, a printer for hard copy, and a ‘bell’ for some warnings and signals. This kind of interface takes for granted the fact that users have the following physical skills: enough sight capacity to read the screen, movement control and strength in the hands to handle the standard keyboard, coordination for mouse use, and also hearing capacity for audible warnings. In addition, cognitive capabilities to read, understand, reason, etc., were also assumed. When one or more of these skills were lacking, conscientious designers would try to substitute them by another capability, or an alternative way of communication. For instance, blind users could hear the content of the screen when it was read aloud by a textto-voice translator. Alternatively, output could be directed to a Braille printer, or matrix of pins. Thus, adaptation was done in the following way: first, detecting the barriers to gain access to the computer by a user or a group of users, and then, providing them with an alternative way based on the abilities and skills present in this group of users. This procedure often succeeded, producing very useful alternative ways to use computers. Nevertheless, some drawbacks were detected: † Lack of generality: the smaller the group of users the design is focused on, the better results were obtained. Therefore, different systems had to be designed to fit the needs of us", "title": "" }, { "docid": "d72092cd909d88e18598925024dc6b97", "text": "This paper focuses on the robust dissipative fault-tolerant control problem for one kind of Takagi-Sugeno (T-S) fuzzy descriptor system with actuator failures. The solvable conditions of the robust dissipative fault-tolerant controller are given by using of the Lyapunov theory, Lagrange interpolation polynomial theory, etc. These solvable conditions not only make the closed loop system dissipative, but also integral for the actuator failure situation. The dissipative fault-tolerant controller design methods are given by the aid of the linear matrix inequality toolbox, the function of randomly generated matrix, loop statement, and numerical solution, etc. Thus, simulation process is fully intelligent and efficient. At the same time, the design methods are also obtained for the passive and H∞ fault-tolerant controllers. This explains the fact that the dissipative control unifies H∞ control and passive control. Finally, we give example that illustrates our results.", "title": "" }, { "docid": "446a7404a0e4e78156532fcb93270475", "text": "Convolutional Neural Networks (CNNs) can provide accurate object classification. 
They can be extended to perform object detection by iterating over dense or selected proposed object regions. However, the runtime of such detectors scales as the total number and/or area of regions to examine per image, and training such detectors may be prohibitively slow. However, for some CNN classifier topologies, it is possible to share significant work among overlapping regions to be classified. This paper presents DenseNet, an open source system that computes dense, multiscale features from the convolutional layers of a CNN based object classifier. Future work will involve training efficient object detectors with DenseNet feature descriptors.", "title": "" }, { "docid": "14f539b7c27aeb96025045a660416e39", "text": "This paper describes a method for the automatic self-calibration of a 3D Laser sensor. We wish to acquire crisp point clouds and so we adopt a measure of crispness to capture point cloud quality. We then pose the calibration problem as the task of maximising point cloud quality. Concretely, we use Rényi Quadratic Entropy to measure the degree of organisation of a point cloud. By expressing this quantity as a function of key unknown system parameters, we are able to deduce a full calibration of the sensor via an online optimisation. Beyond details on the sensor design itself, we fully describe the end-to-end intrinsic parameter calibration process and the estimation of the clock skews between the constituent microprocessors. We analyse performance using real and simulated data and demonstrate robust performance over thirty test sites.", "title": "" } ]
scidocsrr
4c7b94f0e7470fdd5d62b4174ecb3c7c
Please Share! Online Word of Mouth and Charitable Crowdfunding
[ { "docid": "befc5dbf4da526963f8aa180e1fda522", "text": "Charities publicize the donations they receive, generally according to dollar categories rather than the exact amount. Donors in turn tend to give the minimum amount necessary to get into a category. These facts suggest that donors have a taste for having their donations made public. This paper models the effects of such a taste for ‘‘prestige’’ on the behavior of donors and charities. I show how a taste for prestige means that charities can increase donations by using categories. The paper also discusses the effect of a taste for prestige on competition between charities.  1998 Elsevier Science S.A.", "title": "" } ]
[ { "docid": "976f16e21505277525fa697876b8fe96", "text": "A general technique for obtaining intermediate-band crystal filters from prototype low-pass (LP) networks which are neither symmetric nor antimetric is presented. This immediately enables us to now realize the class of low-transient responses. The bandpass (BP) filter appears as a cascade of symmetric lattice sections, obtained by partitioning the LP prototype filter, inserting constant reactances where necessary, and then applying the LP to BP frequency transformation. Manuscript received January 7, 1974; revised October 9, 1974. The author is with the Systems Development Division, Westinghouse Electric Corporation, Baltimore, Md. The cascade is composed of only two fundamental sections. Finally, the method introduced is illustrated with an example.", "title": "" }, { "docid": "16f96e68b19fb561d2232ea4e586bb2e", "text": "In this letter, charge-based capacitance measurement (CBCM) is applied to characterize bias-dependent capacitances in a CMOS transistor. Due to its special advantage of being free from the errors induced by charge injection, the operation of charge-injection-induced-error-free CBCM allows for the extraction of full-range gate capacitance from the accumulation region to the inversion region and the overlap capacitance of MOSFET devices with submicrometer dimensions.", "title": "" }, { "docid": "c17522f4b9f3b229dae56b394adb69a1", "text": "This paper investigates fault effects and error propagation in a FlexRay-based network with hybrid topology that includes a bus subnetwork and a star subnetwork. The investigation is based on about 43500 bit-flip fault injection inside different parts of the FlexRay communication controller. To do this, a FlexRay communication controller is modeled by Verilog HDL at the behavioral level. Then, this controller is exploited to setup a FlexRay-based network composed of eight nodes (four nodes in the bus subnetwork and four nodes in the star subnetwork). The faults are injected in a node of the bus subnetwork and a node of the star subnetwork of the hybrid network Then, the faults resulting in the three kinds of errors, namely, content errors, syntax errors and boundary violation errors are characterized. The results of fault injection show that boundary violation errors and content errors are negligibly propagated to the star subnetwork and syntax errors propagation is almost equal in the both bus and star subnetworks. Totally, the percentage of errors propagation in the bus subnetwork is more than the star subnetwork.", "title": "" }, { "docid": "ec36f5a41650cc6c3ba17eb6bd928677", "text": "Deep learning techniques based on Convolutional Neural Networks (CNNs) are extensively used for the classification of hyperspectral images. These techniques present high computational cost. In this paper, a GPU (Graphics Processing Unit) implementation of a spatial-spectral supervised classification scheme based on CNNs and applied to remote sensing datasets is presented. In particular, two deep learning libraries, Caffe and CuDNN, are used and compared. In order to achieve an efficient GPU projection, different techniques and optimizations have been applied. The implemented scheme comprises Principal Component Analysis (PCA) to extract the main features, a patch extraction around each pixel to take the spatial information into account, one convolutional layer for processing the spectral information, and fully connected layers to perform the classification. 
To improve the initial GPU implementation accuracy, a second convolutional layer has been added. High speedups are obtained together with competitive classification accuracies.", "title": "" }, { "docid": "83da776714bf49c3bbb64976d20e26a2", "text": "Orthogonal frequency division multiplexing (OFDM) has been widely adopted in modern wireless communication systems due to its robustness against the frequency selectivity of wireless channels. For coherent detection, channel estimation is essential for receiver design. Channel estimation is also necessary for diversity combining or interference suppression where there are multiple receive antennas. In this paper, we will present a survey on channel estimation for OFDM. This survey will first review traditional channel estimation approaches based on channel frequency response (CFR). Parametric model (PM)-based channel estimation, which is particularly suitable for sparse channels, will be also investigated in this survey. Following the success of turbo codes and low-density parity check (LDPC) codes, iterative processing has been widely adopted in the design of receivers, and iterative channel estimation has received a lot of attention since that time. Iterative channel estimation will be emphasized in this survey as the emerging iterative receiver improves system performance significantly. The combination of multiple-input multiple-output (MIMO) and OFDM has been widely accepted in modern communication systems, and channel estimation in MIMO-OFDM systems will also be addressed in this survey. Open issues and future work are discussed at the end of this paper.", "title": "" }, { "docid": "3251674643f09b73a24d037dc1076c72", "text": "Although the link between sagittal plane motion and exercise intensity has been highlighted, no study assessed if different workloads lead to changes in three-dimensional cycling kinematics. This study compared three-dimensional joint and segment kinematics between competitive and recreational road cyclists across different workloads. Twenty-four road male cyclists (12 competitive and 12 recreational) underwent an incremental workload test to determine aerobic peak power output. In a following session, cyclists performed four trials at sub-maximal workloads (65, 75, 85 and 95% of their aerobic peak power output) at 90 rpm of pedalling cadence. Mean hip adduction, thigh rotation, shank rotation, pelvis inclination (latero-lateral and anterior-posterior), spine inclination and rotation were computed at the power section of the crank cycle (12 o'clock to 6 o'clock crank positions) using three-dimensional kinematics. Greater lateral spine inclination (p < .01, 5-16%, effect sizes = 0.09-0.25) and larger spine rotation (p < .01, 16-29%, effect sizes = 0.31-0.70) were observed for recreational cyclists than competitive cyclists across workload trials. No differences in segment and joint angles were observed from changes in workload with significant individual effects on spine inclination (p < .01). No workload effects were found in segment angles but differences, although small, existed when comparing competitive road to recreational cyclists. 
When conducting assessment of joint and segment motions, workload between 65 and 95% of individual cyclists' peak power output could be used.", "title": "" }, { "docid": "1e80983e98d5d94605315b8ef45af0fd", "text": "Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm. In this work we present Population Based Training (PBT), a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training. With just a small modification to a typical distributed hyperparameter training framework, our method allows robust and reliable training of models. We demonstrate the effectiveness of PBT on deep reinforcement learning problems, showing faster wall-clock convergence and higher final performance of agents by optimising over a suite of hyperparameters. In addition, we show the same method can be applied to supervised learning for machine translation, where PBT is used to maximise the BLEU score directly, and also to training of Generative Adversarial Networks to maximise the Inception score of generated images. In all cases PBT results in the automatic discovery of hyperparameter schedules and model selection which results in stable training and better final performance.", "title": "" }, { "docid": "e77cf8938714824d46cfdbdb1b809f93", "text": "Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fullyobserved samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines.", "title": "" }, { "docid": "9fa8133dcb3baef047ee887fea1ed5a3", "text": "In this paper, we present an effective hierarchical shot classification scheme for broadcast soccer video. We first partition a video into replay and non-replay shots with replay logo detection. Then, non-replay shots are further classified into Long, Medium, Close-up or Out-field types with color and texture features based on a decision tree. 
We tested the method on real broadcast FIFA soccer videos, and the experimental results demonstrate its effectiveness..", "title": "" }, { "docid": "3d3589a002f8195bb20324dd8a8f5d76", "text": "Vacuum-based end effectors are widely used in industry and are often preferred over parallel-jaw and multifinger grippers due to their ability to lift objects with a single point of contact. Suction grasp planners often target planar surfaces on point clouds near the estimated centroid of an object. In this paper, we propose a compliant suction contact model that computes the quality of the seal between the suction cup and local target surface and a measure of the ability of the suction grasp to resist an external gravity wrench. To characterize grasps, we estimate robustness to perturbations in end-effector and object pose, material properties, and external wrenches. We analyze grasps across 1,500 3D object models to generate Dex-Net 3.0, a dataset of 2.8 million point clouds, suction grasps, and grasp robustness labels. We use Dex-Net 3.0 to train a Grasp Quality Convolutional Neural Network (GQ-CNN) to classify robust suction targets in point clouds containing a single object. We evaluate the resulting system in 350 physical trials on an ABB YuMi fitted with a pneumatic suction gripper. When evaluated on novel objects that we categorize as Basic (prismatic or cylindrical), Typical (more complex geometry), and Adversarial (with few available suction-grasp points) Dex-Net 3.0 achieves success rates of 98%, 82%, and 58% respectively, improving to 81% in the latter case when the training set includes only adversarial objects. Code, datasets, and supplemental material can be found at http://berkeleyautomation.github.io/dex-net.", "title": "" }, { "docid": "541de3d6af2edacf7396e5ca66c385e2", "text": "This paper presents a simple and intuitive method for mining search engine query logs to get fast query recommendations on a large scale industrial strength search engine. In order to get a more comprehensive solution, we combine two methods together. On the one hand, we study and model search engine users' sequential search behavior, and interpret this consecutive search behavior as client-side query refinement, that should form the basis for the search engine's own query refinement process. On the other hand, we combine this method with a traditional content based similarity method to compensate for the high sparsity of real query log data, and more specifically, the shortness of most query sessions. To evaluate our method, we use one hundred day worth query logs from SINA' search engine to do off-line mining. Then we analyze three independent editors evaluations on a query test set. Based on their judgement, our method was found to be effective for finding related queries, despite its simplicity. In addition to the subjective editors' rating, we also perform tests based on actual anonymous user search sessions.", "title": "" }, { "docid": "dac8564305055eaf9291e731dbf9a44d", "text": "Named Entity Recognition and classification (NERC) is an essential and challenging task in (NLP). Kann ada is a highly inflectional and agglutinating language prov iding one of the richest and most challenging sets of linguistic and statistical features resulting in long and complex word forms, which is large in number. It is primarily a suffixi ng Language and inflected word starts with a root and may have several suffix es added to the right. It is also a Freeword order Language. 
Like other Indian languages, it is a resource poor language. Annotate d corpora, name dictionaries, good morphological an lyzers, Parts of Speech (POS) taggers etc. are not yet available in the req ui d measure and not many works are reported for t his language. The work related to NERC in Kannada is not yet reported. In recent years, automatic named entity recognition an d extraction systems have become one of the popular research areas. Building NERC for Kannada is challenging. It seeks to classi fy words which represent names in text into predefined categories like perso n name, location, organization, date, time etc. Thi s paper deals with some attempts in this direction. This work starts with e xp riments in building Semi-Automated Statistical M achine learning NLP Models based on Noun Taggers. In this paper we have de loped an algorithm based on supervised learnin g techniques that include Hidden Markov Model (HMM). Some sample resu lts are reported.", "title": "" }, { "docid": "055c9fad6d2f246fc1b6cbb1bce26a92", "text": "This work uses deep learning models for daily directional movements prediction of a stock price using financial news titles and technical indicators as input. A comparison is made between two different sets of technical indicators, set 1: Stochastic %K, Stochastic %D, Momentum, Rate of change, William’s %R, Accumulation/Distribution (A/D) oscillator and Disparity 5; set 2: Exponential Moving Average, Moving Average Convergence-Divergence, Relative Strength Index, On Balance Volume and Bollinger Bands. Deep learning methods can detect and analyze complex patterns and interactions in the data allowing a more precise trading process. Experiments has shown that Convolutional Neural Network (CNN) can be better than Recurrent Neural Networks (RNN) on catching semantic from texts and RNN is better on catching the context information and modeling complex temporal characteristics for stock market forecasting. So, there are two models compared in this paper: a hybrid model composed by a CNN for the financial news and a Long Short-Term Memory (LSTM) for technical indicators, named as SI-RCNN; and a LSTM network only for technical indicators, named as I-RNN. The output of each model is used as input for a trading agent that buys stocks on the current day and sells the next day when the model predicts that the price is going up, otherwise the agent sells stocks on the current day and buys the next day. The proposed method shows a major role of financial news in stabilizing the results and almost no improvement when comparing different sets of technical indicators.", "title": "" }, { "docid": "caac45f02e29295d592ee784697c6210", "text": "The studies included in this PhD thesis examined the interactions of syphilis, which is caused by Treponema pallidum, and HIV. Syphilis reemerged worldwide in the late 1990s and hereafter increasing rates of early syphilis were also reported in Denmark. The proportion of patients with concurrent HIV has been substantial, ranging from one third to almost two thirds of patients diagnosed with syphilis some years. Given that syphilis facilitates transmission and acquisition of HIV the two sexually transmitted diseases are of major public health concern. Further, syphilis has a negative impact on HIV infection, resulting in increasing viral loads and decreasing CD4 cell counts during syphilis infection. 
Likewise, HIV has an impact on the clinical course of syphilis; patients with concurrent HIV are thought to be at increased risk of neurological complications and treatment failure. Almost ten per cent of Danish men with syphilis acquired HIV infection within five years after they were diagnosed with syphilis during an 11-year study period. Interestingly, the risk of HIV declined during the later part of the period. Moreover, HIV-infected men had a substantial increased risk of re-infection with syphilis compared to HIV-uninfected men. As one third of the HIV-infected patients had viral loads >1,000 copies/ml, our conclusion supported the initiation of cART in more HIV-infected MSM to reduce HIV transmission. During a five-year study period, including the majority of HIV-infected patients from the Copenhagen area, we observed that syphilis was diagnosed in the primary, secondary, early and late latent stage. These patients were treated with either doxycycline or penicillin and the rate of treatment failure was similar in the two groups, indicating that doxycycline can be used as a treatment alternative - at least in an HIV-infected population. During a four-year study period, the T. pallidum strain type distribution was investigated among patients diagnosed by PCR testing of material from genital lesions. In total, 22 strain types were identified. HIV-infected patients were diagnosed with nine different strains types and a difference by HIV status was not observed indicating that HIV-infected patients did not belong to separate sexual networks. In conclusion, concurrent HIV remains common in patients diagnosed with syphilis in Denmark, both in those diagnosed by serological testing and PCR testing. Although the rate of syphilis has stabilized in recent years, a spread to low-risk groups is of concern, especially due to the complex symptomatology of syphilis. However, given the efficient treatment options and the targeted screening of pregnant women and persons at higher risk of syphilis, control of the infection seems within reach. Avoiding new HIV infections is the major challenge and here cART may play a prominent role.", "title": "" }, { "docid": "dc3de555216f10d84890ecb1165774ff", "text": "Research into the visual perception of human emotion has traditionally focused on the facial expression of emotions. Recently researchers have turned to the more challenging field of emotional body language, i.e. emotion expression through body pose and motion. In this work, we approach recognition of basic emotional categories from a computational perspective. In keeping with recent computational models of the visual cortex, we construct a biologically plausible hierarchy of neural detectors, which can discriminate seven basic emotional states from static views of associated body poses. The model is evaluated against human test subjects on a recent set of stimuli manufactured for research on emotional body language.", "title": "" }, { "docid": "c699ede2caeb5953decc55d8e42c2741", "text": "Traditionally, two distinct approaches have been employed for exploratory factor analysis: maximum likelihood factor analysis and principal component analysis. A third alternative, called regularized exploratory factor analysis, was introduced recently in the psychometric literature. Small sample size is an important issue that has received considerable discussion in the factor analysis literature. 
However, little is known about the differential performance of these three approaches to exploratory factor analysis in a small sample size scenario. A simulation study and an empirical example demonstrate that regularized exploratory factor analysis may be recommended over the two traditional approaches, particularly when sample sizes are small (below 50) and the sample covariance matrix is near singular.", "title": "" }, { "docid": "6dbe972f08097355b32685c5793f853a", "text": "BACKGROUND/AIMS\nRheumatoid arthritis (RA) is a serious health problem resulting in significant morbidity and disability. Tai Chi may be beneficial to patients with RA as a result of effects on muscle strength and 'mind-body' interactions. To obtain preliminary data on the effects of Tai Chi on RA, we conducted a pilot randomized controlled trial. Twenty patients with functional class I or II RA were randomly assigned to Tai Chi or attention control in twice-weekly sessions for 12 weeks. The American College of Rheumatology (ACR) 20 response criterion, functional capacity, health-related quality of life and the depression index were assessed.\n\n\nRESULTS\nAt 12 weeks, 5/10 patients (50%) randomized to Tai Chi achieved an ACR 20% response compared with 0/10 (0%) in the control (p = 0.03). Tai Chi had greater improvement in the disability index (p = 0.01), vitality subscale of the Medical Outcome Study Short Form 36 (p = 0.01) and the depression index (p = 0.003). Similar trends to improvement were also observed for disease activity, functional capacity and health-related quality of life. No adverse events were observed and no patients withdrew from the study.\n\n\nCONCLUSION\nTai Chi appears safe and may be beneficial for functional class I or II RA. These promising results warrant further investigation into the potential complementary role of Tai Chi for treatment of RA.", "title": "" }, { "docid": "38a8471eb20b08499136ef459eb866c2", "text": "Some recent studies suggest that in progressive multiple sclerosis, neurodegeneration may occur independently from inflammation. The aim of our study was to analyse the interdependence of inflammation, neurodegeneration and disease progression in various multiple sclerosis stages in relation to lesional activity and clinical course, with a particular focus on progressive multiple sclerosis. The study is based on detailed quantification of different inflammatory cells in relation to axonal injury in 67 multiple sclerosis autopsies from different disease stages and 28 controls without neurological disease or brain lesions. We found that pronounced inflammation in the brain is not only present in acute and relapsing multiple sclerosis but also in the secondary and primary progressive disease. T- and B-cell infiltrates correlated with the activity of demyelinating lesions, while plasma cell infiltrates were most pronounced in patients with secondary progressive multiple sclerosis (SPMS) and primary progressive multiple sclerosis (PPMS) and even persisted, when T- and B-cell infiltrates declined to levels seen in age matched controls. A highly significant association between inflammation and axonal injury was seen in the global multiple sclerosis population as well as in progressive multiple sclerosis alone. In older patients (median 76 years) with long-disease duration (median 372 months), inflammatory infiltrates declined to levels similar to those found in age-matched controls and the extent of axonal injury, too, was comparable with that in age-matched controls. 
Ongoing neurodegeneration in these patients, which exceeded the extent found in normal controls, could be attributed to confounding pathologies such as Alzheimer's or vascular disease. Our study suggests a close association between inflammation and neurodegeneration in all lesions and disease stages of multiple sclerosis. It further indicates that the disease processes of multiple sclerosis may die out in aged patients with long-standing disease.", "title": "" }, { "docid": "e75620184f4baca454af714daf5e7801", "text": "Although fingerprint experts have presented evidence in criminal courts for more than a century, there have been few scientific investigations of the human capacity to discriminate these patterns. A recent latent print matching experiment shows that qualified, court-practicing fingerprint experts are exceedingly accurate (and more conservative) compared with novices, but they do make errors. Here, a rationale for the design of this experiment is provided. We argue that fidelity, generalizability, and control must be balanced to answer important research questions; that the proficiency and competence of fingerprint examiners are best determined when experiments include highly similar print pairs, in a signal detection paradigm, where the ground truth is known; and that inferring from this experiment the statement \"The error rate of fingerprint identification is 0.68%\" would be unjustified. In closing, the ramifications of these findings for the future psychological study of forensic expertise and the implications for expert testimony and public policy are considered.", "title": "" } ]
scidocsrr
2c6848e03b871a46c9228a2951dc7f4f
Analysis of Social Networks Using the Techniques of Web Mining
[ { "docid": "bed9bdf4d4965610b85378f2fdbfab2a", "text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is no established vocabulary, leading to confusion when comparing research efforts. The term Web mining has been used in two distinct ways. The first, called Web content mining in this paper, is the process of information discovery from sources across the World Wide Web. The second, called Web usage mining, is the process of mining for user browsing and access patterns. In this paper we define Web mining and present an overview of the various research issues, techniques, and development efforts. We briefly describe WEBMINER, a system for Web usage mining, and conclude this paper by listing research issues.", "title": "" } ]
[ { "docid": "ed9f79cab2dfa271ee436b7d6884bc13", "text": "This study conducts a phylogenetic analysis of extant African papionin craniodental morphology, including both quantitative and qualitative characters. We use two different methods to control for allometry: the previously described narrow allometric coding method, and the general allometric coding method, introduced herein. The results of this study strongly suggest that African papionin phylogeny based on molecular systematics, and that based on morphology, are congruent and support a Cercocebus/Mandrillus clade as well as a Papio/Lophocebus/Theropithecus clade. In contrast to previous claims regarding papionin and, more broadly, primate craniodental data, this study finds that such data are a source of valuable phylogenetic information and removes the basis for considering hard tissue anatomy \"unreliable\" in phylogeny reconstruction. Among highly sexually dimorphic primates such as papionins, male morphologies appear to be particularly good sources of phylogenetic information. In addition, we argue that the male and female morphotypes should be analyzed separately and then added together in a concatenated matrix in future studies of sexually dimorphic taxa. Character transformation analyses identify a series of synapomorphies uniting the various papionin clades that, given a sufficient sample size, should potentially be useful in future morphological analyses, especially those involving fossil taxa.", "title": "" }, { "docid": "6614eeffe9fb332a028b1e80aa24016a", "text": "Advances in microelectronics, array processing, and wireless networking, have motivated the analysis and design of low-cost integrated sensing, computating, and communicating nodes capable of performing various demanding collaborative space-time processing tasks. In this paper, we consider the problem of coherent acoustic sensor array processing and localization on distributed wireless sensor networks. We first introduce some basic concepts of beamforming and localization for wideband acoustic sources. A review of various known localization algorithms based on time-delay followed by LS estimations as well as maximum likelihood method is given. Issues related to practical implementation of coherent array processing including the need for fine-grain time synchronization are discussed. Then we describe the implementation of a Linux-based wireless networked acoustic sensor array testbed, utilizing commercially available iPAQs with built in microphones, codecs, and microprocessors, plus wireless Ethernet cards, to perform acoustic source localization. Various field-measured results using two localization algorithms show the effectiveness of the proposed testbed. An extensive list of references related to this work is also included. Keywords— Beamforming, Source Localization, Distributed Sensor Network, Wireless Network, Ad Hoc Network, Microphone Array, Time Synchronization.", "title": "" }, { "docid": "805583da675c068b7cc2bca80e918963", "text": "Designing an actuator system for highly dynamic legged robots has been one of the grand challenges in robotics research. Conventional actuators for manufacturing applications have difficulty satisfying design requirements for high-speed locomotion, such as the need for high torque density and the ability to manage dynamic physical interactions. To address this challenge, this paper suggests a proprioceptive actuation paradigm that enables highly dynamic performance in legged machines. 
Proprioceptive actuation uses collocated force control at the joints to effectively control contact interactions at the feet under dynamic conditions. Modal analysis of a reduced leg model and dimensional analysis of DC motors address the main principles for implementation of this paradigm. In the realm of legged machines, this paradigm provides a unique combination of high torque density, high-bandwidth force control, and the ability to mitigate impacts through backdrivability. We introduce a new metric named the “impact mitigation factor” (IMF) to quantify backdrivability at impact, which enables design comparison across a wide class of robots. The MIT Cheetah leg is presented, and is shown to have an IMF that is comparable to other quadrupeds with series springs to handle impact. The design enables the Cheetah to control contact forces during dynamic bounding, with contact times down to 85 ms and peak forces over 450 N. The unique capabilities of the MIT Cheetah, achieving impact-robust force-controlled operation in high-speed three-dimensional running and jumping, suggest wider implementation of this holistic actuation approach.", "title": "" }, { "docid": "c2b41a637cdc46abf0e154368a5990df", "text": "Ideally, the time that an incremental algorithm uses to process a change should be a function of the size of the change rather than, say, the size of the entire current input. Based on a formalization of \"the set of things changed\" by an incremental modification, this paper investigates how and to what extent it is possible to give such a guarantee for a chart-based parsing framework and discusses the general utility of a minimality notion in incremental processing.", "title": "" }, { "docid": "cd1a5d05e1991accd0a733ae0f2b7afc", "text": "This paper presents the application of an embedded camera system for detecting the laser spot in a shooting simulator. The proposed shooting simulator uses a specific target box, where the circular pattern target is mounted. The embedded camera is installed inside the box to capture the circular pattern target and laser spot image. To localize the circular pattern automatically, two colored solid circles are painted on the target. This technique allows simple and fast color tracking to track the colored objects for localizing the circular pattern. The CMUCam4 is employed as the embedded camera. It is able to localize the target and detect the laser spot in real-time at 30 fps. From the experimental results, the errors in calculating the shooting score and detecting the laser spot are 3.82% and 0.68% respectively. Further, the proposed system provides a more accurate scoring system using real numbers compared to conventional integer scoring.", "title": "" }, { "docid": "691f5f53582ceedaa51812307778b4db", "text": "This paper looks at how a vulnerability management (VM) process could be designed & implemented within an organization. Articles and studies about VM usually focus mainly on the technology aspects of vulnerability scanning. The goal of this study is to call attention to something that is often overlooked: a basic VM process which could be easily adapted and implemented in any part of the organization. 
Implementing a vulnerability management process 2 Tom Palmaers", "title": "" }, { "docid": "867516a6a54105e4759338e407bafa5a", "text": "At the end of the criminal intelligence analysis process there are relatively well established and understood approaches to explicit externalisation and representation of thought that include theories of argumentation, narrative and hybrid approaches that include both of these. However the focus of this paper is on the little understood area of how to support users in the process of arriving at such representations from an initial starting point where little is given. The work is based on theoretical considerations and some initial studies with end users. In focusing on process we discuss the requirements of fluidity and rigor and how to gain traction in investigations, the processes of thinking involved including abductive, deductive and inductive reasoning, how users may use thematic sorting in early stages of investigation and how tactile reasoning may be used to externalize and facilitate reasoning in a productive way. In the conclusion section we discuss the issues raised in this work and directions for future work.", "title": "" }, { "docid": "0cd42818f21ada2a8a6c2ed7a0f078fe", "text": "In perceiving objects we may synthesize conjunctions of separable features by directing attention serially to each item in turn (A. Treisman and G. Gelade, Cognitive Psychology, 1980, 12, 97136). This feature-integration theory predicts that when attention is diverted or overloaded, features may be wrongly recombined, giving rise to “illusory conjunctions.” The present paper confirms that illusory conjunctions are frequently experienced among unattended stimuli varying in color and shape, and that they occur also with size and solidity (outlined versus filled-in shapes). They are shown both in verbal recall and in simultaneous and successive matching tasks, making it unlikely that they depend on verbal labeling or on memory failure. They occur as often between stimuli differing on many features as between more similar stimuli, and spatial separation has little effect on their frequency. Each feature seems to be coded as an independent entity and to migrate, when attention is diverted, with few constraints from the other features of its source or destination.", "title": "" }, { "docid": "1d0d5ad5371a3f7b8e90fad6d5299fa7", "text": "Vascularization of embryonic organs or tumors starts from a primitive lattice of capillaries. Upon perfusion, this lattice is remodeled into branched arteries and veins. Adaptation to mechanical forces is implied to play a major role in arterial patterning. However, numerical simulations of vessel adaptation to haemodynamics has so far failed to predict any realistic vascular pattern. We present in this article a theoretical modeling of vascular development in the yolk sac based on three features of vascular morphogenesis: the disconnection of side branches from main branches, the reconnection of dangling sprouts (\"dead ends\"), and the plastic extension of interstitial tissue, which we have observed in vascular morphogenesis. We show that the effect of Poiseuille flow in the vessels can be modeled by aggregation of random walkers. Solid tissue expansion can be modeled by a Poiseuille (parabolic) deformation, hence by deformation under hits of random walkers. Incorporation of these features, which are of a mechanical nature, leads to realistic modeling of vessels, with important biological consequences. 
The model also predicts the outcome of simple mechanical actions, such as clamping of vessels or deformation of tissue by the presence of obstacles. This study offers an explanation for flow-driven control of vascular branching morphogenesis.", "title": "" }, { "docid": "024e4eebc8cb23d85676df920316f62c", "text": "E-voting technology has been developed for more than 30 years. However it is still distance away from serious application. The major challenges are to provide a secure solution and to gain trust from the voters in using it. In this paper we try to present a comprehensive review to e-voting by looking at these challenges. We summarized the vast amount of security requirements named in the literature that allows researcher to design a secure system. We reviewed some of the e-voting systems found in the real world and the literature. We also studied how a e-voting system can be usable by looking at different usability research conducted on e-voting. Summarizes on different cryptographic tools in constructing e-voting systems are also presented in the paper. We hope this paper can served as a good introduction for e-voting researches.", "title": "" }, { "docid": "22cdfb6170fab44905a8f79b282a1313", "text": "CONTEXT\nInteprofessional collaboration (IPC) between biomedically trained doctors (BMD) and traditional, complementary and alternative medicine practitioners (TCAMP) is an essential element in the development of successful integrative healthcare (IHC) services. This systematic review aims to identify organizational strategies that would facilitate this process.\n\n\nMETHODS\nWe searched 4 international databases for qualitative studies on the theme of BMD-TCAMP IPC, supplemented with a purposive search of 31 health services and TCAM journals. Methodological quality of included studies was assessed using published checklist. Results of each included study were synthesized using a framework approach, with reference to the Structuration Model of Collaboration.\n\n\nFINDINGS\nThirty-seven studies of acceptable quality were included. The main driver for developing integrative healthcare was the demand for holistic care from patients. Integration can best be led by those trained in both paradigms. Bridge-building activities, positive promotion of partnership and co-location of practices are also beneficial for creating bonding between team members. In order to empower the participation of TCAMP, the perceived power differentials need to be reduced. Also, resources should be committed to supporting team building, collaborative initiatives and greater patient access. Leadership and funding from central authorities are needed to promote the use of condition-specific referral protocols and shared electronic health records. More mature IHC programs usually formalize their evaluation process around outcomes that are recognized both by BMD and TCAMP.\n\n\nCONCLUSIONS\nThe major themes emerging from our review suggest that successful collaborative relationships between BMD and TCAMP are similar to those between other health professionals, and interventions which improve the effectiveness of joint working in other healthcare teams with may well be transferable to promote better partnership between the paradigms. 
However, striking a balance between the different practices and preserving the epistemological stance of TCAM will remain the greatest challenge in successful integration.", "title": "" }, { "docid": "b3af820192d34b6066498e04b9a51e31", "text": "Nowadays there are studies in different fields aimed to extract relevant information on trends, challenges and opportunities; all these studies have something in common: they work with large volumes of data. This work analyzes different studies carried out on the use of Machine Learning (ML) for processing large volumes of data (Big Data). Most of these datasets, are complex and come from various sources with structured or unstructured data. For this reason, it is necessary to find mechanisms that allow classification and, in a certain way, organize them to facilitate to the users the extraction of the required information. The processing of these data requires the use of classification techniques that will also be reviewed.", "title": "" }, { "docid": "10b7ce647229f3c9fe5aeced5be85e38", "text": "The proliferation of deep learning methods in natural language processing (NLP) and the large amounts of data they often require stands in stark contrast to the relatively data-poor clinical NLP domain. In particular, large text corpora are necessary to build high-quality word embeddings, yet often large corpora that are suitably representative of the target clinical data are unavailable. This forces a choice between building embeddings from small clinical corpora and less representative, larger corpora. This paper explores this trade-off, as well as intermediate compromise solutions. Two standard clinical NLP tasks (the i2b2 2010 concept and assertion tasks) are evaluated with commonly used deep learning models (recurrent neural networks and convolutional neural networks) using a set of six corpora ranging from the target i2b2 data to large open-domain datasets. While combinations of corpora are generally found to work best, the single-best corpus is generally task-dependent.", "title": "" }, { "docid": "f02bd91e8374506aa4f8a2107f9545e6", "text": "In an online survey with two cohorts (2009 and 2011) of undergraduates in dating relationshi ps, we examined how attachment was related to communication technology use within romantic relation ships. Participants reported on their attachment style and frequency of in-person communication as well as phone, text messaging, social network site (SNS), and electronic mail usage with partners. Texting and SNS communication were more frequent in 2011 than 2009. Attachment avoidance was related to less frequent phone use and texting, and greater email usage. Electronic communication channels (phone and texting) were related to positive relationship qualities, however, once accounting for attachment, only moderated effects were found. Interactions indicated texting was linked to more positive relationships for highly avoidant (but not less avoidant) participants. Additionally, email use was linked to more conflict for highly avoidant (but not less avoidant) participants. Finally, greater use of a SNS was positively associated with intimacy/support for those higher (but not lower) on attachment anxiety. This study illustrates how attachment can help to explain why the use of specific technology-based communication channels within romantic relationships may mean different things to different people, and that certain channels may be especially relevant in meeting insecurely attached individuals’ needs. 2013 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "7bbfafb6de6ccd50a4a708af76588beb", "text": "In this paper we present a system for mobile augmented reality (AR) based on visual recognition. We split the tasks of recognizing an object and tracking it on the user's screen into a server-side and a client-side task, respectively. The capabilities of this hybrid client-server approach are demonstrated with a prototype application on the Android platform, which is able to augment both stationary (landmarks) and non stationary (media covers) objects. The database on the server side consists of hundreds of thousands of landmarks, which is crawled using a state of the art mining method for community photo collections. In addition to the landmark images, we also integrate a database of media covers with millions of items. Retrieval from these databases is done using vocabularies of local visual features. In order to fulfill the real-time constraints for AR applications, we introduce a method to speed-up geometric verification of feature matches. The client-side tracking of recognized objects builds on a multi-modal combination of visual features and sensor measurements. Here, we also introduce a motion estimation method, which is more efficient and precise than similar approaches. To the best of our knowledge this is the first system, which demonstrates a complete pipeline for augmented reality on mobile devices with visual object recognition scaled to millions of objects combined with real-time object tracking.", "title": "" }, { "docid": "30e287e44e66e887ad5d689657e019c3", "text": "OBJECTIVE\nThe purpose of this study was to determine whether the Sensory Profile discriminates between children with and without autism and which items on the profile best discriminate between these groups.\n\n\nMETHOD\nParents of 32 children with autism aged 3 to 13 years and of 64 children without autism aged 3 to 10 years completed the Sensory Profile. A descriptive analysis of the data set of children with autism identified the distribution of responses on each item. A multivariate analysis of covariance (MANCOVA) of each category of the Sensory Profile identified possible differences among subjects without autism, with mild or moderate autism, and with severe autism. Follow-up univariate analyses were conducted for any category that yielded a significant result on the MANCOVA:\n\n\nRESULTS\nEight-four of 99 items (85%) on the Sensory Profile differentiated the sensory processing skills of subjects with autism from those without autism. There were no group differences between subjects with mild or moderate autism and subjects with severe autism.\n\n\nCONCLUSION\nThe Sensory Profile can provide information about the sensory processing skills of children with autism to assist occupational therapists in assessing and planning intervention for these children.", "title": "" }, { "docid": "510439267c11c53b31dcf0b1c40e331b", "text": "Spatial multicriteria decision problems are decision problems where one needs to take multiple conflicting criteria as well as geographical knowledge into account. In such a context, exploratory spatial analysis is known to provide tools to visualize as much data as possible on maps but does not integrate multicriteria aspects. Also, none of the tools provided by multicriteria analysis were initially destined to be used in a geographical context.In this paper, we propose an application of the PROMETHEE and GAIA ranking methods to Geographical Information Systems (GIS). 
The aim is to help decision makers obtain rankings of geographical entities and understand why such rankings have been obtained. To do that, we make use of the visual approach of the GAIA method and adapt it to display the results on geographical maps. This approach is then extended to cover several weaknesses of the adaptation. Finally, it is applied to a study of the region of Brussels as well as an evaluation of the Human Development Index (HDI) in Europe.", "title": "" }, { "docid": "09fc272a6d9ea954727d07075ecd5bfd", "text": "Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated with a resulting smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment.", "title": "" }, { "docid": "63063c0a2b08f068c11da6d80236fa87", "text": "This paper addresses the problem of hallucinating the missing high-resolution (HR) details of a low-resolution (LR) video while maintaining the temporal coherence of the hallucinated HR details by using dynamic texture synthesis (DTS). Most existing multi-frame-based video super-resolution (SR) methods suffer from the problem of limited reconstructed visual quality due to inaccurate sub-pixel motion estimation between frames in a LR video. To achieve high-quality reconstruction of HR details for a LR video, we propose a texture-synthesis-based video super-resolution method, in which a novel DTS scheme is proposed to render the reconstructed HR details in a time coherent way, so as to effectively address the temporal incoherence problem caused by traditional texture synthesis based image SR methods. To further reduce the complexity of the proposed method, our method only performs the DTS-based SR on a selected set of key-frames, while the HR details of the remaining non-key-frames are simply predicted using the bi-directional overlapped block motion compensation. Experimental results demonstrate that the proposed method achieves significant subjective and objective quality improvement over state-of-the-art video SR methods.", "title": "" } ]
scidocsrr
99cdb216e60bc17be1564c374d39ccd8
Comparing Performances of Big Data Stream Processing Platforms with RAM3S
[ { "docid": "f35d164bd1b19f984b10468c41f149e3", "text": "Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and require more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.", "title": "" } ]
[ { "docid": "11a4536e40dde47e024d4fe7541b368c", "text": "Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.", "title": "" }, { "docid": "baeddccc34585796fec12659912a757e", "text": "Recurrent neural networks (RNNs) have shown success for many sequence-modeling tasks, but learning long-term dependencies from data remains difficult. This is often attributed to the vanishing gradient problem, which shows that gradient components relating a loss at time t to time t− τ tend to decay exponentially with τ . Long short-term memory (LSTM) and gated recurrent units (GRUs), the most widely-used RNN architectures, attempt to remedy this problem by making the decay’s base closer to 1. NARX RNNs1 take an orthogonal approach: by including direct connections, or delays, from the past, NARX RNNs make the decay’s exponent closer to 0. However, as introduced, NARX RNNs reduce the decay’s exponent only by a factor of nd, the number of delays, and simultaneously increase computation by this same factor. We introduce a new variant of NARX RNNs, called MIxed hiSTory RNNs, which addresses these drawbacks. We show that for τ ≤ 2nd−1, MIST RNNs reduce the decay’s worst-case exponent from τ/nd to log τ , while maintaining computational complexity that is similar to LSTM and GRUs. We compare MIST RNNs to simple RNNs, LSTM, and GRUs across 4 diverse tasks. MIST RNNs outperform all other methods in 2 cases, and in all cases are competitive.", "title": "" }, { "docid": "4cd7f19d0413f9bab1a2cda5a5b7a9a4", "text": "Web-based learning plays a vital role in the modern education system, where different technologies are being emerged to enhance this E-learning process. Therefore virtual and online laboratories are gaining popularity due to its easy implementation and accessibility worldwide. These types of virtual labs are useful where the setup of the actual laboratory is complicated due to several factors such as high machinery or hardware cost. This paper presents a very efficient method of building a model using JavaScript Web Graphics Library with HTML5 enabled and having controllable features inbuilt. 
This type of program is free from any web browser plug-ins or application and also server independent. Proprietary software has always been a bottleneck in the development of such platforms. This approach rules out this issue and can easily applicable. Here the framework has been discussed and neatly elaborated with an example of a simplified robot configuration.", "title": "" }, { "docid": "9e310ac4876eee037e0d5c2a248f6f45", "text": "The self-balancing two-wheel chair (SBC) is an unconventional type of personal transportation vehicle. It has unstable dynamics and therefore requires a special control to stabilize and prevent it from falling and to ensure the possibility of speed control and steering by the rider. This paper discusses the dynamic modeling and controller design for the system. The model of SBC is based on analysis of the motions of the inverted pendulum on a mobile base complemented with equations of the wheel motion and motor dynamics. The proposed control design involves a multi-loop PID control. Experimental verification and prototype implementation are discussed.", "title": "" }, { "docid": "5233286436f0ecfde8e0e647e89b288f", "text": "Each employee’s performance is important in an organization. A way to motivate it is through the application of reinforcement theory which is developed by B. F. Skinner. One of the most commonly used methods is positive reinforcement in which one’s behavior is strengthened or increased based on consequences. This paper aims to review the impact of positive reinforcement on the performances of employees in organizations. It can be applied by utilizing extrinsic reward or intrinsic reward. Extrinsic rewards include salary, bonus and fringe benefit while intrinsic rewards are praise, encouragement and empowerment. By applying positive reinforcement in these factors, desired positive behaviors are encouraged and negative behaviors are eliminated. Financial and non-financial incentives have a positive relationship with the efficiency and effectiveness of staffs.", "title": "" }, { "docid": "6038975e7868b235f2b665ffbd249b68", "text": "Existing person re-identification benchmarks and methods mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be searched from a gallery of whole scene images. To close the gap, we propose a new deep learning framework for person search. Instead of breaking it down into two separate tasks&#x2014;pedestrian detection and person re-identification, we jointly handle both aspects in a single convolutional neural network. An Online Instance Matching (OIM) loss function is proposed to train the network effectively, which is scalable to datasets with numerous identities. To validate our approach, we collect and annotate a large-scale benchmark dataset for person search. It contains 18,184 images, 8,432 identities, and 96,143 pedestrian bounding boxes. Experiments show that our framework outperforms other separate approaches, and the proposed OIM loss function converges much faster and better than the conventional Softmax loss.", "title": "" }, { "docid": "301aee8363dffd7ae4c7ac2945a55842", "text": "This work studies the usage of the Deep Neural Network (DNN) Bottleneck (BN) features together with the traditional MFCC features in the task of i-vector-based speaker recognition. 
We decouple the sufficient statistics extraction by using separate GMM models for frame alignment, and for statistics normalization and we analyze the usage of BN and MFCC features (and their concatenation) in the two stages. We also show the effect of using full-covariance GMM models, and, as a contrast, we compare the result to the recent DNN-alignment approach. On the NIST SRE2010, telephone condition, we show 60% relative gain over the traditional MFCC baseline for EER (and similar for the NIST DCF metrics), resulting in 0.94% EER.", "title": "" }, { "docid": "9b30a07edc14ed2d1132421d8f372cd2", "text": "Even when the role of a conversational agent is well known users persist in confronting them with Out-of-Domain input. This often results in inappropriate feedback, leaving the user unsatisfied. In this paper we explore the automatic creation/enrichment of conversational agents’ knowledge bases by taking advantage of natural language interactions present in the Web, such as movies subtitles. Thus, we introduce Filipe, a chatbot that answers users’ request by taking advantage of a corpus of turns obtained from movies subtitles (the Subtle corpus). Filipe is based on Say Something Smart, a tool responsible for indexing a corpus of turns and selecting the most appropriate answer, which we fully describe in this paper. Moreover, we show how this corpus of turns can help an existing conversational agent to answer Out-of-Domain interactions. A preliminary evaluation is also presented.", "title": "" }, { "docid": "b7c4d8b946ea6905a2f0da10e6dc9de6", "text": "We develop a broadband channel estimation algorithm for millimeter wave (mmWave) multiple input multiple output (MIMO) systems with few-bit analog-to-digital converters (ADCs). Our methodology exploits the joint sparsity of the mmWave MIMO channel in the angle and delay domains. We formulate the estimation problem as a noisy quantized compressed-sensing problem and solve it using efficient approximate message passing (AMP) algorithms. In particular, we model the angle-delay coefficients using a Bernoulli–Gaussian-mixture distribution with unknown parameters and use the expectation-maximization forms of the generalized AMP and vector AMP algorithms to simultaneously learn the distributional parameters and compute approximately minimum mean-squared error (MSE) estimates of the channel coefficients. We design a training sequence that allows fast, fast Fourier transform based implementation of these algorithms while minimizing peak-to-average power ratio at the transmitter, making our methods scale efficiently to large numbers of antenna elements and delays. We present the results of a detailed simulation study that compares our algorithms to several benchmarks. Our study investigates the effect of SNR, training length, training type, ADC resolution, and runtime on channel estimation MSE, mutual information, and achievable rate. It shows that, in a mmWave MIMO system, the methods we propose to exploit joint angle-delay sparsity allow 1-bit ADCs to perform comparably to infinite-bit ADCs at low SNR, and 4-bit ADCs to perform comparably to infinite-bit ADCs at medium SNR.", "title": "" }, { "docid": "bd06f693359bba90de59454f32581c9c", "text": "Digital business ecosystems are becoming an increasingly popular concept as an open environment for modeling and building interoperable system integration. Business organizations have realized the importance of using standards as a cost-effective method for accelerating business process integration. 
Small and medium size enterprise (SME) participation in global trade is increasing, however, digital transactions are still at a low level. Cloud integration is expected to offer a cost-effective business model to form an interoperable digital supply chain. By observing the integration models, we can identify the large potential of cloud services to accelerate integration. An industrial case study is conducted. This paper investigates and contributes new knowledge on a how top-down approach by using a digital business ecosystem framework enables business managers to define new user requirements and functionalities for system integration. Through analysis, we identify the current cap of integration design. Using the cloud clustering framework, we identify how the design affects cloud integration services.", "title": "" }, { "docid": "84c95e15ddff06200624822cc12fa51f", "text": "A growing body of research has recently been conducted on semantic textual similarity using a variety of neural network models. While recent research focuses on word-based representation for phrases, sentences and even paragraphs, this study considers an alternative approach based on character n-grams. We generate embeddings for character n-grams using a continuous-bag-of-n-grams neural network model. Three different sentence representations based on n-gram embeddings are considered. Results are reported for experiments with bigram, trigram and 4-gram embeddings on the STS Core dataset for SemEval-2016 Task 1.", "title": "" }, { "docid": "0a170051e72b58081ad27e71a3545bcf", "text": "Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.", "title": "" }, { "docid": "60ec8f06cdd4bf7cb27565c6d576ff40", "text": "2.5D chips with TSV and interposer are becoming the most popular packaging method with great increased flexibility and integrated functionality. However, great challenges have been posed in the failure analysis process to precisely locate the failure point of each interconnection in ultra-small size. The electro-optic sampling (EOS) based pulsed Time-domain reflectometry (TDR) is a powerful tool for the 2.5D/3D package diagnostics with greatly increased I/O speed and density. The timing of peaks in the reflected waveform accurately reveals the faulty location. In this work, 2.5D chip with known open failure location has been analyzed by a EOS based TDR system.", "title": "" }, { "docid": "5ad696a08b236e200a96589780b2b06c", "text": "The need for increasing flexibility of industrial automation system products leads to the trend of shifting functional behavior from hardware solutions to software components. 
This trend causes an increasing complexity of software components and the need for comprehensive and automated testing approaches to ensure a required (high) quality level. Nevertheless, key tasks in software testing include identifying appropriate test cases that typically require a high effort for (a) test case generation/construction and (b) test case modification in case of requirements changes. Semi-automated derivation of test cases based on models, like UML, can support test case generation. In this paper we introduce an automated test case generation approach for industrial automation applications where the test cases are specified by UML state chart diagrams. In addition we present a prototype application of the presented approach for a sorting machine. Major results showed that state charts (a) can support efficient test case generation and (b) enable automated generation of test cases and code for industrial automation systems.", "title": "" }, { "docid": "e3853e259c3ae6739dcae3143e2074a8", "text": "A new reference collection of patent documents for training and testing automated categorization systems is established and described in detail. This collection is tailored for automating the attribution of international patent classification codes to patent applications and is made publicly available for future research work. We report the results of applying a variety of machine learning algorithms to the automated categorization of English-language patent documents. This procedure involves a complex hierarchical taxonomy, within which we classify documents into 114 classes and 451 subclasses. Several measures of categorization success are described and evaluated. We investigate how best to resolve the training problems related to the attribution of multiple classification codes to each patent document.", "title": "" }, { "docid": "7edd1ae4ec4bac9ed91e5e14326a694e", "text": "These days, educational institutions and organizations are generating huge amount of data, more than the people can read in their lifetime. It is not possible for a person to learn, understand, decode, and interpret to find valuable information. Data mining is one of the most popular method which can be used to identify hidden patterns from large databases. User can extract historical, hidden details, and previously unknown information, from large repositories by applying required mining techniques. There are two algorithms which can be used to classify and predict, such as supervised learning and unsupervised learning. Classification is a technique which performs an induction on current data (existing data) and predicts future class. The main objective of classification is to make an unknown class to known class by consulting its neighbor class. therefore it is called as supervised learning, it builds the classifier by consulting with the known class labels such as k-nearest neighbor algorithm (k-NN), Naïve Bayes (NB), support vector machine (SVM), decision tree. Clustering is an unsupervised learning that builds a model to group similar objects into categories without consulting a class label. The main objective of clustering is find the distance between objects like nearby and faraway based on their similarities and dissimilarities it groups the objects and detects outliers. In this paper Weka tool is used to analyze by applying preprocessing, classification on institutional academic result of under graduate students of computer science & engineering. 
Keywords— Weka, classifier, supervised learning,", "title": "" }, { "docid": "7c13ebe2897fc4870a152159cda62025", "text": "Tuberculosis (TB) remains a major health threat, killing nearly 2 million individuals around this globe, annually. The only vaccine, developed almost a century ago, provides limited protection only during childhood. After decades without the introduction of new antibiotics, several candidates are currently undergoing clinical investigation. Curing TB requires prolonged combination of chemotherapy with several drugs. Moreover, monitoring the success of therapy is questionable owing to the lack of reliable biomarkers. To substantially improve the situation, a detailed understanding of the cross-talk between human host and the pathogen Mycobacterium tuberculosis (Mtb) is vital. Principally, the enormous success of Mtb is based on three capacities: first, reprogramming of macrophages after primary infection/phagocytosis to prevent its own destruction; second, initiating the formation of well-organized granulomas, comprising different immune cells to create a confined environment for the host-pathogen standoff; third, the capability to shut down its own central metabolism, terminate replication, and thereby transit into a stage of dormancy rendering itself extremely resistant to host defense and drug treatment. Here, we review the molecular mechanisms underlying these processes, draw conclusions in a working model of mycobacterial dormancy, and highlight gaps in our understanding to be addressed in future research.", "title": "" }, { "docid": "36c11c29f6605f7c234e68ecba2a717a", "text": "BACKGROUND\nThe main purpose of this study was to identify factors that influence healthcare quality in the Iranian context.\n\n\nMETHODS\nExploratory in-depth individual and focus group interviews were conducted with 222 healthcare stakeholders including healthcare providers, managers, policy-makers, and payers to identify factors affecting the quality of healthcare services provided in Iranian healthcare organisations.\n\n\nRESULTS\nQuality in healthcare is a production of cooperation between the patient and the healthcare provider in a supportive environment. Personal factors of the provider and the patient, and factors pertaining to the healthcare organisation, healthcare system, and the broader environment affect healthcare service quality. Healthcare quality can be improved by supportive visionary leadership, proper planning, education and training, availability of resources, effective management of resources, employees and processes, and collaboration and cooperation among providers.\n\n\nCONCLUSION\nThis article contributes to healthcare theory and practice by developing a conceptual framework that provides policy-makers and managers a practical understanding of factors that affect healthcare service quality.", "title": "" }, { "docid": "a433ebaeeb5dc5b68976b3ecb770c0cd", "text": "1 abstract The importance of the inspection process has been magniied by the requirements of the modern manufacturing environment. In electronics mass-production manufacturing facilities, an attempt is often made to achieve 100 % quality assurance of all parts, subassemblies, and nished goods. A variety of approaches for automated visual inspection of printed circuits have been reported over the last two decades. In this survey, algorithms and techniques for the automated inspection of printed circuit boards are examined. 
A classification tree for these algorithms is presented and the algorithms are grouped according to this classification. This survey concentrates mainly on image analysis and fault detection strategies; these also include the state-of-the-art techniques. A summary of the commercial PCB inspection systems is also presented. Many important applications of vision are found in the manufacturing and defense industries. In particular, the areas in manufacturing where vision plays a major role are inspection, measurements, and some assembly tasks. The order among these topics closely reflects the manufacturing needs. In most mass-production manufacturing facilities, an attempt is made to achieve 100% quality assurance of all parts, subassemblies, and finished products. One of the most difficult tasks in this process is that of inspecting for visual appearance, an inspection that seeks to identify both functional and cosmetic defects. With the advances in computers (including high speed, large memory and low cost), image processing, pattern recognition, and artificial intelligence have resulted in better and cheaper equipment for industrial image analysis. This development has made the electronics industry active in applying automated visual inspection to manufacturing/fabricating processes that include printed circuit boards, IC chips, photomasks, etc. Nello [1] gives a summary of the machine vision inspection applications in the electronics industry.", "title": "" }, { "docid": "9f5998ebc2457c330c29a10772d8ee87", "text": "Fuzzy hashing is a known technique that has been adopted to speed up malware analysis processes. However, hashing has not been fully implemented for malware detection because it can easily be evaded by applying a simple obfuscation technique such as packing. This challenge has limited the usage of hashing to triaging of the samples based on the percentage of similarity between the known and unknown. In this paper, we explore the different ways fuzzy hashing can be used to detect similarities in a file by investigating particular hashes of interest. Each hashing method produces independent but related interesting results which are presented herein. We further investigate combination techniques that can be used to improve the detection rates in hashing methods. Two such evidence combination theory based methods are applied in this work in order to propose a novel way of combining the results achieved from different hashing algorithms. This study focuses on file and section Ssdeep hashing, PeHash and Imphash techniques to calculate the similarity of the Portable Executable files. Our results show that the detection rates are improved when evidence combination techniques are used.", "title": "" } ]
scidocsrr
9af8e7dc3fea72d4cc8a202a17ebf31e
Personalization Method for Tourist Point of Interest (POI) Recommendation
[ { "docid": "bd9f584e7dbc715327b791e20cd20aa9", "text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.", "title": "" } ]
[ { "docid": "f78d0dae400b331d6dcb4de9d10ca2f0", "text": "How ontologies provide the semantics, as explained here with the help of Harry Potter and his owl Hedwig.", "title": "" }, { "docid": "2579a6082d157d8b9940b3ca8084f741", "text": "In general, conventional Arbiter-based Physically Unclonable Functions (PUFs) generate responses with low unpredictability. The N-XOR Arbiter PUF, proposed in 2007, is a well-known technique for improving this unpredictability. In this paper, we propose a novel design for Arbiter PUF, called Double Arbiter PUF, to enhance the unpredictability on field programmable gate arrays (FPGAs), and we compare our design to conventional N-XOR Arbiter PUFs. One metric for judging the unpredictability of responses is to measure their tolerance to machine-learning attacks. Although our previous work showed the superiority of Double Arbiter PUFs regarding unpredictability, its details were not clarified. We evaluate the dependency on the number of training samples for machine learning, and we discuss the reason why Double Arbiter PUFs are more tolerant than the N-XOR Arbiter PUFs by evaluating intrachip variation. Further, the conventional Arbiter PUFs and proposed Double Arbiter PUFs are evaluated according to other metrics, namely, their uniqueness, randomness, and steadiness. We demonstrate that 3-1 Double Arbiter PUF archives the best performance overall.", "title": "" }, { "docid": "597b893e42df1bfba3d17b2d3ec31539", "text": "Genetic Programming (GP) is an evolutionary algorithm that has received a lot of attention lately due to its success in solving hard real-world problems. Lately, there has been considerable interest in GP's community to develop semantic genetic operators, i.e., operators that work on the phenotype. In this contribution, we describe EvoDAG (Evolving Directed Acyclic Graph) which is a Python library that implements a steady-state semantic Genetic Programming with tournament selection using an extension of our previous crossover operators based on orthogonal projections in the phenotype space. To show the effectiveness of EvoDAG, it is compared against state-of-the-art classifiers on different benchmark problems, experimental results indicate that EvoDAG is very competitive.", "title": "" }, { "docid": "ba291f7d938f73946969476fdc96f0df", "text": "Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work. We look at the landscape of simulators for research in peer-to-peer (P2P) networks by conducting a survey of a combined total of over 280 papers from before and after 2007 (the year of the last survey in this area), and comment on the large quantity of research using bespoke, closed-source simulators. We propose a set of criteria that P2P simulators should meet, and poll the P2P research community for their agreement. We aim to drive the community towards performing their experiments on simulators that allow for others to validate their results.", "title": "" }, { "docid": "a9201c32c903eba5cc25a744134a1c3c", "text": "This paper proposes a new approach to sparsity, called the horseshoe estimator, which arises from a prior based on multivariate-normal scale mixtures. We describe the estimator’s advantages over existing approaches, including its robustness, adaptivity to different sparsity patterns and analytical tractability. 
We prove two theorems: one that characterizes the horseshoe estimator’s tail robustness and the other that demonstrates a super-efficient rate of convergence to the correct estimate of the sampling density in sparse situations. Finally, using both real and simulated data, we show that the horseshoe estimator corresponds quite closely to the answers obtained by Bayesian model averaging under a point-mass mixture prior.", "title": "" }, { "docid": "2c48dfb1ea7bc0defbe1643fa4708614", "text": "Text in natural images is an important source of information, which can be utilized for many real-world applications. This work focuses on a new problem: distinguishing images that contain text from a large volume of natural images. To address this problem, we propose a novel convolutional neural network variant, called Multi-scale Spatial Partition Network (MSP-Net). The network classifies images that contain text or not, by predicting text existence in all image blocks, which are spatial partitions at multiple scales on an input image. The whole image is classified as a text image (an image containing text) as long as one of the blocks is predicted to contain text. The network classifies images very efficiently by predicting all blocks simultaneously in a single forward propagation. Through experimental evaluations and comparisons on public datasets, we demonstrate the effectiveness and robustness of the proposed method.", "title": "" }, { "docid": "4bce72901777783578637fc6bfeb6267", "text": "This study examines the causal relationship between carbon dioxide emissions, electricity consumption and economic growth within a panel vector error correction model for five ASEAN countries over the period 1980 to 2006. The long-run estimates indicate that there is a statistically significant positive association between electricity consumption and emissions and a non-linear relationship between emissions and real output, consistent with the Environmental Kuznets Curve. The long-run estimates, however, do not indicate the direction of causality between the variables. The results from the Granger causality tests suggest that in the long-run there is unidirectional Granger causality running from electricity consumption and emissions to economic growth. The results also point to unidirectional Granger causality running from emissions to electricity consumption in the short-run.", "title": "" }, { "docid": "4468a8d7f01c1b3e6adcf316bdc34f81", "text": "Hyper-connected and digitized governments are increasingly advancing a vision of data-driven government as producers and consumers of big data in the big data ecosystem. Despite the growing interests in the potential power of big data, we found paucity of empirical research on big data use in government. This paper explores organizational capability challenges in transforming government through big data use. Using systematic literature review approach we developed initial framework for examining impacts of socio-political, strategic change, analytical, and technical capability challenges in enhancing public policy and service through big data. We then applied the framework to conduct case study research on two large-size city governments’ big data use. The findings indicate the framework’s usefulness, shedding new insights into the unique government context. 
Consequently, the framework was revised by adding big data public policy, political leadership structure, and organizational culture to further explain impacts of organizational capability challenges in transforming government.", "title": "" }, { "docid": "4f64b2b2b50de044c671e3d0d434f466", "text": "Optical flow estimation is one of the oldest and still most active research domains in computer vision. In 35 years, many methodological concepts have been introduced and have progressively improved performances , while opening the way to new challenges. In the last decade, the growing interest in evaluation benchmarks has stimulated a great amount of work. In this paper, we propose a survey of optical flow estimation classifying the main principles elaborated during this evolution, with a particular concern given to recent developments. It is conceived as a tutorial organizing in a comprehensive framework current approaches and practices. We give insights on the motivations, interests and limitations of modeling and optimization techniques, and we highlight similarities between methods to allow for a clear understanding of their behavior. Motion analysis is one of the main tasks of computer vision. From an applicative viewpoint, the information brought by the dynamical behavior of observed objects or by the movement of the camera itself is a decisive element for the interpretation of observed phenomena. The motion characterizations can be extremely variable among the large number of application domains. Indeed, one can be interested in tracking objects, quantifying deformations, retrieving dominant motion, detecting abnormal behaviors, and so on. The most low-level characterization is the estimation of a dense motion field, corresponding to the displacement of each pixel, which is called optical flow. Most high-level motion analysis tasks employ optical flow as a fundamental basis upon which more semantic interpretation is built. Optical flow estimation has given rise to a tremendous quantity of works for 35 years. If a certain continuity can be found since the seminal works of [120,170], a number of methodological innovations have progressively changed the field and improved performances. Evaluation benchmarks and applicative domains have followed this progress by proposing new challenges allowing methods to face more and more difficult situations in terms of motion discontinuities, large displacements, illumination changes or computational costs. Despite great advances, handling these issues in a unique method still remains an open problem. Comprehensive surveys of optical flow literature were carried out in the nineties [21,178,228]. More recently, reviewing works have focused on variational approaches [264], benchmark results [13], specific applications [115], or tutorials restricted to a certain subset of methods [177,260]. However, covering all the main estimation approaches and including recent developments in a comprehensive classification is still lacking in the optical flow field. This survey …", "title": "" }, { "docid": "7e557091d8cfe6209b1eda3b664ab551", "text": "With the increasing penetration of mobile phones, problematic use of mobile phone (PUMP) deserves attention. In this study, using a path model we examined the relationship between depression and PUMP, with motivations as mediators. 
Findings suggest that depressed people may rely on mobile phone to alleviate their negative feelings and spend more time on communication activities via mobile phone, which in turn can deteriorate into PUMP. However, face-to-face communication with others played a moderating role, weakening the link between use of mobile phone for communication activities and dete-", "title": "" }, { "docid": "5b1241edf4a9853614a18139323f74eb", "text": "This paper presents a W-band SPDT switch implemented using PIN diodes in a new 90 nm SiGe BiCMOS technology. The SPDT switch achieves a minimum insertion loss of 1.4 dB and an isolation of 22 dB at 95 GHz, with less than 2 dB insertion loss from 77-134 GHz, and greater than 20 dB isolation from 79-129 GHz. The input and output return losses are greater than 10 dB from 73-133 GHz. By reverse biasing the off-state PIN diodes, the P1dB is larger than +24 dBm. To the authors' best knowledge, these results demonstrate the lowest loss and highest power handling capability achieved by a W-band SPDT switch in any silicon-based technology reported to date.", "title": "" }, { "docid": "d88059813c4064ec28c58a8ab23d3030", "text": "Routing in Vehicular Ad hoc Networks is a challenging task due to the unique characteristics of the network such as high mobility of nodes, dynamically changing topology and highly partitioned network. It is a challenge to ensure reliable, continuous and seamless communication in the presence of speeding vehicles. The performance of routing protocols depends on various internal factors such as mobility of nodes and external factors such as road topology and obstacles that block the signal. This demands a highly adaptive approach to deal with the dynamic scenarios by selecting the best routing and forwarding strategies and by using appropriate mobility and propagation models. In this paper we review the existing routing protocols for VANETs and categorise them into a taxonomy based on key attributes such as network architecture, applications supported, routing strategies, forwarding strategies, mobility models and quality of service metrics. Protocols belonging to unicast, multicast, geocast and broadcast categories are discussed. Strengths and weaknesses of various protocols using topology based, position based and cluster based approaches are analysed. Emphasis is given on the adaptive and context-aware routing protocols. Simulation of broadcast and unicast protocols is carried out and the results are presented.", "title": "" }, { "docid": "0c0d0b6d4697b1a0fc454b995bcda79a", "text": "Online multiplayer games, such as Gears of War and Halo, use skill-based matchmaking to give players fair and enjoyable matches. They depend on a skill rating system to infer accurate player skills from historical data. TrueSkill is a popular and effective skill rating system, working from only the winner and loser of each game. This paper presents an extension to TrueSkill that incorporates additional information that is readily available in online shooters, such as player experience, membership in a squad, the number of kills a player scored, tendency to quit, and skill in other game modes. This extension, which we call TrueSkill2, is shown to significantly improve the accuracy of skill ratings computed from Halo 5 matches. 
TrueSkill2 predicts historical match outcomes with 68% accuracy, compared to 52% accuracy for TrueSkill.", "title": "" }, { "docid": "7343d29bfdc1a4466400f8752dce4622", "text": "We present a novel method for detecting occlusions and in-painting unknown areas of a light field photograph, based on previous work in obstruction-free photography and light field completion. An initial guess at separating the occluder from the rest of the photograph is computed by aligning backgrounds of the images and using this information to generate an occlusion mask. The masked pixels are then synthesized using a patch-based texture synthesis algorithm, with the median image as the source of each patch.", "title": "" }, { "docid": "2b71cfacf2b1e0386094711d8b326ff7", "text": "In-car navigation systems are designed with effectiveness and efficiency (e.g., guiding accuracy) in mind. However, finding a way and discovering new places could also be framed as an adventurous, stimulating experience for the driver and passengers. Inspired by Gaver and Martin's (2000) notion of \"ambiguity and detour\" and Hassenzahl's (2010) Experience Design, we built ExplorationRide, an in-car navigation system to foster exploration. An empirical in situ exploration demonstrated the system's ability to create an exploration experience, marked by a relaxed at-mosphere, a loss of sense of time, excitement about new places and an intensified relationship with the landscape.", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "53518256d6b4f3bb4e8dcf28a35f9284", "text": "Customers often evaluate products at brick-and-mortar stores to identify their “best fit” product but buy it for a lower price at a competing online retailer. This free-riding behavior by customers is referred to as “showrooming” and we show that this is detrimental to the profits of the brick-and-mortar stores. We first analyze price matching as a short-term strategy to counter showrooming. Since customers purchase from the store at lower than store posted price when they ask for price-matching, one would expect the price matching strategy to be less effective as the fraction of customers who seek the matching increases. However, our results show that with an increase in the fraction of customers who seek price matching, the stores profits initially decrease and then increase. While price-matching could be used even when customers do not exhibit showrooming behavior, we find that it is more effective when customers do showrooming. We then study exclusivity of product assortments as a long-term strategy to counter showrooming. 
This strategy can be implemented in two different ways. One, by arranging for exclusivity of known brands (e.g. Macy’s has such an arrangement with Tommy Hilfiger), or, two, through creation of store brands at the brick-and-mortar store (T.J.Maxx uses a large number of store brands). Our analysis suggests that implementing exclusivity through store brands is better than exclusivity through known brands when the product category has few digital attributes. However, when customers do not showroom, the known brand strategy dominates the store brand strategy.", "title": "" }, { "docid": "91f45641d96b519dd65bf00249571a99", "text": "Tissue perfusion is determined by both blood vessel geometry and the rheological properties of blood. Blood is a nonNewtonian fluid, its viscosity being dependent on flow conditions. Blood and plasma viscosities, as well as the rheological properties of blood cells (e.g., deformability and aggregation of red blood cells), are influenced by disease processes and extreme physiological conditions. These rheological parameters may in turn affect the blood flow in vessels, and hence tissue perfusion. Unfortunately it is not always possible to determine if a change in rheological parameters is the cause or the result of a disease process. The hemorheology-tissue perfusion relationship is further complicated by the distinct in vivo behavior of blood. Besides the special hemodynamic mechanisms affecting the composition of blood in various regions of the vascular system, autoregulation based on vascular control mechanisms further complicates this relationship. Hemorheological parameters may be especially important for adequate tissue perfusion if the vascular system is geometrically challenged.", "title": "" }, { "docid": "dd34e763b3fdf0a0a903b773fe1a84be", "text": "Natural language processing (NLP) is a vibrant field of interdisciplinary Computer Science research. Ultimately, NLP seeks to build intelligence into software so that software will be able to process a natural language as skillfully and artfully as humans. Prolog, a general purpose logic programming language, has been used extensively to develop NLP applications or components thereof. This report is concerned with introducing the interested reader to the broad field of NLP with respect to NLP applications that are built in Prolog or from Prolog components.", "title": "" }, { "docid": "27ba6cfdebdedc58ab44b75a15bbca05", "text": "OBJECTIVES\nTo assess the influence of material/technique selection (direct vs. CAD/CAM inlays) for large MOD composite adhesive restorations and its effect on the crack propensity and in vitro accelerated fatigue resistance.\n\n\nMETHODS\nA standardized MOD slot-type tooth preparation was applied to 32 extracted maxillary molars (5mm depth and 5mm bucco-palatal width) including immediately sealed dentin for the inlay group. Fifteen teeth were restored with direct composite resin restoration (Miris2) and 17 teeth received milled inlays using Paradigm MZ100 block in the CEREC machine. All inlays were adhesively luted with a light curing composite resin (Filtek Z100). Enamel shrinkage-induced cracks were tracked with photography and transillumination. Cyclic isometric chewing (5 Hz) was simulated, starting with a load of 200 N (5000 cycles), followed by stages of 400, 600, 800, 1000, 1200 and 1400 N at a maximum of 30,000 cycles each. 
Samples were loaded until fracture or to a maximum of 185,000 cycles.\n\n\nRESULTS\nTeeth restored with the direct technique fractured at an average load of 1213 N and two of them withstood all loading cycles (survival=13%); with inlays, the survival rate was 100%. Most failures with Miris2 occurred above the CEJ and were re-restorable (67%), but generated more shrinkage-induced cracks (47% of the specimen vs. 7% for inlays).\n\n\nSIGNIFICANCE\nCAD/CAM MZ100 inlays increased the accelerated fatigue resistance and decreased the crack propensity of large MOD restorations when compared to direct restorations. While both restorative techniques yielded excellent fatigue results at physiological masticatory loads, CAD/CAM inlays seem more indicated for high-load patients.", "title": "" } ]
scidocsrr
1616d9820e6a65a060b577fc5f486c03
Energy Cloud: Real-Time Cloud-Native Energy Management System to Monitor and Analyze Energy Consumption in Multiple Industrial Sites
[ { "docid": "a44b74738723580f4056310d6856bb74", "text": "This book covers the theory and principles of core avionic systems in civil and military aircraft, including displays, data entry and control systems, fly by wire control systems, inertial sensor and air data systems, navigation, autopilot systems an... Use the latest data mining best practices to enable timely, actionable, evidence-based decision making throughout your organization! Real-World Data Mining demystifies current best practices, showing how to use data mining to uncover hidden patterns ... Data Warehousing in the Age of the Big Data will help you and your organization make the most of unstructured data with your existing data warehouse. As Big Data continues to revolutionize how we use data, it doesn't have to create more confusion. Ex... This book explores the concepts of data mining and data warehousing, a promising and flourishing frontier in data base systems and new data base applications and is also designed to give a broad, yet ....", "title": "" }, { "docid": "bc8fe59fbfafebaa3c104e35acd632a2", "text": "In our Big Data era, data is being generated, collected and analyzed at an unprecedented scale, and data-driven decision making is sweeping through all aspects of society. Recent studies have shown that poor quality data is prevalent in large databases and on the Web. Since poor quality data can have serious consequences on the results of data analyses, the importance of veracity, the fourth `V' of big data is increasingly being recognized. In this tutorial, we highlight the substantial challenges that the first three `V's, volume, velocity and variety, bring to dealing with veracity in big data. Due to the sheer volume and velocity of data, one needs to understand and (possibly) repair erroneous data in a scalable and timely manner. With the variety of data, often from a diversity of sources, data quality rules cannot be specified a priori; one needs to let the “data to speak for itself” in order to discover the semantics of the data. This tutorial presents recent results that are relevant to big data quality management, focusing on the two major dimensions of (i) discovering quality issues from the data itself, and (ii) trading-off accuracy vs efficiency, and identifies a range of open problems for the community.", "title": "" } ]
[ { "docid": "b21ae248eea30b91e41012ab70cb6d81", "text": "Communication technology plays an increasingly important role in the growing automated metering infrastructure (AMI) market. This paper presents a thorough analysis and comparison of four application layer protocols in the smart metering context. The inspected protocols are DLMS/COSEM, the Smart Message Language (SML), and the MMS and SOAP mappings of IEC 61850. The focus of this paper is on their use over TCP/IP. The protocols are first compared with respect to qualitative criteria such as the ability to transmit clock synchronization information. Afterwards the message size of meter reading requests and responses and the different binary encodings of the protocols are compared.", "title": "" }, { "docid": "85b99b2c7b209f41b539b0d1041742fd", "text": "Depth maps, characterizing per-pixel physical distance between objects in a 3D scene and a capturing camera, can now be readily acquired using inexpensive active sensors such as Microsoft Kinect. However, the acquired depth maps are often corrupted due to surface reflection or sensor noise. In this paper, we build on two previously developed works in the image denoising literature to restore single depth maps-i.e., to jointly exploit local smoothness and nonlocal self-similarity of a depth map. Specifically, we propose to first cluster similar patches in a depth image and compute an average patch, from which we deduce a graph describing correlations among adjacent pixels. Then we transform similar patches to the same graph-based transform (GBT) domain, where the GBT basis vectors are learned from the derived correlation graph. Finally, we perform an iterative thresholding procedure in the GBT domain to enforce group sparsity. Experimental results show that for single depth maps corrupted with additive white Gaussian noise (AWGN), our proposed NLGBT denoising algorithm can outperform state-of-the-art image denoising methods such as BM3D by up to 2.37dB in terms of PSNR.", "title": "" }, { "docid": "7e4e5472e5ee0b25511975f3422d2173", "text": "Most people with Parkinson's disease (PD) fall and many experience recurrent falls. The aim of this review was to examine the scope of recurrent falls and to identify factors associated with recurrent fallers. A database search for journal articles which reported prospectively collected information concerning recurrent falls in people with PD identified 22 studies. In these studies, 60.5% (range 35 to 90%) of participants reported at least one fall, with 39% (range 18 to 65%) reporting recurrent falls. Recurrent fallers reported an average of 4.7 to 67.6 falls per person per year (overall average 20.8 falls). Factors associated with recurrent falls include: a positive fall history, increased disease severity and duration, increased motor impairment, treatment with dopamine agonists, increased levodopa dosage, cognitive impairment, fear of falling, freezing of gait, impaired mobility and reduced physical activity. The wide range in the frequency of recurrent falls experienced by people with PD suggests that it would be beneficial to classify recurrent fallers into sub-groups based on fall frequency. 
Given that there are several factors particularly associated with recurrent falls, fall management and prevention strategies specifically targeting recurrent fallers require urgent evaluation in order to inform clinical practice.", "title": "" }, { "docid": "826e54e8e46dcea0451b53645e679d55", "text": "Microtia is a congenital disease with various degrees of severity, ranging from the presence of rudimentary and malformed vestigial structures to the total absence of the ear (anotia). The complex anatomy of the external ear and the necessity to provide good projection and symmetry make this reconstruction particularly difficult. The aim of this work is to report our surgical technique of microtic ear correction and to analyse the short and long term results. From 2000 to 2013, 210 patients affected by microtia were treated at the Maxillo-Facial Surgery Division, Head and Neck Department, University Hospital of Parma. The patient population consisted of 95 women and 115 men, aged from 7 to 49 years. A total of 225 reconstructions have been performed in two surgical stages basing of Firmin's technique with some modifications and refinements. The first stage consists in fabrication and grafting of a three-dimensional costal cartilage framework. The second stage is performed 5-6 months later: the reconstructed ear is raised up and an additional cartilaginous graft is used to increase its projection. A mastoid fascial flap together with a skin graft are then used to protect the cartilage graft. All reconstructions were performed without any major complication. The results have been considered satisfactory by all patients starting from the first surgical step. Low morbidity, the good results obtained and a high rate of patient satisfaction make our protocol an optimal choice for treatment of microtia. The surgeon's experience and postoperative patient care must be considered as essential aspects of treatment.", "title": "" }, { "docid": "20bb0dc721040ae7d21dd9027a7a3cd4", "text": "The advent of cloud computing (CC) in recent years has attracted substantial interest from various institutions, especially higher education institutions, which wish to consider the advantages of its features. Many universities have migrated from traditional forms of teaching to electronic learning services, and they rely upon information and communication technology services. The usage of CC in educational environments provides many benefits, such as low-cost services for academics and students. The expanded use of CC comes with significant adoption challenges. Understanding the position of higher education institutions with respect to CC adoption is an essential research area. This paper investigated the current state of CC adoption in the higher education sector in order to enrich the research in this area of interest. Existing limitations and knowledge gaps in current empirical studies are identified. Moreover, suggested areas for further researches will be highlighted for the benefit of other researchers who are interesting in this topic. These researches encourage institutions of education especially in higher education to adopted cloud computing technology. Keywords—Cloud computing; education system; e-learning; information and communication technology (ICT)", "title": "" }, { "docid": "1963b3b1326fa4ed99ef39c9aaab0719", "text": "We take an ecological approach to studying social media use and its relation to mood among college students. 
We conducted a mixed-methods study of computer and phone logging with daily surveys and interviews to track college students' use of social media during all waking hours over seven days. Continual and infrequent checkers show different preferences of social media sites. Age differences also were found. Lower classmen tend to be heavier users and to primarily use Facebook, while upper classmen use social media less frequently and utilize sites other than Facebook more often. Factor analysis reveals that social media use clusters into patterns of content-sharing, text-based entertainment/discussion, relationships, and video consumption. The more constantly one checks social media daily, the less positive is one's mood. Our results suggest that students construct their own patterns of social media usage to meet their changing needs in their environment. The findings can inform further investigation into social media use as a benefit and/or distraction for students.", "title": "" }, { "docid": "4e5d2a871ea1cfed7188207b709766a5", "text": "key elements of orthodontic diagnosis and treatment planning over the last decade.1-3 Recent advances in technology now permit the clinician to measure dynamic lip-tooth relationships and incorporate that information into the orthodontic problem list and biomechanical plan. Digital videography is particularly useful in both smile analysis and in doctor/patient communication. Smile design is a multifactorial process, with clinical success determined by an understanding of the patient’s soft-tissue treatment limitations and the extent to which orthodontics or multidisciplinary treatment can satisfy the patient’s and orthodontist’s esthetic goals.", "title": "" }, { "docid": "9f21af3bc0955dcd9a05898f943f54ad", "text": "Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intraand inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.", "title": "" }, { "docid": "4731a95b14335a84f27993666b192bba", "text": "Blockchain has been applied to study data privacy and network security recently. 
In this paper, we propose a punishment scheme based on the action record on the blockchain to suppress the attack motivation of the edge servers and the mobile devices in the edge network. The interactions between a mobile device and an edge server are formulated as a blockchain security game, in which the mobile device sends a request to the server to obtain real-time service or launches attacks against the server for illegal security gains, and the server chooses to perform the request from the device or attack it. The Nash equilibria (NEs) of the game are derived and the conditions that each NE exists are provided to disclose how the punishment scheme impacts the adversary behaviors of the mobile device and the edge server.", "title": "" }, { "docid": "40d4716214b80ff944c552dfee09f5ec", "text": "Since the appearance of Android, its permission system was central to many studies of Android security. For a long time, the description of the architecture provided by Enck et al. in [31] was immutably used in various research papers. The introduction of highly anticipated runtime permissions in Android 6.0 forced us to reconsider this model. To our surprise, the permission system evolved with almost every release. After analysis of 16 Android versions, we can confirm that the modifications, especially introduced in Android 6.0, considerably impact the aptness of old conclusions and tools for newer releases. For instance, since Android 6.0 some signature permissions, previously granted only to apps signed with a platform certificate, can be granted to third-party apps even if they are signed with a non-platform certificate; many permissions considered before as threatening are now granted by default. In this paper, we review in detail the updated system, introduced changes, and their security implications. We highlight some bizarre behaviors, which may be of interest for developers and security researchers. We also found a number of bugs during our analysis, and provided patches to AOSP where possible.", "title": "" }, { "docid": "2de4de4a7b612fd8d87a40780acdd591", "text": "In the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the severe impact of this bottleneck. The insights gained are translated into guidelines for database architecture; in terms of both data structures and algorithms. We discuss how vertically fragmented data structures optimize cache performance on sequential data access. We then focus on equi-join, typically a random-access operation, and introduce radix algorithms for partitioned hash-join. The performance of these algorithms is quantified using a detailed analytical model that incorporates memory access cost. Experiments that validate this model were performed on the Monet database system. We obtained exact statistics on events like TLB misses, L1 and L2 cache misses, by using hardware performance counters found in modern CPUs. Using our cost model, we show how the carefully tuned memory access pattern of our radix algorithms make them perform well, which is confirmed by experimental results. 
*This work was carried out when the author was at the University of Amsterdam, supported by SION grant 612-23-431 Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999.", "title": "" }, { "docid": "9464f2e308b5c8ab1f2fac1c008042c0", "text": "Data governance has become a significant approach that drives decision making in public organisations. Thus, the loss of data governance is a concern to decision makers, acting as a barrier to achieving their business plans in many countries and also influencing both operational and strategic decisions. The adoption of cloud computing is a recent trend in public sector organisations, that are looking to move their data into the cloud environment. The literature shows that data governance is one of the main concerns of decision makers who are considering adopting cloud computing; it also shows that data governance in general and for cloud computing in particular is still being researched and requires more attention from researchers. However, in the absence of a cloud data governance framework, this paper seeks to develop a conceptual framework for cloud data governance-driven decision making in the public sector.", "title": "" }, { "docid": "d97df185799408ae61ce2d210deec6e2", "text": "In e-commerce websites like Taobao, brand is playing a more important role in influencing users’ decision of click/purchase, partly because users are now attaching more importance to the quality of products and brand is an indicator of quality. However, existing ranking systems are not specifically designed to satisfy this kind of demand. Some design tricks may partially alleviate this problem, but still cannot provide satisfactory results or may create additional interaction cost. In this paper, we design the first brand-level ranking system to address this problem. The key challenge of this system is how to sufficiently exploit users’ rich behavior in e-commerce websites to rank the brands. In our solution, we firstly conduct the feature engineering specifically tailored for the personalized brand ranking problem and then rank the brands by an adapted Attention-GRU model containing three important modifications. Note that our proposed modifications can also apply to many other machine learning models on various tasks. We conduct a series of experiments to evaluate the effectiveness of our proposed ranking model and test the response to the brand-level ranking system from real users on a large-scale e-commerce platform, i.e. Taobao.", "title": "" }, { "docid": "2526915745dda9026836347292f79d12", "text": "I show that a functional representation of self-similarity (as the one occurring in fractals) is provided by squeezed coherent states. In this way, the dissipative model of brain is shown to account for the self-similarity in brain background activity suggested by power-law distributions of power spectral densities of electrocorticograms. 
I also briefly discuss the action-perception cycle in the dissipative model with reference to intentionality in terms of trajectories in the memory state space.", "title": "" }, { "docid": "917154ffa5d9108fd07782d1c9a183ba", "text": "Recommender systems for automatically suggested items of interest to users have become increasingly essential in fields where mass personalization is highly valued. The popular core techniques of such systems are collaborative filtering, content-based filtering and combinations of these. In this paper, we discuss hybrid approaches, using collaborative and also content data to address cold-start - that is, giving recommendations to novel users who have no preference on any items, or recommending items that no user of the community has seen yet. While there have been lots of studies on solving the item-side problems, solution for user-side problems has not been seen public. So we develop a hybrid model based on the analysis of two probabilistic aspect models using pure collaborative filtering to combine with users' information. The experiments with MovieLen data indicate substantial and consistent improvements of this model in overcoming the cold-start user-side problem.", "title": "" }, { "docid": "6f0ebd6314cd5c012f791d0e5c448045", "text": "This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes moving along opposite directions such that the distances between classes can be enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, where there is no need to train two-class machines that are independent of each other. With its compact form, this model can be naturally extended for feature selection. This goal is achieved in terms of L2,1 norm of matrix, generating a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are finally solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.", "title": "" }, { "docid": "703f0baf67a1de0dfb03b3192327c4cf", "text": "Fleet management systems are commonly used to coordinate mobility and delivery services in a broad variety of domains. However, their traditional top-down control architecture becomes a bottleneck in open and dynamic environments, where scalability, proactiveness, and autonomy are becoming key factors for their success. Here, the authors present an abstract event-based architecture for fleet management systems that supports tailoring dynamic control regimes for coordinating fleet vehicles, and illustrate it for the case of medical emergency management. Then, they go one step ahead in the transition toward automatic or driverless fleets, by conceiving fleet management systems in terms of cyber-physical systems, and putting forward the notion of cyber fleets.", "title": "" }, { "docid": "2fbd1b2e25473affb40990195b26a88b", "text": "In this paper we considerably improve on a state-of-the-art alpha matting approach by incorporating a new prior which is based on the image formation process. 
In particular, we model the prior probability of an alpha matte as the convolution of a high-resolution binary segmentation with the spatially varying point spread function (PSF) of the camera. Our main contribution is a new and efficient de-convolution approach that recovers the prior model, given an approximate alpha matte. By assuming that the PSF is a kernel with a single peak, we are able to recover the binary segmentation with an MRF-based approach, which exploits flux and a new way of enforcing connectivity. The spatially varying PSF is obtained via a partitioning of the image into regions of similar defocus. Incorporating our new prior model into a state-of-the-art matting technique produces results that outperform all competitors, which we confirm using a publicly available benchmark.", "title": "" }, { "docid": "eb20856f797f35ea6eb05f4646e54f34", "text": "Malware in smartphones is growing at a signi cant rate. There are currently more than 250 million smartphone users in the world and this number is expected to grow in coming years [44]. In the past few years, smartphones have evolved from simple mobile phones into sophisticated computers. This evolution has enabled smartphone users to access and browse the Internet, to receive and send emails, SMS and MMS messages and to connect devices in order to exchange information. All of these features make the smartphone a useful tool in our daily lives, but at the same time they render it more vulnerable to attacks by malicious applications. Given that most users store sensitive information on their mobile phones, such as phone numbers, SMS messages, emails, pictures and videos, smartphones are a very appealing target for attackers and malware developers. The need to maintain security and data con dentiality on the Android platform makes the analysis of malware on this platform an urgent issue. We have based this report on previous approaches to the dynamic analysis of application behavior, and have adapted one approach in order to detect malware on the Android platform. The detector is embedded in a framework to collect traces from a number of real users and is based on crowdsourcing. Our framework has been tested by analyzing data collected at the central server using two types of data sets: data from arti cial malware created for test purposes and data from real malware found in the wild. The method used is shown to be an e ective means of isolating malware and alerting users of downloaded malware, which suggests that it has great potential for helping to stop the spread of detected malware to a larger community. Finally, the report will give a complete review of results for self written and real Android Malware applications that have been tested with the system. This thesis project shows that it is feasible to create an Android malware detection system with satisfactory results.", "title": "" }, { "docid": "a8abc8da0f2d5f8055c4ed6ea2294c6c", "text": "This paper presents the design of a modulated metasurface (MTS) antenna capable to provide both right-hand (RH) and left-hand (LH) circularly polarized (CP) boresight radiation at Ku-band (13.5 GHz). This antenna is based on the interaction of two cylindrical-wavefront surface wave (SW) modes of transverse electric (TE) and transverse magnetic (TM) types with a rotationally symmetric, anisotropic-modulated MTS placed on top of a grounded slab. 
A properly designed centered circular waveguide feed excites the two orthogonal (decoupled) SW modes and guarantees the balance of the power associated with each of them. By a proper selection of the anisotropy and modulation of the MTS pattern, the phase velocities of the two modes are synchronized, and leakage is generated in broadside direction with two orthogonal linear polarizations. When the circular waveguide is excited with two mutually orthogonal TE11 modes in phase-quadrature, an LHCP or RHCP antenna is obtained. This paper explains the feeding system and the MTS requirements that guarantee the balanced conditions of the TM/TE SWs and consequent generation of dual CP boresight radiation.", "title": "" } ]
scidocsrr
d15b94152661b013e935f44373d6bc23
The Good, The Bad and the Ugly: A Meta-analytic Review of Positive and Negative Effects of Violent Video Games
[ { "docid": "a52fce0b7419d745a85a2bba27b34378", "text": "Playing action video games enhances several different aspects of visual processing; however, the mechanisms underlying this improvement remain unclear. Here we show that playing action video games can alter fundamental characteristics of the visual system, such as the spatial resolution of visual processing across the visual field. To determine the spatial resolution of visual processing, we measured the smallest distance a distractor could be from a target without compromising target identification. This approach exploits the fact that visual processing is hindered as distractors are brought close to the target, a phenomenon known as crowding. Compared with nonplayers, action-video-game players could tolerate smaller target-distractor distances. Thus, the spatial resolution of visual processing is enhanced in this population. Critically, similar effects were observed in non-video-game players who were trained on an action video game; this result verifies a causative relationship between video-game play and augmented spatial resolution.", "title": "" } ]
[ { "docid": "bbeebb29c7220009c8d138dc46e8a6dd", "text": "Let’s begin with a problem that many of you have seen before. It’s a common question in technical interviews. You’re given as input an array A of length n, with the promise that it has a majority element — a value that is repeated in strictly more than n/2 of the array’s entries. Your task is to find the majority element. In algorithm design, the usual “holy grail” is a linear-time algorithm. For this problem, your post-CS161 toolbox already contains a subroutine that gives a linear-time solution — just compute the median of A. (Note: it must be the majority element.) So let’s be more ambitious: can we compute the majority element with a single left-to-right pass through the array? If you haven’t seen it before, here’s the solution:", "title": "" }, { "docid": "45dbc5a3adacd0cc1374f456fb421ee9", "text": "The purpose of this article is to discuss current techniques used with poly-l-lactic acid to safely and effectively address changes observed in the aging face. Several important points deserve mention. First, this unique agent is not a filler but a stimulator of the host's own collagen, which then acts to volumize tissue in a gradual, progressive, and predictable manner. The technical differences between the use of biostimulatory agents and replacement fillers are simple and straightforward, but are critically important to the safe and successful use of these products and will be reviewed in detail. Second, in addition to gains in technical insights that have improved our understanding of how to use the product to best advantage, where to use the product to best advantage in facial filling has also improved with ever-evolving insights into the changes observed in the aging face. Finally, it is important to recognize that a patient's final outcome, and the amount of product and work it will take to get there, is a reflection of the quality of tissues with which they start. This is, of course, an issue of patient selection and not product selection.", "title": "" }, { "docid": "dd741d612ee466aecbb03f5e1be89b90", "text": "To date, many of the methods for information extraction of biological information from scientific articles are restricted to the abstract of the article. However, full text articles in electronic version, which offer larger sources of data, are currently available. Several questions arise as to whether the effort of scanning full text articles is worthy, or whether the information that can be extracted from the different sections of an article can be relevant. In this work we addressed those questions showing that the keyword content of the different sections of a standard scientific article (abstract, introduction, methods, results, and discussion) is very heterogeneous. Although the abstract contains the best ratio of keywords per total of words, other sections of the article may be a better source of biologically relevant data.", "title": "" }, { "docid": "7f368ea27e9aa7035c8da7626c409740", "text": "The GANs are generative models whose random samples realistically reflect natural images. It also can generate samples with specific attributes by concatenating a condition vector into the input, yet research on this field is not well studied. We propose novel methods of conditioning generative adversarial networks (GANs) that achieve state-of-the-art results on MNIST and CIFAR-10. 
We mainly introduce two models: an information retrieving model that extracts conditional information from the samples, and a spatial bilinear pooling model that forms bilinear features derived from the spatial cross product of an image and a condition vector. These methods significantly enhance log-likelihood of test data under the conditional distributions compared to the methods of concatenation.", "title": "" }, { "docid": "0d6a28cc55d52365986382f43c28c42c", "text": "Predictive analytics embraces an extensive range of techniques including statistical modeling, machine learning, and data mining and is applied in business intelligence, public health, disaster management and response, and many other fields. To date, visualization has been broadly used to support tasks in the predictive analytics pipeline. Primary uses have been in data cleaning, exploratory analysis, and diagnostics. For example, scatterplots and bar charts are used to illustrate class distributions and responses. More recently, extensive visual analytics systems for feature selection, incremental learning, and various prediction tasks have been proposed to support the growing use of complex models, agent-specific optimization, and comprehensive model comparison and result exploration. Such work is being driven by advances in interactive machine learning and the desire of end-users to understand and engage with the modeling process. In this state-of-the-art report, we catalogue recent advances in the visualization community for supporting predictive analytics. First, we define the scope of predictive analytics discussed in this article and describe how visual analytics can support predictive analytics tasks in a predictive visual analytics (PVA) pipeline. We then survey the literature and categorize the research with respect to the proposed PVA pipeline. Systems and techniques are evaluated in terms of their supported interactions, and interactions specific to predictive analytics are discussed. We end this report with a discussion of challenges and opportunities for future research in predictive visual analytics.", "title": "" }, { "docid": "a91add591aacaa333e109d77576ba463", "text": "It has become essential to scrutinize and evaluate software development methodologies, mainly because of their increasing number and variety. Evaluation is required to gain a better understanding of the features, strengths, and weaknesses of the methodologies. The results of such evaluations can be leveraged to identify the methodology most appropriate for a specific context. Moreover, methodology improvement and evolution can be accelerated using these results. However, despite extensive research, there is still a need for a feature/criterion set that is general enough to allow methodologies to be evaluated regardless of their types. We propose a general evaluation framework which addresses this requirement. In order to improve the applicability of the proposed framework, all the features – general and specific – are arranged in a hierarchy along with their corresponding criteria. Providing different levels of abstraction enables users to choose the suitable criteria based on the context. 
Major evaluation frameworks for object-oriented, agent-oriented, and aspect-oriented methodologies have been studied and assessed against the proposed framework to demonstrate its reliability and validity.", "title": "" }, { "docid": "8c79eb51cfbc9872a818cf6467648693", "text": "A compact frequency-reconfigurable slot antenna for LTE (2.3 GHz), AMT-fixed service (4.5 GHz), and WLAN (5.8 GHz) applications is proposed in this letter. A U-shaped slot with short ends and an L-shaped slot with open ends are etched in the ground plane to realize dual-band operation. By inserting two p-i-n diodes inside the slots, easy reconfigurability of three frequency bands over a frequency ratio of 2.62:1 can be achieved. In order to reduce the cross polarization of the antenna, another L-shaped slot is introduced symmetrically. Compared to the conventional reconfigurable slot antenna, the size of the antenna is reduced by 32.5%. Simulated and measured results show that the antenna can switch between two single-band modes (2.3 and 5.8 GHz) and two dual-band modes (2.3/4.5 and 4.5/5.8 GHz). Also, stable radiation patterns are obtained.", "title": "" }, { "docid": "94631c7be7b2a992d006cd642dcc502c", "text": "This paper describes nagging, a technique for parallelizing search in a heterogeneous distributed computing environment. Nagging exploits the speedup anomaly often observed when parallelizing problems by playing multiple reformulations of the problem or portions of the problem against each other. Nagging is both fault tolerant and robust to long message latencies. In this paper, we show how nagging can be used to parallelize several different algorithms drawn from the artificial intelligence literature, and describe how nagging can be combined with partitioning, the more traditional search parallelization strategy. We present a theoretical analysis of the advantage of nagging with respect to partitioning, and give empirical results obtained on a cluster of 64 processors that demonstrate nagging’s effectiveness and scalability as applied to A* search, α-β minimax game tree search, and the Davis-Putnam algorithm.", "title": "" }, { "docid": "0e5eb8191cea7d3a59f192aa32a214c4", "text": "Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate human-generated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copy- and reconstruction-based extensions lead to noticeable improvements.", "title": "" }, { "docid": "54b094c7747c8ac0b1fbd1f93e78fd8e", "text": "It is essential for the marine navigator conducting maneuvers of his ship at sea to know future positions of himself and target ships in a specific time span to effectively solve collision situations. This article presents an algorithm of ship movement trajectory prediction, which, through data fusion, takes into account measurements of the ship's current position from a number of doubled autonomous devices. 
This increases the reliability and accuracy of prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system, and is used in practice on board ships.", "title": "" }, { "docid": "0fd61e297560ebb8bcf1aafdf011ae67", "text": "Research is fundamental to the advancement of medicine and critical to identifying the most optimal therapies unique to particular societies. This is easily observed through the dynamics associated with pharmacology, surgical technique and the medical equipment used today versus short years ago. Advancements in knowledge synthesis and reporting guidelines enhance the quality, scope and applicability of results; thus, improving health science and clinical practice and advancing health policy. While advancements are critical to the progression of optimal health care, the high cost associated with these endeavors cannot be ignored. Research fundamentally needs to be evaluated to identify the most efficient methods of evaluation. The primary objective of this paper is to look at a specific research methodology when applied to the area of clinical research, especially extracorporeal circulation and its prognosis for the future.", "title": "" }, { "docid": "e1c04d30c7b8f71d9c9b19cb2bb36a33", "text": "This Guide has been written to provide guidance for individuals involved in curriculum design who wish to develop research skills and foster the attributes in medical undergraduates that help develop research. The Guide will provoke debate on an important subject, and although written specifically with undergraduate medical education in mind, we hope that it will be of interest to all those involved with other health professionals' education. Initially, the Guide describes why research skills and its related attributes are important to those pursuing a medical career. It also explores the reasons why research skills and an ethos of research should be instilled into professionals of the future. The Guide also tries to define what these skills and attributes should be for medical students and lays out the case for providing opportunities to develop research expertise in the undergraduate curriculum. Potential methods to encourage the development of research-related attributes are explored as are some suggestions as to how research skills could be taught and assessed within already busy curricula. This publication also discusses the real and potential barriers to developing research skills in undergraduate students, and suggests strategies to overcome or circumvent these. Whilst we anticipate that this Guide will appeal to all levels of expertise in terms of student research, we hope that, through the use of case studies, we will provide practical advice to those currently developing this area within their curriculum.", "title": "" }, { "docid": "8863a617cee49b578a3902d12841053b", "text": "DNA damage has emerged as a major culprit in cancer and many diseases related to aging. The stability of the genome is supported by an intricate machinery of repair, damage tolerance, and checkpoint pathways that counteracts DNA damage. In addition, DNA damage and other stresses can trigger a highly conserved, anticancer, antiaging survival response that suppresses metabolism and growth and boosts defenses that maintain the integrity of the cell. Induction of the survival response may allow interventions that improve health and extend the life span. 
Recently, the first candidate for such interventions, rapamycin (also known as sirolimus), has been identified.1 Compromised repair systems in tumors also offer opportunities for intervention, making it possible to attack malignant cells in which maintenance of the genome has been weakened. Time-dependent accumulation of damage in cells and organs is associated with gradual functional decline and aging.2 The molecular basis of this phenomenon is unclear,3-5 whereas in cancer, DNA alterations are the major culprit. In this review, I present evidence that cancer and diseases of aging are two sides of the DNA-damage problem. An examination of the importance of DNA damage and the systems of genome maintenance in relation to aging is followed by an account of the derailment of genome guardian mechanisms in cancer and of how this cancer-specific phenomenon can be exploited for treatment.", "title": "" }, { "docid": "e9750bf1287847b6587ad28b19e78751", "text": "Biomedical engineering handles the organization and functioning of medical devices in the hospital. This is a strategic function of the hospital for its balance, development, and growth. This is a major focus in internal and external reports of the hospital. It's based on piloting of medical devices needs and the procedures of biomedical teams' intervention. Multi-year projects of capital and operating expenditure in medical devices are planned as coherently as possible with the hospital's financial budgets. An information system is an essential tool for monitoring medical devices engineering and relationship with medical services.", "title": "" }, { "docid": "1203f22bfdfc9ecd211dbd79a2043a6a", "text": "After a short introduction to classic cryptography we explain thoroughly how quantum cryptography works. We present then an elegant experimental realization based on a self-balanced interferometer with Faraday mirrors. This phase-coding setup needs no alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover it features excellent fringe visibility. Next, we estimate the practical limits of quantum cryptography. The importance of the detector noise is illustrated and means of reducing it are presented. With present-day technologies maximum distances of about 70 km with bit rates of 100 Hz are achievable. PACS: 03.67.Dd; 85.60; 42.25; 33.55.A Cryptography is the art of hiding information in a string of bits meaningless to any unauthorized party. To achieve this goal, one uses encryption: a message is combined according to an algorithm with some additional secret information – the key – to produce a cryptogram. In the traditional terminology, Alice is the party encrypting and transmitting the message, Bob the one receiving it, and Eve the malevolent eavesdropper. For a crypto-system to be considered secure, it should be impossible to unlock the cryptogram without Bob's key. In practice, this demand is often softened, and one requires only that the system is sufficiently difficult to crack. The idea is that the message should remain protected as long as the information it contains is valuable. There are two main classes of crypto-systems, the public-key and the secret-key crypto-systems: Public key systems are based on so-called one-way functions: given a certain x, it is easy to compute f(x), but difficult to do the inverse, i.e. compute x from f(x). “Difficult” means that the task shall take a time that grows exponentially with the number of bits of the input. 
The RSA (Rivest, Shamir, Adleman) crypto-system for example is based on the factorizing of large integers. Anyone can compute 137 × 53 in a few seconds, but it may take a while to find the prime factors of 28 907. To transmit a message Bob chooses a private key (based on two large prime numbers) and computes from it a public key (based on the product of these numbers) which he discloses publicly. Now Alice can encrypt her message using this public key and transmit it to Bob, who decrypts it with the private key. Public key systems are very convenient and became very popular over the last 20 years; however, they suffer from two potential major flaws. To date, nobody knows for sure whether or not factorizing is indeed difficult. For known algorithms, the time for calculation increases exponentially with the number of input bits, and one can easily improve the safety of RSA by choosing a longer key. However, a fast algorithm for factorization would immediately annihilate the security of the RSA system. Although it has not been published yet, there is no guarantee that such an algorithm does not exist. Second, problems that are difficult for a classical computer could become easy for a quantum computer. With the recent developments in the theory of quantum computation, there are reasons to fear that building these machines will eventually become possible. If one of these two possibilities came true, RSA would become obsolete. One would then have no choice but to turn to secret-key cryptosystems. Very convenient and broadly used are crypto-systems based on a public algorithm and a relatively short secret key. The DES (Data Encryption Standard, 1977) for example uses a 56-bit key and the same algorithm for coding and decoding. The secrecy of the cryptogram, however, depends again on the calculating power and the time of the eavesdropper. The only crypto-system providing proven, perfect secrecy is the “one-time pad” proposed by Vernam in 1935. With this scheme, a message is encrypted using a random key of equal length, by simply “adding” each bit of the message to the corresponding bit of the key. The scrambled text can then be sent to Bob, who decrypts the message by “subtracting” the same key. The bits of the ciphertext are as random as those of the key and consequently do not contain any information. Although perfectly secure, the problem with this system is that it is essential for Alice and Bob to share a common secret key, at least as long as the message they want to exchange, and use it only for a single encryption. This key must be transmitted by some trusted means or personal meeting, which turns out to be complex and expensive.", "title": "" }, { "docid": "1dcc48994fada1b46f7b294e08f2ed5d", "text": "This paper presents an application-specific integrated processor for an angular estimation system that works with 9-D inertial measurement units. The application-specific instruction-set processor (ASIP) was implemented on field-programmable gate array and interfaced with a gyro-plus-accelerometer 6-D sensor and with a magnetic compass. Output data were recorded on a personal computer and also used to perform a live demo. During system modeling and design, it was chosen to represent angular position data with a quaternion and to use an extended Kalman filter as sensor fusion algorithm. For this purpose, a novel two-stage filter was designed: The first stage uses accelerometer data, and the second one uses magnetic compass data for angular position correction. 
This allows flexibility, less computational requirements, and robustness to magnetic field anomalies. The final goal of this work is to realize an upgraded application-specified integrated circuit that controls the microelectromechanical systems (MEMS) sensor and integrates the ASIP. This will allow the MEMS sensor gyro plus accelerometer and the angular estimation system to be contained in a single package; this system might optionally work with an external magnetic compass.", "title": "" }, { "docid": "222c51f079c785bb2aa64d2937e50ff0", "text": "Security and privacy in cloud computing are critical components for various organizations that depend on the cloud in their daily operations. Customers' data and the organizations' proprietary information have been subject to various attacks in the past. In this paper, we develop a set of Moving Target Defense (MTD) strategies that randomize the location of the Virtual Machines (VMs) to harden the cloud against a class of Multi-Armed Bandit (MAB) policy-based attacks. These attack policies capture the behavior of adversaries that seek to explore the allocation of VMs in the cloud and exploit the ones that provide the highest rewards (e.g., access to critical datasets, ability to observe credit card transactions, etc). We assess through simulation experiments the performance of our MTD strategies, showing that they can make MAB policy-based attacks no more effective than random attack policies. Additionally, we show the effects of critical parameters – such as discount factors, the time between randomizing the locations of the VMs and variance in the rewards obtained – on the performance of our defenses. We validate our results through simulations and a real OpenStack system implementation in our lab to assess migration times and down times under different system loads.", "title": "" }, { "docid": "cf999fc9b1a604dadfc720cf1bbfafdc", "text": "The characteristics of the extracellular polymeric substances (EPS) extracted with nine different extraction protocols from four different types of anaerobic granular sludge were studied. The efficiency of four physical (sonication, heating, cationic exchange resin (CER), and CER associated with sonication) and four chemical (ethylenediaminetetraacetic acid, ethanol, formaldehyde combined with heating, or NaOH) EPS extraction methods was compared to a control extraction protocols (i.e., centrifugation). The nucleic acid content and the protein/polysaccharide ratio of the EPS extracted show that the extraction does not induce abnormal cellular lysis. Chemical extraction protocols give the highest EPS extraction yields (calculated by the mass ratio between sludges and EPS dry weight (DW)). Infrared analyses as well as an extraction yield over 100% or organic carbon content over 1 g g−1 of DW revealed, nevertheless, a carry-over of the chemical extractants into the EPS extracts. The EPS of the anaerobic granular sludges investigated are predominantly composed of humic-like substances, proteins, and polysaccharides. The EPS content in each biochemical compound varies depending on the sludge type and extraction technique used. 
Some extraction techniques lead to a slightly preferential extraction of some EPS compounds, e.g., CER gives a higher protein yield.", "title": "" }, { "docid": "22719028c913aa4d0407352caf185d7a", "text": "Although the fact that genetic predisposition and environmental exposures interact to shape development and function of the human brain and, ultimately, the risk of psychiatric disorders has drawn wide interest, the corresponding molecular mechanisms have not yet been elucidated. We found that a functional polymorphism altering chromatin interaction between the transcription start site and long-range enhancers in the FK506 binding protein 5 (FKBP5) gene, an important regulator of the stress hormone system, increased the risk of developing stress-related psychiatric disorders in adulthood by allele-specific, childhood trauma–dependent DNA demethylation in functional glucocorticoid response elements of FKBP5. This demethylation was linked to increased stress-dependent gene transcription followed by a long-term dysregulation of the stress hormone system and a global effect on the function of immune cells and brain areas associated with stress regulation. This identification of molecular mechanisms of genotype-directed long-term environmental reactivity will be useful for designing more effective treatment strategies for stress-related disorders.", "title": "" }, { "docid": "44bd4ef644a18dc58a672eb91c873a98", "text": "Reactive oxygen species (ROS) contain one or more unpaired electrons and are formed as intermediates in a variety of normal biochemical reactions. However, when generated in excess amounts or not appropriately controlled, ROS initiate extensive cellular damage and tissue injury. ROS have been implicated in the progression of cancer, cardiovascular disease and neurodegenerative and neuroinflammatory disorders, such as multiple sclerosis (MS). In the last decade there has been a major interest in the involvement of ROS in MS pathogenesis and evidence is emerging that free radicals play a key role in various processes underlying MS pathology. To counteract ROS-mediated damage, the central nervous system is equipped with an intrinsic defense mechanism consisting of endogenous antioxidant enzymes. Here, we provide a comprehensive overview on the (sub)cellular origin of ROS during neuroinflammation as well as the detrimental effects of ROS in processing underlying MS lesion development and persistence. In addition, we will discuss clinical and experimental studies highlighting the therapeutic potential of antioxidant protection in the pathogenesis of MS.", "title": "" } ]
scidocsrr
4f7def054e9928937bb4e2a827dc1821
Rendering Subdivision Surfaces using Hardware Tessellation
[ { "docid": "5d9ed198f35312988a4b823c79ebb3a4", "text": "A quadtree algorithm is developed to triangulate deformed, intersecting parametric surfaces. The biggest problem with adaptive sampling is to guarantee that the triangulation is accurate within a given tolerance. A new method guarantees the accuracy of the triangulation, given a \"Lipschitz\" condition on the surface definition. The method constructs a hierarchical set of bounding volumes for the surface, useful for ray tracing and solid modeling operations. The task of adaptively sampling a surface is broken into two parts: a subdivision mechanism for recursively subdividing a surface, and a set of subdivision criteria for controlling the subdivision process.An adaptive sampling technique is said to be robust if it accurately represents the surface being sampled. A new type of quadtree, called a restricted quadtree, is more robust than the traditional unrestricted quadtree at adaptive sampling of parametric surfaces. Each sub-region in the quadtree is half the width of the previous region. The restricted quadtree requires that adjacent regions be the same width within a factor of two, while the traditional quadtree makes no restriction on neighbor width. Restricted surface quadtrees are effective at recursively sampling a parametric surface. Quadtree samples are concentrated in regions of high curvature, and along intersection boundaries, using several subdivision criteria. Silhouette subdivision improves the accuracy of the silhouette boundary when a viewing transformation is available at sampling time. The adaptive sampling method is more robust than uniform sampling, and can be more efficient at rendering deformed, intersecting parametric surfaces.", "title": "" }, { "docid": "9c2e89bad3ca7b7416042f95bf4f4396", "text": "We present a simple and computationally efficient algorithm for approximating Catmull-Clark subdivision surfaces using a minimal set of bicubic patches. For each quadrilateral face of the control mesh, we construct a geometry patch and a pair of tangent patches. The geometry patches approximate the shape and silhouette of the Catmull-Clark surface and are smooth everywhere except along patch edges containing an extraordinary vertex where the patches are C0. To make the patch surface appear smooth, we provide a pair of tangent patches that approximate the tangent fields of the Catmull-Clark surface. These tangent patches are used to construct a continuous normal field (through their cross-product) for shading and displacement mapping. Using this bifurcated representation, we are able to define an accurate proxy for Catmull-Clark surfaces that is efficient to evaluate on next-generation GPU architectures that expose a programmable tessellation unit.", "title": "" } ]
[ { "docid": "90fa2211106f4a8e23c5a9c782f1790e", "text": "Page layout is dominant in many genres of physical documents, but it is frequently overlooked when texts are digitised. Its presence is largely determined by available technologies and skills: If no provision is made for creating, preserving, or describing layout, then it tends not to be created, preserved or described. However, I argue, the significance and utility of layout for readers is such that it will survive or re-emerge. I review how layout has been treated in the literature of graphic design and linguistics, and consider its role as a memory tool. I distinguish between fixed, flowed, fugitive and fragmented pages, determined not only by authorial intent but also by technical constraints. Finally, I describe graphic literacy as a component of functional literacy and suggest that corresponding graphic literacies are needed not only by readers, but by creators of documents and by the information management technologies that produce, deliver, and store them.", "title": "" }, { "docid": "327a681898f6f39ae98321643e06fba1", "text": "Adversarial training (AT) is a regularization method that can be used to improve the robustness of neural network methods by adding small perturbations in the training data. We show how to use AT for the tasks of entity recognition and relation extraction. In particular, we demonstrate that applying AT to a general purpose baseline model for jointly extracting entities and relations, allows improving the stateof-the-art effectiveness on several datasets in different contexts (i.e., news, biomedical, and real estate data) and for different languages (English and Dutch).", "title": "" }, { "docid": "297d95a81658b3d50bf3aff5bcbf7047", "text": "In this paper, we introduce a new large-scale face dataset named VGGFace2. The dataset contains 3.31 million images of 9131 subjects, with an average of 362.6 images for each subject. Images are downloaded from Google Image Search and have large variations in pose, age, illumination, ethnicity and profession (e.g. actors, athletes, politicians). The dataset was collected with three goals in mind: (i) to have both a large number of identities and also a large number of images for each identity; (ii) to cover a large range of pose, age and ethnicity; and (iii) to minimise the label noise. We describe how the dataset was collected, in particular the automated and manual filtering stages to ensure a high accuracy for the images of each identity. To assess face recognition performance using the new dataset, we train ResNet-50 (with and without Squeeze-and-Excitation blocks) Convolutional Neural Networks on VGGFace2, on MS-Celeb-1M, and on their union, and show that training on VGGFace2 leads to improved recognition performance over pose and age. Finally, using the models trained on these datasets, we demonstrate state-of-the-art performance on the IJB-A and IJB-B face recognition benchmarks, exceeding the previous state-of-the-art by a large margin. The dataset and models are publicly available.", "title": "" }, { "docid": "2b34bd00f114ddd7758bf4878edcab45", "text": "This paper considers an UWB balun optimized for a frequency band from 6 to 8.5 GHz. The balun provides a transition from unbalanced coplanar waveguide (CPW) to balanced coplanar stripline (CPS), which is suitable for feeding broadband coplanar antennas such as Vivaldi or bow-tie antennas. 
It is shown that applying a solid ground plane under the CPS-to-CPS transition enables decreasing its area by a factor of 4.7. Such a compact balun can be used for feeding uniplanar antennas, while significantly saving substrate area. Several transition configurations have been fabricated for single and double-layer configurations. They have been verified by comparison with results both from a full-wave electromagnetic (EM) simulation and experimental measurements.", "title": "" }, { "docid": "17ebf9f15291a3810d57771a8c669227", "text": "We describe preliminary work toward applying a goal reasoning agent for controlling an underwater vehicle in a partially observable, dynamic environment. In preparation for upcoming at-sea tests, our investigation focuses on a notional scenario wherein an autonomous underwater vehicle pursuing a survey goal unexpectedly detects the presence of a potentially hostile surface vessel. Simulations suggest that Goal Driven Autonomy can successfully reason about this scenario using only the limited computational resources typically available on underwater robotic platforms.", "title": "" }, { "docid": "e377063b8fe2d8a12b7c894e11a530e3", "text": "This paper aims at learning to score the figure skating sports videos. To address this task, we propose a deep architecture that includes two complementary components, i.e., Self-Attentive LSTM and Multi-scale Convolutional Skip LSTM. These two components can efficiently learn the local and global sequential information in each video. Furthermore, we present a large-scale figure skating sports video dataset – FisV dataset. This dataset includes 500 figure skating videos with the average length of 2 minutes and 50 seconds. Each video is annotated by two scores of nine different referees, i.e., Total Element Score (TES) and Total Program Component Score (PCS). Our proposed model is validated on FisV and MIT-skate datasets. The experimental results show the effectiveness of our models in learning to score the figure skating videos.", "title": "" }, { "docid": "550070e6bc24986fbc30c58e2171c227", "text": "Detection of anomalous trajectories is an important problem in the surveillance domain. Various algorithms based on learning of normal trajectory patterns have been proposed for this problem. Yet, these algorithms typically suffer from one or more limitations: They are not designed for sequential analysis of incomplete trajectories or online learning based on an incrementally updated training set. Moreover, they typically involve tuning of many parameters, including ad-hoc anomaly thresholds, and may therefore suffer from overfitting and poorly-calibrated alarm rates. In this article, we propose and investigate the Sequential Hausdorff Nearest-Neighbour Conformal Anomaly Detector (SHNN-CAD) for online learning and sequential anomaly detection in trajectories. This is a parameter-light algorithm that offers a well-founded approach to the calibration of the anomaly threshold. The discords algorithm, originally proposed by Keogh et al., is another parameter-light anomaly detection algorithm that has previously been shown to have good classification performance on a wide range of time-series datasets, including trajectory data. We implement and investigate the performance of SHNN-CAD and the discords algorithm on four different labelled trajectory datasets. 
The results show that SHNN-CAD achieves competitive classification performance with minimum parameter tuning during unsupervised online learning and sequential anomaly detection in trajectories.", "title": "" }, { "docid": "8085ffe018b09505464547242b2e3c21", "text": "Reducible flow graphs occur naturally in connection with flowcharts of computer programs and are used extensively for code optimization and global data flow analysis. In this paper we present an O(n² log(n²/m)) algorithm for finding a maximum cycle packing in any weighted reducible flow graph with n vertices and m arcs; our algorithm heavily relies on Ramachandran's earlier work concerning reducible flow graphs.", "title": "" }, { "docid": "9593712906aa8272716a7fe5b482b91d", "text": "User stories are a widely used notation for formulating requirements in agile development projects. Despite their popularity in industry, little to no academic work is available on assessing their quality. The few existing approaches are too generic or employ highly qualitative metrics. We propose the Quality User Story Framework, consisting of 14 quality criteria that user story writers should strive to conform to. Additionally, we introduce the conceptual model of a user story, which we rely on to design the AQUSA software tool. AQUSA aids requirements engineers in turning raw user stories into higher-quality ones by exposing defects and deviations from good practice in user stories. We evaluate our work by applying the framework and a prototype implementation to three user story sets from industry.", "title": "" }, { "docid": "4805f0548cb458b7fad623c07ab7176d", "text": "This paper presents a unified control framework for controlling a quadrotor tail-sitter UAV. The most salient feature of this framework is its capability of uniformly treating the hovering and forward flight, and enabling continuous transition between these two modes, depending on the commanded velocity. The key part of this framework is a nonlinear solver that solves for the proper attitude and thrust that produces the required acceleration set by the position controller in an online fashion. The planned attitude and thrust are then achieved by an inner attitude controller that is globally asymptotically stable. To characterize the aircraft aerodynamics, a full envelope wind tunnel test is performed on the full-scale quadrotor tail-sitter UAV. In addition to planning the attitude and thrust required by the position controller, this framework can also be used to analyze the UAV's equilibrium state (trimmed condition), especially when wind gust is present. Finally, simulation results are presented to verify the controller's capacity, and experiments are conducted to show the attitude controller's performance.", "title": "" }, { "docid": "3eb0ed6db613c94af266279bc38c1c28", "text": "We can better understand deep neural networks by identifying which features each of their neurons have learned to detect. To do so, researchers have created Deep Visualization techniques including activation maximization, which synthetically generates inputs (e.g. images) that maximally activate each neuron. A limitation of current techniques is that they assume each neuron detects only one type of feature, but we know that neurons can be multifaceted, in that they fire in response to many different types of features: for example, a grocery store class neuron must activate either for rows of produce or for a storefront. 
Previous activation maximization techniques constructed images without regard for the multiple different facets of a neuron, creating inappropriate mixes of colors, parts of objects, scales, orientations, etc. Here we introduce an algorithm that explicitly uncovers the multiple facets of each neuron by producing a synthetic visualization of each of the types of images that activate a neuron. We also introduce regularization methods that produce state-of-the-art results in terms of the interpretability of images obtained by activation maximization. By separately synthesizing each type of image a neuron fires in response to, the visualizations have more appropriate colors and coherent global structure. Multifaceted feature visualization thus provides a clearer and more comprehensive description of the role of each neuron. Figure 1. Top: Visualizations of 8 types of images (feature facets) that activate the same “grocery store” class neuron. Bottom: Example training set images that activate the same neuron, and resemble the corresponding synthetic image in the top panel.", "title": "" }, { "docid": "23a329c63f9a778e3ec38c25fa59748a", "text": "Expedia users who prefer the same types of hotels presumably share other commonalities (i.e., non-hotel commonalities) with each other. With this in mind, Kaggle challenged developers to recommend hotels to Expedia users. Armed with a training set containing data about 37 million Expedia users, we set out to do just that. Our machine-learning algorithms ranged from direct applications of material learned in class to multi-part algorithms with novel combinations of recommender system techniques. Kaggle’s benchmark for randomly guessing a user’s hotel cluster is 0.02260, and the mean average precision K = 5 value for naïve recommender systems is 0.05949. Our best combination of machine-learning algorithms achieved a figure just over 0.30. Our results provide insight into performing multi-class classification on data sets that lack linear structure.", "title": "" }, { "docid": "77d2255e0a2d77ea8b2682937b73cc7d", "text": "Recommendation plays an increasingly important role in our daily lives. Recommender systems automatically suggest to a user items that might be of interest to her. Recent studies demonstrate that information from social networks can be exploited to improve accuracy of recommendations. In this paper, we present a survey of collaborative filtering (CF) based social recommender systems. We provide a brief overview over the task of recommender systems and traditional approaches that do not use social network information. We then present how social network information can be adopted by recommender systems as additional input for improved accuracy. We classify CF-based social recommender systems into two categories: matrix factorization based social recommendation approaches and neighborhood based social recommendation approaches. For each category, we survey and compare several represen-", "title": "" }, { "docid": "4e9005d6f8e1ddcd8d160c66cc61ab41", "text": "Architectural tactics are decisions to efficiently solve quality attributes in software architecture. Security is a complex quality property due to its strong dependence on the application domain. 
However, the selection of security tactics in the definition of software architecture is guided informally and depends on the experience of the architect. This study presents a methodological approach to address and specify the quality attribute of security in architecture design applying security tactics. The approach is illustrated with a case study about a Tsunami Early Warning System.", "title": "" }, { "docid": "1f613fc1a2e7b29473cf0d3aa53cbb80", "text": "The visualization and analysis of dynamic social networks are challenging problems, demanding the simultaneous consideration of relational and temporal aspects. In order to follow the evolution of a network over time, we need to detect not only which nodes and which links change and when these changes occur, but also the impact they have on their neighbourhood and on the overall relational structure. Aiming to enhance the perception of structural changes at both the micro and the macro level, we introduce the change centrality metric. This novel metric, as well as a set of further metrics we derive from it, enable the pair wise comparison of subsequent states of an evolving network in a discrete-time domain. Demonstrating their exploitation to enrich visualizations, we show how these change metrics support the visual analysis of network dynamics.", "title": "" }, { "docid": "e0f88ddc85cfe4cdcbe761b85d2781d8", "text": "Intermodal Transportation Systems (ITS) are logistics networks integrating different transportation services, designed to move goods from origin to destination in a timely manner and using intermodal transportation means. This paper addresses the problem of the modeling and management of ITS at the operational level considering the impact that the new Information and Communication Technologies (ICT) tools can have on management and control of these systems. An effective ITS model at the operational level should focus on evaluating performance indices describing activities, resources and concurrency, by integrating information and financial flows. To this aim, ITS are regarded as discrete event systems and are modeled in a Petri net framework. We consider as a case study the ferry terminal of Trieste (Italy) that is described and simulated in different operative conditions characterized by different types of ICT solutions and information. The simulation results show that ICT have a huge potential for efficient real time management and operation of ITS, as well as an effective impact on the infrastructures.", "title": "" }, { "docid": "63b283d40abcccd17b4771535ac000e4", "text": "Developing agents to engage in complex goaloriented dialogues is challenging partly because the main learning signals are very sparse in long conversations. In this paper, we propose a divide-and-conquer approach that discovers and exploits the hidden structure of the task to enable efficient policy learning. First, given successful example dialogues, we propose the Subgoal Discovery Network (SDN) to divide a complex goal-oriented task into a set of simpler subgoals in an unsupervised fashion. We then use these subgoals to learn a multi-level policy by hierarchical reinforcement learning. We demonstrate our method by building a dialogue agent for the composite task of travel planning. Experiments with simulated and real users show that our approach performs competitively against a state-of-theart method that requires human-defined subgoals. 
Moreover, we show that the learned subgoals are often human comprehensible.", "title": "" }, { "docid": "83926511ab8ce222f02e96820c8feb68", "text": "The grounding system design for GIS indoor substation is proposed in this paper. The design concept of equipotential ground grids in substation building as well as connection of GIS enclosures to main ground grid is described. The main ground grid design is performed according to IEEE Std. 80-2000. The real case study of grounding system design for 120 MVA, 69-24 kV distribution substation in MEA's power system is demonstrated.", "title": "" }, { "docid": "d18faf207a0dbccc030e5dcc202949ab", "text": "This manuscript conducts a comparison on modern object detection systems in their ability to detect multiple maritime vessel classes. Three highly scoring algorithms from the Pascal VOC Challenge, Histogram of Oriented Gradients by Dalal and Triggs, Exemplar-SVM by Malisiewicz, and Latent-SVM with Deformable Part Models by Felzenszwalb, were compared to determine performance of recognition within a specific category rather than the general classes from the original challenge. In all cases, the histogram of oriented edges was used as the feature set and support vector machines were used for classification. A summary and comparison of the learning algorithms is presented and a new image corpus of maritime vessels was collected. Precision-recall results show improved recognition performance is achieved when accounting for vessel pose. In particular, the deformable part model has the best performance when considering the various components of a maritime vessel.", "title": "" }, { "docid": "b2cb59b7464c3d7ead4fe3d70410a49c", "text": "X-ray measurements of the hip joints of children, with special reference to the acetabular index, suggest that the upper standard deviation of normal comprises the borderline to a critical zone where extreme values of normal and pathologic hips were found together. Above the double standard deviation only severe dysplasias were present. Investigations of the shaft-neck angle and the degree of anteversion including the wide standard deviation demonstrate that it is very difficult to determine where these angles become pathologic. It is more important to look for the relationship between femoral head and acetabulum. A new measurement--the Hip Value is based on measurements of the Idelberg- Frank angle, the Wiberg angle and MZ-distance of decentralization. By statistical methods, normal and pathological joints can be separated as follows: in adult Hip Values, between 6 and 15 indicate a normal joint form; values between 16 and 21 indicate a slight deformation and values of 22 and above are indications of a severe deformation, in children in the normal range the Hip Value reaches 14; values of 15 and up are pathological.", "title": "" } ]
scidocsrr
d7d808b8f227180a5b507e274d286096
Almost Linear VC-Dimension Bounds for Piecewise Polynomial Networks
[ { "docid": "40b78c5378159e9cdf38275a773b8109", "text": "For a common class of artificial neural networks, the mean integrated squared error between the estimated network and a target function f is shown to be bounded by $${\\text{O}}\\left( {\\frac{{C_f^2 }}{n}} \\right) + O(\\frac{{ND}}{N}\\log N)$$ where n is the number of nodes, d is the input dimension of the function, N is the number of training observations, and C f is the first absolute moment of the Fourier magnitude distribution of f. The two contributions to this total risk are the approximation error and the estimation error. Approximation error refers to the distance between the target function and the closest neural network function of a given architecture and estimation error refers to the distance between this ideal network function and an estimated network function. With n ~ C f(N/(dlog N))1/2 nodes, the order of the bound on the mean integrated squared error is optimized to be O(C f((d/N)log N)1/2). The bound demonstrates surprisingly favorable properties of network estimation compared to traditional series and nonparametric curve estimation techniques in the case that d is moderately large. Similar bounds are obtained when the number of nodes n is not preselected as a function of C f (which is generally not known a priori), but rather the number of nodes is optimized from the observed data by the use of a complexity regularization or minimum description length criterion. The analysis involves Fourier techniques for the approximation error, metric entropy considerations for the estimation error, and a calculation of the index of resolvability of minimum complexity estimation of the family of networks.", "title": "" } ]
[ { "docid": "3e23069ba8a3ec3e4af942727c9273e9", "text": "This paper describes an automated tool called Dex (difference extractor) for analyzing syntactic and semantic changes in large C-language code bases. It is applied to patches obtained from a source code repository, each of which comprises the code changes made to accomplish a particular task. Dex produces summary statistics characterizing these changes for all of the patches that are analyzed. Dex applies a graph differencing algorithm to abstract semantic graphs (ASGs) representing each version. The differences are then analyzed to identify higher-level program changes. We describe the design of Dex, its potential applications, and the results of applying it to analyze bug fixes from the Apache and GCC projects. The results include detailed information about the nature and frequency of missing condition defects in these projects.", "title": "" }, { "docid": "990d811789fd5025d784a147facf9d07", "text": "1389-1286/$ see front matter 2012 Elsevier B.V http://dx.doi.org/10.1016/j.comnet.2012.06.016 ⇑ Corresponding author. Tel.: +216 96 819 500. E-mail addresses: olfa.gaddour@enis.rnu.tn (O isep.ipp.pt (A. Koubâa). IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) is a routing protocol specifically designed for Low power and Lossy Networks (LLN) compliant with the 6LoWPAN protocol. It currently shows up as an RFC proposed by the IETF ROLL working group. However, RPL has gained a lot of maturity and is attracting increasing interest in the research community. The absence of surveys about RPL motivates us to write this paper, with the objective to provide a quick introduction to RPL. In addition, we present the most relevant research efforts made around RPL routing protocol that pertain to its performance evaluation, implementation, experimentation, deployment and improvement. We also present an experimental performance evaluation of RPL for different network settings to understand the impact of the protocol attributes on the network behavior, namely in terms of convergence time, energy, packet loss and packet delay. Finally, we point out open research challenges on the RPL design. We believe that this survey will pave the way for interested researchers to understand its behavior and contributes for further relevant research works. 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "76dd20f0464ff42badc5fd4381eed256", "text": "C therapy (CBT) approaches are rooted in the fundamental principle that an individual’s cognitions play a significant and primary role in the development and maintenance of emotional and behavioral responses to life situations. In CBT models, cognitive processes, in the form of meanings, judgments, appraisals, and assumptions associated with specific life events, are the primary determinants of one’s feelings and actions in response to life events and thus either facilitate or hinder the process of adaptation. CBT includes a range of approaches that have been shown to be efficacious in treating posttraumatic stress disorder (PTSD). In this chapter, we present an overview of leading cognitive-behavioral approaches used in the treatment of PTSD. The treatment approaches discussed here include cognitive therapy/reframing, exposure therapies (prolonged exposure [PE] and virtual reality exposure [VRE]), stress inoculation training (SIT), eye movement desensitization and reprocessing (EMDR), and Briere’s selftrauma model (1992, 1996, 2002). 
In our discussion of each of these approaches, we include a description of the key assumptions that frame the particular approach and the main strategies associated with the treatment. In the final section of this chapter, we review the growing body of research that has evaluated the effectiveness of cognitive-behavioral treatments for PTSD.", "title": "" }, { "docid": "1b76b9d3f1326e8f6522f3cdd2c276bb", "text": "Classifier has been widely applied in machine learning, such as pattern recognition, medical diagnosis, credit scoring, banking and weather prediction. Because of the limited local storage at user side, data and classifier has to be outsourced to cloud for storing and computing. However, due to privacy concerns, it is important to preserve the confidentiality of data and classifier in cloud computing because the cloud servers are usually untrusted. In this work, we propose a framework for privacy-preserving outsourced classification in cloud computing (POCC). Using POCC, an evaluator can securely train a classification model over the data encrypted with different public keys, which are outsourced from the multiple data providers. We prove that our scheme is secure in the semi-honest model", "title": "" }, { "docid": "6d2d9de5db5b03a98a26efc8453588d8", "text": "In this paper we describe a system for use on a mobile robot that detects potential loop closures using both the visual and spatial appearance of the local scene. Loop closing is the act of correctly asserting that a vehicle has returned to a previously visited location. It is an important component in the search to make SLAM (Simultaneous Localization and Mapping) the reliable technology it should be. Paradoxically, it is hardest in the presence of substantial errors in vehicle pose estimates which is exactly when it is needed most. The contribution of this paper is to show how a principled and robust description of local spatial appearance (using laser rangefinder data) can be combined with a purely camera based system to produce superior performance. Individual spatial components (segments) of the local structure are described using a rotationally invariant shape descriptor and salient aspects thereof, and entropy as measure of their innate complexity. Comparisons between scenes are made using relative entropy and by examining the mutual arrangement of groups of segments. We show the inclusion of spatial information allows the resolution of ambiguities stemming from repetitive visual artifacts in urban settings. Importantly the method we present is entirely independent of the navigation and or mapping process and so is entirely unaffected by gross errors in pose estimation.", "title": "" }, { "docid": "4f7c1a965bcde03dedf1702c85b2ce77", "text": "Strategic managers are consistently faced with the decision of how to allocate scarce corporate resources in an environment that is placing more and more pressures on them. Recent scholarship in strategic management suggests that many of these pressures come directly from sources associated with social issues in management, rather than traditional arenas of strategic management. Using a greatly-improved source of data on corporate social performance, this paper reports the results of a rigorous study of the empirical linkages between financial and social performance. CSP is found to be positively associated with prior financial performance, supporting the theory that slack resource availability and CSP are positively related. 
CSP is also found to be positively associated with future financial performance, supporting the theory that good management and CSP are positively related. Post-print version of an article published in Strategic Management Journal 18(4): 303-319 (1997 April). doi: 10.1002/(SICI)1097-0266(199704)18:4<303::AID-SMJ869>3.0.CO;2-G", "title": "" }, { "docid": "02621546c67e6457f350d0192b616041", "text": "Binary embedding of high-dimensional data requires long codes to preserve the discriminative power of the input space. Traditional binary coding methods often suffer from very high computation and storage costs in such a scenario. To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix. The circulant structure enables the use of Fast Fourier Transformation to speed up the computation. Compared to methods that use unstructured matrices, the proposed method improves the time complexity from O(d²) to O(d log d), and the space complexity from O(d²) to O(d) where d is the input dimensionality. We also propose a novel time-frequency alternating optimization to learn data-dependent circulant projections, which alternatively minimizes the objective in original and Fourier domains. We show by extensive experiments that the proposed approach gives much better performance than the state-of-the-art approaches for fixed time, and provides much faster computation with no performance degradation for fixed number of bits.", "title": "" }, { "docid": "caaca962473382e40a08f90240cc88b6", "text": "Lysergic acid diethylamide (LSD) was synthesized in 1938 and its psychoactive effects discovered in 1943. It was used during the 1950s and 1960s as an experimental drug in psychiatric research for producing so-called \"experimental psychosis\" by altering neurotransmitter system and in psychotherapeutic procedures (\"psycholytic\" and \"psychedelic\" therapy). From the mid 1960s, it became an illegal drug of abuse with widespread use that continues today. With the entry of new methods of research and better study oversight, scientific interest in LSD has resumed for brain research and experimental treatments. Due to the lack of any comprehensive review since the 1950s and the widely dispersed experimental literature, the present review focuses on all aspects of the pharmacology and psychopharmacology of LSD. A thorough search of the experimental literature regarding the pharmacology of LSD was performed and the extracted results are given in this review. (Psycho-) pharmacological research on LSD was extensive and produced nearly 10,000 scientific papers. The pharmacology of LSD is complex and its mechanisms of action are still not completely understood. LSD is physiologically well tolerated and psychological reactions can be controlled in a medically supervised setting, but complications may easily result from uncontrolled use by layman. Actually there is new interest in LSD as an experimental tool for elucidating neural mechanisms of (states of) consciousness and there are recently discovered treatment options with LSD in cluster headache and with the terminally ill.", "title": "" }, { "docid": "c7d71b7bb07f62f4b47d87c9c4bae9b3", "text": "Smart contracts are full-fledged programs that run on blockchains (e.g., Ethereum, one of the most popular blockchains). In Ethereum, gas (in Ether, a cryptographic currency like Bitcoin) is the execution fee compensating the computing resources of miners for running smart contracts. 
However, we find that under-optimized smart contracts cost more gas than necessary, and therefore the creators or users will be overcharged. In this work, we conduct the first investigation on Solidity, the recommended compiler, and reveal that it fails to optimize gas-costly programming patterns. In particular, we identify 7 gas-costly patterns and group them to 2 categories. Then, we propose and develop GASPER, a new tool for automatically locating gas-costly patterns by analyzing smart contracts' bytecodes. The preliminary results on discovering 3 representative patterns from 4,240 real smart contracts show that 93.5%, 90.1% and 80% contracts suffer from these 3 patterns, respectively.", "title": "" }, { "docid": "7f9b7f50432d04968a1fb62855481eda", "text": "BACKGROUND/PURPOSE\nAccurate prenatal diagnosis of complex anatomic connections and associated anomalies has only been possible recently with the use of ultrasonography, echocardiography, and fetal magnetic resonance imaging (MRI). To assess the impact of improved antenatal diagnosis in the management and outcome of conjoined twins, the authors reviewed their experience with 14 cases.\n\n\nMETHODS\nA retrospective review of prenatally diagnosed conjoined twins referred to our institution from 1996 to present was conducted.\n\n\nRESULTS\nIn 14 sets of conjoined twins, there were 10 thoracoomphalopagus, 2 dicephalus tribrachius dipus, 1 ischiopagus, and 1 ischioomphalopagus. The earliest age at diagnosis was 9 weeks' gestation (range, 9 to 29; mean, 20). Prenatal imaging with ultrasonography, echocardiography, and ultrafast fetal MRI accurately defined the shared anatomy in all cases. Associated anomalies included cardiac malformations (11 of 14), congenital diaphragmatic hernia (4 of 14), abdominal wall defects (2 of 14), and imperforate anus (2 of 14). Three sets of twins underwent therapeutic abortion, 1 set of twins died in utero, and 10 were delivered via cesarean section at a mean gestational age of 34 weeks. There were 5 individual survivors in the series after separation (18%). In one case, in which a twin with a normal heart perfused the cotwin with a rudimentary heart, the ex utero intrapartum treatment procedure (EXIT) was utilized because of concern that the normal twin would suffer immediate cardiac decompensation at birth. This EXIT-to-separation strategy allowed prompt control of the airway and circulation before clamping the umbilical cord and optimized control over a potentially emergent situation, leading to survival of the normal cotwin. In 2 sets of twins in which each twin had a normal heart, tissue expanders were inserted before separation.\n\n\nCONCLUSIONS\nAdvances in prenatal diagnosis allow detailed, accurate evaluations of conjoined twins. Careful prenatal studies may uncover cases in which emergent separation at birth is lifesaving.", "title": "" }, { "docid": "58677916e11e6d5401b7396d117a517b", "text": "This work contributes to the development of a common framework for the discussion and analysis of dexterous manipulation across the human and robotic domains. An overview of previous work is first provided along with an analysis of the tradeoffs between arm and hand dexterity. A hand-centric and motion-centric manipulation classification is then presented and applied in four different ways. It is first discussed how the taxonomy can be used to identify a manipulation strategy. Then, applications for robot hand analysis and engineering design are explained. 
Finally, the classification is applied to three activities of daily living (ADLs) to distinguish the patterns of dexterous manipulation involved in each task. The same analysis method could be used to predict problem ADLs for various impairments or to produce a representative benchmark set of ADL tasks. Overall, the classification scheme proposed creates a descriptive framework that can be used to effectively describe hand movements during manipulation in a variety of contexts and might be combined with existing object centric or other taxonomies to provide a complete description of a specific manipulation task.", "title": "" }, { "docid": "b8b4e582fbcc23a5a72cdaee1edade32", "text": "In recent years, research into the mining of user check-in behavior for point-of-interest (POI) recommendations has attracted a lot of attention. Existing studies on this topic mainly treat such recommendations in a traditional manner—that is, they treat POIs as items and check-ins as ratings. However, users usually visit a place for reasons other than to simply say that they have visited. In this article, we propose an approach referred to as Urban POI-Walk (UPOI-Walk), which takes into account a user's social-triggered intentions (SI), preference-triggered intentions (PreI), and popularity-triggered intentions (PopI), to estimate the probability of a user checking-in to a POI. The core idea of UPOI-Walk involves building a HITS-based random walk on the normalized check-in network, thus supporting the prediction of POI properties related to each user's preferences. To achieve this goal, we define several user--POI graphs to capture the key properties of the check-in behavior motivated by user intentions. In our UPOI-Walk approach, we propose a new kind of random walk model—Dynamic HITS-based Random Walk—which comprehensively considers the relevance between POIs and users from different aspects. On the basis of similitude, we make an online recommendation as to the POI the user intends to visit. To the best of our knowledge, this is the first work on urban POI recommendations that considers user check-in behavior motivated by SI, PreI, and PopI in location-based social network data. Through comprehensive experimental evaluations on two real datasets, the proposed UPOI-Walk is shown to deliver excellent performance.", "title": "" }, { "docid": "7856e64f16a6b57d8f8743d94ea9f743", "text": "Unconsciousness is a fundamental component of general anesthesia (GA), but anesthesiologists have no reliable ways to be certain that a patient is unconscious. To develop EEG signatures that track loss and recovery of consciousness under GA, we recorded high-density EEGs in humans during gradual induction of and emergence from unconsciousness with propofol. The subjects executed an auditory task at 4-s intervals consisting of interleaved verbal and click stimuli to identify loss and recovery of consciousness. During induction, subjects lost responsiveness to the less salient clicks before losing responsiveness to the more salient verbal stimuli; during emergence they recovered responsiveness to the verbal stimuli before recovering responsiveness to the clicks. The median frequency and bandwidth of the frontal EEG power tracked the probability of response to the verbal stimuli during the transitions in consciousness. 
Loss of consciousness was marked simultaneously by an increase in low-frequency EEG power (<1 Hz), the loss of spatially coherent occipital alpha oscillations (8-12 Hz), and the appearance of spatially coherent frontal alpha oscillations. These dynamics reversed with recovery of consciousness. The low-frequency phase modulated alpha amplitude in two distinct patterns. During profound unconsciousness, alpha amplitudes were maximal at low-frequency peaks, whereas during the transition into and out of unconsciousness, alpha amplitudes were maximal at low-frequency nadirs. This latter phase-amplitude relationship predicted recovery of consciousness. Our results provide insights into the mechanisms of propofol-induced unconsciousness, establish EEG signatures of this brain state that track transitions in consciousness precisely, and suggest strategies for monitoring the brain activity of patients receiving GA.", "title": "" }, { "docid": "ac62d57dac1a363275ddf989881d194a", "text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.08.010 ⇑ Corresponding author. Address: College of De University, 1239 Siping Road, Shanghai 200092, PR 6598 3432. E-mail addresses: huchenliu@foxmaill.com (H.-C (L. Liu), liunan@cqjtu.edu.cn (N. Liu). Failure mode and effects analysis (FMEA) is a risk assessment tool that mitigates potential failures in systems, processes, designs or services and has been used in a wide range of industries. The conventional risk priority number (RPN) method has been criticized to have many deficiencies and various risk priority models have been proposed in the literature to enhance the performance of FMEA. However, there has been no literature review on this topic. In this study, we reviewed 75 FMEA papers published between 1992 and 2012 in the international journals and categorized them according to the approaches used to overcome the limitations of the conventional RPN method. The intention of this review is to address the following three questions: (i) Which shortcomings attract the most attention? (ii) Which approaches are the most popular? (iii) Is there any inadequacy of the approaches? The answers to these questions will give an indication of current trends in research and the best direction for future research in order to further address the known deficiencies associated with the traditional FMEA. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4deb101ba94ef958cfe84610f2abccc4", "text": "Iris recognition is considered to be the most reliable and accurate biometric identification system available. Iris recognition system captures an image of an individual’s eye, the iris in the image is then meant for the further segmentation and normalization for extracting its feature. The performance of iris recognition systems depends on the process of segmentation. Segmentation is used for the localization of the correct iris region in the particular portion of an eye and it should be done accurately and correctly to remove the eyelids, eyelashes, reflection and pupil noises present in iris region. In our paper we are using Daughman’s Algorithm segmentation method for Iris Recognition. Iris images are selected from the CASIA Database, then the iris and pupil boundary are detected from rest of the eye image, removing the noises. The segmented iris region was normalized to minimize the dimensional inconsistencies between iris regions by using Daugman’s Rubber Sheet Model. 
Then the features of the iris were encoded by convolving the normalized iris region with 1D Log-Gabor filters and phase quantizing the output in order to produce a bit-wise biometric template. The Hamming distance was chosen as a matching metric, which gave the measure of how many bits disagreed between the templates of the iris. Index Terms —Daughman’s Algorithm, Daugman’s Rubber Sheet Model, Hamming Distance, Iris Recognition, segmentation.", "title": "" }, { "docid": "ca70bf377f8823c2ecb1cdd607c064ec", "text": "To date, few studies have compared the effectiveness of topical silicone gels versus that of silicone gel sheets in preventing scars. In this prospective study, we compared the efficacy and the convenience of use of the 2 products. We enrolled 30 patients who had undergone a surgical procedure 2 weeks to 3 months before joining the study. These participants were randomly assigned to 2 treatment arms: one for treatment with a silicone gel sheet, and the other for treatment with a topical silicone gel. Vancouver Scar Scale (VSS) scores were obtained for all patients; in addition, participants completed scoring patient questionnaires 1 and 3 months after treatment onset. Our results reveal not only that no significant difference in efficacy exists between the 2 products but also that topical silicone gels are more convenient to use. While previous studies have advocated for silicone gel sheets as first-line therapies in postoperative scar management, we maintain that similar effects can be expected with topical silicone gel. The authors recommend that, when clinicians have a choice of silicone-based products for scar prevention, they should focus on each patient's scar location, lifestyle, and willingness to undergo scar prevention treatment.", "title": "" }, { "docid": "af6464d1e51cb59da7affc73977eed71", "text": "Recommender systems leverage both content and user interactions to generate recommendations that fit users' preferences. The recent surge of interest in deep learning presents new opportunities for exploiting these two sources of information. To recommend items we propose to first learn a user-independent high-dimensional semantic space in which items are positioned according to their substitutability, and then learn a user-specific transformation function to transform this space into a ranking according to the user's past preferences. An advantage of the proposed architecture is that it can be used to effectively recommend items using either content that describes the items or user-item ratings. We show that this approach significantly outperforms state-of-the-art recommender systems on the MovieLens 1M dataset.", "title": "" }, { "docid": "1b3c37f20cc341f50c7d12c425bc94af", "text": "Vertex is a Wrapper Induction system developed at Yahoo! for extracting structured records from template-based Web pages. To operate at Web scale, Vertex employs a host of novel algorithms for (1) Grouping similar structured pages in a Web site, (2) Picking the appropriate sample pages for wrapper inference, (3) Learning XPath-based extraction rules that are robust to variations in site structure, (4) Detecting site changes by monitoring sample pages, and (5) Optimizing editorial costs by reusing rules, etc. The system is deployed in production and currently extracts more than 250 million records from more than 200 Web sites. 
To the best of our knowledge, Vertex is the first system to do high-precision information extraction at Web scale.", "title": "" }, { "docid": "66638a2a66f6829f5b9ac72e4ace79ed", "text": "The Theory of Waste Management is a unified body of knowledge about waste and waste management, and it is founded on the expectation that waste management is to prevent waste from causing harm to human health and the environment and to promote resource use optimization. Waste Management Theory is to be constructed under the paradigm of Industrial Ecology, as Industrial Ecology is equally adaptable to incorporate waste minimization and/or resource use optimization goals and values.", "title": "" } ]
scidocsrr
64fb7af3a0293707c72f34f8fedd7fe5
Algorithmic Bias: From Discrimination Discovery to Fairness-aware Data Mining
[ { "docid": "18a524545090542af81e0a66df3a1395", "text": "What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process.\n When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the process, we propose making inferences based on the data it uses.\n We present four contributions. First, we link disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on how well the protected class can be predicted from the other attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.", "title": "" }, { "docid": "6c9acb831bc8dc82198aef10761506be", "text": "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. In this paper, the notion of discriminatory classification rules is introduced and studied. Providing a guarantee of non-discrimination is shown to be a non trivial task. A naive approach, like taking away all discriminatory attributes, is shown to be not enough when other background knowledge is available. Our approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge. An empirical assessment of the results on the German credit dataset is also provided.", "title": "" } ]
[ { "docid": "4835360fec2ca50355d71f0d0ba76cbc", "text": "The surge in global population is compelling a shift toward smart agriculture practices. This coupled with the diminishing natural resources, limited availability of arable land, increase in unpredictable weather conditions makes food security a major concern for most countries. As a result, the use of Internet of Things (IoT) and data analytics (DA) are employed to enhance the operational efficiency and productivity in the agriculture sector. There is a paradigm shift from use of wireless sensor network (WSN) as a major driver of smart agriculture to the use of IoT and DA. The IoT integrates several existing technologies, such as WSN, radio frequency identification, cloud computing, middleware systems, and end-user applications. In this paper, several benefits and challenges of IoT have been identified. We present the IoT ecosystem and how the combination of IoT and DA is enabling smart agriculture. Furthermore, we provide future trends and opportunities which are categorized into technological innovations, application scenarios, business, and marketability.", "title": "" }, { "docid": "2f0eb4a361ff9f09bda4689a1f106ff2", "text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes with the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of the Quranic quote as compared with original texts from the Quran. In this paper, we will concentrate mainly on discussing the Algorithm to verify the fundamental text for Quranic quotes.", "title": "" }, { "docid": "cfb665d0ca71289a4da834584604250b", "text": "This work is motivated by the engineering task of achieving a near state-of-the-art face recognition on a minimal computing budget running on an embedded system. Our main technical contribution centers around a novel training method, called Multibatch, for similarity learning, i.e., for the task of generating an invariant “face signature” through training pairs of “same” and “not-same” face images. The Multibatch method first generates signatures for a mini-batch of k face images and then constructs an unbiased estimate of the full gradient by relying on all k2 k pairs from the mini-batch. We prove that the variance of the Multibatch estimator is bounded by O(1/k2), under some mild conditions. In contrast, the standard gradient estimator that relies on random k/2 pairs has a variance of order 1/k. The smaller variance of the Multibatch estimator significantly speeds up the convergence rate of stochastic gradient descent. Using the Multibatch method we train a deep convolutional neural network that achieves an accuracy of 98.2% on the LFW benchmark, while its prediction runtime takes only 30msec on a single ARM Cortex A9 core. Furthermore, the entire training process took only 12 hours on a single Titan X GPU.", "title": "" }, { "docid": "0c9a76222f885b95f965211e555e16cd", "text": "In this paper we address the following question: “Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?”. An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. 
By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such an efficient optimizer during burn-in.", "title": "" }, { "docid": "4667b31c7ee70f7bc3709fc40ec6140f", "text": "This article presents a method for rectifying and stabilising video from cell-phones with rolling shutter (RS) cameras. Due to size constraints, cell-phone cameras have constant, or near constant focal length, making them an ideal application for calibrated projective geometry. In contrast to previous RS rectification attempts that model distortions in the image plane, we model the 3D rotation of the camera. We parameterise the camera rotation as a continuous curve, with knots distributed across a short frame interval. Curve parameters are found using non-linear least squares over inter-frame correspondences from a KLT tracker. By smoothing a sequence of reference rotations from the estimated curve, we can at a small extra cost, obtain a high-quality image stabilisation. Using synthetic RS sequences with associated ground-truth, we demonstrate that our rectification improves over two other methods. We also compare our video stabilisation with the methods in iMovie and Deshaker.", "title": "" }, { "docid": "f88235f1056d66c5dc188fcf747bf570", "text": "In this paper, we compare the differences between traditional Kelly Criterion and Vince's optimal f through backtesting actual financial transaction data. We apply a momentum trading strategy to the Taiwan Weighted Index Futures, and analyze its profit-and-loss vectors of Kelly Criterion and Vince's optimal f, respectively. Our numerical experiments demonstrate that there is nearly 90% chance that the difference gap between the bet ratio recommended by Kelly criterion and and Vince's optimal f lies within 2%. Therefore, in the actual transaction, the values from Kelly Criterion could be taken directly as the optimal bet ratio for funds control.", "title": "" }, { "docid": "c5928a67d0b8a6a1c40b7cad6ac03d16", "text": "Drug addiction represents a dramatic dysregulation of motivational circuits that is caused by a combination of exaggerated incentive salience and habit formation, reward deficits and stress surfeits, and compromised executive function in three stages. The rewarding effects of drugs of abuse, development of incentive salience, and development of drug-seeking habits in the binge/intoxication stage involve changes in dopamine and opioid peptides in the basal ganglia. The increases in negative emotional states and dysphoric and stress-like responses in the withdrawal/negative affect stage involve decreases in the function of the dopamine component of the reward system and recruitment of brain stress neurotransmitters, such as corticotropin-releasing factor and dynorphin, in the neurocircuitry of the extended amygdala. The craving and deficits in executive function in the so-called preoccupation/anticipation stage involve the dysregulation of key afferent projections from the prefrontal cortex and insula, including glutamate, to the basal ganglia and extended amygdala. 
Molecular genetic studies have identified transduction and transcription factors that act in neurocircuitry associated with the development and maintenance of addiction that might mediate initial vulnerability, maintenance, and relapse associated with addiction.", "title": "" }, { "docid": "10d14531df9190f5ffb217406fe8eb49", "text": "Web technology has enabled e-commerce. However, in our review of the literature, we found little research on how firms can better position themselves when adopting e-commerce for revenue generation. Drawing upon technology diffusion theory, we developed a conceptual model for assessing e-commerce adoption and migration, incorporating six factors unique to e-commerce. A series of propositions were then developed. Survey data of 1036 firms in a broad range of industries were collected and used to test our model. Our analysis based on multi-nominal logistic regression demonstrated that technology integration, web functionalities, web spending, and partner usage were significant adoption predictors. The model showed that these variables could successfully differentiate non-adopters from adopters. Further, the migration model demonstrated that web functionalities, web spending, and integration of externally oriented inter-organizational systems tend to be the most influential drivers in firms’ migration toward e-commerce, while firm size, partner usage, electronic data interchange (EDI) usage, and perceived obstacles were found to negatively affect ecommerce migration. This suggests that large firms, as well as those that have been relying on outsourcing or EDI, tended to be slow to migrate to the internet platform. # 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a2130c0316eea0fa510f381ea312b65e", "text": "A technique for building consistent 3D reconstructions from many views based on fitting a low rank matrix to a matrix with missing data is presented. Rank-four submatrices of minimal, or slightly larger, size are sampled and spans of their columns are combined to constrain a basis of the fitted matrix. The error minimized is expressed in terms of the original subspaces which leads to a better resistance to noise compared to previous methods. More than 90% of the missing data can be handled while finding an acceptable solution efficiently. Applications to 3D reconstruction using both affine and perspective camera models are shown. For the perspective model, a new linear method based on logarithms of positive depths from chirality is introduced to make the depths consistent with an overdetermined set of epipolar geometries. Results are shown for scenes and sequences of various types. Many images in open and closed sequences in narrow and wide base-line setups are reconstructed with reprojection errors around one pixel. It is shown that reconstructed cameras can be used to obtain dense reconstructions from epipolarly aligned images.", "title": "" }, { "docid": "3b4607a6b0135eba7c4bb0852b78dda9", "text": "Heart rate variability for the treatment of major depression is a novel, alternative approach that can offer symptom reduction with minimal-to-no noxious side effects. The following material will illustrate some of the work being conducted at our laboratory to demonstrate the efficacy of heart rate variability. 
Namely, results will be presented regarding our published work on an initial open-label study and subsequent results of a small, unfinished randomized controlled trial.", "title": "" }, { "docid": "b333be40febd422eae4ae0b84b8b9491", "text": "BACKGROUND\nRarely, basal cell carcinomas (BCCs) have the potential to become extensively invasive and destructive, a phenomenon that has led to the term \"locally advanced BCC\" (laBCC). We identified and described the diverse settings that could be considered \"locally advanced\".\n\n\nMETHODS\nThe panel of experts included oncodermatologists, dermatological and maxillofacial surgeons, pathologists, radiotherapists and geriatricians. During a 1-day workshop session, an interactive flow/sequence of questions and inputs was debated.\n\n\nRESULTS\nDiscussion of nine cases permitted us to approach consensus concerning what constitutes laBCC. The expert panel retained three major components for the complete assessment of laBCC cases: factors of complexity related to the tumour itself, factors related to the operability and the technical procedure, and factors related to the patient. Competing risks of death should be precisely identified. To ensure homogeneous multidisciplinary team (MDT) decisions in different clinical settings, the panel aimed to develop a practical tool based on the three components.\n\n\nCONCLUSION\nThe grid presented is not a definitive tool, but rather, it is a method for analysing the complexity of laBCC.", "title": "" }, { "docid": "3c98c5bd1d9a6916ce5f6257b16c8701", "text": "As financial time series are inherently noisy and non-stationary, it is regarded as one of the most challenging applications of time series forecasting. Due to the advantages of generalization capability in obtaining a unique solution, support vector regression (SVR) has also been successfully applied in financial time series forecasting. In the modeling of financial time series using SVR, one of the key problems is the inherent high noise. Thus, detecting and removing the noise are important but difficult tasks when building an SVR forecasting model. To alleviate the influence of noise, a two-stage modeling approach using independent component analysis (ICA) and support vector regression is proposed in financial time series forecasting. ICA is a novel statistical signal processing technique that was originally proposed to find the latent source signals from observed mixture signals without having any prior knowledge of the mixing mechanism. The proposed approach first uses ICA to the forecasting variables for generating the independent components (ICs). After identifying and removing the ICs containing the noise, the rest of the ICs are then used to reconstruct the forecasting variables which contain less noise and served as the input variables of the SVR forecasting model. In order to evaluate the performance of the proposed approach, the Nikkei 225 opening index and TAIEX closing index are used as illustrative examples. Experimental results show that the proposed model outperforms the SVR model with non-filtered forecasting variables and a random walk model.", "title": "" }, { "docid": "80d457b352362d2b72acb26ca5b8a382", "text": "Language experience shapes infants' abilities to process speech sounds, with universal phonetic discrimination abilities narrowing in the second half of the first year. Brain measures reveal a corresponding change in neural discrimination as the infant brain becomes selectively sensitive to its native language(s). 
Whether and how bilingual experience alters the transition to native language specific phonetic discrimination is important both theoretically and from a practical standpoint. Using whole head magnetoencephalography (MEG), we examined brain responses to Spanish and English syllables in Spanish-English bilingual and English monolingual 11-month-old infants. Monolingual infants showed sensitivity to English, while bilingual infants were sensitive to both languages. Neural responses indicate that the dual sensitivity of the bilingual brain is achieved by a slower transition from acoustic to phonetic sound analysis, an adaptive and advantageous response to increased variability in language input. Bilingual neural responses extend into the prefrontal and orbitofrontal cortex, which may be related to their previously described bilingual advantage in executive function skills. A video abstract of this article can be viewed at: https://youtu.be/TAYhj-gekqw.", "title": "" }, { "docid": "60b876a2065587fc7f152d452605dc14", "text": "Fillers are frequently used in beautifying procedures. Despite major advancements of the chemical and biological features of injected materials, filler-related adverse events may occur, and can substantially impact the clinical outcome. Filler granulomas become manifest as visible grains, nodules, or papules around the site of the primary injection. Early recognition and proper treatment of filler-related complications is important because effective treatment options are available. In this report, we provide a comprehensive overview of the differential diagnosis and diagnostics and develop an algorithm of successful therapy regimens.", "title": "" }, { "docid": "28641a6621a31bf720586e4c5980645b", "text": "This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant [20] of temporal ensembling [8], a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge [12]. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised", "title": "" }, { "docid": "663342554879c5464a7e1aff969339b7", "text": "Esthetic surgery of external female genitalia remains an uncommon procedure. This article describes a novel, de-epithelialized, labial rim flap technique for labia majora augmentation using de-epithelialized labia minora tissue otherwise to be excised as an adjunct to labia minora reduction. Ten patients were included in the study. The protruding segments of the labia minora were de-epithelialized with a fine scissors or scalpel instead of being excised, and a bulky section of subcutaneous tissue was obtained. Between the outer and inner surfaces of the labia minora, a flap with a subcutaneous pedicle was created in continuity with the de-epithelialized marginal tissue. A pocket was dissected in the labium majus, and the flap was transposed into the pocket to augment the labia majora. Mean patient age was 39.9 (±13.9) years, mean operation time was 60 min, and mean follow-up period was 14.5 (±3.4) months. 
There were no major complications (hematoma, wound dehiscence, infection) following surgery. No patient complained of postoperative difficulty with coitus or dyspareunia. All patients were satisfied with the final appearance. Several methods for labia minora reduction have been described. Auxiliary procedures are required with labia minora reduction for better results. Nevertheless, few authors have taken into account the final esthetic appearance of the whole female external genitalia. The described technique in this study is indicated primarily for mild atrophy of the labia majora with labia minora hypertrophy; the technique resulted in perfect patient satisfaction with no major complications or postoperative coital problems. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .", "title": "" }, { "docid": "29dab83f08d38702e09acec2f65346b3", "text": "This paper proposes a weakly- and self-supervised deep convolutional neural network (WSSDCNN) for contentaware image retargeting. Our network takes a source image and a target aspect ratio, and then directly outpues a retargeted image. Retargeting is performed through a shift reap, which is a pixet-wise mapping from the source to the target grid. Our method implicitly learns an attention map, which leads to r content-aware shift map for image retargeting. As a result, discriminative parts in an image are preserved, while background regions are adjusted seamlessly. In the training phase, pairs of an image and its image-level annotation are used to compute content and structure tosses. We demonstrate the effectiveness of our proposed method for a retargeting application with insightful analyses.", "title": "" }, { "docid": "189d0b173f8a9e0b3deb21398955dc3c", "text": "Do investments in customer satisfaction lead to excess returns? If so, are these returns associated with higher stock market risk? The empirical evidence presented in this article suggests that the answer to the first question is yes, but equally remarkable, the answer to the second question is no, suggesting that satisfied customers are economic assets with high returns/low risk. Although these results demonstrate stock market imperfections with respect to the time it takes for share prices to adjust, they are consistent with previous studies in marketing in that a firm’s satisfied customers are likely to improve both the level and the stability of net cash flows. The implication, implausible as it may seem in other contexts, is high return/low risk. Specifically, the authors find that customer satisfaction, as measured by the American Customer Satisfaction Index (ACSI), is significantly related to market value of equity. Yet news about ACSI results does not move share prices. This apparent inconsistency is the catalyst for examining whether excess stock returns might be generated as a result. The authors present two stock portfolios: The first is a paper portfolio that is back tested, and the second is an actual case. At low systematic risk, both outperform the market by considerable margins. In other words, it is possible to beat the market consistently by investing in firms that do well on the ACSI.", "title": "" }, { "docid": "d569902303b93274baf89527e666adc0", "text": "We present a novel sparse representation based approach for the restoration of clipped audio signals. 
In the proposed approach, the clipped signal is decomposed into overlapping frames and the declipping problem is formulated as an inverse problem, per audio frame. This problem is further solved by a constrained matching pursuit algorithm that exploits the sign pattern of the clipped samples and their maximal absolute value. Performance evaluation with a collection of music and speech signals demonstrates superior results compared to existing algorithms, over a wide range of clipping levels.", "title": "" }, { "docid": "9420760d6945440048cee3566ce96699", "text": "In this work, we develop a computer-vision-based fall prevention system for hospital ward applications. To prevent potential falls, once the event of a patient getting up from the bed is automatically detected, nursing staff are alerted immediately for assistance. For the detection task, we use an RGBD sensor (Microsoft Kinect). The geometric prior knowledge is exploited by identifying a set of task-specific feature channels, e.g., regions of interest. Extensive motion and shape features from both color and depth image sequences are extracted. Features from multiple modalities and channels are fused via a multiple kernel learning framework for training the event detector. Experimental results demonstrate the high accuracy and efficiency achieved by the proposed system.", "title": "" } ]
scidocsrr
c82dedb6f20d5cc6cc24882af6c00623
The REA-DSL: A Domain Specific Modeling Language for Business Models
[ { "docid": "8951e08b838294b61796717ad691378e", "text": "In order to open-up enterprise applications to e-businessand make them profitable for a communication with otherenterprise applications, a business model is needed showingthe business essentials of the e-commerce business caseto be developed. Currently there are two major businessmodeling techniques - e3-value and REA (Resource-Event-Agent). Whereas e3-value was designed for modeling valueexchanges within an e-business network of multiple businesspartners, the REA ontology assumes that, in the presence ofmoney and available prices, all multi-party collaborationsmay be decomposed into a set of corresponding binarycollaborations. This paper is a preliminary attempt to viewe3-value and REA used side-by-side to see where they cancomplement each other in coordinated use in the context ofmultiple-partner collaboration. A real life scenario from theprint media domain has been taken to proof our approach.", "title": "" }, { "docid": "3b778d25b51f444d5cdc327251e72999", "text": "must create the e-business information systems. This article presents a conceptual modeling approach to e-business—called e3-value—that is designed to help define how economic value is created and exchanged within a network of actors. Doing e-business well requires the formulation of an e-business model that will serve as the first step in requirements analysis for e-business information systems. The industry currently lacks adequate methods for formulating these kinds of requirements. Methods from the IT systems analysis domain generally have a strong technology bias and typically do not reflect business considerations very well. Meanwhile, approaches from the business sciences often lack the rigor needed for information systems development. A tighter integration of business and IT modeling would most certainly benefit the industry, because the integration of business and IT systems is already a distinct feature of e-business. This article shows some ways to achieve this kind of modeling integration. Our e3-value method is based on an economic valueoriented ontology that specifies what an e-business model is made of. In particular, it entails defining, deriving, and analyzing multi-enterprise relationships, e-business scenarios, and operations requirements in both qualitative and quantitative ways. Our e3-value approach offers distinct advantages over traditional nonintegrated modeling techniques. These advantages include better communication about the essentials of an e-business model and a more complete understanding of e-business operations and systems requirements through scenario analysis and quantification.1 The value viewpoint Requirements engineering entails information systems analysis from several distinct perspectives. Figure 1 shows what requirements perspectives are relevant to e-business design: the articulation of the economic value proposition (the e-business model), the layout of business processes that “operationalize” the e-business model, and the IT systems architecture that enables and supports the e-business processes. These perspectives provide a separation of concerns and help manage the complexity of requirements and design. Our emphasis on “the value viewpoint” is a distinguishing feature of our approach. There are already several good ways to represent business process and IT architectural models, but the industry lacks effective techniques to express and analyze the value viewpoint. 
We illustrate the use of the e3-value methodology with one of the e-business projects where we successfully applied our approach: provisioning a value-added news service. A newspaper, which we call the Amsterdam Times for the sake of the example, wants to offer to all its subscribers the ability to read articles online. But the newspaper does not want to pass on any additional costs to its customers. The idea is to finance the expense by telephone connection revenues, which the reader must pay to set up a telephone connection for Internet connectivity. This can be achieved by two very different e-business models: the terminating model and the originating model. Figures 2 and 3 illustrate these models.", "title": "" } ]
[ { "docid": "80655e659e9cf0456595259f2969fe42", "text": "The induction motor equivalent circuit parameters are required for many performance and planning studies involving induction motors. These parameters are typically calculated from standardized motor performance tests, such as the no load, full load, and locked rotor tests. However, standardized test data is not typically available to the end user. Alternatively, the equivalent circuit parameters may be estimated based on published performance data for the motor. This paper presents an iterative method for estimating the induction motor equivalent circuit parameters using only the motor nameplate data.", "title": "" }, { "docid": "9714636fcadfc7778cb3d01a5fb20e46", "text": "In this paper, a method for controlling multivariable processes is presented. The controller design is divided into two parts: firstly, a decoupling matrix is designed in order to minimize the interaction effects. Then, the controller design is obtained for the process + decoupler block. For this purpose, an iterative numeric algorithm, proposed by same authors, is used. The aim is to meet the design specifications for each loop independently. This sequential design method for multivariable decoupling and multiloop PID controller is applied to several examples from literature. Decentralized PID controller design, specifications analysis and time response simulations has been made using the TITO tool, a set of m functions written in Matlab. It can be obtained in web page http://www.uco.es/~in2vasef. Copyrigth  2002 IFAC.", "title": "" }, { "docid": "a8fcb09ef7d0bb08f9869ca8aca4a5d7", "text": "Visuospatial working memory and its involvement in arithmetic were examined in two groups of 7- to 11-year-olds: one comprising children described by teachers as displaying symptoms of nonverbal learning difficulties (N = 21), the other a control group without learning disabilities (N = 21). The two groups were matched for verbal abilities, age, gender, and sociocultural level. The children were presented with a visuospatial working memory battery of recognition tests involving visual, spatial-sequential and spatial-simultaneous processes, and two arithmetic tasks (number ordering and written calculations). The two groups were found to differ on some spatial tasks but not in the visual working memory tasks. On the arithmetic tasks, the children with nonverbal learning difficulties made more errors than controls in calculation and were slower in number ordering. A discriminant function analysis confirmed the crucial role of spatial-sequential working memory in distinguishing between the two groups. Results are discussed with reference to spatial working memory and arithmetic difficulties in nonverbal learning disabilities. Implications for the relationship between visuospatial working memory and arithmetic are also considered.", "title": "" }, { "docid": "d805dc116db48b644b18e409dda3976e", "text": "Based on previous cross-sectional findings, we hypothesized that weight loss could improve several hemostatic factors associated with cardiovascular disease. In a randomized controlled trial, moderately overweight men and women were assigned to one of four weight loss treatment groups or to a control group. Measurements of plasminogen activator inhibitor-1 (PAI-1) antigen, tissue-type plasminogen activator (t-PA) antigen, D-dimer antigen, factor VII activity, fibrinogen, and protein C antigens were made at baseline and after 6 months in 90 men and 88 women. 
Net treatment weight loss was 9.4 kg in men and 7.4 kg in women. There was no net change (p > 0.05) in D-dimer, fibrinogen, or protein C with weight loss. Significant (p < 0.05) decreases were observed in the combined treatment groups compared with the control group for mean PAI-1 (31% decline), t-PA antigen (24% decline), and factor VII (11% decline). Decreases in these hemostatic variables were correlated with the amount of weight lost and the degree that plasma triglycerides declined; these correlations were stronger in men than women. These findings suggest that weight loss can improve abnormalities in hemostatic factors associated with obesity.", "title": "" }, { "docid": "e90755afe850d597ad7b3f4b7e590b66", "text": "Privacy is considered to be a fundamental human right (Movius and Krup, 2009). Around the world this has led to a large amount of legislation in the area of privacy. Nearly all national governments have imposed local privacy legislation. In the United States several states have imposed their own privacy legislation. In order to maintain a manageable scope this paper only addresses European Union wide and federal United States laws. In addition several US industry (self) regulations are also considered. Privacy regulations in emerging technologies are surrounded by uncertainty. This paper aims to clarify the uncertainty relating to privacy regulations with respect to Cloud Computing and to identify the main open issues that need to be addressed for further research. This paper is based on existing literature and a series of interviews and questionnaires with various Cloud Service Providers (CSPs) that have been performed for the first author’s MSc thesis (Ruiter, 2009). The interviews and questionnaires resulted in data on privacy and security procedures from ten CSPs and while this number is by no means large enough to make any definite conclusions the results are, in our opinion, interesting enough to publish in this paper. The remainder of the paper is organized as follows: the next section gives some basic background on Cloud Computing. Section 3 provides", "title": "" }, { "docid": "fcfafe226a7ab72b5e18d524344400a3", "text": "This paper proposes several adjustments to the ISO 12233 slanted edge algorithm for estimating camera MTF. First, the Ridler-Calvard binary image segmentation method is used to find the line. Secondly, total least squares, rather than ordinary least squares, is used to compute the line parameters. Finally, the pixel values are projected in the reverse direction from the 1D array to the 2D image, rather than from the 2D image to the 1D array. Together, these changes yield an algorithm that exhibits significantly less variation than existing techniques when applied to real images. In particular, the proposed algorithm is largely invariant to the rotation angle of the edge as well as to the size of the image crop.", "title": "" }, { "docid": "737dda9cc50e5cf42523e6cadabf524e", "text": "Maintaining incisor alignment is an important goal of orthodontic retention and can only be guaranteed by placement of an intact, passive and permanent fixed retainer. Here we describe a reliable technique for bonding maxillary retainers and demonstrate all the steps necessary for both technician and clinician. The importance of increasing the surface roughness of the wire and teeth to be bonded, maintaining passivity of the retainer, especially during bonding, the use of a stiff wire and correct placement of the retainer are all discussed. 
Examples of adverse tooth movement from retainers with twisted and multistrand wires are shown.", "title": "" }, { "docid": "cde4d7457b949420ab90bdc894f40eb0", "text": "We study the problem of named entity recognition (NER) from electronic medical records, which is one of the most fundamental and critical problems for medical text mining. Medical records which are written by clinicians from different specialties usually contain quite different terminologies and writing styles. The difference of specialties and the cost of human annotation makes it particularly difficult to train a universal medical NER system. In this paper, we propose a labelaware double transfer learning framework (LaDTL) for cross-specialty NER, so that a medical NER system designed for one specialty could be conveniently applied to another one with minimal annotation efforts. The transferability is guaranteed by two components: (i) we propose label-aware MMD for feature representation transfer, and (ii) we perform parameter transfer with a theoretical upper bound which is also label aware. We conduct extensive experiments on 12 cross-specialty NER tasks. The experimental results demonstrate that La-DTL provides consistent accuracy improvement over strong baselines. Besides, the promising experimental results on non-medical NER scenarios indicate that LaDTL is potential to be seamlessly adapted to a wide range of NER tasks.", "title": "" }, { "docid": "95612aa090b77fc660279c5f2886738d", "text": "Healthy biological systems exhibit complex patterns of variability that can be described by mathematical chaos. Heart rate variability (HRV) consists of changes in the time intervals between consecutive heartbeats called interbeat intervals (IBIs). A healthy heart is not a metronome. The oscillations of a healthy heart are complex and constantly changing, which allow the cardiovascular system to rapidly adjust to sudden physical and psychological challenges to homeostasis. This article briefly reviews current perspectives on the mechanisms that generate 24 h, short-term (~5 min), and ultra-short-term (<5 min) HRV, the importance of HRV, and its implications for health and performance. The authors provide an overview of widely-used HRV time-domain, frequency-domain, and non-linear metrics. Time-domain indices quantify the amount of HRV observed during monitoring periods that may range from ~2 min to 24 h. Frequency-domain values calculate the absolute or relative amount of signal energy within component bands. Non-linear measurements quantify the unpredictability and complexity of a series of IBIs. The authors survey published normative values for clinical, healthy, and optimal performance populations. They stress the importance of measurement context, including recording period length, subject age, and sex, on baseline HRV values. They caution that 24 h, short-term, and ultra-short-term normative values are not interchangeable. They encourage professionals to supplement published norms with findings from their own specialized populations. Finally, the authors provide an overview of HRV assessment strategies for clinical and optimal performance interventions.", "title": "" }, { "docid": "f79e3a38e1120f5c3e9d9113bcb1f847", "text": "Classical numerical methods for solving partial differential equations suffer from the curse dimensionality mainly due to their reliance on meticulously generated spatio-temporal grids. 
Inspired by modern deep learning based techniques for solving forward and inverse problems associated with partial differential equations, we circumvent the tyranny of numerical discretization by devising an algorithm that is scalable to high-dimensions. In particular, we approximate the unknown solution by a deep neural network which essentially enables us to benefit from the merits of automatic differentiation. To train the aforementioned neural network we leverage the well-known connection between high-dimensional partial differential equations and forwardbackward stochastic differential equations. In fact, independent realizations of a standard Brownian motion will act as training data. We test the effectiveness of our approach for a couple of benchmark problems spanning a number of scientific domains including Black-Scholes-Barenblatt and HamiltonJacobi-Bellman equations, both in 100-dimensions.", "title": "" }, { "docid": "81aa60b514bb11efb9e137b8d13b92e8", "text": "Linguistic creativity is a marriage of form and content in which each works together to convey our meanings with concision, resonance and wit. Though form clearly influences and shapes our content, the most deft formal trickery cannot compensate for a lack of real insight. Before computers can be truly creative with language, we must first imbue them with the ability to formulate meanings that are worthy of creative expression. This is especially true of computer-generated poetry. If readers are to recognize a poetic turn-of-phrase as more than a superficial manipulation of words, they must perceive and connect with the meanings and the intent behind the words. So it is not enough for a computer to merely generate poem-shaped texts; poems must be driven by conceits that build an affective worldview. This paper describes a conceit-driven approach to computational poetry, in which metaphors and blends are generated for a given topic and affective slant. Subtle inferences drawn from these metaphors and blends can then drive the process of poetry generation. In the same vein, we consider the problem of generating witty insights from the banal truisms of common-sense knowledge bases. Ode to a Keatsian Turn Poetic licence is much more than a licence to frill. Indeed, it is not so much a licence as a contract, one that allows a speaker to subvert the norms of both language and nature in exchange for communicating real insights about some relevant state of affairs. Of course, poetry has norms and conventions of its own, and these lend poems a range of recognizably “poetic” formal characteristics. When used effectively, formal devices such as alliteration, rhyme and cadence can mold our meanings into resonant and incisive forms. However, even the most poetic devices are just empty frills when used only to disguise the absence of real insight. Computer models of poem generation must model more than the frills of poetry, and must instead make these formal devices serve the larger goal of meaning creation. Nonetheless, is often said that we “eat with our eyes”, so that the stylish presentation of food can subtly influence our sense of taste. So it is with poetry: a pleasing form can do more than enhance our recall and comprehension of a meaning – it can also suggest a lasting and profound truth. 
Experiments by McGlone & Tofighbakhsh (1999, 2000) lend empirical support to this so-called Keats heuristic, the intuitive belief – named for Keats’ memorable line “Beauty is truth, truth beauty” – that a meaning which is rendered in an aesthetically-pleasing form is much more likely to be perceived as truthful than if it is rendered in a less poetic form. McGlone & Tofighbakhsh demonstrated this effect by searching a book of proverbs for uncommon aphorisms with internal rhyme – such as “woes unite foes” – and by using synonym substitution to generate non-rhyming (and thus less poetic) variants such as “troubles unite enemies”. While no significant differences were observed in subjects’ ease of comprehension for rhyming/non-rhyming forms, subjects did show a marked tendency to view the rhyming variants as more truthful expressions of the human condition than the corresponding non-rhyming forms. So a well-polished poetic form can lend even a modestly interesting observation the lustre of a profound insight. An automated approach to poetry generation can exploit this symbiosis of form and content in a number of useful ways. It might harvest interesting perspectives on a given topic from a text corpus, or it might search its stores of commonsense knowledge for modest insights to render in immodest poetic forms. We describe here a system that combines both of these approaches for meaningful poetry generation. As shown in the sections to follow, this system – named Stereotrope – uses corpus analysis to generate affective metaphors for a topic on which it is asked to wax poetic. Stereotrope can be asked to view a topic from a particular affective stance (e.g., view love negatively) or to elaborate on a familiar metaphor (e.g. love is a prison). In doing so, Stereotrope takes account of the feelings that different metaphors are likely to engender in an audience. These metaphors are further integrated to yield tight conceptual blends, which may in turn highlight emergent nuances of a viewpoint that are worthy of poetic expression (see Lakoff and Turner, 1989). Stereotrope uses a knowledge-base of conceptual norms to anchor its understanding of these metaphors and blends. While these norms are the stuff of banal clichés and stereotypes, such as that dogs chase cats and cops eat donuts. we also show how Stereotrope finds and exploits corpus evidence to recast these banalities as witty, incisive and poetic insights. Mutual Knowledge: Norms and Stereotypes Samuel Johnson opined that “Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it.” Traditional approaches to the modelling of metaphor and other figurative devices have typically sought to imbue computers with the former (Fass, 1997). More recently, however, the latter kind has gained traction, with the use of the Web and text corpora to source large amounts of shallow knowledge as it is needed (e.g., Veale & Hao 2007a,b; Shutova 2010; Veale & Li, 2011). But the kind of knowledge demanded by knowledgehungry phenomena such as metaphor and blending is very different to the specialist “book” knowledge so beloved of Johnson. These demand knowledge of the quotidian world that we all tacitly share but rarely articulate in words, not even in the thoughtful definitions of Johnson’s dictionary. Similes open a rare window onto our shared expectations of the world. 
Thus, the as-as-similes “as hot as an oven”, “as dry as sand” and “as tough as leather” illuminate the expected properties of these objects, while the like-similes “crying like a baby”, “singing like an angel” and “swearing like a sailor” reflect intuitions of how these familiar entities are tacitly expected to behave. Veale & Hao (2007a,b) thus harvest large numbers of as-as-similes from the Web to build a rich stereotypical model of familiar ideas and their salient properties, while Özbal & Stock (2012) apply a similar approach on a smaller scale using Google’s query completion service. Fishelov (1992) argues convincingly that poetic and non-poetic similes are crafted from the same words and ideas. Poetic conceits use familiar ideas in non-obvious combinations, often with the aim of creating semantic tension. The simile-based model used here thus harvests almost 10,000 familiar stereotypes (drawing on a range of ~8,000 features) from both as-as and like-similes. Poems construct affective conceits, but as shown in Veale (2012b), the features of a stereotype can be affectively partitioned as needed into distinct pleasant and unpleasant perspectives. We are thus confident that a stereotype-based model of common-sense knowledge is equal to the task of generating and elaborating affective conceits for a poem. A stereotype-based model of common-sense knowledge requires both features and relations, with the latter showing how stereotypes relate to each other. It is not enough then to know that cops are tough and gritty, or that donuts are sweet and soft; our stereotypes of each should include the cliché that cops eat donuts, just as dogs chew bones and cats cough up furballs. Following Veale & Li (2011), we acquire inter-stereotype relationships from the Web, not by mining similes but by mining questions. As in Özbal & Stock (2012), we target query completions from a popular search service (Google), which offers a smaller, public proxy for a larger, zealously-guarded search query log. We harvest questions of the form “Why do Xs <relation> Ys”, and assume that since each relationship is presupposed by the question (so “why do bikers wear leathers” presupposes that everyone knows that bikers wear leathers), the triple of subject/relation/object captures a widely-held norm. In this way we harvest over 40,000 such norms from the Web. Generating Metaphors, N-Gram Style! The Google n-grams (Brants & Franz, 2006) is a rich source of popular metaphors of the form Target is Source, such as “politicians are crooks”, “Apple is a cult”, “racism is a disease” and “Steve Jobs is a god”. Let src(T) denote the set of stereotypes that are commonly used to describe a topic T, where commonality is defined as the presence of the corresponding metaphor in the Google n-grams. To find metaphors for proper-named entities, we also analyse n-grams of the form stereotype First [Middle] Last, such as “tyrant Adolf Hitler” and “boss Bill Gates”. 
Thus, e.g.: src(racism) = {problem, disease, joke, sin, poison, crime, ideology, weapon} src(Hitler) = {monster, criminal, tyrant, idiot, madman, vegetarian, racist, ...} Let typical(T) denote the set of properties and behaviors harvested for T from Web similes (see previous section), and let srcTypical(T) denote the aggregate set of properties and behaviors ascribable to T via the metaphors in src(T): (1) srcTypical(T) = ⋃M∈src(T) typical(M) We can generate conceits for a topic T by considering not just obvious metaphors for T, but metaphors of metaphors: (2) conceits(T) = src(T) ∪ ⋃M∈src(T) src(M) The features evoked by the conceit T as M are given by: (3) salient(T,M) = [srcTypical(T) ∪ typical(T)]", "title": "" }, { "docid": "8e5d286259c3b74b295e5bc1d867a5b2", "text": "We present an approach to multilingual grammar induction that exploits a phylogeny-structured model of parameter drift. Our method does not require any translated texts or token-level alignments. Instead, the phylogenetic prior couples languages at a parameter level. Joint induction in the multilingual model substantially outperforms independent learning, with larger gains both from more articulated phylogenies and from increasing numbers of languages. Across eight languages, the multilingual approach gives error reductions over the standard monolingual DMV averaging 21.1% and reaching as high as 39%.", "title": "" }, { "docid": "eee0bc6ee06dce38efbc89659771f720", "text": "In a data center, an IO from an application to distributed storage traverses not only the network, but also several software stages with diverse functionality. This set of ordered stages is known as the storage or IO stack. Stages include caches, hypervisors, IO schedulers, file systems, and device drivers. Indeed, in a typical data center, the number of these stages is often larger than the number of network hops to the destination. Yet, while packet routing is fundamental to networks, no notion of IO routing exists on the storage stack. The path of an IO to an endpoint is predetermined and hard-coded. This forces IO with different needs (e.g., requiring different caching or replica selection) to flow through a one-size-fits-all IO stack structure, resulting in an ossified IO stack. This paper proposes sRoute, an architecture that provides a routing abstraction for the storage stack. sRoute comprises a centralized control plane and “sSwitches” on the data plane. The control plane sets the forwarding rules in each sSwitch to route IO requests at runtime based on application-specific policies. A key strength of our architecture is that it works with unmodified applications and VMs. 
This paper shows significant benefits of customized IO routing to data center tenants (e.g., a factor of ten for tail IO latency, more than 60% better throughput for a customized replication protocol and a factor of two in throughput for customized caching).", "title": "" }, { "docid": "5116079b69aeb1858177429fabd10f80", "text": "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations at present lack geometric invariance, which limits their robustness for tasks such as classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (or MOP-CNN for short). This approach works by extracting CNN activations for local patches at multiple scales, followed by orderless VLAD pooling of these activations at each scale level and concatenating the result. This feature representation decisively outperforms global CNN activations and achieves state-of-the-art performance for scene classification on such challenging benchmarks as SUN397, MIT Indoor Scenes, and ILSVRC2012, as well as for instance-level retrieval on the Holidays dataset.", "title": "" }, { "docid": "72cd858344bb5e0a878dd05fc8d07044", "text": "This paper surveys quantum learning theory: the theoretical aspects of machine learning using quantum computers. We describe the main results known for three models of learning: exact learning from membership queries, and Probably Approximately Correct (PAC) and agnostic learning from classical or quantum examples.", "title": "" }, { "docid": "579c8fffc3a3de878beb7319b01c2a4e", "text": "This paper introduces AVSWAT, a GIS based hydrological system linking the Soil and Water Assessment Tool (SWAT) water quality model and ArcView Geographic Information System software. The main purpose of AVSWAT is the combined assessment of nonpoint and point pollution loading at the watershed scale. The GIS component of the system, in addition to the traditional functions of data acquisition, storage, organization and display, implements advanced analytical methods with enhanced flexibility to improve the hydrological characterization of a study watershed. Intuitive user friendly graphic interfaces, also part of the GIS component, have been developed to provide an efficient interaction with the model and the associated parameter databases, and ultimately to simplify water quality assessments, while maintaining and increasing their reliability. This is also supported by SWAT, the core of the system, a complex, conceptual, hydrologic, continuous model with spatially explicit parameterization, building upon the United States Department of Agriculture (USDA) modeling experience. A step-by-step example application for a watershed in Central Texas is also included to verify the capability and illustrate some of the characteristics of the system which has been adopted by many users around the world. 
", "title": "" }, { "docid": "cb66a49205c9914be88a7631ecc6c52a", "text": "BACKGROUND\nMidline facial clefts are rare and challenging deformities caused by failure of fusion of the medial nasal prominences. These anomalies vary in severity, and may include microform lines or midline lip notching, incomplete or complete labial clefting, nasal bifidity, or severe craniofacial bony and soft tissue anomalies with orbital hypertelorism and frontoethmoidal encephaloceles. In this study, the authors present 4 cases, classify the spectrum of midline cleft anomalies, and review our technical approaches to the surgical correction of midline cleft lip and bifid nasal deformities. Embryology and associated anomalies are discussed.\n\n\nMETHODS\nThe authors retrospectively reviewed our experience with 4 cases of midline cleft lip with and without nasal deformities of varied complexity. In addition, a comprehensive literature search was performed, identifying studies published relating to midline cleft lip and/or bifid nose deformities. Our assessment of the anomalies in our series, in conjunction with published reports, was used to establish a 5-tiered classification system. Technical approaches and clinical reports are described.\n\n\nRESULTS\nFunctional and aesthetic anatomic correction was successfully achieved in each case without complication. A classification and treatment strategy for the treatment of midline cleft lip and bifid nose deformity is presented.\n\n\nCONCLUSIONS\nThe successful treatment of midline cleft lip and bifid nose deformities first requires the identification and classification of the wide variety of anomalies. With exposure of abnormal nasolabial anatomy, the excision of redundant skin and soft tissue, anatomic approximation of cartilaginous elements, orbicularis oris muscle repair, and craniofacial osteotomy and reduction as indicated, a single-stage correction of midline cleft lip and bifid nasal deformity can be safely and effectively achieved.", "title": "" }, { "docid": "216f97a97d240456d36ec765fd45739e", "text": "This paper explores the growing trend of using mobile technology in university classrooms, exploring the use of tablets in particular, to identify learning benefits faced by students. Students, acting on their efficacy beliefs, make decisions regarding technology's influence in improving their education. We construct a theoretical model in which internal and external factors affect a student's self-efficacy which in turn affects the extent of adoption of a device for educational purposes. Through qualitative survey responses of university students who were given an Apple iPad to keep for the duration of a university course we find high levels of self-efficacy leading to positive views of the technology's learning enhancement capabilities. Student observations on the practicality of the technology, off-topic use and its effects, communication, content, and perceived market advantage of using a tablet are also explored.", "title": "" }, { "docid": "6bbbddca9ba258afb25d6e8af9bfec82", "text": "With the ever increasing popularity of electronic commerce, the evaluation of antecedents and of customer satisfaction have become very important for the cyber shopping store (CSS) and for researchers. The various models of customer satisfaction that researchers have provided so far are mostly based on the traditional business channels and thus may not be appropriate for CSSs. 
This research has employed case and survey methods to study the antecedents of customer satisfaction. Through case methods, a research model with hypotheses is developed. And through survey methods, the relationships between antecedents and satisfaction are further examined and analyzed. We find five antecedents of customer satisfaction to be more appropriate for online shopping on the Internet. Among them, homepage presentation is a new and unique antecedent which has not existed in traditional marketing.", "title": "" } ]
scidocsrr
badd6e36d6833cb2ccd3e2bf595608c7
Understanding User Revisions When Using Information Systems Features: Adaptive System Use and Triggers
[ { "docid": "586d89b6d45fd49f489f7fb40c87eb3a", "text": "Little research has examined the impacts of enterprise resource planning (ERP) systems implementation on job satisfaction. Based on a 12-month study of 2,794 employees in a telecommunications firm, we found that ERP system implementation moderated the relationships between three job characteristics (skill variety, autonomy, and feedback) and job satisfaction. Our findings highlight the key role that ERP system implementation can have in altering wellestablished relationships in the context of technology-enabled organizational change situations. This work also extends research on technology diffusion by moving beyond a focus on technology-centric outcomes, such as system use, to understanding broader job outcomes. Carol Saunders was the accepting senior editor for this paper.", "title": "" } ]
[ { "docid": "d310779b1006f90719a0ece3cf2583b2", "text": "While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model’s decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models.", "title": "" }, { "docid": "7dcba854d1f138ab157a1b24176c2245", "text": "Essential oils distilled from members of the genus Lavandula have been used both cosmetically and therapeutically for centuries with the most commonly used species being L. angustifolia, L. latifolia, L. stoechas and L. x intermedia. Although there is considerable anecdotal information about the biological activity of these oils much of this has not been substantiated by scientific or clinical evidence. Among the claims made for lavender oil are that is it antibacterial, antifungal, carminative (smooth muscle relaxing), sedative, antidepressive and effective for burns and insect bites. In this review we detail the current state of knowledge about the effect of lavender oils on psychological and physiological parameters and its use as an antimicrobial agent. Although the data are still inconclusive and often controversial, there does seem to be both scientific and clinical data that support the traditional uses of lavender. However, methodological and oil identification problems have severely hampered the evaluation of the therapeutic significance of much of the research on Lavandula spp. These issues need to be resolved before we have a true picture of the biological activities of lavender essential oil.", "title": "" }, { "docid": "56d295950edf9503d89d891f7c1b361f", "text": "This paper describes the discipline of distance metric learning, a branch of machine learning that aims to learn distances from the data. Distance metric learning can be useful to improve similarity learning algorithms, and also has applications in dimensionality reduction. We describe the distance metric learning problem and analyze its main mathematical foundations. We discuss some of the most popular distance metric learning techniques used in classification, showing their goals and the required information to understand and use them. Furthermore, we present a Python package that collects a set of 17 distance metric learning techniques explained in this paper, with some experiments to evaluate the performance of the different algorithms. Finally, we discuss several possibilities of future work in this topic.", "title": "" }, { "docid": "d6dadf93c1a51be67f67a7fb8fdb9b68", "text": "Recent advances in quantum computing seem to suggest it is only a matter of time before general quantum computers become a reality. 
Because all widely used cryptographic constructions rely on the hardness of problems that can be solved efficiently using known quantum algorithms, quantum computers will have a profound impact on the field of cryptography. One such construction that will be broken by quantum computers is elliptic curve cryptography, which is used in blockchain applications such as bitcoin for digital signatures. Hash-based signature schemes are a promising post-quantum secure alternative, but existing schemes such as XMSS and SPHINCS are impractical for blockchain applications because of their performance characteristics. We construct a quantum secure signature scheme for use in blockchain technology by combining a hash-based one-time signature scheme with Naor-Yung chaining. By exploiting the structure and properties of a blockchain we achieve smaller signatures and better performance than existing hash-based signature schemes. The proposed scheme supports both one-time and many-time key pairs, and is designed to be easily adopted into existing blockchain implementations.", "title": "" }, { "docid": "5656c77061a3f678172ea01e226ede26", "text": "BACKGROUND\nIn 2010, overweight and obesity were estimated to cause 3·4 million deaths, 3·9% of years of life lost, and 3·8% of disability-adjusted life-years (DALYs) worldwide. The rise in obesity has led to widespread calls for regular monitoring of changes in overweight and obesity prevalence in all populations. Comparable, up-to-date information about levels and trends is essential to quantify population health effects and to prompt decision makers to prioritise action. We estimate the global, regional, and national prevalence of overweight and obesity in children and adults during 1980-2013.\n\n\nMETHODS\nWe systematically identified surveys, reports, and published studies (n=1769) that included data for height and weight, both through physical measurements and self-reports. We used mixed effects linear regression to correct for bias in self-reports. We obtained data for prevalence of obesity and overweight by age, sex, country, and year (n=19,244) with a spatiotemporal Gaussian process regression model to estimate prevalence with 95% uncertainty intervals (UIs).\n\n\nFINDINGS\nWorldwide, the proportion of adults with a body-mass index (BMI) of 25 kg/m(2) or greater increased between 1980 and 2013 from 28·8% (95% UI 28·4-29·3) to 36·9% (36·3-37·4) in men, and from 29·8% (29·3-30·2) to 38·0% (37·5-38·5) in women. Prevalence has increased substantially in children and adolescents in developed countries; 23·8% (22·9-24·7) of boys and 22·6% (21·7-23·6) of girls were overweight or obese in 2013. The prevalence of overweight and obesity has also increased in children and adolescents in developing countries, from 8·1% (7·7-8·6) to 12·9% (12·3-13·5) in 2013 for boys and from 8·4% (8·1-8·8) to 13·4% (13·0-13·9) in girls. In adults, estimated prevalence of obesity exceeded 50% in men in Tonga and in women in Kuwait, Kiribati, Federated States of Micronesia, Libya, Qatar, Tonga, and Samoa. Since 2006, the increase in adult obesity in developed countries has slowed down.\n\n\nINTERPRETATION\nBecause of the established health risks and substantial increases in prevalence, obesity has become a major global health challenge. Not only is obesity increasing, but no national success stories have been reported in the past 33 years. 
Urgent global action and leadership is needed to help countries to more effectively intervene.\n\n\nFUNDING\nBill & Melinda Gates Foundation.", "title": "" }, { "docid": "c8ef89eb90824b3d0f966c6f9b097d0b", "text": "Machine Learning and Inference methods have become ubiquitous in our attempt to induce more abstract representations of natural language text, visual scenes, and other messy, naturally occurring data, and support decisions that depend on it. However, learning models for these tasks is difficult partly because generating the necessary supervision signals for it is costly and does not scale. This paper describes several learning paradigms that are designed to alleviate the supervision bottleneck. It will illustrate their benefit in the context of multiple problems, all pertaining to inducing various levels of semantic representations from text. In particular, we discuss (i) Response Driven Learning of models, a learning protocol that supports inducing meaning representations simply by observing the model’s behavior in its environment, (ii) the exploitation of Incidental Supervision signals that exist in the data, independently of the task at hand, to learn models that identify and classify semantic predicates, and (iii) the use of weak supervision to combine simple models to support global decisions where joint supervision is not available. While these ideas are applicable in a range of Machine Learning driven fields, we will demonstrate it in the context of several natural language applications, from (cross-lingual) text classification, to Wikification, to semantic parsing.", "title": "" }, { "docid": "d3797817bcde1b16d35cc7efbc97953c", "text": "Biological time-keeping mechanisms have fascinated researchers since the movement of leaves with a daily rhythm was first described >270 years ago. The circadian clock confers a approximately 24-hour rhythm on a range of processes including leaf movements and the expression of some genes. Molecular mechanisms and components underlying clock function have been described in recent years for several animal and prokaryotic organisms, and those of plants are beginning to be characterized. The emerging model of the Arabidopsis clock has mechanistic parallels with the clocks of other model organisms, which consist of positive and negative feedback loops, but the molecular components appear to be unique to plants.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "f92a71e6094000ecf47ebd02bf4e5c4a", "text": "Exploding amounts of multimedia data increasingly require automatic indexing and classification, e.g. training classifiers to produce high-level features, or semantic concepts, chosen to represent image content, like car, person, etc. When changing the applied domain (i.e. from news domain to consumer home videos), the classifiers trained in one domain often perform poorly in the other domain due to changes in feature distributions. Additionally, classifiers trained on the new domain alone may suffer from too few positive training samples. 
Appropriately adapting data/models from an old domain to help classify data in a new domain is an important issue. In this work, we develop a new cross-domain SVM (CDSVM) algorithm for adapting previously learned support vectors from one domain to help classification in another domain. Better precision is obtained with almost no additional computational cost. Also, we give a comprehensive summary and comparative study of the state- of-the-art SVM-based cross-domain learning methods. Evaluation over the latest large-scale TRECVID benchmark data set shows that our CDSVM method can improve mean average precision over 36 concepts by 7.5%. For further performance gain, we also propose an intuitive selection criterion to determine which cross-domain learning method to use for each concept.", "title": "" }, { "docid": "ad6d21a36cc5500e4d8449525eae25ca", "text": "Human Activity Recognition is one of the attractive topics to develop smart interactive environment in which computing systems can understand human activities in natural context. Besides traditional approaches with visual data, inertial sensors in wearable devices provide a promising approach for human activity recognition. In this paper, we propose novel methods to recognize human activities from raw data captured from inertial sensors using convolutional neural networks with either 2D or 3D filters. We also take advantage of hand-crafted features to combine with learned features from Convolution-Pooling blocks to further improve accuracy for activity recognition. Experiments on UCI Human Activity Recognition dataset with six different activities demonstrate that our method can achieve 96.95%, higher than existing methods.", "title": "" }, { "docid": "ad49388ef64fd63e0f318a0097019fe2", "text": "We present an experimental study of IEEE 802.11n (high throughput extension to the 802.11 standard) using commodity wireless hardware. 802.11n introduces a variety of new mechanisms including physical layer diversity techniques, channel bonding and frame aggregation mechanisms. Using measurements from our testbed, we analyze the fundamental characteristics of 802.11n links and quantify the gains of each mechanism under diverse scenarios. We show that the throughput of an 802.11n link can be severely degraded (up ≈85%) in presence of an 802.11g link. Our results also indicate that increased amount of interference due to wider channel bandwidths can lead to throughput degradation. To this end, we characterize the nature of interference due to variable channel widths in 802.11n and show that careful modeling of interference is imperative in such scenarios. Further, as a reappraisal of previous work, we evaluate the effectiveness of MAC level diversity in the presence of physical layer diversity mechanisms introduced by 802.11n.", "title": "" }, { "docid": "2fc024a732681aea0945430894351394", "text": "Despite the increasing popularity of cloud services, ensuring the security and availability of data, resources and services remains an ongoing research challenge. Distributed denial of service (DDoS) attacks are not a new threat, but remain a major security challenge and are a topic of ongoing research interest. Mitigating DDoS attack in cloud presents a new dimension to solutions proffered in traditional computing due to its architecture and features. This paper reviews 96 publications on DDoS attack and defense approaches in cloud computing published between January 2009 and December 2015, and discusses existing research trends. 
A taxonomy and a conceptual cloud DDoS mitigation framework based on change point detection are presented. Future research directions are also outlined.", "title": "" }, { "docid": "728ea68ac1a50ae2d1b280b40c480aec", "text": "This paper presents a new metaprogramming library, CL ARRAY, that offers multiplatform and generic multidimensional data containers for C++ specifically adapted for parallel programming. The CL ARRAY containers are built around a new formalism for representing the multidimensional nature of data as well as the semantics of multidimensional pointers and contiguous data structures. We also present OCL ARRAY VIEW, a concept based on metaprogrammed enveloped objects that supports multidimensional transformations and multidimensional iterators designed to simplify and formalize the interfacing process between OpenCL APIs, standard template library (STL) algorithms and CL ARRAY containers. Our results demonstrate improved performance and energy savings over the three most popular container libraries available to the developer community for use in the context of multi-linear algebraic applications.", "title": "" }, { "docid": "48a476d5100f2783455fabb6aa566eba", "text": "Phylogenies are usually dated by calibrating interior nodes against the fossil record. This relies on indirect methods that, in the worst case, misrepresent the fossil information. Here, we contrast such node dating with an approach that includes fossils along with the extant taxa in a Bayesian total-evidence analysis. As a test case, we focus on the early radiation of the Hymenoptera, mostly documented by poorly preserved impression fossils that are difficult to place phylogenetically. Specifically, we compare node dating using nine calibration points derived from the fossil record with total-evidence dating based on 343 morphological characters scored for 45 fossil (4--20 complete) and 68 extant taxa. In both cases we use molecular data from seven markers (∼5 kb) for the extant taxa. Because it is difficult to model speciation, extinction, sampling, and fossil preservation realistically, we develop a simple uniform prior for clock trees with fossils, and we use relaxed clock models to accommodate rate variation across the tree. Despite considerable uncertainty in the placement of most fossils, we find that they contribute significantly to the estimation of divergence times in the total-evidence analysis. In particular, the posterior distributions on divergence times are less sensitive to prior assumptions and tend to be more precise than in node dating. The total-evidence analysis also shows that four of the seven Hymenoptera calibration points used in node dating are likely to be based on erroneous or doubtful assumptions about the fossil placement. With respect to the early radiation of Hymenoptera, our results suggest that the crown group dates back to the Carboniferous, ∼309 Ma (95% interval: 291--347 Ma), and diversified into major extant lineages much earlier than previously thought, well before the Triassic. [Bayesian inference; fossil dating; morphological evolution; relaxed clock; statistical phylogenetics.].", "title": "" }, { "docid": "16b9d7602e45da0bb47017d1516c95bb", "text": "Intranet is a term used to describe the use of Internet technologies internally within an organization rather than externally to connect to the global Internet. While the advancement and the sophistication of the intranet is progressing tremendously, research on intranet utilization is still very scant. 
This paper is an attempt to provide a conceptual understanding of the intranet utilization and the corresponding antecedents and impacts through the proposed conceptual model. Based on several research frameworks built through past research, the authors attempt to propose a framework for studying intranet utilization that is based on three constructs i.e. mode of utilizations, decision support and knowledge sharing. Three groups of antecedent variables namely intranet, organizational and individual characteristics are explored to determine their possible contribution to intranet utilization. In addition, the impacts of intranet utilization are also examined in terms of task productivity, task innovation and individual sense of accomplishments. Based on the proposed model, several propositions are formulated as a basis for the study that will follow.", "title": "" }, { "docid": "cff32690c2421b2ad94dea33f5e4479d", "text": "Heavy ion single-event effect (SEE) measurements on Xilinx Zynq-7000 are reported. Heavy ion susceptibility to Single-Event latchup (SEL), single event upsets (SEUs) of BRAM, configuration bits of FPGA and on chip memory (OCM) of the processor were investigated.", "title": "" }, { "docid": "418ebc0424128ec1a89d5e5292872124", "text": "Apocyni Veneti Folium (AVF) is a kind of staple traditional Chinese medicine with vast clinical consumption because of its positive effects. However, due to the habitats and adulterants, its quality is uneven. To control the quality of this medicinal herb, in this study, the quality of AVF was evaluated based on simultaneous determination of multiple bioactive constituents combined with multivariate statistical analysis. A reliable method based on ultra-fast liquid chromatography tandem triple quadrupole mass spectrometry (UFLC-QTRAP-MS/MS) was developed for the simultaneous determination of a total of 43 constituents, including 15 flavonoids, 6 organic acids, 13 amino acids, and 9 nucleosides in 41 Luobumaye samples from different habitats and commercial herbs. Furthermore, according to the contents of these 43 constituents, principal component analysis (PCA) was employed to classify and distinguish between AVF and its adulterants, leaves of Poacynum hendersonii (PHF), and gray relational analysis (GRA) was performed to evaluate the quality of the samples. The proposed method was successfully applied to the comprehensive quality evaluation of AVF, and all results demonstrated that the quality of AVF was higher than the PHF. This study will provide comprehensive information necessary for the quality control of AVF.", "title": "" }, { "docid": "46980b89e76bc39bf125f63ed9781628", "text": "In this paper, a design of miniaturized 3-way Bagley polygon power divider (BPD) is presented. The design is based on using non-uniform transmission lines (NTLs) in each arm of the divider instead of the conventional uniform ones. For verification purposes, a 3-way BPD is designed, simulated, fabricated, and measured. Besides suppressing the fundamental frequency's odd harmonics, a size reduction of almost 30% is achieved.", "title": "" }, { "docid": "25eea5205d1f8beaa8c4a857da5714bc", "text": "To backpropagate the gradients through discrete stochastic layers, we encode the true gradients into a multiplication between random noises and the difference of the same function of two different sets of discrete latent variables, which are correlated with these random noises. 
The expectations of that multiplication over iterations are zeros combined with spikes from time to time. To modulate the frequencies, amplitudes, and signs of the spikes to capture the temporal evolution of the true gradients, we propose the augment-REINFORCE-merge (ARM) estimator that combines data augmentation, the score-function estimator, permutation of the indices of latent variables, and variance reduction for Monte Carlo integration using common random numbers. The ARM estimator provides low-variance and unbiased gradient estimates for the parameters of discrete distributions, leading to state-of-the-art performance in both auto-encoding variational Bayes and maximum likelihood inference, for discrete latent variable models with one or multiple discrete stochastic layers.", "title": "" }, { "docid": "81f474cbd140935d93faf47af87a205b", "text": "The availability of food ingredient information in digital form is a major factor in modern information systems related to diet management and health issues. Although ingredient information is printed on food product labels, corresponding digital data is rarely available for the public. In this demo, we present the Mobile Food Information Scanner (MoFIS), a mobile user interface designed to enable users to semi-automatically extract ingredient lists from food product packaging.", "title": "" } ]
scidocsrr
9d4d1861a00d94986f1fed4bbbe06218
Analyzing User Activities, Demographics, Social Network Structure and User-Generated Content on Instagram
[ { "docid": "349f85e6ffd66d6a1dd9d9c6925d00bc", "text": "Wearable computers have the potential to act as intelligent agents in everyday life and assist the user in a variety of tasks, using context to determine how to act. Location is the most common form of context used by these agents to determine the user’s task. However, another potential use of location context is the creation of a predictive model of the user’s future movements. We present a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales. These locations are then incorporated into a Markov model that can be consulted for use with a variety of applications in both single–user and collaborative scenarios.", "title": "" } ]
[ { "docid": "733e379ecaab79ac328f55ccc2384b69", "text": "Introduction Since Beijing 1995, gender mainstreaming has heralded the beginning of a renewed effort to address what is seen as one of the roots of gender inequality: the genderedness of systems, procedures and organizations. In the definition of the Council of Europe, gender mainstreaming is the (re)organisation, improvement, development and evaluation of policy processes, so that a gender equality perspective is incorporated in all policies at all levels and at all stages, by the actors normally involved in policymaking. All member states and some candidate states of the European Union have started to implement gender mainstreaming. The 1997 Treaty of Amsterdam places equality between women and men among the explicit tasks of the European Union and obliges the EU to promote gender equality in all its tasks and activities. The Gender Mainstreaming approach that has been legitimated by this Treaty is backed by legislation and by positive action in favour of women (or the “under-represented sex”). Gender equality policies have not only been part and parcel of modernising action in the European Union, but can be expected to continue to be so (Rossili 2000). With regard to gender inequality, the EU has both a formal EU problem definition at the present time, and a formalised set of EU strategies. Problems in the implementation of gender equality policies abound, at both national and EU level. To give just one example, it took the Netherlands – usually very supportive of the EU –14 years to implement article 119 on Equal Pay (Van der Vleuten 2001). Moreover, it has been documented that overall EU action has run counter to its goal of gender equality. Overall EU action has weakened women’s social rights more seriously than men’s (Rossili 2000). The introduction of Gender Mainstreaming, the incorporation of gender and women’s concerns in all regular policymaking is meant to address precisely this problem of a contradiction between specific gender policies and regular EU policies. Yet, in the case of the Structural Funds, for instance, Gender Mainstreaming has been used to further reduce existing funds and incentives for gender equality (Rossili 2000). Against this backdrop, this paper will present an approach at studying divergences in policy frames around gender equality as one of the elements connected to implementation problems: the MAGEEQ project.", "title": "" }, { "docid": "2e864dcde57ea1716847f47977af0140", "text": "I focus on the role of case studies in developing causal explanations. I distinguish between the theoretical purposes of case studies and the case selection strategies or research designs used to advance those objectives. I construct a typology of case studies based on their purposes: idiographic (inductive and theory-guided), hypothesis-generating, hypothesis-testing, and plausibility probe case studies. I then examine different case study research designs, including comparable cases, most and least likely cases, deviant cases, and process tracing, with attention to their different purposes and logics of inference. I address the issue of selection bias and the “single logic” debate, and I emphasize the utility of multi-method research.", "title": "" }, { "docid": "ce402c150d74cbc954378ea7927dfa71", "text": "The study investigated the influence of extrinsic and intrinsic motivation on employees performance. Subjects for the study consisted of one hundred workers of Flour Mills of Nigeria PLC, Lagos. 
Data for the study were gathered through the administration of a self-designed questionnaire. The data collected were subjected to appropriate statistical analysis using Pearson Product Moment Correlation Coefficient, and all the findings were tested at 0.05 level of significance. The result obtained from the analysis showed that there existed relationship between extrinsic motivation and the performance of employees, while no relationship existed between intrinsic motivation and employees performance. On the basis of these findings, implications of the findings for future study were stated.", "title": "" }, { "docid": "b594a4fafc37a18773b1144dfdbb965d", "text": "Deep generative modelling for robust human body analysis is an emerging problem with many interesting applications, since it enables analysis-by-synthesis and unsupervised learning. However, the latent space learned by such models is typically not human-interpretable, resulting in less flexible models. In this work, we adopt a structured semi-supervised variational auto-encoder approach and present a deep generative model for human body analysis where the pose and appearance are disentangled in the latent space, allowing for pose estimation. Such a disentanglement allows independent manipulation of pose and appearance and hence enables applications such as pose-transfer without being explicitly trained for such a task. In addition, the ability to train in a semi-supervised setting relaxes the need for labelled data. We demonstrate the merits of our generative model on the Human3.6M and ChictopiaPlus datasets.", "title": "" }, { "docid": "20dd21215f9dc6bd125b2af53500614d", "text": "In this paper we present a novel method for deriving paraphrases during automatic MT evaluation using only the source and reference texts, which are necessary for the evaluation, and word and phrase alignment software. Using target language paraphrases produced through word and phrase alignment a number of alternative reference sentences are constructed automatically for each candidate translation. The method produces lexical and lowlevel syntactic paraphrases that are relevant to the domain in hand, does not use external knowledge resources, and can be combined with a variety of automatic MT evaluation system.", "title": "" }, { "docid": "9f184ba1cfe36fde398f896b1ce93745", "text": "http://dx.doi.org/10.1016/j.compag.2015.08.011 0168-1699/ 2015 Elsevier B.V. All rights reserved. ⇑ Corresponding author at: School of Information Technology, Indian Institute of Technology Kharagpur, India. E-mail addresses: tojha@sit.iitkgp.ernet.in (T. Ojha), smisra@sit.iitkgp.ernet.in (S. Misra), nsr@agfe.iitkgp.ernet.in (N.S. Raghuwanshi). Tamoghna Ojha a,b,⇑, Sudip Misra , Narendra Singh Raghuwanshi b", "title": "" }, { "docid": "d1357b2e247d521000169dce16f182ee", "text": "Camera shake or target movement often leads to undesired blur effects in videos captured by a hand-held camera. Despite significant efforts having been devoted to video-deblur research, two major challenges remain: 1) how to model the spatio-temporal characteristics across both the spatial domain (i.e., image plane) and the temporal domain (i.e., neighboring frames) and 2) how to restore sharp image details with respect to the conventionally adopted metric of pixel-wise errors. In this paper, to address the first challenge, we propose a deblurring network (DBLRNet) for spatial-temporal learning by applying a 3D convolution to both the spatial and temporal domains. 
Our DBLRNet is able to capture jointly spatial and temporal information encoded in neighboring frames, which directly contributes to the improved video deblur performance. To tackle the second challenge, we leverage the developed DBLRNet as a generator in the generative adversarial network (GAN) architecture and employ a content loss in addition to an adversarial loss for efficient adversarial training. The developed network, which we name as deblurring GAN, is tested on two standard benchmarks and achieves the state-of-the-art performance.", "title": "" }, { "docid": "28b70047cb41f765504f8f9b54456cc4", "text": "BACKGROUND\nAccelerometers are widely used to measure sedentary time, physical activity, physical activity energy expenditure (PAEE), and sleep-related behaviors, with the ActiGraph being the most frequently used brand by researchers. However, data collection and processing criteria have evolved in a myriad of ways out of the need to answer unique research questions; as a result there is no consensus.\n\n\nOBJECTIVES\nThe purpose of this review was to: (1) compile and classify existing studies assessing sedentary time, physical activity, energy expenditure, or sleep using the ActiGraph GT3X/+ through data collection and processing criteria to improve data comparability and (2) review data collection and processing criteria when using GT3X/+ and provide age-specific practical considerations based on the validation/calibration studies identified.\n\n\nMETHODS\nTwo independent researchers conducted the search in PubMed and Web of Science. We included all original studies in which the GT3X/+ was used in laboratory, controlled, or free-living conditions published from 1 January 2010 to the 31 December 2015.\n\n\nRESULTS\nThe present systematic review provides key information about the following data collection and processing criteria: placement, sampling frequency, filter, epoch length, non-wear-time, what constitutes a valid day and a valid week, cut-points for sedentary time and physical activity intensity classification, and algorithms to estimate PAEE and sleep-related behaviors. The information is organized by age group, since criteria are usually age-specific.\n\n\nCONCLUSION\nThis review will help researchers and practitioners to make better decisions before (i.e., device placement and sampling frequency) and after (i.e., data processing criteria) data collection using the GT3X/+ accelerometer, in order to obtain more valid and comparable data.\n\n\nPROSPERO REGISTRATION NUMBER\nCRD42016039991.", "title": "" }, { "docid": "a45294bcd622c526be47975abe4e6d66", "text": "Identification of gene locations in a DNA sequence is one of the important problems in the area of genomics. Nucleotides in exons of a DNA sequence show f = 1/3 periodicity. The period-3 property in exons of eukaryotic gene sequences enables signal processing based time-domain and frequency-domain methods to predict these regions. Identification of the period-3 regions helps in predicting the gene locations within the billions long DNA sequence of eukaryotic cells. Existing non-parametric filtering techniques are less effective in detecting small exons. This paper presents a harmonic suppression filter and parametric minimum variance spectrum estimation technique for gene prediction. We show that both the filtering techniques are able to detect smaller exon regions and adaptive MV filter minimizes the power in introns (non-coding regions) giving more suppression to the intron regions. 
Furthermore, 2-simplex mapping is used to reduce the computational complexity.", "title": "" }, { "docid": "7f84e215df3d908249bde3be7f2b3cab", "text": "With the emergence of ever-growing advanced vehicular applications, the challenges to meet the demands from both communication and computation are increasingly prominent. Without powerful communication and computational support, various vehicular applications and services will still stay in the concept phase and cannot be put into practice in the daily life. Thus, solving this problem is of great importance. The existing solutions, such as cellular networks, roadside units (RSUs), and mobile cloud computing, are far from perfect because they highly depend on and bear the cost of additional infrastructure deployment. Given tremendous number of vehicles in urban areas, putting these underutilized vehicular resources into use offers great opportunity and value. Therefore, we conceive the idea of utilizing vehicles as the infrastructures for communication and computation, named vehicular fog computing (VFC), which is an architecture that utilizes a collaborative multitude of end-user clients or near-user edge devices to carry out communication and computation, based on better utilization of individual communication and computational resources of each vehicle. By aggregating abundant resources of individual vehicles, the quality of services and applications can be enhanced greatly. In particular, by discussing four types of scenarios of moving and parked vehicles as the communication and computational infrastructures, we carry on a quantitative analysis of the capacities of VFC. We unveil an interesting relationship among the communication capability, connectivity, and mobility of vehicles, and we also find out the characteristics about the pattern of parking behavior, which benefits from the understanding of utilizing the vehicular resources. Finally, we discuss the challenges and open problems in implementing the proposed VFC system as the infrastructures. Our study provides insights for this novel promising paradigm, as well as research topics about vehicular information infrastructures.", "title": "" }, { "docid": "b6cd222b0bc5c2839c66cdf4538d7264", "text": "Stereoscopic 3D (S3D) movies have become widely popular in the movie theaters, but the adoption of S3D at home is low even though most TV sets support S3D. It is widely believed that S3D with glasses is not the right approach for the home. A much more appealing approach is to use automulti-scopic displays that provide a glasses-free 3D experience to multiple viewers. A technical challenge is the lack of native multiview content that is required to deliver a proper view of the scene for every viewpoint. Our approach takes advantage of the abundance of stereoscopic 3D movies. We propose a real-time system that can convert stereoscopic video to a high-quality multiview video that can be directly fed to automultiscopic displays. Our algorithm uses a wavelet-based decomposition of stereoscopic images with per-wavelet disparity estimation. A key to our solution lies in combining Lagrangian and Eulerian approaches for both the disparity estimation and novel view synthesis, which leverages the complementary advantages of both techniques. 
The solution preserves all the features of Eulerian methods, e.g., subpixel accuracy, high performance, robustness to ambiguous depth cases, and easy integration of inter-view aliasing while maintaining the advantages of Lagrangian approaches, e.g., robustness to large disparities and possibility of performing non-trivial disparity manipulations through both view extrapolation and interpolation. The method achieves real-time performance on current GPUs. Its design also enables an easy hardware implementation that is demonstrated using a field-programmable gate array. We analyze the visual quality and robustness of our technique on a number of synthetic and real-world examples. We also perform a user experiment which demonstrates benefits of the technique when compared to existing solutions.", "title": "" }, { "docid": "e793b233039c9cb105fa311fa08312cd", "text": "A generalized single-phase multilevel current source inverter (MCSI) topology with self-balancing current is proposed, which uses the duality transformation from the generalized multilevel voltage source inverter (MVSI) topology. The existing single-phase 8- and 6-switch 5-level current source inverters (CSIs) can be derived from this generalized MCSI topology. In the proposed topology, each intermediate DC-link current level can be balanced automatically without adding any external circuits; thus, a true multilevel structure is provided. Moreover, owing to the dual relationship, many research results relating to the operation, modulation, and control strategies of MVSIs can be applied directly to the MCSIs. Some simulation results are presented to verify the proposed MCSI topology.", "title": "" }, { "docid": "1efcace33a3a6ad7805f765edfafb6f4", "text": "Recently, new configurations of robot legs using a parallel mechanism have been studied for improving the locomotion ability in four-legged robots. However, it is difficult to obtain full dynamics of the parallel-mechanism robot legs because this mechanism has many links and complex constraint conditions, which make it difficult to design a modelbased controller. Here, we propose the simplified modeling of a parallel-mechanism robot leg with two degrees-of-freedom (2DOF), which can be used instead of complex full dynamics for model-based control. The new modeling approach considers the robot leg as a 2DOF Revolute and Prismatic(RP) manipulator, inspired by the actuation mechanism of robot legs, for easily designing a nominal model of the controller. To verify the effectiveness of the new modeling approach experimentally, we conducted dynamic simulations using a commercial multi-dynamics simulator. The simulation results confirmed that the proposed modeling approach could be an alternative modeling method for parallel-mechanism robot legs.", "title": "" }, { "docid": "e9c4877bca5f1bfe51f97818cc4714fa", "text": "INTRODUCTION Gamification refers to the application of game dynamics, mechanics, and frameworks into non-game settings. Many educators have attempted, with varying degrees of success, to effectively utilize game dynamics to increase student motivation and achievement in the classroom. In an effort to better understand how gamification can effectively be utilized to this end, presented here is a review of existing literature on the subject as well as a case study on three different applications of gamification in the post-secondary setting. 
This analysis reveals that the underlying dynamics that make games engaging are largely already recognized and utilized in modern pedagogical practices, although under different designations. This provides some legitimacy to a practice that is sometimes dismissed as superficial, and also provides a way of formulating useful guidelines for those wishing to utilize the power of games to motivate student achievement. RELATED WORK The first step of this study was to review literature related to the use of gamification in education. This was undertaken in order to inform the subsequent case studies. Several works were reviewed with the intention of finding specific game dynamics that were met with a certain degree of success across a number of circumstances. To begin, Jill Laster [10] provides a brief summary of the early findings of Lee Sheldon, an assistant professor at Indiana University at Bloomington and the author of The Multiplayer Classroom: Designing Coursework as a Game [16]. Here, Sheldon reports that the gamification of his class on multiplayer game design at Indiana University at Bloomington in 2010 was a success, with the average grade jumping a full letter grade from the previous year [10]. Sheldon gamified his class by renaming the performance of presentations as 'completing quests', taking tests as 'fighting monsters', writing papers as 'crafting', and receiving letter grades as 'gaining experience points'. In particular, he notes that changing the language around grades celebrates getting things right rather than punishing getting things wrong [10]. Although this is plausible, this example is included here first because it points to the common conception of what gamifying a classroom means: implementing game components by simply trading out the parlance of pedagogy for that of gaming culture. Although its intentions are good, it is this reduction of game design to its surface characteristics that Elizabeth Lawley warns is detrimental to the successful gamification of a classroom [5]. Lawley, a professor of interactive games and media at the Rochester Institute of Technology (RIT), notes that when implemented properly, \"gamification can help enrich educational experiences in a way that students will recognize and respond to\" [5]. However, she warns that reducing the complexity of well designed games to their surface elements (i.e. badges and experience points) falls short of engaging students. She continues further, suggesting that beyond failing to engage, limiting the implementation of game dynamics to just the surface characteristics can actually damage existing interest and engagement [5]. Lawley is not suggesting that game elements should be avoided, but rather she is stressing the importance of allowing them to surface as part of a deeper implementation that includes the underlying foundations of good game design. Upon reviewing the available literature, certain underlying dynamics and concepts found in game design are shown to be more consistently successful than others when applied to learning environments, these are: o Freedom to Fail o Rapid Feedback o Progression o Storytelling Freedom to Fail Game design often encourages players to experiment without fear of causing irreversible damage by giving them multiple lives, or allowing them to start again at the most recent 'checkpoint'. Incorporating this 'freedom to fail' into classroom design is noted to be an effective dynamic in increasing student engagement [7,9,11,15]. 
If students are encouraged to take risks and experiment, the focus is taken away from final results and re-centered on the process of learning instead. The effectiveness of this change in focus is recognized in modern pedagogy as shown in the increased use of formative assessment. Like the game dynamic of having the 'freedom to fail', formative assessment focuses on the process of learning rather than the end result by using assessment to inform subsequent lessons and separating assessment from grades whenever possible [17]. This can mean that the student is using ongoing self assessment, or that the teacher is using", "title": "" }, { "docid": "929534782eaaa41186a1138b0439cdca", "text": "How do observers respond when the actions of one individual inflict harm on another? The primary reaction to carelessly inflicted harm is to seek restitution; the offender is judged to owe compensation to the harmed individual. The primary reaction to harm inflicted intentionally is moral outrage producing a desire for retribution; the harm-doer must be punished. Reckless conduct, an intermediate case, provokes reactions that involve elements of both careless and intentional harm. The moral outrage felt by those who witness transgressions is a product of both cognitive interpretations of the event and emotional reactions to it. Theory about the exact nature of the emotional reactions is considered, along with suggestions for directions for future research.", "title": "" }, { "docid": "c75ee3e700806bcb098f6e1c05fdecfc", "text": "This study examines patterns of cellular phone adoption and usage in an urban setting. One hundred and seventy-six cellular telephone users were surveyed abou their patterns of usage, demographic and socioeconomic characteristics, perceptions about the technology, and their motivations to use cellular services. The results of this study confirm that users' perceptions are significantly associated with their motivation to use cellular phones. Specifically, perceived ease of use was found to have significant effects on users' extrinsic and intrinsic motivations; apprehensiveness about cellular technology had a negative effect on intrinsic motivations. Implications of these findings for practice and research are examined.", "title": "" }, { "docid": "627e4d3c2dfb8233f0e345410064f6d0", "text": "Data clustering is an important task in many disciplines. A large number of studies have attempted to improve clustering by using the side information that is often encoded as pairwise constraints. However, these studies focus on designing special clustering algorithms that can effectively exploit the pairwise constraints. We present a boosting framework for data clustering,termed as BoostCluster, that is able to iteratively improve the accuracy of any given clustering algorithm by exploiting the pairwise constraints. The key challenge in designing a boosting framework for data clustering is how to influence an arbitrary clustering algorithm with the side information since clustering algorithms by definition are unsupervised. The proposed framework addresses this problem by dynamically generating new data representations at each iteration that are, on the one hand, adapted to the clustering results at previous iterations by the given algorithm, and on the other hand consistent with the given side information. 
Our empirical study shows that the proposed boosting framework is effective in improving the performance of a number of popular clustering algorithms (K-means, partitional SingleLink, spectral clustering), and its performance is comparable to the state-of-the-art algorithms for data clustering with side information.", "title": "" }, { "docid": "9b791932b6f2cdbbf0c1680b9a610614", "text": "To survive in today’s global marketplace, businesses need to be able to deliver products on time, maintain market credibility and introduce new products and services faster than competitors. This is especially crucial to the Smalland Medium-sized Enterprises (SMEs). Since the emergence of the Internet, it has allowed SMEs to compete effectively and efficiently in both domestic and international market. Unfortunately, such leverage is often impeded by the resistance and mismanagement of SMEs to adopt Electronic Commerce (EC) proficiently. Consequently, this research aims to investigate how SMEs can adopt and implement EC successfully to achieve competitive advantage. Building on an examination of current technology diffusion literature, a model of EC diffusion has been developed. It investigates the factors that influence SMEs in the adoption of EC, followed by an examination in the diffusion process, which SMEs adopt to integrate EC into their business systems.", "title": "" }, { "docid": "7d7ea6239106f614f892701e527122e2", "text": "The purpose of this study was to investigate the effects of aromatherapy on the anxiety, sleep, and blood pressure (BP) of percutaneous coronary intervention (PCI) patients in an intensive care unit (ICU). Fifty-six patients with PCI in ICU were evenly allocated to either the aromatherapy or conventional nursing care. Aromatherapy essential oils were blended with lavender, roman chamomile, and neroli with a 6 : 2 : 0.5 ratio. Participants received 10 times treatment before PCI, and the same essential oils were inhaled another 10 times after PCI. Outcome measures patients' state anxiety, sleeping quality, and BP. An aromatherapy group showed significantly low anxiety (t = 5.99, P < .001) and improving sleep quality (t = -3.65, P = .001) compared with conventional nursing intervention. The systolic BP of both groups did not show a significant difference by time or in a group-by-time interaction; however, a significant difference was observed between groups (F = 4.63, P = .036). The diastolic BP did not show any significant difference by time or by a group-by-time interaction; however, a significant difference was observed between groups (F = 6.93, P = .011). In conclusion, the aromatherapy effectively reduced the anxiety levels and increased the sleep quality of PCI patients admitted to the ICU. Aromatherapy may be used as an independent nursing intervention for reducing the anxiety levels and improving the sleep quality of PCI patients.", "title": "" }, { "docid": "e67986714c6bda56c03de25168c51e6b", "text": "With the development of modern technology and Android Smartphone, Smart Living is gradually changing people’s life. Bluetooth technology, which aims to exchange data wirelessly in a short distance using short-wavelength radio transmissions, is providing a necessary technology to create convenience, intelligence and controllability. In this paper, a new Smart Living system called home lighting control system using Bluetooth-based Android Smartphone is proposed and prototyped. First Smartphone, Smart Living and Bluetooth technology are reviewed. 
Second, the system architecture, communication protocol and hardware design are described. Then the design of a Bluetooth-based Smartphone application and the prototype are presented. It is shown that Android Smartphone can provide a platform to implement Bluetooth-based application for Smart Living.", "title": "" } ]
scidocsrr
cec3ee6652ec779e0f0dfd20b8ab828d
Effective Exploration for MAVs Based on the Expected Information Gain
[ { "docid": "88a21d973ec80ee676695c95f6b20545", "text": "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.", "title": "" } ]
[ { "docid": "fcd3eb613db484d7d2bd00a03e5192bc", "text": "A design methodology by including the finite PSR of the error amplifier to improve the low frequency PSR of the Low dropout regulator with improved voltage subtractor circuit is proposed. The gm/ID method based on exploiting the all regions of operation of the MOS transistor is utilized for the design of LDO regulator. The PSR of the LDO regulator is better than -50dB up to 10MHz frequency for the load currents up to 20mA with 0.15V drop-out voltage. A comparison is made between different schematics of the LDO regulator and proposed methodology for the LDO regulator with improved voltage subtractor circuit. Low frequency PSR of the regulator can be significantly improved with proposed methodology.", "title": "" }, { "docid": "741efb8046bb888b944768784b87d70a", "text": "Entropy Search (ES) and Predictive Entropy Search (PES) are popular and empirically successful Bayesian Optimization techniques. Both rely on a compelling information-theoretic motivation, and maximize the information gained about the arg max of the unknown function; yet, both are plagued by the expensive computation for estimating entropies. We propose a new criterion, Max-value Entropy Search (MES), that instead uses the information about the maximum function value. We show relations of MES to other Bayesian optimization methods, and establish a regret bound. We observe that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden. In particular, MES is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher dimensional problems.", "title": "" }, { "docid": "7ea777ccae8984c26317876d804c323c", "text": "The CRISPR/Cas (clustered regularly interspaced short palindromic repeats/CRISPR-associated proteins) system was first identified in bacteria and archaea and can degrade exogenous substrates. It was developed as a gene editing technology in 2013. Over the subsequent years, it has received extensive attention owing to its easy manipulation, high efficiency, and wide application in gene mutation and transcriptional regulation in mammals and plants. The process of CRISPR/Cas is optimized constantly and its application has also expanded dramatically. Therefore, CRISPR/Cas is considered a revolutionary technology in plant biology. Here, we introduce the mechanism of the type II CRISPR/Cas called CRISPR/Cas9, update its recent advances in various applications in plants, and discuss its future prospects to provide an argument for its use in the study of medicinal plants.", "title": "" }, { "docid": "0f5c1d2503a2845e409d325b085bf600", "text": "We present Accel, a novel semantic video segmentation system that achieves high accuracy at low inference cost by combining the predictions of two network branches: (1) a reference branch that extracts high-detail features on a reference keyframe, and warps these features forward using frame-to-frame optical flow estimates, and (2) an update branch that computes features of adjustable quality on the current frame, performing a temporal update at each video frame. The modularity of the update branch, where feature subnetworks of varying layer depth can be inserted (e.g. ResNet-18 to ResNet-101), enables operation over a new, state-of-the-art accuracy-throughput trade-off spectrum. 
Over this curve, Accel models achieve both higher accuracy and faster inference times than the closest comparable single-frame segmentation networks. In general, Accel significantly outperforms previous work on efficient semantic video segmentation, correcting warping-related error that compounds on datasets with complex dynamics. Accel is end-to-end trainable and highly modular: the reference network, the optical flow network, and the update network can each be selected independently, depending on application requirements, and then jointly fine-tuned. The result is a robust, general system for fast, high-accuracy semantic segmentation on video.", "title": "" }, { "docid": "798f8c412ac3fbe1ab1b867bc8ce68d0", "text": "We introduce a new mobile system framework, SenSec, which uses passive sensory data to ensure the security of applications and data on mobile devices. SenSec constantly collects sensory data from accelerometers, gyroscopes and magnetometers and constructs the gesture model of how a user uses the device. SenSec calculates the sureness that the mobile device is being used by its owner. Based on the sureness score, mobile devices can dynamically request the user to provide active authentication (such as a strong password), or disable certain features of the mobile devices to protect user's privacy and information security. In this paper, we model such gesture patterns through a continuous n-gram language model using a set of features constructed from these sensors. We built mobile application prototype based on this model and use it to perform both user classification and user authentication experiments. User studies show that SenSec can achieve 75% accuracy in identifying the users and 71.3% accuracy in detecting the non-owners with only 13.1% false alarms.", "title": "" }, { "docid": "7eb4e5b88843d81390c14aae2a90c30b", "text": "A low-power, high-speed, but with a large input dynamic range and output swing class-AB output buffer circuit, which is suitable for the flat-panel display application, is proposed. The circuit employs an elegant comparator to sense the transients of the input to turn on charging/discharging transistors, thus draws little current during static, but has an improved driving capability during transients. It is demonstrated in a 0.6 m CMOS technology.", "title": "" }, { "docid": "1090297224c76a5a2c4ade47cb932dba", "text": "Global illumination drastically improves visual realism of interactive applications. Although many interactive techniques are available, they have some limitations or employ coarse approximations. For example, general instant radiosity often has numerical error, because the sampling strategy fails in some cases. This problem can be reduced by a bidirectional sampling strategy that is often used in off-line rendering. However, it has been complicated to implement in real-time applications. This paper presents a simple real-time global illumination system based on bidirectional path tracing. The proposed system approximates bidirectional path tracing by using rasterization on a commodity DirectX® 11 capable GPU. 
Moreover, for glossy surfaces, a simple and efficient artifact suppression technique is also introduced.", "title": "" }, { "docid": "cb47cc2effac1404dd60a91a099699d1", "text": "We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.", "title": "" }, { "docid": "c71d229d69d79747eca7e87e342ba6d8", "text": "This paper proposes a road detection approach based solely on dense 3D-LIDAR data. The approach is built up of four stages: (1) 3D-LIDAR points are projected to a 2D reference plane; then, (2) dense height maps are computed using an upsampling method; (3) applying a sliding-window technique in the upsampled maps, probability distributions of neighbouring regions are compared according to a similarity measure; finally, (4) morphological operations are used to enhance performance against disturbances. Our detection approach does not depend on road marks, thus it is suitable for applications on rural areas and inner-city with unmarked roads. Experiments have been carried out in a wide variety of scenarios using the recent KITTI-ROAD benchmark, obtaining promising results when compared to other state-of-art approaches.", "title": "" }, { "docid": "e84699f276c807eb7fddb49d61bd8ae8", "text": "Cyberbotics Ltd. develops Webots, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.", "title": "" }, { "docid": "c9e9807acbc69afd9f6a67d9bda0d535", "text": "Domain adaptation is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data representation become more robust when confronted to data depicting the same classes, but described by another observation system. Among the many strategies proposed, finding domain-invariant representations has shown excellent properties, in particular since it allows to train a unique classifier effective in all domains. In this paper, we propose a regularized unsupervised optimal transportation model to perform the alignment of the representations in the source and target domains. We learn a transportation plan matching both PDFs, which constrains labeled samples of the same class in the source domain to remain close during transport. This way, we exploit at the same time the labeled samples in the source and the distributions observed in both domains. Experiments on toy and challenging real visual adaptation examples show the interest of the method, that consistently outperforms state of the art approaches. 
In addition, numerical experiments show that our approach leads to better performances on domain invariant deep learning features and can be easily adapted to the semi-supervised case where few labeled samples are available in the target domain.", "title": "" }, { "docid": "6bea1d7242fc23ec8f462b1c8478f2c1", "text": "Determining a consensus opinion on a product sold online is no longer easy, because assessments have become more and more numerous on the Internet. To address this problem, researchers have used various approaches, such as looking for feelings expressed in the documents and exploring the appearance and syntax of reviews. Aspect-based evaluation is the most important aspect of opinion mining, and researchers are becoming more interested in product aspect extraction; however, more complex algorithms are needed to address this issue precisely with large data sets. This paper introduces a method to extract and summarize product aspects and corresponding opinions from a large number of product reviews in a specific domain. We maximize the accuracy and usefulness of the review summaries by leveraging knowledge about product aspect extraction and providing both an appropriate level of detail and rich representation capabilities. The results show that the proposed system achieves F1-scores of 0.714 for camera reviews and 0.774 for laptop reviews.", "title": "" }, { "docid": "43fa16b19c373e2d339f45c71a0a2c22", "text": "McKusick-Kaufman syndrome is a human developmental anomaly syndrome comprising mesoaxial or postaxial polydactyly, congenital heart disease and hydrometrocolpos. This syndrome is diagnosed most frequently in the Old Order Amish population and is inherited in an autosomal recessive pattern with reduced penetrance and variable expressivity. Homozygosity mapping and linkage analyses were conducted using two pedigrees derived from a larger pedigree published in 1978. The PedHunter software query system was used on the Amish Genealogy Database to correct the previous pedigree, derive a minimal pedigree connecting those affected sibships that are in the database and determine the most recent common ancestors of the affected persons. Whole genome short tandem repeat polymorphism (STRP) screening showed homozygosity in 20p12, between D20S162 and D20S894 , an area that includes the Alagille syndrome critical region. The peak two-point LOD score was 3.33, and the peak three-point LOD score was 5.21. The physical map of this region has been defined, and additional polymorphic markers have been isolated. The region includes several genes and expressed sequence tags (ESTs), including the jagged1 gene that recently has been shown to be haploinsufficient in the Alagille syndrome. Sequencing of jagged1 in two unrelated individuals affected with McKusick-Kaufman syndrome has not revealed any disease-causing mutations.", "title": "" }, { "docid": "44d4114280e3ab9f6bfa0f0b347114b7", "text": "Dozens of Electronic Control Units (ECUs) can be found on modern vehicles for safety and driving assistance. These ECUs also introduce new security vulnerabilities as recent attacks have been reported by plugging the in-vehicle system or through wireless access. In this paper, we focus on the security of the Controller Area Network (CAN), which is a standard for communication among ECUs. CAN bus by design does not have sufficient security features to protect it from insider or outsider attacks. 
Intrusion detection system (IDS) is one of the most effective ways to enhance vehicle security on the insecure CAN bus protocol. We propose a new IDS based on the entropy of the identifier bits in CAN messages. The key observation is that all the known CAN message injection attacks need to alter the CAN ID bits and analyzing the entropy of such bits can be an effective way to detect those attacks. We collected real CAN messages from a vehicle (2016 Ford Fusion) and performed simulated message injection attacks. The experimental results showed that our entropy based IDS can successfully detect all the injection attacks without disrupting the communication on CAN.", "title": "" }, { "docid": "a48b7c679008235568d3d431081277b4", "text": "This paper discusses the security aspects of a registration protocol in a mobile satellite communication system. We propose a new mobile user authentication and data encryption scheme for mobile satellite communication systems. The scheme can remedy a replay attack.", "title": "" }, { "docid": "9a1151e45740dfa663172478259b77b6", "text": "Every year, several new ontology matchers are proposed in the literature, each one using a different heuristic, which implies in different performances according to the characteristics of the ontologies. An ontology metamatcher consists of an algorithm that combines several approaches in order to obtain better results in different scenarios. To achieve this goal, it is necessary to define a criterion for the use of matchers. We presented in this work an ontology meta-matcher that combines several ontology matchers making use of the evolutionary meta-heuristic prey-predator as a means of parameterization of the same. Resumo. Todo ano, diversos novos alinhadores de ontologias são propostos na literatura, cada um utilizando uma heurı́stica diferente, o que implica em desempenhos distintos de acordo com as caracterı́sticas das ontologias. Um meta-alinhador consiste de um algoritmo que combina diversas abordagens a fim de obter melhores resultados em diferentes cenários. Para atingir esse objetivo, é necessária a definição de um critério para melhor uso de alinhadores. Neste trabalho, é apresentado um meta-alinhador de ontologias que combina vários alinhadores através da meta-heurı́stica evolutiva presa-predador como meio de parametrização das mesmas.", "title": "" }, { "docid": "a32c635c1f4f4118da20cee6ffb5c1ea", "text": "We analyzed the influence of education and of culture on the neuropsychological profile of an indigenous and a nonindigenous population. The sample included 27 individuals divided into four groups: (a) seven illiterate Maya indigenous participants, (b) six illiterate Pame indigenous participants, (c) seven nonindigenous participants with no education, and (d) seven Maya indigenous participants with 1 to 4 years of education . A brief neuropsychological test battery developed and standardized in Mexico was individually administered. Results demonstrated differential effects for both variables. Both groups of indigenous participants (Maya and Pame) obtained higher scores in visuospatial tasks, and the level of education had significant effects on working and verbal memory. Our data suggested that culture dictates what it is important for survival and that education could be considered as a type of subculture that facilitates the development of certain skills.", "title": "" }, { "docid": "c460660e6ea1cc38f4864fe4696d3a07", "text": "Background. 
The effective development of healthcare competencies poses great educational challenges. A possible approach to provide learning opportunities is the use of augmented reality (AR) where virtual learning experiences can be embedded in a real physical context. The aim of this study was to provide a comprehensive overview of the current state of the art in terms of user acceptance, the AR applications developed and the effect of AR on the development of competencies in healthcare. Methods. We conducted an integrative review. Integrative reviews are the broadest type of research review methods allowing for the inclusion of various research designs to more fully understand a phenomenon of concern. Our review included multi-disciplinary research publications in English reported until 2012. Results. 2529 research papers were found from ERIC, CINAHL, Medline, PubMed, Web of Science and Springer-link. Three qualitative, 20 quantitative and 2 mixed studies were included. Using a thematic analysis, we've described three aspects related to the research, technology and education. This study showed that AR was applied in a wide range of topics in healthcare education. Furthermore acceptance for AR as a learning technology was reported among the learners and its potential for improving different types of competencies. Discussion. AR is still considered as a novelty in the literature. Most of the studies reported early prototypes. Also the designed AR applications lacked an explicit pedagogical theoretical framework. Finally the learning strategies adopted were of the traditional style 'see one, do one and teach one' and do not integrate clinical competencies to ensure patients' safety.", "title": "" }, { "docid": "a25fa0c0889b62b70bf95c16f9966cc4", "text": "We deal with the problem of document representation for the task of measuring semantic relatedness between documents. A document is represented as a compact concept graph where nodes represent concepts extracted from the document through references to entities in a knowledge base such as DBpedia. Edges represent the semantic and structural relationships among the concepts. Several methods are presented to measure the strength of those relationships. Concepts are weighted through the concept graph using closeness centrality measure which reflects their relevance to the aspects of the document. A novel similarity measure between two concept graphs is presented. The similarity measure first represents concepts as continuous vectors by means of neural networks. Second, the continuous vectors are used to accumulate pairwise similarity between pairs of concepts while considering their assigned weights. We evaluate our method on a standard benchmark for document similarity. Our method outperforms state-of-the-art methods including ESA (Explicit Semantic Annotation) while our concept graphs are much smaller than the concept vectors generated by ESA. Moreover, we show that by combining our concept graph with ESA, we obtain an even further improvement.", "title": "" }, { "docid": "273abcab379d49680db121022fba3e8f", "text": "Current emotion recognition computational techniques have been successful on associating the emotional changes with the EEG signals, and so they can be identified and classified from EEG signals if appropriate stimuli are applied. However, automatic recognition is usually restricted to a small number of emotions classes mainly due to signal’s features and noise, EEG constraints and subject-dependent issues. 
In order to address these issues, in this paper a novel feature-based emotion recognition model is proposed for EEG-based Brain–Computer Interfaces. Unlike other approaches, our method explores a wider set of emotion types and incorporates additional features which are relevant for signal pre-processing and recognition classification tasks, based on a dimensional model of emotions: Valence and Arousal. It aims to improve the accuracy of the emotion classification task by combining mutual information based feature selection methods and kernel classifiers. Experiments using our approach for emotion classification which combines efficient feature selection methods and efficient kernel-based classifiers on standard EEG datasets show the promise of the approach when compared with state-of-the-art computational methods. © 2015 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
3d0d9de8d64948a55b956e46c69dca01
Role of video games in improving health-related outcomes: a systematic review.
[ { "docid": "f9c37f460fc0a4e7af577ab2cbe7045b", "text": "Declines in various cognitive abilities, particularly executive control functions, are observed in older adults. An important goal of cognitive training is to slow or reverse these age-related declines. However, opinion is divided in the literature regarding whether cognitive training can engender transfer to a variety of cognitive skills in older adults. In the current study, the authors trained older adults in a real-time strategy video game for 23.5 hr in an effort to improve their executive functions. A battery of cognitive tasks, including tasks of executive control and visuospatial skills, were assessed before, during, and after video-game training. The trainees improved significantly in the measures of game performance. They also improved significantly more than the control participants in executive control functions, such as task switching, working memory, visual short-term memory, and reasoning. Individual differences in changes in game performance were correlated with improvements in task switching. The study has implications for the enhancement of executive control processes of older adults.", "title": "" } ]
[ { "docid": "4a240b05fbb665596115841d238a483b", "text": "BACKGROUND\nAttachment theory is one of the most important achievements of contemporary psychology. Role of medical students in the community health is important, so we need to know about the situation of happiness and attachment style in these students.\n\n\nOBJECTIVES\nThis study was aimed to assess the relationship between medical students' attachment styles and demographic characteristics.\n\n\nMATERIALS AND METHODS\nThis cross-sectional study was conducted on randomly selected students of Medical Sciences in Kurdistan University, in 2012. To collect data, Hazan and Shaver's attachment style measure and the Oxford Happiness Questionnaire were used. The results were analyzed using the SPSS software version 16 (IBM, Chicago IL, USA) and statistical analysis was performed via t-test, Chi-square test, and multiple regression tests.\n\n\nRESULTS\nSecure attachment style was the most common attachment style and the least common was ambivalent attachment style. Avoidant attachment style was more common among single persons than married people (P = 0.03). No significant relationship was observed between attachment style and gender and grade point average of the studied people. The mean happiness score of students was 62.71. In multivariate analysis, the variables of secure attachment style (P = 0.001), male gender (P = 0.005), and scholar achievement (P = 0.047) were associated with higher happiness score.\n\n\nCONCLUSION\nThe most common attachment style was secure attachment style, which can be a positive prognostic factor in medical students, helping them to manage stress. Higher frequency of avoidant attachment style among single persons, compared with married people, is mainly due to their negative attitude toward others and failure to establish and maintain relationships with others.", "title": "" }, { "docid": "f845508acabb985dd80c31774776e86b", "text": "In this paper, we introduce two input devices for wearable computers, called GestureWrist and GesturePad. Both devices allow users to interact with wearable or nearby computers by using gesture-based commands. Both are designed to be as unobtrusive as possible, so they can be used under various social contexts. The first device, called GestureWrist, is a wristband-type input device that recognizes hand gestures and forearm movements. Unlike DataGloves or other hand gesture-input devices, all sensing elements are embedded in a normal wristband. The second device, called GesturePad, is a sensing module that can be attached on the inside of clothes, and users can interact with this module from the outside. It transforms conventional clothes into an interactive device without changing their appearance.", "title": "" }, { "docid": "e051c1dafe2a2f45c48a79c320894795", "text": "In this paper we present a graph-based model that, utilizing relations between groups of System-calls, detects whether an unknown software sample is malicious or benign, and classifies a malicious software to one of a set of known malware families. More precisely, we utilize the System-call Dependency Graphs (or, for short, ScD-graphs), obtained by traces captured through dynamic taint analysis. We design our model to be resistant against strong mutations applying our detection and classification techniques on a weighted directed graph, namely Group Relation Graph, or Gr-graph for short, resulting from ScD-graph after grouping disjoint subsets of its vertices. 
For the detection process, we propose the $$\\Delta $$ Δ -similarity metric, and for the process of classification, we propose the SaMe-similarity and NP-similarity metrics consisting the SaMe-NP similarity. Finally, we evaluate our model for malware detection and classification showing its potentials against malicious software measuring its detection rates and classification accuracy.", "title": "" }, { "docid": "315af705427ee4363fe4614dc72eb7a7", "text": "The 2007 Nobel Prize in Physics can be understood as a global recognition to the rapid development of the Giant Magnetoresistance (GMR), from both the physics and engineering points of view. Behind the utilization of GMR structures as read heads for massive storage magnetic hard disks, important applications as solid state magnetic sensors have emerged. Low cost, compatibility with standard CMOS technologies and high sensitivity are common advantages of these sensors. This way, they have been successfully applied in a lot different environments. In this work, we are trying to collect the Spanish contributions to the progress of the research related to the GMR based sensors covering, among other subjects, the applications, the sensor design, the modelling and the electronic interfaces, focusing on electrical current sensing applications.", "title": "" }, { "docid": "05d3d0d62d2cff27eace1fdfeecf9814", "text": "This article solves the equilibrium problem in a pure-exchange, continuous-time economy in which some agents face information costs or other types of frictions effectively preventing them from investing in the stock market. Under the assumption that the restricted agents have logarithmic utilities, a complete characterization of equilibrium prices and consumption/ investment policies is provided. A simple calibration shows that the model can help resolve some of the empirical asset pricing puzzles.", "title": "" }, { "docid": "3fdd81a3e2c86f43152f72e159735a42", "text": "Class imbalance learning tackles supervised learning problems where some classes have significantly more examples than others. Most of the existing research focused only on binary-class cases. In this paper, we study multiclass imbalance problems and propose a dynamic sampling method (DyS) for multilayer perceptrons (MLP). In DyS, for each epoch of the training process, every example is fed to the current MLP and then the probability of it being selected for training the MLP is estimated. DyS dynamically selects informative data to train the MLP. In order to evaluate DyS and understand its strength and weakness, comprehensive experimental studies have been carried out. Results on 20 multiclass imbalanced data sets show that DyS can outperform the compared methods, including pre-sample methods, active learning methods, cost-sensitive methods, and boosting-type methods.", "title": "" }, { "docid": "1530571213fb98e163cb3cf45cfe9cc6", "text": "We explore the properties of byte-level recurrent language models. When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Specifically, we find a single unit which performs sentiment analysis. These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efficient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets. 
We also demonstrate the sentiment unit has a direct influence on the generative process of the model. Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment.", "title": "" }, { "docid": "a42b9567dfc9e9fe92bc9aeb38ef5e5a", "text": "This paper presents a physical model for planar spiral inductors on silicon, which accounts for eddy current effect in the conductor, crossover capacitance between the spiral and center-tap, capacitance between the spiral and substrate, substrate ohmic loss, and substrate capacitance. The model has been confirmed with measured results of inductors having a wide range of layout and process parameters. This scalable inductor model enables the prediction and optimization of inductor performance.", "title": "" }, { "docid": "1301030c091eeb23d43dd3bfa6763e77", "text": "A new system for web attack detection is presented. It follows the anomaly-based approach, therefore known and unknown attacks can be detected. The system relies on a XML file to classify the incoming requests as normal or anomalous. The XML file, which is built from only normal traffic, contains a description of the normal behavior of the target web application statistically characterized. Any request which deviates from the normal behavior is considered an attack. The system has been applied to protect a real web application. An increasing number of training requests have been used to train the system. Experiments show that when the XML file has enough information to closely characterize the normal behavior of the target web application, a very high detection rate is reached while the false alarm rate remains very low.", "title": "" }, { "docid": "88cf3138707e74f9efec06f039d7ea76", "text": "In the electricity sector, energy conservation through technological and behavioral change is estimated to have a savings potential of 123 million metric tons of carbon per year, which represents 20% of US household direct emissions in the United States. In this article, we investigate the effectiveness of nonprice information strategies to motivate conservation behavior. We introduce environment and health-based messaging as a behavioral strategy to reduce energy use in the home and promote energy conservation. In a randomized controlled trial with real-time appliance-level energy metering, we find that environment and health-based information strategies, which communicate the environmental and public health externalities of electricity production, such as pounds of pollutants, childhood asthma, and cancer, outperform monetary savings information to drive behavioral change in the home. Environment and health-based information treatments motivated 8% energy savings versus control and were particularly effective on families with children, who achieved up to 19% energy savings. Our results are based on a panel of 3.4 million hourly appliance-level kilowatt-hour observations for 118 residences over 8 mo. We discuss the relative impacts of both cost-savings information and environmental health messaging strategies with residential consumers.", "title": "" }, { "docid": "4ac12c76112ff2085c4701130448f5d5", "text": "A key point in the deployment of new wireless services is the cost-effective extension and enhancement of the network's radio coverage in indoor environments. Distributed Antenna Systems using Fiber-optics distribution (F-DAS) represent a suitable method of extending multiple-operator radio coverage into indoor premises, tunnels, etc. 
Another key point is the adoption of MIMO (Multiple Input — Multiple Output) transmission techniques which can exploit the multipath nature of the radio link to ensure reliable, high-speed wireless communication in hostile environments. In this paper novel indoor deployment solutions based on Radio over Fiber (RoF) and distributed-antenna MIMO techniques are presented and discussed, highlighting their potential in different cases.", "title": "" }, { "docid": "997eb22a6f924bc560ede89e37dc4620", "text": "We illustrate an architecture for a conversational agent based on a modular knowledge representation. This solution provides intelligent conversational agents with a dynamic and flexible behavior. The modularity of the architecture allows a concurrent and synergic use of different techniques, making it possible to use the most adequate methodology for the management of a specific characteristic of the domain, of the dialogue, or of the user behavior. We show the implementation of a proof-of-concept prototype: a set of modules exploiting different knowledge representation techniques and capable to differently manage conversation features has been developed. Each module is automatically triggered through a component, named corpus callosum, whose task is to choose, time by time, the most adequate chatbot knowledge section to activate.", "title": "" }, { "docid": "0f20cfce49eaa9f447fc45b1d4c04be0", "text": "Face recognition is a widely used technology with numerous large-scale applications, such as surveillance, social media and law enforcement. There has been tremendous progress in face recognition accuracy over the past few decades, much of which can be attributed to deep learning based approaches during the last five years. Indeed, automated face recognition systems are now believed to surpass human performance in some scenarios. Despite this progress, a crucial question still remains unanswered: given a face representation, how many identities can it resolve? In other words, what is the capacity of the face representation? A scientific basis for estimating the capacity of a given face representation will not only benefit the evaluation and comparison of different face representation methods, but will also establish an upper bound on the scalability of an automatic face recognition system. We cast the face capacity estimation problem under the information theoretic framework of capacity of a Gaussian noise channel. By explicitly accounting for two sources of representational noise: epistemic (model) uncertainty and aleatoric (data) variability, our approach is able to estimate the capacity of any given face representation. To demonstrate the efficacy of our approach, we estimate the capacity of a 128-dimensional deep neural network based face representation, FaceNet [1], and that of the classical Eigenfaces [2] representation of the same dimensionality. 
Our numerical experiments on unconstrained faces indicate that, (a) our capacity estimation model yields a capacity upper bound of 5.8×108 for FaceNet and 1×100 for Eigenface representation at a false acceptance rate (FAR) of 1%, (b) the capacity of the face representation reduces drastically as you lower the desired FAR (for FaceNet representation; the capacity at FAR of 0.1% and 0.001% is 2.4×106 and 7.0×102, respectively), and (c) the empirical performance of the FaceNet representation is significantly below the theoretical limit.", "title": "" }, { "docid": "9f7aa5978855e173a45d443e46cbf5dd", "text": "Online gaming franchises such as World of Tanks, Defense of the Ancients, and StarCraft have attracted hundreds of millions of users who, apart from playing the game, also socialize with each other through gaming and viewing gamecasts. As a form of User Generated Content (UGC), gamecasts play an important role in user entertainment and gamer education. They deserve the attention of both industrial partners and the academic communities, corresponding to the large amount of revenue involved and the interesting research problems associated with UGC sites and social networks. Although previous work has put much effort into analyzing general UGC sites such as YouTube, relatively little is known about the gamecast sharing sites. In this work, we provide the first comprehensive study of gamecast sharing sites, including commercial streaming-based sites such as Amazon’s Twitch.tv and community-maintained replay-based sites such as WoTreplays. We collect and share a novel dataset on WoTreplays that includes more than 380,000 game replays, shared by more than 60,000 creators with more than 1.9 million gamers. Together with an earlier published dataset on Twitch.tv, we investigate basic characteristics of gamecast sharing sites, and we analyze the activities of their creators and spectators. Among our results, we find that (i) WoTreplays and Twitch.tv are both fast-consumed repositories, with millions of gamecasts being uploaded, viewed, and soon forgotten; (ii) both the gamecasts and the creators exhibit highly skewed popularity, with a significant heavy tail phenomenon; and (iii) the upload and download preferences of creators and spectators are different: while the creators emphasize their individual skills, the spectators appreciate team-wise tactics. Our findings provide important knowledge for infrastructure and service improvement, for example, in the design of proper resource allocation mechanisms that consider future gamecasting and in the tuning of incentive policies that further help player retention.", "title": "" }, { "docid": "f9d333d7d8aa3f7fb834b202a3b10a3b", "text": "Human skin is the largest organ in our body which provides protection against heat, light, infections and injury. It also stores water, fat, and vitamin. Cancer is the leading cause of death in economically developed countries and the second leading cause of death in developing countries. Skin cancer is the most commonly diagnosed type of cancer among men and women. Exposure to UV rays, modernize diets, smoking, alcohol and nicotine are the main cause. Cancer is increasingly recognized as a critical public health problem in Ethiopia. There are three type of skin cancer and they are recognized based on their own properties. In view of this, a digital image processing technique is proposed to recognize and predict the different types of skin cancers using digital image processing techniques. 
Sample skin cancer image were taken from American cancer society research center and DERMOFIT which are popular and widely focuses on skin cancer research. The classification system was supervised corresponding to the predefined classes of the type of skin cancer. Combining Self organizing map (SOM) and radial basis function (RBF) for recognition and diagnosis of skin cancer is by far better than KNN, Naïve Bayes and ANN classifier. It was also showed that the discrimination power of morphology and color features was better than texture features but when morphology, texture and color features were used together the classification accuracy was increased. The best classification accuracy (88%, 96.15% and 95.45% for Basal cell carcinoma, Melanoma and Squamous cell carcinoma respectively) were obtained using combining SOM and RBF. The overall classification accuracy was 93.15%.", "title": "" }, { "docid": "6e8a9c37672ec575821da5c9c3145500", "text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend on this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "70e89d5d0b886b1c32b1f1b8c01db99b", "text": "In clinical dictation, speakers try to be as concise as possible to save time, often resulting in utterances without explicit punctuation commands. Since the end product of a dictated report, e.g. an out-patient letter, does require correct orthography, including exact punctuation, the latter need to be restored, preferably by automated means. This paper describes a method for punctuation restoration based on a stateof-the-art stack of NLP and machine learning techniques including B-RNNs with an attention mechanism and late fusion, as well as a feature extraction technique tailored to the processing of medical terminology using a novel vocabulary reduction model. 
To the best of our knowledge, the resulting performance is superior to that reported in prior art on similar tasks.", "title": "" }, { "docid": "d76d09ca1e87eb2e08ccc03428c62be0", "text": "Face recognition has the perception of a solved problem, however when tested at the million-scale exhibits dramatic variation in accuracies across the different algorithms [11]. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos created with the goal to level playing field for large scale face recognition. We contrast our results with findings from the other two large-scale benchmarks MegaFace Challenge and MS-Celebs-1M where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms, trained on MF2, were able to achieve state of the art and comparable results to algorithms trained on massive private sets, 2) some outperformed themselves once trained on MF2, 3) invariance to aging suffers from low accuracies as in MegaFace, identifying the need for larger age variations possibly within identities or adjustment of algorithms in future testing.", "title": "" }, { "docid": "8a3e49797223800cb644fe2b819f9950", "text": "In this paper, we present machine learning approaches for characterizing and forecasting the short-term demand for on-demand ride-hailing services. We propose the spatio-temporal estimation of the demand that is a function of variable effects related to traffic, pricing and weather conditions. With respect to the methodology, a single decision tree, bootstrap-aggregated (bagged) decision trees, random forest, boosted decision trees, and artificial neural network for regression have been adapted and systematically compared using various statistics, e.g. R-square, Root Mean Square Error (RMSE), and slope. To better assess the quality of the models, they have been tested on a real case study using the data of DiDi Chuxing, the main on-demand ride-hailing service provider in China. In the current study, 199,584 time-slots describing the spatio-temporal ride-hailing demand has been extracted with an aggregated-time interval of 10 mins. All the methods are trained and validated on the basis of two independent samples from this dataset. The results revealed that boosted decision trees provide the best prediction accuracy (RMSE=16.41), while avoiding the risk of over-fitting, followed by artificial neural network (20.09), random forest (23.50), bagged decision trees (24.29) and single decision tree (33.55). 
", "title": "" }, { "docid": "050443f5d84369f942c3f611775d37ed", "text": "A variety of methods for computing factor scores can be found in the psychological literature. These methods grew out of a historic debate regarding the indeterminate nature of the common factor model. Unfortunately, most researchers are unaware of the indeterminacy issue and the problems associated with a number of the factor scoring procedures. This article reviews the history and nature of factor score indeterminacy. Novel computer programs for assessing the degree of indeterminacy in a given analysis, as well as for computing and evaluating different types of factor scores, are then presented and demonstrated using data from the Wechsler Intelligence Scale for Children-Third Edition. It is argued that factor score indeterminacy should be routinely assessed and reported as part of any exploratory factor analysis and that factor scores should be thoroughly evaluated before they are reported or used in subsequent statistical analyses.", "title": "" } ]
scidocsrr
7681eb74db675553642af04857196151
Innovation, openness & platform control
[ { "docid": "c7d629a83de44e17a134a785795e26d8", "text": "How can firms profitably give away free products? This paper provides a novel answer and articulates tradeoffs in a space of information product design. We introduce a formal model of two-sided network externalities based in textbook economics—a mix of Katz & Shapiro network effects, price discrimination, and product differentiation. Externality-based complements, however, exploit a different mechanism than either tying or lock-in even as they help to explain many recent strategies such as those of firms selling operating systems, Internet browsers, games, music, and video. The model presented here argues for three simple but useful results. First, even in the absence of competition, a firm can rationally invest in a product it intends to give away into perpetuity. Second, we identify distinct markets for content providers and end consumers and show that either can be a candidate for a free good. Third, product coupling across markets can increase consumer welfare even as it increases firm profits. The model also generates testable hypotheses on the size and direction of network effects while offering insights to regulators seeking to apply antitrust law to network markets. ACKNOWLEDGMENTS: We are grateful to participants of the 1999 Workshop on Information Systems and Economics, the 2000 Association for Computing Machinery SIG E-Commerce, the 2000 International Conference on Information Systems, the 2002 Stanford Institute for Theoretical Economics (SITE) workshop on Internet Economics, the 2003 Insitut D’Economie Industrielle second conference on “The Economics of the Software and Internet Industries,” as well as numerous participants at university seminars. We wish to thank Tom Noe for helpful observations on oligopoly markets, Lones Smith, Kai-Uwe Kuhn, and Jovan Grahovac for corrections and model generalizations, Jeff MacKie-Mason for valuable feedback on model design and bundling, and Hal Varian for helpful comments on firm strategy and model implications. Frank Fisher provided helpful advice on and knowledge of the Microsoft trial. Jean Tirole provided useful suggestions and examples, particularly in regard to credit card markets. Paul Resnick proposed the descriptive term “internetwork” externality to describe two-sided network externalities. Tom Eisenmann provided useful feedback and examples. We also thank Robert Gazzale, Moti Levi, and Craig Newmark for their many helpful observations. This research has been supported by NSF Career Award #IIS 9876233. For an earlier version of the paper that also addresses bundling and competition, please see “Information Complements, Substitutes, and Strategic Product Design,” November 2000, http://ssrn.com/abstract=249585.", "title": "" }, { "docid": "686045e2dae16aba16c26b8ccd499731", "text": "It has been argued that platform technology owners cocreate business value with other firms in their platform ecosystems by encouraging complementary invention and exploiting indirect network effects. In this study, we examine whether participation in an ecosystem partnership improves the business performance of small independent software vendors (ISVs) in the enterprise software industry and how appropriability mechanisms influence the benefits of partnership. 
By analyzing the partnering activities and performance indicators of a sample of 1,210 small ISVs over the period 1996–2004, we find that joining a major platform owner’s platform ecosystem is associated with an increase in sales and a greater likelihood of issuing an initial public offering (IPO). In addition, we show that these impacts are greater when ISVs have greater intellectual property rights or stronger downstream capabilities. This research highlights the value of interoperability between software products, and stresses that value cocreation and appropriation are not mutually exclusive strategies in interfirm collaboration.", "title": "" } ]
[ { "docid": "e451eacd16b0dda85c0f576554b26d15", "text": "The major challenge faced by the fifth generation (5G) mobile network is higher spectral efficiency and massive connectivity, i.e., the target spectrum efficiency is 3 times over 4G, and the target connection density is one million devices per square kilometer. These requirements are difficult to be satisfied with orthogonal multiple access (OMA) schemes. Non-orthogonal multiple access (NOMA) has thus been proposed as a promising candidate to address some of the challenges for 5G. In this paper, a comprehensive survey of different candidate NOMA schemes for 5G is presented, where the usage scenarios of 5G and the application requirements for NOMA are firstly discussed. A general framework of NOMA scheme is established and the features of typical NOMA schemes are analyzed and compared. We focus on the recent progress and challenge of NOMA in standardization of international telecommunication union (ITU), and 3rd generation partnership project (3GPP). In addition, prototype development and future research directions are also provided respectively.", "title": "" }, { "docid": "6a3fe7de176dcca7da54d927d8901e38", "text": "We demonstrate how two Novint Falcons, inexpensive commercially available haptic devices, can be modified to a create a reconfigurable five-degreeof-freedom (5-DOF) haptic device for less than $500 (including the two Falcons). The device is intended as an educational tool to allow a broader range of students to experience force and torque feedback, rather than the 3-DOF force feedback typical of inexpensive devices. We also explain how to implement a 5-DOF force/torque control system with gravity compensation.", "title": "" }, { "docid": "8ff6325fed2f8f3323833f6ac446eb3d", "text": "Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this `1-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we extend MKL to arbitrary norms. We devise new insights on the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary norms, that is `p-norms with p ≥ 1. This interleaved optimization is much faster than the commonly used wrapper approaches, as demonstrated on several data sets. A theoretical analysis and an experiment on controlled artificial data shed light on the appropriateness of sparse, non-sparse and `∞-norm MKL in various scenarios. Importantly, empirical applications of `p-norm MKL to three real-world problems from computational biology show that non-sparse MKL achieves accuracies that surpass the state-of-the-art. Data sets, source code to reproduce the experiments, implementations of the algorithms, and further information are available at http://doc.ml.tu-berlin.de/nonsparse_mkl/.", "title": "" }, { "docid": "a2c9c975788253957e6bbebc94eb5a4b", "text": "The implementation of Substrate Integrated Waveguide (SIW) structures in paper-based inkjet-printed technology is presented in this paper for the first time. SIW interconnects and components have been fabricated and tested on a multilayer paper substrate, which permits to implement low-cost and eco-friendly structures. 
A broadband and compact ridge substrate integrated slab waveguide covering the entire UWB frequency range is proposed and preliminarily verified. SIW structures appear particularly suitable for implementation on paper, due to the possibility of easily realizing multilayered topologies and conformal geometries.", "title": "" }, { "docid": "3baf11f31351e92c7ff56b066434ae2c", "text": "Unlike images which are represented in regular dense grids, 3D point clouds are irregular and unordered, hence applying convolution on them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied on point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points comprised of weight and density functions. With respect to a given point, the weight functions are learned with multi-layer perceptron networks and the density functions through kernel density estimation. A novel reformulation is proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in the 3D space. Besides, PointConv can also be used as deconvolution operators to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv are able to achieve state-of-the-art on challenging semantic segmentation benchmarks on 3D point clouds. Besides, our experiments converting CIFAR-10 into a point cloud showed that networks built on PointConv can match the performance of convolutional networks in 2D images of a similar structure.", "title": "" }, { "docid": "49a66c642e8804122e0200429de21c45", "text": "As a type of Ehlers-Danlos syndrome (EDS), vascular EDS (vEDS) is typified by a number of characteristic facial features (eg, large eyes, small chin, sunken cheeks, thin nose and lips, lobeless ears). However, vEDS does not typically display hypermobility of the large joints and skin hyperextensibility, which are features typical of the more common forms of EDS. Thus, colonic perforation or aneurysm rupture may be the first presentation of the disease. Because both complications are associated with a reduced life expectancy for individuals with this condition, an awareness of the clinical features of vEDS is important. Here, we describe the treatment of vEDS lacking the characteristic facial attributes in a 24-year-old healthy man who presented to the emergency room with abdominal pain. Enhanced computed tomography revealed diverticula and perforation in the sigmoid colon. The lesion of the sigmoid colon perforation was removed, and a Hartmann procedure was performed. During the surgery, control of bleeding was required because of vascular fragility. Subsequent molecular and genetic analysis was performed based on the suspected diagnosis of vEDS. These analyses revealed reduced type III collagen synthesis in cultured skin fibroblasts and identified a previously undocumented mutation in the gene for α1 type III collagen, confirming the diagnosis of vEDS. After eliciting a detailed medical profile, we learned his mother had a history of extensive bruising since childhood and idiopathic hematothorax. Both were prescribed oral celiprolol. 
One year after admission, the patient was free of recurrent perforation. This case illustrates that awareness of the clinical characteristics of vEDS and of the family history is important because of the high mortality from this condition even in young people. Importantly, genetic assays could help in determining the surgical procedure and offer benefits to relatives since this condition is inherited in an autosomal dominant manner.", "title": "" }, { "docid": "90bb7ab528877c922758b44b102bf4e8", "text": "Labeling training data is increasingly the largest bottleneck in deploying machine learning systems. We present Snorkel, a first-of-its-kind system that enables users to train state-of-the-art models without hand labeling any training data. Instead, users write labeling functions that express arbitrary heuristics, which can have unknown accuracies and correlations. Snorkel denoises their outputs without access to ground truth by incorporating the first end-to-end implementation of our recently proposed machine learning paradigm, data programming. We present a flexible interface layer for writing labeling functions based on our experience over the past year collaborating with companies, agencies, and research labs. In a user study, subject matter experts build models 2.8× faster and increase predictive performance an average 45.5% versus seven hours of hand labeling. We study the modeling tradeoffs in this new setting and propose an optimizer for automating tradeoff decisions that gives up to 1.8× speedup per pipeline execution. In two collaborations, with the U.S. Department of Veterans Affairs and the U.S. Food and Drug Administration, and on four open-source text and image data sets representative of other deployments, Snorkel provides 132% average improvements to predictive performance over prior heuristic approaches and comes within an average 3.60% of the predictive performance of large hand-curated training sets.", "title": "" }, { "docid": "3830c568e6b9b56bab1c971d2a99757c", "text": "Lagrangian theory provides a diverse set of tools for continuous motion analysis. Existing work shows the applicability of the Lagrangian method for video analysis in several aspects. In this paper we utilize the concept of Lagrangian measures to detect violent scenes. To this end, we propose a local feature based on the SIFT algorithm that incorporates appearance and Lagrangian-based motion models. We show that the temporal interval of the motion information used is a crucial aspect, and we study its influence on the classification performance. The proposed LaSIFT feature outperforms other state-of-the-art local features, in particular on uncontrolled realistic video data. We evaluate our algorithm with a bag-of-words approach. The experimental results show a significant improvement over the state-of-the-art on current violence detection datasets, i.e., Crowd Violence and Hockey Fight.", "title": "" }, { "docid": "90fe763855ca6c4fabe4f9d042d5c61a", "text": "While learning models of intuitive physics is an increasingly active area of research, current approaches still fall short of natural intelligences in one important regard: they require external supervision, such as explicit access to physical states, at training and sometimes even at test time. Some authors have relaxed such requirements by supplementing the model with a handcrafted physical simulator. Still, the resulting methods are unable to automatically learn new complex environments and to understand physical interactions within them. 
In this work, we demonstrated for the first time learning such predictors directly from raw visual observations and without relying on simulators. We do so in two steps: first, we learn to track mechanically-salient objects in videos using causality and equivariance, two unsupervised learning principles that do not require auto-encoding. Second, we demonstrate that the extracted positions are sufficient to successfully train visual motion predictors that can take the underlying environment into account. We validate our predictors on synthetic datasets; then, we introduce a new dataset, ROLL4REAL, consisting of real objects rolling on complex terrains (pool table, elliptical bowl, and random height-field). We show that in all such cases it is possible to learn reliable extrapolators of the object trajectories from raw videos alone, without any form of external supervision and with no more prior knowledge than the choice of a convolutional neural network architecture.", "title": "" }, { "docid": "45a98a82d462d8b12445cbe38f20849d", "text": "Proliferative verrucous leukoplakia (PVL) is an aggressive form of oral leukoplakia that is persistent, often multifocal, and refractory to treatment with a high risk of recurrence and malignant transformation. This article describes the clinical aspects and histologic features of a case that demonstrated the typical behavior pattern in a long-standing, persistent lesion of PVL of the mandibular gingiva and that ultimately developed into squamous cell carcinoma. Prognosis is poor for this seemingly harmless-appearing white lesion of the oral mucosa.", "title": "" }, { "docid": "1f7f0b82bf5822ee51313edfd1cb1593", "text": "With the promise of meeting future capacity demands, 3-D massive-MIMO/full dimension multiple-input-multiple-output (FD-MIMO) systems have gained much interest in recent years. Apart from the huge spectral efficiency gain, 3-D massive-MIMO/FD-MIMO systems can also lead to significant reduction of latency, simplified multiple access layer, and robustness to interference. However, in order to completely extract the benefits of the system, accurate channel state information is critical. In this paper, a channel estimation method based on direction of arrival (DoA) estimation is presented for 3-D millimeter wave massive-MIMO orthogonal frequency division multiplexing (OFDM) systems. To be specific, the DoA is estimated using estimation of signal parameter via rotational invariance technique method, and the root mean square error of the DoA estimation is analytically characterized for the corresponding MIMO-OFDM system. An ergodic capacity analysis of the system in the presence of DoA estimation error is also conducted, and an optimum power allocation algorithm is derived. Furthermore, it is shown that the DoA-based channel estimation achieves a better performance than the traditional linear minimum mean squared error estimation in terms of ergodic throughput and minimum chordal distance between the subspaces of the downlink precoders obtained from the underlying channel and the estimated channel.", "title": "" }, { "docid": "a239e75cb06355884f65f041e215b902", "text": "BACKGROUND\nNecrotizing enterocolitis (NEC) and nosocomial sepsis are associated with increased morbidity and mortality in preterm infants. 
Through prevention of bacterial migration across the mucosa, competitive exclusion of pathogenic bacteria, and enhancing the immune responses of the host, prophylactic enteral probiotics (live microbial supplements) may play a role in reducing NEC and associated morbidity.\n\n\nOBJECTIVES\nTo compare the efficacy and safety of prophylactic enteral probiotics administration versus placebo or no treatment in the prevention of severe NEC and/or sepsis in preterm infants.\n\n\nSEARCH STRATEGY\nFor this update, searches were made of MEDLINE (1966 to October 2010), EMBASE (1980 to October 2010), the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, Issue 2, 2010), and abstracts of annual meetings of the Society for Pediatric Research (1995 to 2010).\n\n\nSELECTION CRITERIA\nOnly randomized or quasi-randomized controlled trials that enrolled preterm infants < 37 weeks gestational age and/or < 2500 g birth weight were considered. Trials were included if they involved enteral administration of any live microbial supplement (probiotics) and measured at least one prespecified clinical outcome.\n\n\nDATA COLLECTION AND ANALYSIS\nStandard methods of the Cochrane Collaboration and its Neonatal Group were used to assess the methodologic quality of the trials, data collection and analysis.\n\n\nMAIN RESULTS\nSixteen eligible trials randomizing 2842 infants were included. Included trials were highly variable with regard to enrollment criteria (i.e. birth weight and gestational age), baseline risk of NEC in the control groups, timing, dose, formulation of the probiotics, and feeding regimens. Data regarding extremely low birth weight infants (ELBW) could not be extrapolated. In a meta-analysis of trial data, enteral probiotics supplementation significantly reduced the incidence of severe NEC (stage II or more) (typical RR 0.35, 95% CI 0.24 to 0.52) and mortality (typical RR 0.40, 95% CI 0.27 to 0.60). There was no evidence of significant reduction of nosocomial sepsis (typical RR 0.90, 95% CI 0.76 to 1.07). The included trials reported no systemic infection with the probiotics supplemental organism. The statistical test of heterogeneity for NEC, mortality and sepsis was insignificant.\n\n\nAUTHORS' CONCLUSIONS\nEnteral supplementation of probiotics prevents severe NEC and all cause mortality in preterm infants. Our updated review of available evidence supports a change in practice. More studies are needed to assess efficacy in ELBW infants and assess the most effective formulation and dose to be utilized.", "title": "" }, { "docid": "11c903f0dea5895a4f14c5625aa1554b", "text": "Contemporary mobile devices are the result of an evolution process, during which computational and networking capabilities have been continuously pushed to keep pace with the constantly growing workload requirements. This has allowed devices such as smartphones, tablets, and personal digital assistants to perform increasingly complex tasks, up to the point of efficiently replacing traditional options such as desktop computers and notebooks. However, due to their portability and size, these devices are more prone to theft, to become compromised, or to be exploited for attacks and other malicious activity. The need for investigation of the aforementioned incidents resulted in the creation of the Mobile Forensics (MF) discipline. 
MF, a sub-domain of digital forensics, is specialized in extracting and processing evidence from mobile devices in such a way that attacking entities and actions are identified and traced. Beyond its primary research interest on evidence acquisition from mobile devices, MF has recently expanded its scope to encompass the organized and advanced evidence representation and analysis of future malicious entity behavior. Nonetheless, data acquisition still remains its main focus. While the field is under continuous research activity, new concepts such as the involvement of cloud computing in the MF ecosystem and the evolution of enterprise mobile solutions—particularly mobile device management and bring your own device—bring new opportunities and issues to the discipline. The current article presents the research conducted within the MF ecosystem during the last 7 years, identifies the gaps, highlights the differences from past research directions, and addresses challenges and open issues in the field.", "title": "" }, { "docid": "6b3abd92478a641d992ed4f4f08f52d5", "text": "In this article, we consider the robust estimation of a location parameter using M-estimators. We propose here to couple this estimation with the robust scale estimate proposed in [Dahyot and Wilson, 2006]. The resulting procedure is then completely unsupervised. It is applied to camera motion estimation and moving object detection in videos. Experimental results on different video materials show the adaptability and the accuracy of this new robust approach.", "title": "" }, { "docid": "860894abbbafdcb71178cb9ddd173970", "text": "Twitter is useful in a disaster situation for communication, announcements, requests for rescue, and so on. On the other hand, it causes a negative by-product: the spreading of rumors. This paper describes how rumors spread after an earthquake disaster, and discusses how we can deal with them. We first investigated actual instances of rumors after the disaster, and then attempted to identify the characteristics of those rumors. Based on this investigation, we developed a system that detects rumor candidates on Twitter, and then evaluated it. The experimental results show that the proposed algorithm can find rumors with acceptable accuracy.", "title": "" }, { "docid": "7ddf5c53b9ee56cb92c67253f495aafd", "text": "Two-way arrays or matrices are often not enough to represent all the information in the data and standard two-way analysis techniques commonly applied on matrices may fail to find the underlying structures in multi-modal datasets. Multiway data analysis has recently become popular as an exploratory analysis tool in discovering the structures in higher-order datasets, where data have more than two modes. We provide a review of significant contributions in the literature on multiway models, algorithms as well as their applications in diverse disciplines including chemometrics, neuroscience, social network analysis, text mining and computer vision.", "title": "" }, { "docid": "b2db6db73699ecc66f33e2f277cf055b", "text": "In this paper, we develop a new approach to visual object tracking based on spatially supervised recurrent convolutional neural networks. Our recurrent convolutional network exploits the history of locations as well as the distinctive visual features learned by the deep neural networks. 
Inspired by recent bounding box regression methods for object detection, we study the regression capability of Long Short-Term Memory (LSTM) in the temporal domain, and propose to concatenate high-level visual features produced by convolutional networks with region information. In contrast to existing deep learning based trackers that use binary classification for region candidates, we use regression for direct prediction of the tracking locations both at the convolutional layer and at the recurrent unit. Our experimental results on challenging benchmark video tracking datasets show that our tracker is competitive with state-of-the-art approaches while maintaining low computational cost.", "title": "" }, { "docid": "f29e5dae294434aa54ad2419e457b1eb", "text": "Person re-identification aims to match images of the same person across disjoint camera views, which is a challenging problem in video surveillance. The major challenge of this task lies in how to preserve the similarity of the same person against large variations caused by complex backgrounds, mutual occlusions and different illuminations, while discriminating the different individuals. In this paper, we present a novel deep ranking model with feature learning and fusion by learning a large adaptive margin between the intra-class distance and inter-class distance to solve the person re-identification problem. Specifically, we organize the training images into a batch of pairwise samples. Treating these pairwise samples as inputs, we build a novel part-based deep convolutional neural network (CNN) to learn the layered feature representations by preserving a large adaptive margin. As a result, the final learned model can effectively find out the matched target to the anchor image among a number of candidates in the gallery image set by learning discriminative and stable feature representations. Overcoming the weaknesses of conventional fixed-margin loss functions, our adaptive margin loss function is more appropriate for the dynamic feature space. On four benchmark datasets, PRID2011, Market1501, CUHK01 and 3DPeS, we extensively conduct comparative evaluations to demonstrate the advantages of the proposed method over the state-of-the-art approaches in person re-identification.", "title": "" }, { "docid": "7838934c12f00f987f6999460fc38ca1", "text": "The Internet has fostered an unconventional and powerful style of collaboration: \"wiki\" web sites, where every visitor has the power to become an editor. In this paper we investigate the dynamics of Wikipedia, a prominent, thriving wiki. We make three contributions. First, we introduce a new exploratory data analysis tool, the history flow visualization, which is effective in revealing patterns within the wiki context and which we believe will be useful in other collaborative situations as well. Second, we discuss several collaboration patterns highlighted by this visualization tool and corroborate them with statistical analysis. Third, we discuss the implications of these patterns for the design and governance of online collaborative social spaces. We focus on the relevance of authorship, the value of community surveillance in ameliorating antisocial behavior, and how authors with competing perspectives negotiate their differences.", "title": "" }, { "docid": "9cb703cf5394a77bd15c0ad356928f04", "text": "Studies were undertaken to evaluate locally available subtrates for use in a culture medium for Phytophthora infestans (Mont.) 
de Bary employing a protocol similar to that used for the preparation of rye A agar. Test media preparations were assessed for growth, sporulation, oospore formation, and long-term storage of P. infestans. Media prepared from grains and fresh produce available in Thailand and Asian countries such as black bean (BB), red kidney bean (RKB), black sesame (BSS), sunflower (SFW) and sweet corn supported growth and sporulation of representative isolates compared with rye A, V8 and oat meal media. Oospores were successfully formed on BB and RKB media supplemented with β-sitosterol. The BB, RKB, BSS and SFW media maintained viable fungal cultures with sporulation ability for 8 months, similar to the rye A medium. Three percent and 33% of 135 isolates failed to grow on V8 and SFW media, respectively.", "title": "" } ]
scidocsrr
f37758f413116485b19c3bd274d4d426
Learning Beyond Human Expertise with Generative Models for Dental Restorations
[ { "docid": "7aedb5ffa83448c21c33e0573a9a41a2", "text": "Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network (S-GAN). Our S-GAN has two components: the StructureGAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our S-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.", "title": "" } ]
[ { "docid": "4abceedb1f6c735a8bc91bc811ce4438", "text": "The study of school bullying has recently assumed an international dimension, but is faced with difficulties in finding terms in different languages to correspond to the English word bullying. To investigate the meanings given to various terms, a set of 25 stick-figure cartoons was devised, covering a range of social situations between peers. These cartoons were shown to samples of 8- and 14-year-old pupils (N = 1,245; n = 604 at 8 years, n = 641 at 14 years) in schools in 14 different countries, who judged whether various native terms cognate to bullying, applied to them. Terms from 10 Indo-European languages and three Asian languages were sampled. Multidimensional scaling showed that 8-year-olds primarily discriminated nonaggressive and aggressive cartoon situations; however, 14-year-olds discriminated fighting from physical bullying, and also discriminated verbal bullying and social exclusion. Gender differences were less appreciable than age differences. Based on the 14-year-old data, profiles of 67 words were then constructed across the five major cartoon clusters. The main types of terms used fell into six groups: bullying (of all kinds), verbal plus physical bullying, solely verbal bullying, social exclusion, solely physical aggression, and mainly physical aggression. The findings are discussed in relation to developmental trends in how children understand bullying, the inferences that can be made from cross-national studies, and the design of such studies.", "title": "" }, { "docid": "b861ea3b6ea6d29e1c225609db069fd5", "text": "A single probe feeding stacked microstrip antenna is presented to obtain dual-band circularly polarized (CP) characteristics using double layers of truncated square patches. The antenna operates at both the L1 and L2 frequencies of 1575 and 1227 MHz for the global positioning system (GPS). With the optimized design, the measured axial ratio (AR) bandwidths with the centre frequencies of L1 and L2 are both greater than 50 MHz, while the impedance characteristics within AR bandwidth satisfy the requirement of VSWR less than 2. At L1 and L2 frequencies, the AR measured is 0.7 dB and 0.3 dB, respectively.", "title": "" }, { "docid": "df4d0112eecfcc5c6c57784d1a0d010d", "text": "2 The design and measured results are reported on three prototype DC-DC converters which successfully demonstrate the design techniques of this thesis and the low-power enabling capabilities of DC-DC converters in portable applications. Voltage scaling for low-power throughput-constrained digital signal processing is reviewed and is shown to provide up to an order of magnitude power reduction compared to existing 3.3 V standards when enabled by high-efficiency low-voltage DC-DC conversion. A new ultra-low-swing I/O strategy, enabled by an ultra-low-voltage and low-power DCDC converter, is used to reduce the power of high-speed inter-chip communication by greater than two orders of magnitude. Dynamic voltage scaling is proposed to dynamically trade general-purpose processor throughput for energy-efficiency, yielding up to an order of magnitude improvement in the average energy per operation of the processor. This is made possible by a new class of voltage converter, called the dynamic DC-DC converter, whose primary performance objectives and design considerations are introduced in this thesis. Robert W. 
Brodersen, Chairman of Committee Table of", "title": "" }, { "docid": "01ccb35abf3eed71191dc8638e58f257", "text": "In this paper we describe several fault attacks on the Advanced Encryption Standard (AES). First, using optical fault induction attacks as recently publicly presented by Skorobogatov and Anderson [SA], we present an implementation independent fault attack on AES. This attack is able to determine the complete 128-bit secret key of a sealed tamper-proof smartcard by generating 128 faulty cipher texts. Second, we present several implementationdependent fault attacks on AES. These attacks rely on the observation that due to the AES's known timing analysis vulnerability (as pointed out by Koeune and Quisquater [KQ]), any implementation of the AES must ensure a data independent timing behavior for the so called AES's xtime operation. We present fault attacks on AES based on various timing analysis resistant implementations of the xtime-operation. Our strongest attack in this direction uses a very liberal fault model and requires only 256 faulty encryptions to determine a 128-bit key.", "title": "" }, { "docid": "8c12dfc5fa23d5eabb8ae29101cb6161", "text": "Purpose – Using Internet Archive’s Wayback Machine, higher education web sites were retrospectively analyzed to study the effects that technological advances in web design have had on accessibility for persons with disabilities. Design/methodology/approach – A convenience sample of higher education web sites was studied for years 1997-2002. The homepage and pages 1-level down were evaluated. Web accessibility barrier (WAB) and complexity scores were calculated. Repeated measures analysis of variance (ANOVA) was used to determine trends in the data and Pearson’s correlation (r) was computed to evaluate the relationship between accessibility and complexity. Findings – Higher education web sites become progressively inaccessible as complexity increases. Research limitations/implications – The WAB score is a proxy of web accessibility. While the WAB score can give an indication of the accessibility of a web site, it cannot differentiate between barriers posing minimal limitations and those posing absolute inaccessibility. A future study is planned to have users with disabilities examine web sites with differing WAB scores to correlate how well the WAB score is gauging accessibility of web sites from the perspective of the user. Practical implications – Findings from studies such as this can lead to improved guidelines, policies, and overall awareness of web accessibility for persons with disabilities. Originality/value – There are limited studies that have taken a longitudinal look at the accessibility of web sites and explored the reasons for the trend of decreasing accessibility.", "title": "" }, { "docid": "b765a75438d9abd381038e1b84128004", "text": "Implementing a complex spelling program using a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) remains a challenge due to difficulties in stimulus presentation and target identification. This study aims to explore the feasibility of mixed frequency and phase coding in building a high-speed SSVEP speller with a computer monitor. A frequency and phase approximation approach was developed to eliminate the limitation of the number of targets caused by the monitor refresh rate, resulting in a speller comprising 32 flickers specified by eight frequencies (8-15 Hz with a 1 Hz interval) and four phases (0°, 90°, 180°, and 270°). 
A multi-channel approach incorporating Canonical Correlation Analysis (CCA) and SSVEP training data was proposed for target identification. In a simulated online experiment, at a spelling rate of 40 characters per minute, the system obtained an averaged information transfer rate (ITR) of 166.91 bits/min across 13 subjects with a maximum individual ITR of 192.26 bits/min, the highest ITR ever reported in electroencephalogram (EEG)-based BCIs. The results of this study demonstrate great potential of a high-speed SSVEP-based BCI in real-life applications.", "title": "" }, { "docid": "41cf1b873d69f15cbc5fa25e849daa61", "text": "Methods for controlling the bias/variance tradeoff typically assume that overfitting or overtraining is a global phenomenon. For multi-layer perceptron (MLP) neural networks, global parameters such as the training time (e.g. based on validation tests), network size, or the amount of weight decay are commonly used to control the bias/variance tradeoff. However, the degree of overfitting can vary significantly throughout the input space of the model. We show that overselection of the degrees of freedom for an MLP trained with backpropagation can improve the approximation in regions of underfitting, while not significantly overfitting in other regions. This can be a significant advantage over other models. Furthermore, we show that “better” learning algorithms such as conjugate gradient can in fact lead to worse generalization, because they can be more prone to creating varying degrees of overfitting in different regions of the input space. While experimental results cannot cover all practical situations, our results do help to explain common behavior that does not agree with theoretical expectations. Our results suggest one important reason for the relative success of MLPs, bring into question common beliefs about neural network training regarding training algorithms, overfitting, and optimal network size, suggest alternate guidelines for practical use (in terms of the training algorithm and network size selection), and help to direct future work (e.g. regarding the importance of the MLP/BP training bias, the possibility of worse performance for “better” training algorithms, local “smoothness” criteria, and further investigation of localized overfitting).", "title": "" }, { "docid": "6c6206e330f0d9b7f9ed68f8af78b117", "text": "This paper deals with the design, manufacture and test of a high efficiency power amplifier for L-band space borne applications. The circuit operates with a single 36 mm gate periphery GaN HEMT power bar die allowing both improved integration and performance as compared with standard HPA design in a similar RF power range. A huge effort dedicated to the device's characterization and modeling has eased the circuit optimization leaning on the multi-harmonics impedances synthesis. Test results demonstrate performance up to 140 W RF output power with an associated 60% PAE for a limited 3.9 dB gain compression under 50 V supply voltage using a single GaN power bar.", "title": "" }, { "docid": "751b853f780fc8047ff73ce646b68cd6", "text": "This paper builds on previous research in the light field area of image-based rendering. We present a new reconstruction filter that significantly reduces the “ghosting” artifacts seen in undersampled light fields, while preserving important high-fidelity features such as sharp object boundaries and view-dependent reflectance. 
By improving the rendering quality achievable from undersampled light fields, our method allows acceptable images to be generated from smaller image sets. We present both frequency and spatial domain justifications for our techniques. We also present a practical framework for implementing the reconstruction filter in multiple rendering passes. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation ― Viewing algorithms; I.3.6 [Computer Graphics]: Methodologies and Techniques ― Graphics data structures and data types; I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture ― Sampling", "title": "" }, { "docid": "2f389011aad9236f174b15e37dc73cd3", "text": "A new efficient optimization method, called ‘Teaching–Learning-Based Optimization (TLBO)’, is proposed in this paper for the optimization of mechanical design problems. This method works on the effect of influence of a teacher on learners. Like other nature-inspired algorithms, TLBO is also a population-based method and uses a population of solutions to proceed to the global solution. The population is considered as a group of learners or a class of learners. The process of TLBO is divided into two parts: the first part consists of the ‘Teacher Phase’ and the second part consists of the ‘Learner Phase’. ‘Teacher Phase’ means learning from the teacher and ‘Learner Phase’ means learning by the interaction between learners. The basic philosophy of the TLBO method is explained in detail. To check the effectiveness of the method it is tested on five different constrained benchmark test functions with different characteristics, four different benchmark mechanical design problems and six mechanical design optimization problems which have real world applications. The effectiveness of the TLBO method is compared with the other populationbased optimization algorithms based on the best solution, average solution, convergence rate and computational effort. Results show that TLBO is more effective and efficient than the other optimization methods for the mechanical design optimization problems considered. This novel optimization method can be easily extended to other engineering design optimization problems. © 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e8755da242b6252eb516aec6e74d42c0", "text": "Cloud data provenance, or \"what has happened to my data in the cloud\", is a critical data security component which addresses pressing data accountability and data governance issues in cloud computing systems. In this paper, we present Progger (Provenance Logger), a kernel-space logger which potentially empowers all cloud stakeholders to trace their data. Logging from the kernel space empowers security analysts to collect provenance from the lowest possible atomic data actions, and enables several higher-level tools to be built for effective end-to-end tracking of data provenance. Within the last few years, there has been an increasing number of proposed kernel space provenance tools but they faced several critical data security and integrity problems. Some of these prior tools' limitations include (1) the inability to provide log tamper-evidence and prevention of fake/manual entries, (2) accurate and granular timestamp synchronisation across several machines, (3) log space requirements and growth, and (4) the efficient logging of root usage of the system. Progger has resolved all these critical issues, and as such, provides high assurance of data security and data activity audit. 
With this in mind, the paper will discuss these elements of high-assurance cloud data provenance, describe the design of Progger and its efficiency, and present compelling results which pave the way for Progger being a foundation tool used for data activity tracking across all cloud systems.", "title": "" }, { "docid": "a490c396ff6d47e11f35d2f08776b7fc", "text": "The present study examined the nature of social support exchanged within an online HIV/AIDS support group. Content analysis was conducted with reference to five types of social support (information support, tangible assistance, esteem support, network support, and emotional support) on 85 threads (1,138 messages). Our analysis revealed that many of the messages offered informational and emotional support, followed by esteem support and network support, with tangible assistance the least frequently offered. Results suggest that this online support group is a popular forum through which individuals living with HIV/AIDS can offer social support. Our findings have implications for health care professionals who support individuals living with HIV/AIDS.", "title": "" }, { "docid": "ecab65461852051278a59482ad49c225", "text": "We show that a set of gates that consists of all one-bit quantum gates (U(2)) and the two-bit exclusive-or gate (that maps Boolean values (x, y) to (x, x⊕y)) is universal in the sense that all unitary operations on arbitrarily many bits n (U(2^n)) can be expressed as compositions of these gates. We investigate the number of the above gates required to implement other gates, such as generalized Deutsch-Toffoli gates, that apply a specific U(2) transformation to one input bit if and only if the logical AND of all remaining input bits is satisfied. These gates play a central role in many proposed constructions of quantum computational networks. We derive upper and lower bounds on the exact number of elementary gates required to build up a variety of two- and three-bit quantum gates, the asymptotic number required for n-bit Deutsch-Toffoli gates, and make some observations about the number required for arbitrary n-bit unitary operations.", "title": "" }, { "docid": "3f2312e385fc1c9aafc6f9f08e2e2d4f", "text": "Entity relation detection is a form of information extraction that finds predefined relations between pairs of entities in text. This paper describes a relation detection approach that combines clues from different levels of syntactic processing using kernel methods. Information from three different levels of processing is considered: tokenization, sentence parsing and deep dependency analysis. Each source of information is represented by kernel functions. Then composite kernels are developed to integrate and extend individual kernels so that processing errors occurring at one level can be overcome by information from other levels. We present an evaluation of these methods on the 2004 ACE relation detection task, using Support Vector Machines, and show that each level of syntactic processing contributes useful information for this task. When evaluated on the official test data, our approach produced very competitive ACE value scores. We also compare the SVM with KNN on different kernels.", "title": "" }, { "docid": "191a81ae4b60c48a01deb8d64d5c7f42", "text": "This paper elaborates a study conducted to propose a folktale conceptual model based on folktale classification systems of type, motif, and function. 
Globally, three distinct folktale classification systems exist and have been used for many years; nonetheless, they have not actually converged or achieved agreement on classification issues. The study aims to develop a conceptual model that visually depicts the combination and connection of the three folktale classification systems. The method adopted for developing the conceptual model is pictorial representation. It is hoped that the conceptual model developed would be an early platform to subsequently catalyze a more robust and cohesive folktale classification system.", "title": "" }, { "docid": "c91df82c01cbf7d1f2666c43e96a5787", "text": "The past few years have witnessed an explosion in the availability of data from multiple sources and modalities. For example, millions of cameras have been installed in buildings, streets, airports and cities around the world. This has generated extraordinary advances on how to acquire, compress, store, transmit and process massive amounts of complex high-dimensional data. Many of these advances have relied on the observation that, even though these data sets are high-dimensional, their intrinsic dimension is often much smaller than the dimension of the ambient space. In computer vision, for example, the number of pixels in an image can be rather large, yet most computer vision models use only a few parameters to describe the appearance, geometry and dynamics of a scene. This has motivated the development of a number of techniques for finding a low-dimensional representation of a high-dimensional data set. Conventional techniques, such as Principal Component Analysis (PCA), assume that the data is drawn from a single low-dimensional subspace of a high-dimensional space. Such approaches have found widespread applications in many fields, e.g., pattern recognition, data compression, image processing, bioinformatics, etc. In practice, however, the data points could be drawn from multiple subspaces and the membership of the data points to the subspaces might be unknown. For instance, a video sequence could contain several moving objects and different subspaces might be needed to describe the motion of different objects in the scene. Therefore, there is a need to simultaneously cluster the data into multiple subspaces and find a low-dimensional subspace fitting each group of points. This problem, known as subspace clustering, has found numerous applications in computer vision (e.g., image segmentation [1], motion segmentation [2] and face clustering [3]), image processing (e.g., image representation and compression [4]) and systems theory (e.g., hybrid system identification [5]). A number of approaches to subspace clustering have been proposed in the past two decades. A review of methods from the data mining community can be found in [6]. This article will present methods from the machine learning and computer vision communities, including algebraic methods [7, 8, 9, 10], iterative methods [11, 12, 13, 14, 15], statistical methods [16, 17, 18, 19, 20], and spectral clustering-based methods [7, 21, 22, 23, 24, 25, 26, 27]. We review these methods, discuss their advantages and disadvantages, and evaluate their performance on the motion segmentation and face clustering problems.", "title": "" }, { "docid": "3155879d5264ad723de6051075d47ee2", "text": "We have shown that there is a difference between individuals in their tendency to deposit DNA on an item when it is touched. 
While a good DNA shedder may leave behind a full DNA profile immediately after hand washing, poor DNA shedders may only do so when their hands have not been washed for a period of 6h. We have also demonstrated that transfer of DNA from one individual (A) to another (B) and subsequently to an object is possible under specific laboratory conditions using the AMPFISTR SGM Plus multiplex at both 28 and 34 PCR cycles. This is a form of secondary transfer. If a 30 min or 1h delay was introduced before contact of individual B with the object then at 34 cycles a mixture of profiles from both individuals was recovered. We have also determined that the quantity and quality of DNA profiles recovered is dependent upon the particular individuals involved in the transfer process. The findings reported here are preliminary and further investigations are underway in order to further add to understanding of the issues of DNA transfer and persistence.", "title": "" }, { "docid": "5c2f115e0159d15a87904e52879c1abf", "text": "Current approaches for visual--inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at high rate, hence, leading to the fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated inertial measurement unit model can be seamlessly integrated into a visual--inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modeling effort leads to an accurate state estimation in real time, outperforming state-of-the-art approaches.", "title": "" }, { "docid": "dcd2bb092e7d4325a64d8f3b9d729f94", "text": "Uninterruptible power supplies (UPS) are widely used to provide reliable and high-quality power to critical loads in all grid conditions. This paper proposes a nonisolated online UPS system. The proposed system consists of bridgeless PFC boost rectifier, battery charger/discharger, and an inverter. A new battery charger/discharger has been implemented which ensures the bidirectional flow of power between dc link and battery bank, reducing the battery bank voltage to only 24V, and regulates the dc-link voltage during the battery power mode. Operating batteries in parallel improves the battery performance and resolve the problems related to conventional battery banks that arrange batteries in series. 
A new control method, integrating slide mode and proportional-resonant control, for the inverter has been proposed which regulates the output voltage for both linear and nonlinear loads. The controller exhibits excellent performance during transients and step changes in load. The operating principle and experimental results of 1-kVA prototype have been presented for validation of the proposed system.", "title": "" }, { "docid": "3509f90848c45ad34ebbd30b9d357c29", "text": "Explaining underlying causes or effects about events is a challenging but valuable task. We define a novel problem of generating explanations of a time series event by (1) searching cause and effect relationships of the time series with textual data and (2) constructing a connecting chain between them to generate an explanation. To detect causal features from text, we propose a novel method based on the Granger causality of time series between features extracted from text such as N-grams, topics, sentiments, and their composition. The generation of the sequence of causal entities requires a commonsense causative knowledge base with efficient reasoning. To ensure good interpretability and appropriate lexical usage we combine symbolic and neural representations, using a neural reasoning algorithm trained on commonsense causal tuples to predict the next cause step. Our quantitative and human analysis show empirical evidence that our method successfully extracts meaningful causality relationships between time series with textual features and generates appropriate explanation between them.", "title": "" } ]
scidocsrr
6f75cbc55edf5728ea099300c7dedca0
Summarization of Egocentric Videos: A Comprehensive Survey
[ { "docid": "0ff159433ed8958109ba8006822a2d67", "text": "In this paper we present VideoSET, a method for Video Summary Evaluation through Text that can evaluate how well a video summary is able to retain the semantic information contained in its original video. We observe that semantics is most easily expressed in words, and develop a text-based approach for the evaluation. Given a video summary, a text representation of the video summary is first generated, and an NLP-based metric is then used to measure its semantic distance to ground-truth text summaries written by humans. We show that our technique has higher agreement with human judgment than pixel-based distance metrics. We also release text annotations and ground-truth text summaries for a number of publicly available video datasets, for use by the computer vision community.", "title": "" }, { "docid": "c2f1750b668ec7acdd53249773081927", "text": "Video indexing and retrieval have a wide spectrum of promising applications, motivating the interest of researchers worldwide. This paper offers a tutorial and an overview of the landscape of general strategies in visual content-based video indexing and retrieval, focusing on methods for video structure analysis, including shot boundary detection, key frame extraction and scene segmentation, extraction of features including static key frame features, object features and motion features, video data mining, video annotation, video retrieval including query interfaces, similarity measure and relevance feedback, and video browsing. Finally, we analyze future research directions.", "title": "" } ]
[ { "docid": "79a16052e5e6a44ca6f9fef8ebac3c2d", "text": "Plants are among the earth's most useful and beautiful products of nature. Plants have been crucial to mankind's survival. The urgent need is that many plants are at the risk of extinction. About 50% of ayurvedic medicines are prepared using plant leaves and many of these plant species belong to the endanger group. So it is indispensable to set up a database for plant protection. We believe that the first step is to teach a computer how to classify plants. Leaf /plant identification has been a challenge for many researchers. Several researchers have proposed various techniques. In this paper we have proposed a novel framework for recognizing and identifying plants using shape, vein, color, texture features which are combined with Zernike movements. Radial basis probabilistic neural network (RBPNN) has been used as a classifier. To train RBPNN we use a dual stage training algorithm which significantly enhances the performance of the classifier. Simulation results on the Flavia leaf dataset indicates that the proposed method for leaf recognition yields an accuracy rate of 93.82%", "title": "" }, { "docid": "ef62b0e14f835a36c3157c1ae0f858e5", "text": "Algorithms based on Convolutional Neural Network (CNN) have recently been applied to object detection applications, greatly improving their performance. However, many devices intended for these algorithms have limited computation resources and strict power consumption constraints, and are not suitable for algorithms designed for GPU workstations. This paper presents a novel method to optimise CNN-based object detection algorithms targeting embedded FPGA platforms. Given parameterised CNN hardware modules, an optimisation flow takes network architectures and resource constraints as input, and tunes hardware parameters with algorithm-specific information to explore the design space and achieve high performance. The evaluation shows that our design model accuracy is above 85% and, with optimised configuration, our design can achieve 49.6 times speed-up compared with software implementation.", "title": "" }, { "docid": "b70032a5ca8382ac6853535b499f4937", "text": "Centroid and spread are commonly used approaches in ranking fuzzy numbers. Some experts rank fuzzy numbers using centroid or spread alone while others tend to integrate them together. Although a lot of methods for ranking fuzzy numbers that are related to both approaches have been presented, there are still limitations whereby the ranking obtained is inconsistent with human intuition. This paper proposes a novel method for ranking fuzzy numbers that integrates the centroid point and the spread approaches and overcomes the limitations and weaknesses of most existing methods. Proves and justifications with regard to the proposed ranking method are also presented. 5", "title": "" }, { "docid": "f8082d18f73bee4938ab81633ff02391", "text": "Against the background of Moreno’s “cognitive-affective theory of learning with media” (CATLM) (Moreno, 2006), three papers on cognitive and affective processes in learning with multimedia are discussed in this commentary. The papers provide valuable insights in how cognitive processing and learning results can be affected by constructs such as “situational interest”, “positive emotions”, or “confusion”, and they suggest questions for further research in this field. 2013 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "f45b7caf3c599a6de835330c39599570", "text": "Describes an automated method to locate and outline blood vessels in images of the ocular fundus. Such a tool should prove useful to eye care specialists for purposes of patient screening, treatment evaluation, and clinical study. The authors' method differs from previously known methods in that it uses local and global vessel features cooperatively to segment the vessel network. The authors evaluate their method using hand-labeled ground truth segmentations of 20 images. A plot of the operating characteristic shows that the authors' method reduces false positives by as much as 15 times over basic thresholding of a matched filter response (MFR), at up to a 75% true positive rate. For a baseline, they also compared the ground truth against a second hand-labeling, yielding a 90% true positive and a 4% false positive detection rate, on average. These numbers suggest there is still room for a 15% true positive rate improvement, with the same false positive rate, over the authors' method. They are making all their images and hand labelings publicly available for interested researchers to use in evaluating related methods.", "title": "" }, { "docid": "ff71838a3f8f44e30dc69ed2f9371bfc", "text": "The idea that video games or computer-based applications can improve cognitive function has led to a proliferation of programs claiming to \"train the brain.\" However, there is often little scientific basis in the development of commercial training programs, and many research-based programs yield inconsistent or weak results. In this study, we sought to better understand the nature of cognitive abilities tapped by casual video games and thus reflect on their potential as a training tool. A moderately large sample of participants (n=209) played 20 web-based casual games and performed a battery of cognitive tasks. We used cognitive task analysis and multivariate statistical techniques to characterize the relationships between performance metrics. We validated the cognitive abilities measured in the task battery, examined a task analysis-based categorization of the casual games, and then characterized the relationship between game and task performance. We found that games categorized to tap working memory and reasoning were robustly related to performance on working memory and fluid intelligence tasks, with fluid intelligence best predicting scores on working memory and reasoning games. We discuss these results in the context of overlap in cognitive processes engaged by the cognitive tasks and casual games, and within the context of assessing near and far transfer. While this is not a training study, these findings provide a methodology to assess the validity of using certain games as training and assessment devices for specific cognitive abilities, and shed light on the mixed transfer results in the computer-based training literature. Moreover, the results can inform design of a more theoretically-driven and methodologically-sound cognitive training program.", "title": "" }, { "docid": "5701585d5692b4b28da3132f4094fc9f", "text": "We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. 
In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs in high accuracy and the molecular information can enhance text-based DDI extraction by 2.39 percent points in the F-score on the DDIExtraction 2013 shared task data set.", "title": "" }, { "docid": "d8f6f4bef57e26e9d2dc3684ea07a2f4", "text": "Alzheimer's disease is a progressive neurodegenerative disease that typically manifests clinically as an isolated amnestic deficit that progresses to a characteristic dementia syndrome. Advances in neuroimaging research have enabled mapping of diverse molecular, functional, and structural aspects of Alzheimer's disease pathology in ever increasing temporal and regional detail. Accumulating evidence suggests that distinct types of imaging abnormalities related to Alzheimer's disease follow a consistent trajectory during pathogenesis of the disease, and that the first changes can be detected years before the disease manifests clinically. These findings have fuelled clinical interest in the use of specific imaging markers for Alzheimer's disease to predict future development of dementia in patients who are at risk. The potential clinical usefulness of single or multimodal imaging markers is being investigated in selected patient samples from clinical expert centres, but additional research is needed before these promising imaging markers can be successfully translated from research into clinical practice in routine care.", "title": "" }, { "docid": "f11dc9f1978544823aeb61114d4f927f", "text": "This paper presents a passive radar system using GSM as illuminator of opportunity. The new feature is the used high performance uniform linear antenna (ULA) for extracting both the reference and the echo signal in a software defined radar. The signal processing steps used by the proposed scheme are detailed and the feasibility of the whole system is proved by measurements.", "title": "" }, { "docid": "ab4cada23ae2142e52c98a271c128c58", "text": "We introduce an interactive technique for manipulating simple 3D shapes based on extracting them from a single photograph. Such extraction requires understanding of the components of the shape, their projections, and relations. These simple cognitive tasks for humans are particularly difficult for automatic algorithms. Thus, our approach combines the cognitive abilities of humans with the computational accuracy of the machine to solve this problem. Our technique provides the user the means to quickly create editable 3D parts---human assistance implicitly segments a complex object into its components, and positions them in space. In our interface, three strokes are used to generate a 3D component that snaps to the shape's outline in the photograph, where each stroke defines one dimension of the component. The computer reshapes the component to fit the image of the object in the photograph as well as to satisfy various inferred geometric constraints imposed by its global 3D structure. We show that with this intelligent interactive modeling tool, the daunting task of object extraction is made simple. Once the 3D object has been extracted, it can be quickly edited and placed back into photos or 3D scenes, permitting object-driven photo editing tasks which are impossible to perform in image-space. 
We show several examples and present a user study illustrating the usefulness of our technique.", "title": "" }, { "docid": "119dd2c7eb5533ece82cff7987f21dba", "text": "Despite the word's common usage by gamers and reviewers alike, it is still not clear what immersion means. This paper explores immersion further by investigating whether immersion can be defined quantitatively, describing three experiments in total. The first experiment investigated participants' abilities to switch from an immersive to a non-immersive task. The second experiment investigated whether there were changes in participants' eye movements during an immersive task. The third experiment investigated the effect of an externally imposed pace of interaction on immersion and affective measures (state-anxiety, positive affect, negative affect). Overall, the findings suggest that immersion can be measured subjectively (through questionnaires) as well as objectively (task completion time, eye movements). Furthermore, immersion is not only viewed as a positive experience: negative emotions and uneasiness (i.e. anxiety) also run high.", "title": "" }, { "docid": "bb94ef2ab26fddd794a5b469f3b51728", "text": "This study examines the treatment outcome of a ten-week dance movement therapy intervention on quality of life (QOL). The multicentred study used a subject design with pre-test, post-test, and six-month follow-up test. 162 participants who suffered from stress were randomly assigned to the dance movement therapy treatment group (TG) (n = 97) and the wait-listed control group (WG) (n = 65). The World Health Organization Quality of Life Questionnaire 100 (WHOQOL-100) and Munich Life Dimension List were used in both groups at all three measurement points. Repeated measures ANOVA revealed that dance movement therapy participants always improved more than the WG in all QOL dimensions. In the short term, they significantly improved in the Psychological domain (p < .001, WHOQOL; p < .01, Munich Life Dimension List), Social relations/life (p < .10, WHOQOL; p < .10, Munich Life Dimension List), Global value (p < .05, WHOQOL), Physical health (p < .05, Munich Life Dimension List), and General life (p < .10, Munich Life Dimension List). In the long term, dance movement therapy significantly enhanced the Psychological domain (p < .05, WHOQOL; p < .05, Munich Life Dimension List), Spirituality (p < .10, WHOQOL), and General life (p < .05, Munich Life Dimension List). Dance movement therapy is effective in the short- and long-term to improve QOL. © 2012 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "ee732b213767471c29f12e7d00f4ded3", "text": "The increasing interest in scene text reading in multilingual environments raises the need to recognize and distinguish between different writing systems. In this paper, we propose a novel method for script identification in scene text using triplets of local convolutional features in combination with the traditional bag-of-visual-words model. Feature triplets are created by making combinations of descriptors extracted from local patches of the input images using a convolutional neural network. This approach allows us to generate a more descriptive codeword dictionary for the bag-of-visual-words model, as the low discriminative power of weak descriptors is enhanced by other descriptors in a triplet. The proposed method is evaluated on two public benchmark datasets for scene text script identification and a public dataset for script identification in video captions. 
The experiments demonstrate that our method outperforms the baseline and yields competitive results on all three datasets.", "title": "" }, { "docid": "7f711c94920e0bfa8917ad1b5875813c", "text": "With the increasing acceptance of Network Function Virtualization (NFV) and Software Defined Networking (SDN) technologies, a radical transformation is currently occurring inside network providers' infrastructures. The trend of software-based networks foreseen with the 5th Generation of Mobile Network (5G) is drastically changing requirements in terms of how networks are deployed and managed. One of the major changes requires the transition towards a distributed infrastructure, in which nodes are built with standard commodity hardware. This rapid deployment of datacenters is paving the way towards a different type of environment in which the computational resources are deployed up to the edge of the network, referred to as Multi-access Edge Computing (MEC) nodes. However, MEC nodes do not usually provide enough resources for executing standard virtualization technologies typically used in large datacenters. For this reason, software containerization represents a lightweight and viable virtualization alternative for such scenarios. This paper presents an architecture based on the Open Baton Management and Orchestration (MANO) framework combining different infrastructural technologies supporting the deployment of container-based network services even at the edge of the network.", "title": "" }, { "docid": "d537214f407128585d6a4e6bab55a45b", "text": "It is well known that how to extract dynamical features is a key issue for video-based face analysis. In this paper, we present a novel approach to facial action unit (AU) and expression recognition based on coded dynamical features. In order to capture the dynamical characteristics of facial events, we design dynamical Haar-like features to represent the temporal variations of facial events. Inspired by binary pattern coding, we further encode the dynamic Haar-like features into binary pattern features, which are useful to construct weak classifiers for boosting learning. Finally, Adaboost is performed to learn a set of discriminating coded dynamic features for facial action unit and expression recognition. Experiments on the CMU expression database and our own facial AU database show its encouraging performance.", "title": "" }, { "docid": "fff9e38c618a6a644e3795bdefd74801", "text": "Several code smell detection tools have been developed providing different results, because smells can be subjectively interpreted, and hence detected, in different ways. In this paper, we perform what is, to the best of our knowledge, the largest experiment of applying machine learning algorithms to code smells. We experiment with 16 different machine-learning algorithms on four code smells (Data Class, Large Class, Feature Envy, Long Method) and 74 software systems, with 1986 manually validated code smell samples. We found that all algorithms achieved high performances in the cross-validation data set, yet the highest performances were obtained by J48 and Random Forest, while the worst performance was achieved by support vector machines. However, the lower prevalence of code smells, i.e., imbalanced data, in the entire data set caused varying performances that need to be addressed in future studies. 
We conclude that the application of machine learning to the detection of these code smells can provide high accuracy (>96 %), and only a hundred training examples are needed to reach at least 95 % accuracy.", "title": "" }, { "docid": "04c8ed83fce5c5052a23d02082a11f00", "text": "Usually, well-being has been measured by means of questionnaires or scales. Although most of these methods have a high level of reliability and validity, they present some limitations. In order to try to improve well-being assessment, in the present work, the authors propose a new complementary instrument: The Implicit Overall Well-Being Measure (IOWBM). The Implicit Association Test (IAT) was adapted to measure wellbeing by assessing associations of the self with well-being-related words. In the first study, the IOWBM showed good internal consistency and adequate temporal reliability. In the second study, it presented weak correlations with explicit well-being measures. The third study examined the validity of the measure, analyzing the effect of traumatic memories on implicit well-being. The results showed that people who remember a traumatic event presented low levels of implicit well-being compared with people in the control condition.", "title": "" }, { "docid": "28fb1491be87cc850200eddd5011315d", "text": "While Salsa and ChaCha are well known software oriented stream ciphers, since the work of Aumasson et al in FSE 2008 there aren’t many significant results against them. The basic model of their attack was to introduce differences in the IV bits, obtain biases after a few forward rounds, as well as to look at the Probabilistic Neutral Bits (PNBs) while reverting back. In this paper we first consider the biases in the forward rounds, and estimate an upper bound on the number of rounds till such biases can be observed. For this, we propose a hybrid model (under certain assumptions), where initially the nonlinear rounds as proposed by the designer are considered, and then we employ their linearized counterpart. The effect of reverting the rounds with the idea of PNBs is also considered. Based on the assumptions and analysis, we conclude that 12 rounds of Salsa and ChaCha should be considered sufficient for 256-bit keys under the current best known attack models.", "title": "" }, { "docid": "53bed9c8e439ed9dcb64b8724a3fc389", "text": "This paper presents the outcomes of research into an automatic classification system based on the lingual part of music. Two novel kinds of short features are extracted from lyrics using tf*idf and rhyme. Meta-learning algorithm is adapted to combine these two sets of features. Results show that our features promote the accuracy of classification and meta-learning algorithm is effective in fusing the two features.", "title": "" }, { "docid": "45dfa7f6b1702942b5abfb8de920d1c2", "text": "Loneliness is a common condition in older adults and is associated with increased morbidity and mortality, decreased sleep quality, and increased risk of cognitive decline. Assessing loneliness in older adults is challenging due to the negative desirability biases associated with being lonely. Thus, it is necessary to develop more objective techniques to assess loneliness in older adults. In this paper, we describe a system to measure loneliness by assessing in-home behavior using wireless motion and contact sensors, phone monitors, and computer software as well as algorithms developed to assess key behaviors of interest. 
We then present results showing the accuracy of the system in detecting loneliness in a longitudinal study of 16 older adults who agreed to have the sensor platform installed in their own homes for up to 8 months. We show that loneliness is significantly associated with both time out-of-home (β = -0.88 and p < 0.01) and number of computer sessions (β = 0.78 and p < 0.05). R2 for the model was 0.35. We also show the model's ability to predict out-of-sample loneliness, demonstrating that the correlation between true loneliness and predicted out-of-sample loneliness is 0.48. When compared with the University of California at Los Angeles loneliness score, the normalized mean absolute error of the predicted loneliness scores was 0.81 and the normalized root mean squared error was 0.91. These results represent first steps toward an unobtrusive, objective method for the prediction of loneliness among older adults, and mark the first time multiple objective behavioral measures have been related to this key health outcome.", "title": "" } ]
scidocsrr
ab115421d84a4bcab680d9dfeb9d9ef6
Bag of Region Embeddings via Local Context Units for Text Classification
[ { "docid": "ac46e6176377612544bb74c064feed67", "text": "The existence and use of standard test collections in information retrieval experimentation allows results to be compared between research groups and over time. Such comparisons, however, are rarely made. Most researchers only report results from their own experiments, a practice that allows lack of overall improvement to go unnoticed. In this paper, we analyze results achieved on the TREC Ad-Hoc, Web, Terabyte, and Robust collections as reported in SIGIR (1998–2008) and CIKM (2004–2008). Dozens of individual published experiments report effectiveness improvements, and often claim statistical significance. However, there is little evidence of improvement in ad-hoc retrieval technology over the past decade. Baselines are generally weak, often being below the median original TREC system. And in only a handful of experiments is the score of the best TREC automatic run exceeded. Given this finding, we question the value of achieving even a statistically significant result over a weak baseline. We propose that the community adopt a practice of regular longitudinal comparison to ensure measurable progress, or at least prevent the lack of it from going unnoticed. We describe an online database of retrieval runs that facilitates such a practice.", "title": "" }, { "docid": "fe1bc993047a95102f4331f57b1f9197", "text": "Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with much less parameters.", "title": "" }, { "docid": "c612ee4ad1b4daa030e86a59543ca53b", "text": "The dominant approach for many NLP tasks are recurrent neura l networks, in particular LSTMs, and convolutional neural networks. However , these architectures are rather shallow in comparison to the deep convolutional n etworks which are very successful in computer vision. We present a new archite ctur for text processing which operates directly on the character level and uses o nly small convolutions and pooling operations. We are able to show that the performa nce of this model increases with the depth: using up to 29 convolutional layer s, we report significant improvements over the state-of-the-art on several public t ext classification tasks. To the best of our knowledge, this is the first time that very de ep convolutional nets have been applied to NLP.", "title": "" } ]
[ { "docid": "244c79d374bdbe44406fc514610e4ee7", "text": "This article surveys some theoretical aspects of cellular automata CA research. In particular, we discuss classical and new results on reversibility, conservation laws, limit sets, decidability questions, universality and topological dynamics of CA. The selection of topics is by no means comprehensive and reflects the research interests of the author. The main goal is to provide a tutorial of CA theory to researchers in other branches of natural computing, to give a compact collection of known results with references to their proofs, and to suggest some open problems. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "7918167cbceddcc24b4d22f094b167dd", "text": "This paper is presented the study of the social influence by using social features in fitness mobile applications and habit that persuades the working-aged people, in the context of continuous fitness mobile application usage to promote the physical activity. Our conceptual model consisted of Habit and Social Influence. The social features based on the Persuasive Technology (1) Normative Influence, (2) Social Comparison, (3) Competition, (4) Co-operation, and (5) Social Recognition were embedded in the Social Influence construct of UTAUT2 model. The questionnaires were an instrument for this study. The target group was 443 working-aged people who live in Thailand's central region. The results reveal that the factors significantly affecting Behavioral Intention toward Use Behavior are Normative Influence, Social Comparison, Competition, and Co-operation. Only the Social Recognition is insignificantly affecting Behavioral Intention to use fitness mobile applications. The Behavioral Intention and Habit also significantly support the Use Behavior. The social features in fitness mobile application should be developed to promote the physical activity.", "title": "" }, { "docid": "c6afc173351fe404f7c5b68d2a0bc0a8", "text": "BACKGROUND\nCombined traumatic brain injury (TBI) and hemorrhagic shock (HS) is highly lethal. In a nonsurvival model of TBI + HS, addition of high-dose valproic acid (VPA) (300 mg/kg) to hetastarch reduced brain lesion size and associated swelling 6 hours after injury; whether this would have translated into better neurologic outcomes remains unknown. It is also unclear whether lower doses of VPA would be neuroprotective. We hypothesized that addition of low-dose VPA to normal saline (NS) resuscitation would result in improved long-term neurologic recovery and decreased brain lesion size.\n\n\nMETHODS\nTBI was created in anesthetized swine (40-43 kg) by controlled cortical impact, and volume-controlled hemorrhage (40% volume) was induced concurrently. After 2 hours of shock, animals were randomized (n = 5 per group) to NS (3× shed blood) or NS + VPA (150 mg/kg). Six hours after resuscitation, packed red blood cells were transfused, and animals were recovered. Peripheral blood mononuclear cells were analyzed for acetylated histone-H3 at lysine-9. A Neurological Severity Score (NSS) was assessed daily for 30 days. Brain magnetic resonance imaging was performed on Days 3 and 10. Cognitive performance was assessed by training animals to retrieve food from color-coded boxes.\n\n\nRESULTS\nThere was a significant increase in histone acetylation in the NS + VPA-treated animals compared with NS treatment. 
The NS + VPA group demonstrated significantly decreased neurologic impairment and faster speed of recovery as well as smaller brain lesion size compared with the NS group. Although the final cognitive function scores were similar between the groups, the VPA-treated animals reached the goal significantly faster than the NS controls.\n\n\nCONCLUSION\nIn this long-term survival model of TBI + HS, addition of low-dose VPA to saline resuscitation resulted in attenuated neurologic impairment, faster neurologic recovery, smaller brain lesion size, and a quicker normalization of cognitive functions.", "title": "" }, { "docid": "28075920fae3e973911b299db86c792e", "text": "DNA methylation is a well-studied genetic modification crucial to regulate the functioning of the genome. Its alterations play an important role in tumorigenesis and tumor-suppression. Thus, studying DNA methylation data may help biomarker discovery in cancer. Since public data on DNA methylation become abundant – and considering the high number of methylated sites (features) present in the genome – it is important to have a method for efficiently processing such large datasets. Relying on big data technologies, we propose BIGBIOCL an algorithm that can apply supervised classification methods to datasets with hundreds of thousands of features. It is designed for the extraction of alternative and equivalent classification models through iterative deletion of selected features. We run experiments on DNA methylation datasets extracted from The Cancer Genome Atlas, focusing on three tumor types: breast, kidney, and thyroid carcinomas. We perform classifications extracting several methylated sites and their associated genes with accurate performance (accuracy>97%). Results suggest that BIGBIOCL can perform hundreds of classification iterations on hundreds of thousands of features in few hours. Moreover, we compare the performance of our method with other state-of-the-art classifiers and with a wide-spread DNA methylation analysis method based on network analysis. Finally, we are able to efficiently compute multiple alternative classification models and extract from DNA-methylation large datasets a set of candidate genes to be further investigated to determine their active role in cancer. BIGBIOCL, results of experiments, and a guide to carry on new experiments are freely available on GitHub at https://github.com/fcproj/BIGBIOCL.", "title": "" }, { "docid": "2568f7528049b4ffc3d9a8b4f340262b", "text": "We introduce a new form of linear genetic programming (GP). Two methods of acceleration of our GP approach are discussed: 1) an efficient algorithm that eliminates intron code and 2) a demetic approach to virtually parallelize the system on a single processor. Acceleration of runtime is especially important when operating with complex data sets, because they are occuring in real-world applications. We compare GP performance on medical classification problems from a benchmark database with results obtained by neural networks. Our results show that GP performs comparable in classification and generalization.", "title": "" }, { "docid": "75f916790044fab6e267c5c5ec5846b7", "text": "Detecting circles from a digital image is very important in shape recognition. In this paper, an efficient randomized algorithm (RCD) for detecting circles is presented, which is not based on the Hough transform (HT). 
Instead of using an accumulator for saving the information of the related parameters in the HT-based methods, the proposed RCD does not need an accumulator. The main concept used in the proposed RCD is that we first randomly select four edge pixels in the image and define a distance criterion to determine whether there is a possible circle in the image; after finding a possible circle, we apply an evidence-collecting process to further determine whether the possible circle is a true circle or not. Some synthetic images with different levels of noises and some realistic images containing circular objects with some occluded circles and missing edges have been taken to test the performance. Experimental results demonstrate that the proposed RCD is faster than other HT-based methods for the noise level between the light level and the modest level. For a heavy noise level, the randomized HT could be faster than the proposed RCD, but at the expense of massive memory requirements.c © 2001 Academic Press", "title": "" }, { "docid": "a50ea2739751249e2832cae2df466d0b", "text": "The Arabic Online Commentary (AOC) (Zaidan and Callison-Burch, 2011) is a large-scale repository of Arabic dialects with manual labels for 4 varieties of the language. Existing dialect identification models exploiting the dataset pre-date the recent boost deep learning brought to NLP and hence the data are not benchmarked for use with deep learning, nor is it clear how much neural networks can help tease the categories in the data apart. We treat these two limitations: We (1) benchmark the data, and (2) empirically test 6 different deep learning methods on the task, comparing peformance to several classical machine learning models under different conditions (i.e., both binary and multi-way classification). Our experimental results show that variants of (attention-based) bidirectional recurrent neural networks achieve best accuracy (acc) on the task, significantly outperforming all competitive baselines. On blind test data, our models reach 87.65% acc on the binary task (MSA vs. dialects), 87.4% acc on the 3-way dialect task (Egyptian vs. Gulf vs. Levantine), and 82.45% acc on the 4-way variants task (MSA vs. Egyptian vs. Gulf vs. Levantine). We release our benchmark for future work on the dataset.", "title": "" }, { "docid": "53df69bf8750a7e97f12b1fcac14b407", "text": "In photovoltaic (PV) power systems where a set of series-connected PV arrays (PVAs) is connected to a conventional two-level inverter, the occurrence of partial shades and/or the mismatching of PVAs leads to a reduction of the power generated from its potential maximum. To overcome these problems, the connection of the PVAs to a multilevel diode-clamped converter is considered in this paper. A control and pulsewidth-modulation scheme is proposed, capable of independently controlling the operating voltage of each PVA. Compared to a conventional two-level inverter system, the proposed system configuration allows one to extract maximum power, to reduce the devices voltage rating (with the subsequent benefits in device-performance characteristics), to reduce the output-voltage distortion, and to increase the system efficiency. 
Simulation and experimental tests have been conducted with three PVAs connected to a four-level three-phase diode-clamped converter to verify the good performance of the proposed system configuration and control strategy.", "title": "" }, { "docid": "44e4797655292e97651924115fd8d711", "text": "Information and communication technology has the capability to improve the process by which governments involve citizens in formulating public policy and public projects. Even though much of government regulations may now be in digital form (and often available online), due to their complexity and diversity, identifying the ones relevant to a particular context is a non-trivial task. Similarly, with the advent of a number of electronic online forums, social networking sites and blogs, the opportunity of gathering citizens’ petitions and stakeholders’ views on government policy and proposals has increased greatly, but the volume and the complexity of analyzing unstructured data makes this difficult. On the other hand, text mining has come a long way from simple keyword search, and matured into a discipline capable of dealing with much more complex tasks. In this paper we discuss how text-mining techniques can help in retrieval of information and relationships from textual data sources, thereby assisting policy makers in discovering associations between policies and citizens’ opinions expressed in electronic public forums and blogs etc. We also present here, an integrated text mining based architecture for e-governance decision support along with a discussion on the Indian scenario.", "title": "" }, { "docid": "119ea9c1d6b2cf2063efaf4d5ed7e756", "text": "In this paper, we use shape grammars (SGs) for facade parsing, which amounts to segmenting 2D building facades into balconies, walls, windows, and doors in an architecturally meaningful manner. The main thrust of our work is the introduction of reinforcement learning (RL) techniques to deal with the computational complexity of the problem. RL provides us with techniques such as Q-learning and state aggregation which we exploit to efficiently solve facade parsing. We initially phrase the 1D parsing problem in terms of a Markov Decision Process, paving the way for the application of RL-based tools. We then develop novel techniques for the 2D shape parsing problem that take into account the specificities of the facade parsing problem. Specifically, we use state aggregation to enforce the symmetry of facade floors and demonstrate how to use RL to exploit bottom-up, image-based guidance during optimization. We provide systematic results on the Paris building dataset and obtain state-of-the-art results in a fraction of the time required by previous methods. We validate our method under diverse imaging conditions and make our software and results available online.", "title": "" }, { "docid": "e0e7bece9dd69ac775824b2ed40965d8", "text": "In this paper, we consider an adaptive base-stock policy for a single-item inventory system, where the demand process is non-stationary. In particular, the demand process is an integrated moving average process of order (0, 1, 1), for which an exponential-weighted moving average provides the optimal forecast. For the assumed control policy we characterize the inventory random variable and use this to find the safety stock requirements for the system. 
From this characterization, we see that the required inventory, both in absolute terms and as it depends on the replenishment lead-time, behaves much differently for this case of non-stationary demand compared with stationary demand. We then show how the single-item model extends to a multistage, or supply-chain context; in particular we see that the demand process for the upstream stage is not only non-stationary but also more variable than that for the downstream stage. We also show that for this model there is no value from letting the upstream stages see the exogenous demand. The paper concludes with some observations about the practical implications of this work.", "title": "" }, { "docid": "6a6063c05941c026b083bfcc573520f8", "text": "This paper describes how semantic indexing can help to generate a contextual overview of topics and visually compare clusters of articles. The method was originally developed for an innovative information exploration tool, called Ariadne, which operates on bibliographic databases with tens of millions of records (Koopman et al. in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. doi: 10.1145/2702613.2732781 , 2015b). In this paper, the method behind Ariadne is further developed and applied to the research question of the special issue “Same data, different results”—the better understanding of topic (re-)construction by different bibliometric approaches. For the case of the Astro dataset of 111,616 articles in astronomy and astrophysics, a new instantiation of the interactive exploring tool, LittleAriadne, has been created. This paper contributes to the overall challenge to delineate and define topics in two different ways. First, we produce two clustering solutions based on vector representations of articles in a lexical space. These vectors are built on semantic indexing of entities associated with those articles. Second, we discuss how LittleAriadne can be used to browse through the network of topical terms, authors, journals, citations and various cluster solutions of the Astro dataset. More specifically, we treat the assignment of an article to the different clustering solutions as an additional element of its bibliographic record. Keeping the principle of semantic indexing on the level of such an extended list of entities of the bibliographic record, LittleAriadne in turn provides a visualization of the context of a specific clustering solution. It also conveys the similarity of article clusters produced by different algorithms, hence representing a complementary approach to other possible means of comparison.", "title": "" }, { "docid": "8c3aaa5011c7974a18b17d2a604127b7", "text": "The threat of Distributed Denial of Service (DDoS) has become a major issue in network security and is difficult to detect because all DDoS traffics have normal packet characteristics. Various detection and defense algorithms have been studied. One of them is an entropy-based intrusion detection approach that is a powerful and simple way to identify abnormal conditions from network channels. However, the burden of computing information entropy values from heavy flow still exists. To reduce the computing time, we have developed a DDoS detection scheme using a compression entropy method. It allows us to significantly reduce the computation time for calculating information entropy. 
However, our experiment suggests that the compression entropy approach tends to be too sensitive to verify real network attacks and produces many false negatives. In this paper, we propose a fast entropy scheme that can overcome the issue of false negatives and will not increase the computational time. Our simulation shows that the fast entropy computing method not only reduced computational time by more than 90% compared to conventional entropy, but also increased the detection accuracy compared to conventional and compression entropy approaches.", "title": "" }, { "docid": "0116f3e12fbaf2705f36d658fdbe66bb", "text": "This paper presents a metric to quantify visual scene movement perceived inside a virtual environment (VE) and illustrates how this method could be used in future studies to determine a cybersickness dose value to predict levels of cybersickness in VEs. Sensory conflict theories predict that cybersickness produced by a VE is a kind of visually induced motion sickness. A comprehensive review indicates that there is only one subjective measure to quantify visual stimuli presented inside a VE. A metric, referred to as spatial velocity (SV), is proposed. It combines objective measures of scene complexity and scene movement velocity. The theoretical basis for the proposed SV metric and the algorithms for its implementation are presented. Data from two previous experiments on cybersickness were reanalyzed using the metric. Results showed that increasing SV by either increasing the scene complexity or scene velocity significantly increased the rated level of cybersickness. A strong correlation between SV and the level of cybersickness was found. The use of the spatial velocity metric to predict levels of cybersickness is also discussed.", "title": "" }, { "docid": "26eb8fc38928446194d0110aca3a8b9c", "text": "The requirement for high quality pulps which are widely used in paper industries has increased the demand for pulp refining (beating) process. Pulp refining is a promising approach to improve the pulp quality by changing the fiber characteristics. The diversity of research on the effect of refining on fiber properties which is due to the different pulp sources, pulp consistency and refining equipment has interested us to provide a review on the studies over the last decade. In this article, the influence of pulp refining on structural properties i.e., fibrillations, fine formation, fiber length, fiber curl, crystallinity and distribution of surface chemical compositions is reviewed. The effect of pulp refining on electrokinetic properties of fiber e.g., surface and total charges of pulps is discussed. In addition, an overview of different refining theories, refiners as well as some tests for assessing the pulp refining is presented.", "title": "" }, { "docid": "240c47d27533069f339d8eb090a637a9", "text": "This paper discusses the active and reactive power control method for a modular multilevel converter (MMC) based grid-connected PV system. The voltage vector space analysis is performed by using average value models for the feasibility analysis of reactive power compensation (RPC). The proposed double-loop control strategy enables the PV system to handle unidirectional active power flow and bidirectional reactive power flow. Experiments have been performed on a laboratory-scaled modular multilevel PV inverter. 
The experimental results verify the correctness and feasibility of the proposed strategy.", "title": "" }, { "docid": "a9399439831a970fcce8e0101696325f", "text": "We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings.", "title": "" }, { "docid": "d3a0931c03c80f5aa639cdc0d8cc331b", "text": "We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension of SIFT and requiring no training.", "title": "" }, { "docid": "574f1eb961c4469a16b4fde10d455ff4", "text": "To study the fundamental effects of the spinning capsule on the overall performance of a dry powder inhaler (Aerolizer®). The capsule motion was visualized using high-speed photography. Computational fluid dynamics (CFD) analysis was performed to determine the flowfield generated in the device with and without the presence of different sized capsules at 60 l min−1. The inhaler dispersion performance was measured with mannitol powder using a multistage liquid impinger at the same flowrate. The capsule size (3, 4, and 5) was found to make no significant difference to the device flowfield, the particle-device impaction frequency, or the dispersion performance of the inhaler. Reducing the capsule size reduced only the capsule retention by 4%. In contrast, without the presence of the spinning capsule, turbulence levels were increased by 65%, FPFEm (wt% particles ≤6.8 μm in the aerosol referenced against the amount of powder emitted from the device) increased from 59% to 65%, while particle-mouthpiece impaction decreased by 2.5 times. When the powder was dispersed from within compared to from outside the spinning capsule containing four 0.6 mm holes at each end, the FPFEm was increased significantly from 59% to 76%, and the throat retention was dropped from 14% to 6%. The presence, but not the size, of a capsule has significant effects on the inhaler performance. The results suggested that impaction between the particles and the spinning capsule does not play a major role in powder dispersion. However, the capsule can provide additional strong mechanisms of deagglomeration dependent on the size of the capsule hole.", "title": "" } ]
scidocsrr
266bd9346ae3016067c36dcb68031cca
Image encryption using chaotic logistic map
[ { "docid": "fc9eae18a5a44ee7df22d6c7bdb5a164", "text": "In this paper, methods are shown how to adapt invertible two-dimensional chaotic maps on a torus or on a square to create new symmetric block encryption schemes. A chaotic map is first generalized by introducing parameters and then discretized to a finite square lattice of points which represent pixels or some other data items. Although the discretized map is a permutation and thus cannot be chaotic, it shares certain properties with its continuous counterpart as long as the number of iterations remains small. The discretized map is further extended to three dimensions and composed with a simple diffusion mechanism. As a result, a symmetric block product encryption scheme is obtained. To encrypt an N × N image, the ciphering map is iteratively applied to the image. The construction of the cipher and its security is explained with the two-dimensional Baker map. It is shown that the permutations induced by the Baker map behave as typical random permutations. Computer simulations indicate that the cipher has good diffusion properties with respect to the plain-text and the key. A nontraditional pseudo-random number generator based on the encryption scheme is described and studied. Examples of some other two-dimensional chaotic maps are given and their suitability for secure encryption is discussed. The paper closes with a brief discussion of a possible relationship between discretized chaos and cryptosystems.", "title": "" } ]
[ { "docid": "d8a68a9e769f137e06ab05e4d4075dce", "text": "The inelastic response of existing reinforced concrete (RC) buildings without seismic details is investigated, presenting the results from more than 1000 nonlinear analyses. The seismic performance is investigated for two buildings, a typical building form of the 60s and a typical form of the 80s. Both structures are designed according to the old Greek codes. These building forms are typical for that period for many Southern European countries. Buildings of the 60s do not have seismic details, while buildings of the 80s have elementary seismic details. The influence of masonry infill walls is also investigated for the building of the 60s. Static pushover and incremental dynamic analyses (IDA) for a set of 15 strong motion records are carried out for the three buildings, two bare and one infilled. The IDA predictions are compared with the results of pushover analysis and the seismic demand according to Capacity Spectrum Method (CSM) and N2 Method. The results from IDA show large dispersion on the response, available ductility capacity, behaviour factor and failure displacement, depending on the strong motion record. CSM and N2 predictions are enveloped by the nonlinear dynamic predictions, but have significant differences from the mean values. The better behaviour of the building of the 80s compared to buildings of the 60s is validated with both pushover and nonlinear dynamic analyses. Finally, both types of analysis show that fully infilled frames exhibit an improved behaviour compared to bare frames.", "title": "" }, { "docid": "9150005965c893e6c2efa15c469fdffb", "text": "Low power has emerged as a principal theme in today's electronics industry. The need for low power has caused a major paradigm shift in which power dissipation is as important as performance and area. This article presents an in-depth survey of CAD methodologies and techniques for designing low power digital CMOS circuits and systems and describes the many issues facing designers at architectural, logical, and physical levels of design abstraction. It reviews some of the techniques and tools that have been proposed to overcome these difficulties and outlines the future challenges that must be met to design low power, high performance systems.", "title": "" }, { "docid": "6558b2a3c43e11d58f3bb829425d6a8d", "text": "While end-to-end neural conversation models have led to promising advances in reducing hand-crafted features and errors induced by the traditional complex system architecture, they typically require an enormous amount of data due to the lack of modularity. Previous studies adopted a hybrid approach with knowledge-based components either to abstract out domainspecific information or to augment data to cover more diverse patterns. On the contrary, we propose to directly address the problem using recent developments in the space of continual learning for neural models. Specifically, we adopt a domainindependent neural conversational model and introduce a novel neural continual learning algorithm that allows a conversational agent to accumulate skills across different tasks in a data-efficient way. To the best of our knowledge, this is the first work that applies continual learning to conversation systems. 
We verified the efficacy of our method through a conversational skill transfer from either synthetic dialogs or human-human dialogs to human-computer conversations in a customer support domain.", "title": "" }, { "docid": "435200b067ebd77f69a04cc490d73fa6", "text": "Self-mutilation of genitalia is an extremely rare entity, usually found in psychotic patients. Klingsor syndrome is a condition in which such an act is based upon religious delusions. The extent of genital mutilation can vary from superficial cuts to partial or total amputation of penis to total emasculation. The management of these patients is challenging. The aim of the treatment is restoration of the genital functionality. Microvascular reanastomosis of the phallus is ideal but it is often not possible due to the delay in seeking medical attention, non viability of the excised phallus or lack of surgical expertise. Hence, it is not unusual for these patients to end up with complete loss of the phallus and a perineal urethrostomy. We describe a patient with Klingsor syndrome who presented to us with near total penile amputation. The excised phallus was not viable and could not be used. The patient was managed with surgical reconstruction of the penile stump which was covered with loco-regional flaps. The case highlights that a functional penile reconstruction is possible in such patients even when microvascular reanastomosis is not feasible. This technique should be attempted before embarking upon perineal urethrostomy.", "title": "" }, { "docid": "c2891abf8297b5dcf0e21dfa9779a017", "text": "The success of knowledge-sharing communities like Wikipedia and the advances in automatic information extraction from textual and Web sources have made it possible to build large \"knowledge repositories\" such as DBpedia, Freebase, and YAGO. These collections can be viewed as graphs of entities and relationships (ER graphs) and can be represented as a set of subject-property-object (SPO) triples in the Semantic-Web data model RDF. Queries can be expressed in the W3C-endorsed SPARQL language or by similarly designed graph-pattern search. However, exact-match query semantics often fall short of satisfying the users' needs by returning too many or too few results. Therefore, IR-style ranking models are crucially needed.\n In this paper, we propose a language-model-based approach to ranking the results of exact, relaxed and keyword-augmented graph pattern queries over RDF graphs such as ER graphs. Our method estimates a query model and a set of result-graph models and ranks results based on their Kullback-Leibler divergence with respect to the query model. We demonstrate the effectiveness of our ranking model by a comprehensive user study.", "title": "" }, { "docid": "4d4de3ff3c99779c7fd5bd60fc006189", "text": "With the fast growing information technologies, high efficiency AC-DC front-end power supplies are becoming more and more desired in all kinds of distributed power system applications due to the energy conservation consideration. For the power factor correction (PFC) stage, the conventional constant frequency average current mode control has very low efficiency at light load due to high switching frequency related loss. The constant on-time control for PFC features the automatic reduction of switching frequency at light load, resulting improved light load efficiency. However, lower heavy load efficiency of the constant on-time control is observed because of very high frequency at Continuous Conduction Mode (CCM). 
By carefully comparing the on-time and frequency profiles between constant on-time and constant frequency control, a novel adaptive on-time control is proposed to improve the light load efficiency without sacrificing the heavy load efficiency. The performance of the adaptive on-time control is verified by experiment.", "title": "" }, { "docid": "aba4e6baa69a2ca7d029ebc33931fd4d", "text": "Along with the improvement of radar technologies Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) and Inverse SAR (ISAR) has come to be an active research area. SAR/ISAR are radar techniques to generate a two-dimensional high-resolution image of a target. Unlike other similar experiments using Convolutional Neural Networks (CNN) to solve this problem, we utilize an unusual approach that leads to better performance and faster training times. Our CNN uses complex values generated by a simulation to train the network; additionally, we utilize a multi-radar approach to increase the accuracy of the training and testing processes, thus resulting in higher accuracies than the other papers working on SAR/ISAR ATR. We generated our dataset with 7 different aircraft models with a radar simulator we developed called RadarPixel; it is a Windows GUI program implemented using Matlab and Java programing, the simulator is capable of accurately replicating a real SAR/ISAR configurations. Our objective is utilize our multiradar technique and determine the optimal number of radars needed to detect and classify targets.", "title": "" }, { "docid": "74f8127bc620fa1c9797d43dedea4d45", "text": "A novel system for long-term tracking of a human face in unconstrained videos is built on Tracking-Learning-Detection (TLD) approach. The system extends TLD with the concept of a generic detector and a validator which is designed for real-time face tracking resistent to occlusions and appearance changes. The off-line trained detector localizes frontal faces and the online trained validator decides which faces correspond to the tracked subject. Several strategies for building the validator during tracking are quantitatively evaluated. The system is validated on a sitcom episode (23 min.) and a surveillance (8 min.) video. In both cases the system detects-tracks the face and automatically learns a multi-view model from a single frontal example and an unlabeled video.", "title": "" }, { "docid": "63ed24b818f83ab04160b5c690075aac", "text": "In this paper, we discuss the impact of digital control in high-frequency switched-mode power supplies (SMPS), including point-of-load and isolated DC-DC converters, microprocessor power supplies, power-factor-correction rectifiers, electronic ballasts, etc., where switching frequencies are typically in the hundreds of kHz to MHz range, and where high efficiency, static and dynamic regulation, low size and weight, as well as low controller complexity and cost are very important. To meet these application requirements, a digital SMPS controller may include fast, small analog-to-digital converters, hardware-accelerated programmable compensators, programmable digital modulators with very fine time resolution, and a standard microcontroller core to perform programming, monitoring and other system interface tasks. 
Based on recent advances in circuit and control techniques, together with rapid advances in digital VLSI technology, we conclude that high-performance digital controller solutions are both feasible and practical, leading to much enhanced system integration and performance gains. Examples of experimentally demonstrated results are presented, together with pointers to areas of current and future research and development.", "title": "" }, { "docid": "08f49b003a3a5323e38e4423ba6503a4", "text": "Neurofeedback (NF), a type of neurobehavioral training, has gained increasing attention in recent years, especially concerning the treatment of children with ADHD. Promising results have emerged from recent randomized controlled studies, and thus, NF is on its way to becoming a valuable addition to the multimodal treatment of ADHD. In this review, we summarize the randomized controlled trials in children with ADHD that have been published within the last 5 years and discuss issues such as the efficacy and specificity of effects, treatment fidelity and problems inherent in placebo-controlled trials of NF. Directions for future NF research are outlined, which should further address specificity and help to determine moderators and mediators to optimize and individualize NF training. Furthermore, we describe methodological (tomographic NF) and technical ('tele-NF') developments that may also contribute to further improvements in treatment outcome.", "title": "" }, { "docid": "6ea4ecb12ca077c07f4706b6d11130db", "text": "We investigate the complexity of deep neural networks (DNN) that represent piecewise linear (PWL) functions. In particular, we study the number of linear regions, i.e. pieces, that a PWL function represented by a DNN can attain, both theoretically and empirically. We present (i) tighter upper and lower bounds for the maximum number of linear regions on rectifier networks, which are exact for inputs of dimension one; (ii) a first upper bound for multi-layer maxout networks; and (iii) a first method to perform exact enumeration or counting of the number of regions by modeling the DNN with a mixed-integer linear formulation. These bounds come from leveraging the dimension of the space defining each linear region. The results also indicate that a deep rectifier network can only have more linear regions than every shallow counterpart with same number of neurons if that number exceeds the dimension of the input.", "title": "" }, { "docid": "cc2e24cd04212647f1c29482aa12910d", "text": "A number of surveillance scenarios require the detection and tracking of people. Although person detection and counting systems are commercially available today, there is need for further research to address the challenges of real world scenarios. The focus of this work is the segmentation of groups of people into individuals. One relevant application of this algorithm is people counting. Experiments document that the presented approach leads to robust people counts.", "title": "" }, { "docid": "7b1a6768cc6bb975925a754343dc093c", "text": "In response to the increasing volume of trajectory data obtained, e.g., from tracking athletes, animals, or meteorological phenomena, we present a new space-efficient algorithm for the analysis of trajectory data. 
The algorithm combines techniques from computational geometry, data mining, and string processing and offers a modular design that allows for a user-guided exploration of trajectory data incorporating domain-specific constraints and objectives.", "title": "" }, { "docid": "53ebcdf1dfb5b850228ac422fdd50490", "text": "A frequent goal of flow cytometric analysis is to classify cells as positive or negative for a given marker, or to determine the precise ratio of positive to negative cells. This requires good and reproducible instrument setup, and careful use of controls for analyzing and interpreting the data. The type of controls to include in various kinds of flow cytometry experiments is a matter of some debate and discussion. In this tutorial, we classify controls in various categories, describe the options within each category, and discuss the merits of each option.", "title": "" }, { "docid": "e28f2a2d5f3a0729943dca52da5d45b6", "text": "Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. We propose a novel, accurate tightly-coupled visual-inertial odometry pipeline for such cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high dynamic range scenes. The method tracks a set of features (extracted on the image plane) through time. To achieve that, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames. We then combine these feature tracks in a keyframebased, visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera’s 6-DOF pose, velocity, and IMU biases. The proposed method is evaluated quantitatively on the public Event Camera Dataset [19] and significantly outperforms the state-of-the-art [28], while being computationally much more efficient: our pipeline can run much faster than real-time on a laptop and even on a smartphone processor. Furthermore, we demonstrate qualitatively the accuracy and robustness of our pipeline on a large-scale dataset, and an extremely high-speed dataset recorded by spinning an event camera on a leash at 850 deg/s.", "title": "" }, { "docid": "dbcdcd2cdf8894f853339b5fef876dde", "text": "Genicular nerve radiofrequency ablation (RFA) has recently gained popularity as an intervention for chronic knee pain in patients who have failed other conservative or surgical treatments. Long-term efficacy and adverse events are still largely unknown. Under fluoroscopic guidance, thermal RFA targets the lateral superior, medial superior, and medial inferior genicular nerves, which run in close proximity to the genicular arteries that play a crucial role in supplying the distal femur, knee joint, meniscus, and patella. RFA targets nerves by relying on bony landmarks, but fails to provide visualization of vascular structures. Although vascular injuries after genicular nerve RFA have not been reported, genicular vascular complications are well documented in the surgical literature. This article describes the anatomy, including detailed cadaveric dissections and schematic drawings, of the genicular neurovascular bundle. 
The present investigation also included a comprehensive literature review of genicular vascular injuries involving those arteries which lie near the targets of genicular nerve RFA. These adverse vascular events are documented in the literature as case reports. Of the 27 cases analyzed, 25.9% (7/27) involved the lateral superior genicular artery, 40.7% (11/27) involved the medial superior genicular artery, and 33.3% (9/27) involved the medial inferior genicular artery. Most often, these vascular injuries result in the formation of pseudoaneurysm, arteriovenous fistula (AVF), hemarthrosis, and/or osteonecrosis of the patella. Although rare, these complications carry significant morbidities. Based on the detailed dissections and review of the literature, our investigation suggests that vascular injury is a possible risk of genicular RFA. Lastly, recommendations are offered to minimize potential iatrogenic complications.", "title": "" }, { "docid": "c9d95b3656c703f4ce49c591a3f0a00f", "text": "Due to cellular heterogeneity, cell nuclei classification, segmentation, and detection from pathological images are challenging tasks. In the last few years, Deep Convolutional Neural Networks (DCNN) approaches have been shown state-of-the-art (SOTA) performance on histopathological imaging in different studies. In this work, we have proposed different advanced DCNN models and evaluated for nuclei classification, segmentation, and detection. First, the Densely Connected Recurrent Convolutional Network (DCRN) model is used for nuclei classification. Second, Recurrent Residual U-Net (R2U-Net) is applied for nuclei segmentation. Third, the R2U-Net regression model which is named UD-Net is used for nuclei detection from pathological images. The experiments are conducted with different datasets including Routine Colon Cancer(RCC) classification and detection dataset, and Nuclei Segmentation Challenge 2018 dataset. The experimental results show that the proposed DCNN models provide superior performance compared to the existing approaches for nuclei classification, segmentation, and detection tasks. The results are evaluated with different performance metrics including precision, recall, Dice Coefficient (DC), Means Squared Errors (MSE), F1-score, and overall accuracy. We have achieved around 3.4% and 4.5% better F-1 score for nuclei classification and detection tasks compared to recently published DCNN based method. In addition, R2U-Net shows around 92.15% testing accuracy in term of DC. These improved methods will help for pathological practices for better quantitative analysis of nuclei in Whole Slide Images(WSI) which ultimately will help for better understanding of different types of cancer in clinical workflow.", "title": "" }, { "docid": "165fbade7d495ce47a379520697f0d75", "text": "Neutral-point-clamped (NPC) inverters are the most widely used topology of multilevel inverters in high-power applications (several megawatts). This paper presents in a very simple way the basic operation and the most used modulation and control techniques developed to date. Special attention is paid to the loss distribution in semiconductors, and an active NPC inverter is presented to overcome this problem. This paper discusses the main fields of application and presents some technological problems such as capacitor balance and losses.", "title": "" } ]
scidocsrr
8913aeaeb31812ab614555aa4dc52714
Sleep timing is more important than sleep length or quality for medical school performance.
[ { "docid": "5a1b5f961bf6ed78cff2df6e2ed2d212", "text": "The transition from wakefulness to sleep is marked by pronounced changes in brain activity. The brain rhythms that characterize the two main types of mammalian sleep, slow-wave sleep (SWS) and rapid eye movement (REM) sleep, are thought to be involved in the functions of sleep. In particular, recent theories suggest that the synchronous slow-oscillation of neocortical neuronal membrane potentials, the defining feature of SWS, is involved in processing information acquired during wakefulness. According to the Standard Model of memory consolidation, during wakefulness the hippocampus receives input from neocortical regions involved in the initial encoding of an experience and binds this information into a coherent memory trace that is then transferred to the neocortex during SWS where it is stored and integrated within preexisting memory traces. Evidence suggests that this process selectively involves direct connections from the hippocampus to the prefrontal cortex (PFC), a multimodal, high-order association region implicated in coordinating the storage and recall of remote memories in the neocortex. The slow-oscillation is thought to orchestrate the transfer of information from the hippocampus by temporally coupling hippocampal sharp-wave/ripples (SWRs) and thalamocortical spindles. SWRs are synchronous bursts of hippocampal activity, during which waking neuronal firing patterns are reactivated in the hippocampus and neocortex in a coordinated manner. Thalamocortical spindles are brief 7-14 Hz oscillations that may facilitate the encoding of information reactivated during SWRs. By temporally coupling the readout of information from the hippocampus with conditions conducive to encoding in the neocortex, the slow-oscillation is thought to mediate the transfer of information from the hippocampus to the neocortex. Although several lines of evidence are consistent with this function for mammalian SWS, it is unclear whether SWS serves a similar function in birds, the only taxonomic group other than mammals to exhibit SWS and REM sleep. Based on our review of research on avian sleep, neuroanatomy, and memory, although involved in some forms of memory consolidation, avian sleep does not appear to be involved in transferring hippocampal memories to other brain regions. Despite exhibiting the slow-oscillation, SWRs and spindles have not been found in birds. Moreover, although birds independently evolved a brain region--the caudolateral nidopallium (NCL)--involved in performing high-order cognitive functions similar to those performed by the PFC, direct connections between the NCL and hippocampus have not been found in birds, and evidence for the transfer of information from the hippocampus to the NCL or other extra-hippocampal regions is lacking. Although based on the absence of evidence for various traits, collectively, these findings suggest that unlike mammalian SWS, avian SWS may not be involved in transferring memories from the hippocampus. Furthermore, it suggests that the slow-oscillation, the defining feature of mammalian and avian SWS, may serve a more general function independent of that related to coordinating the transfer of information from the hippocampus to the PFC in mammals. 
Given that SWS is homeostatically regulated (a process intimately related to the slow-oscillation) in mammals and birds, functional hypotheses linked to this process may apply to both taxonomic groups.", "title": "" }, { "docid": "06e74a431b45aec75fb21066065e1353", "text": "Despite the prevalence of sleep complaints among psychiatric patients, few questionnaires have been specifically designed to measure sleep quality in clinical populations. The Pittsburgh Sleep Quality Index (PSQI) is a self-rated questionnaire which assesses sleep quality and disturbances over a 1-month time interval. Nineteen individual items generate seven \"component\" scores: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medication, and daytime dysfunction. The sum of scores for these seven components yields one global score. Clinical and clinimetric properties of the PSQI were assessed over an 18-month period with \"good\" sleepers (healthy subjects, n = 52) and \"poor\" sleepers (depressed patients, n = 54; sleep-disorder patients, n = 62). Acceptable measures of internal homogeneity, consistency (test-retest reliability), and validity were obtained. A global PSQI score greater than 5 yielded a diagnostic sensitivity of 89.6% and specificity of 86.5% (kappa = 0.75, p less than 0.001) in distinguishing good and poor sleepers. The clinimetric and clinical properties of the PSQI suggest its utility both in psychiatric clinical practice and research activities.", "title": "" }, { "docid": "ec36f7ad0a916ab4040b0fddbf7b1172", "text": "To review the state of research on the association between sleep among school-aged children and academic outcomes, the authors reviewed published studies investigating sleep, school performance, and cognitive and achievement tests. Tables with brief descriptions of each study's research methods and outcomes are included. Research reveals a high prevalence among school-aged children of suboptimal amounts of sleep and poor sleep quality. Research demonstrates that suboptimal sleep affects how well students are able to learn and how it may adversely affect school performance. Recommendations for further research are discussed.", "title": "" } ]
[ { "docid": "a0c36cccd31a1bf0a1e7c9baa78dd3fa", "text": "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking", "title": "" }, { "docid": "9680944f9e6b4724bdba752981845b68", "text": "A software product line is a set of program variants, typically generated from a common code base. Feature models describe variability in product lines by documenting features and their valid combinations. In product-line engineering, we need to reason about variability and program variants for many different tasks. For example, given a feature model, we might want to determine the number of all valid feature combinations or compute specific feature combinations for testing. However, we found that contemporary reasoning approaches can only reason about feature combinations, not about program variants, because they do not take abstract features into account. Abstract features are features used to structure a feature model that, however, do not have any impact at implementation level. Using existing feature-model reasoning mechanisms for program variants leads to incorrect results. Hence, although abstract features represent domain decisions that do not affect the generation of a program variant. We raise awareness of the problem of abstract features for different kinds of analyses on feature models. We argue that, in order to reason about program variants, abstract features should be made explicit in feature models. We present a technique based on propositional formulas that enables to reason about program variants rather than feature combinations. In practice, our technique can save effort that is caused by considering the same program variant multiple times, for example, in product-line testing.", "title": "" }, { "docid": "62c4ad2cdd38d8ab8e08bd6636cb3e09", "text": "When modeling resonant inverters considering the harmonic balance method, the order of the obtained transfer functions is twice the state variables number. This is explained because two components are considered for each state variable. In order to obtain a simpler transfer function model of a halfbridge series resonant inverter, different techniques of model order reduction have been considered in this work. Thus, a reduced-order model has been obtained by residualization providing much simpler analytical expressions than the original model. The proposed model has been validated by simulation and experimentally. The validity range of the proposed model is extended up to a tenth of the switching frequency. 
Taking into account the great load variability of induction heating applications, the proposed reduced-order model will allow the design of advanced controllers such as gain scheduling.", "title": "" }, { "docid": "3f9a46f472ab276c39fb96b78df132ee", "text": "In this paper, we present a novel technique that enables capturing detailed 3D models from flash photographs by integrating shading and silhouette cues. Our main contribution is an optimization framework which not only captures subtle surface details but also handles changes in topology. To incorporate normals estimated from shading, we employ a mesh-based deformable model using the deformation gradient. This method is capable of manipulating precise geometry and, in fact, it outperforms previous methods in terms of both accuracy and efficiency. To adapt the topology of the mesh, we convert the mesh into an implicit surface representation and then back to a mesh representation. This simple procedure removes self-intersecting regions of the mesh and solves the topology problem effectively. In addition to the algorithm, we introduce a hand-held setup to achieve multi-view photometric stereo. The key idea is to acquire flash photographs from a wide range of positions in order to obtain a sufficient lighting variation even with a standard flash unit attached to the camera. Experimental results showed that our method can capture detailed shapes of various objects and cope with topology changes well.", "title": "" }, { "docid": "998f2515ea7ceb02f867b709d4a987f9", "text": "Crop pest and disease diagnosis is among the important issues in the agriculture sector, since pests and diseases have significant impacts on a nation's agricultural production. Applying expert system technology to crop pest and disease diagnosis has the potential to speed up and improve advisory services. However, the development of expert systems for diagnosing pest and disease problems of particular crops, as well as similar research work, remains limited. Therefore, this study investigated the use of expert systems in managing crop pests and diseases across selected published works. This article aims to identify and explain the trends in methodologies used by those works. As a result, a conceptual framework for managing crop pests and diseases was proposed on the basis of the selected previous works. This article is intended to benefit the growth of research pertaining to the development of expert systems, especially for managing crop pests and diseases in the agriculture domain.", "title": "" }, { "docid": "42f3032626b2a002a855476a718a2b1b", "text": "Learning controllers for bipedal robots is a challenging problem, often requiring expert knowledge and extensive tuning of parameters that vary in different situations. Recently, deep reinforcement learning has shown promise at automatically learning controllers for complex systems in simulation. This has been followed by a push towards learning controllers that can be transferred between simulation and hardware, primarily with the use of domain randomization. However, domain randomization can make the problem of finding stable controllers even more challenging, especially for underactuated bipedal robots. In this work, we explore whether policies learned in simulation can be transferred to hardware with the use of high-fidelity simulators and structured controllers. We learn a neural network policy which is a part of a more structured controller.
While the neural network is learned in simulation, the rest of the controller stays fixed, and can be tuned by the expert as needed. We show that using this approach can greatly speed up the rate of learning in simulation, as well as enable transfer of policies between simulation and hardware. We present our results on an ATRIAS robot and explore the effect of action spaces and cost functions on the rate of transfer between simulation and hardware. Our results show that structured policies can indeed be learned in simulation and implemented on hardware successfully. This has several advantages, as the structure preserves the intuitive nature of the policy, and the neural network improves the performance of the hand-designed policy. In this way, we propose a way of using neural networks to improve expert-designed controllers, while maintaining ease of understanding.", "title": "" }, { "docid": "7faed0b112a15a3b53c94df44a1bcb26", "text": "Since the stability of the method of fundamental solutions (MFS) is a severe issue, estimating bounds on the condition number Cond is important for real applications. In this paper, we propose new approaches for deriving the asymptotes of Cond and apply them to the Dirichlet problem of Laplace’s equation, providing a sharp bound on Cond for disk domains. A new bound on Cond is then derived for bounded simply connected domains with mixed types of boundary conditions. Numerical results are reported for Motz’s problem by adding singular functions. The values of Cond grow exponentially with respect to the number of fundamental solutions used. Note that there appears to be no stability analysis for the MFS on non-disk (or non-elliptic) domains. Moreover, the expansion coefficients obtained by the MFS are oscillatingly large, causing another kind of instability: subtraction cancellation errors in the final harmonic solutions.", "title": "" }, { "docid": "4e8d7e1fdb48da4198e21ae1ef2cd406", "text": "This paper describes a procedure for the creation of large-scale video datasets for action classification and localization from unconstrained, realistic web data. The scalability of the proposed procedure is demonstrated by building a novel video benchmark, named SLAC (Sparsely Labeled ACtions), consisting of over 520K untrimmed videos and 1.75M clip annotations spanning 200 action categories. Using our proposed framework, annotating a clip takes merely 8.8 seconds on average. This represents a saving in labeling time of over 95% compared to the traditional procedure of manual trimming and localization of actions. Our approach dramatically reduces the amount of human labeling by automatically identifying hard clips, i.e., clips that contain coherent actions but lead to prediction disagreement between action classifiers. A human annotator can disambiguate whether such a clip truly contains the hypothesized action in a handful of seconds, thus generating labels for highly informative samples at little cost. We show that our large-scale dataset can be used to effectively pretrain action recognition models, significantly improving final metrics on smaller-scale benchmarks after fine-tuning. On Kinetics [14], UCF-101 [30] and HMDB-51 [15], models pre-trained on SLAC outperform baselines trained from scratch, by 2.0%, 20.1% and 35.4% in top-1 accuracy, respectively, when RGB input is used. Furthermore, we introduce a simple procedure that leverages the sparse labels in SLAC to pre-train action localization models.
On THUMOS14 [12] and ActivityNet-v1.3[2], our localization model improves the mAP of baseline model by 8.6% and 2.5%, respectively.", "title": "" }, { "docid": "c5113ff741d9e656689786db10484a07", "text": "Pulmonary administration of drugs presents several advantages in the treatment of many diseases. Considering local and systemic delivery, drug inhalation enables a rapid and predictable onset of action and induces fewer side effects than other routes of administration. Three main inhalation systems have been developed for the aerosolization of drugs; namely, nebulizers, pressurized metered-dose inhalers (MDIs) and dry powder inhalers (DPIs). The latter are currently the most convenient alternative as they are breath-actuated and do not require the use of any propellants. The deposition site in the respiratory tract and the efficiency of inhaled aerosols are critically influenced by the aerodynamic diameter, size distribution, shape and density of particles. In the case of DPIs, since micronized particles are generally very cohesive and exhibit poor flow properties, drug particles are usually blended with coarse and fine carrier particles. This increases particle aerodynamic behavior and flow properties of the drugs and ensures accurate dosage of active ingredients. At present, particles with controlled properties are obtained by milling, spray drying or supercritical fluid techniques. Several excipients such as sugars, lipids, amino acids, surfactants, polymers and absorption enhancers have been tested for their efficacy in improving drug pulmonary administration. The purpose of this article is to describe various observations that have been made in the field of inhalation product development, especially for the dry powder inhalation formulation, and to review the use of various additives, their effectiveness and their potential toxicity for pulmonary administration.", "title": "" }, { "docid": "0ee09adae30459337f8e7261165df121", "text": "Mobile malware threats (e.g., on Android) have recently become a real concern. In this paper, we evaluate the state-of-the-art commercial mobile anti-malware products for Android and test how resistant they are against various common obfuscation techniques (even with known malware). Such an evaluation is important for not only measuring the available defense against mobile malware threats, but also proposing effective, next-generation solutions. We developed DroidChameleon, a systematic framework with various transformation techniques, and used it for our study. Our results on 10 popular commercial anti-malware applications for Android are worrisome: none of these tools is resistant against common malware transformation techniques. In addition, a majority of them can be trivially defeated by applying slight transformation over known malware with little effort for malware authors. Finally, in light of our results, we propose possible remedies for improving the current state of malware detection on mobile devices.", "title": "" }, { "docid": "9b94a383b2a6e778513a925cc88802ad", "text": "Pedestrian behavior modeling and analysis is important for crowd scene understanding and has various applications in video surveillance. Stationary crowd groups are a key factor influencing pedestrian walking patterns but was largely ignored in literature. In this paper, a novel model is proposed for pedestrian behavior modeling by including stationary crowd groups as a key component. 
Through inference on the interactions between stationary crowd groups and pedestrians, our model can be used to investigate pedestrian behaviors. The effectiveness of the proposed model is demonstrated through multiple applications, including walking path prediction, destination prediction, personality classification, and abnormal event detection. To evaluate our model, a large pedestrian walking route dataset1 is built. The walking routes of 12, 684 pedestrians from a one-hour crowd surveillance video are manually annotated. It will be released to the public and benefit future research on pedestrian behavior analysis and crowd scene understanding.", "title": "" }, { "docid": "4f8a233a8de165f2aeafbad9c93a767a", "text": "Can images be decomposed into the sum of a geometric part and a textural part? In a theoretical breakthrough, [Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. Providence, RI: American Mathematical Society, 2001] proposed variational models that force the geometric part into the space of functions with bounded variation, and the textural part into a space of oscillatory distributions. Meyer's models are simple minimization problems extending the famous total variation model. However, their numerical solution has proved challenging. It is the object of a literature rich in variants and numerical attempts. This paper starts with the linear model, which reduces to a low-pass/high-pass filter pair. A simple conversion of the linear filter pair into a nonlinear filter pair involving the total variation is introduced. This new-proposed nonlinear filter pair retains both the essential features of Meyer's models and the simplicity and rapidity of the linear model. It depends upon only one transparent parameter: the texture scale, measured in pixel mesh. Comparative experiments show a better and faster separation of cartoon from texture. One application is illustrated: edge detection.", "title": "" }, { "docid": "35dacb4b15e5c8fbd91cee6da807799a", "text": "Stochastic gradient algorithms have been the main focus of large-scale learning problems and led to important successes in machine learning. The convergence of SGD depends on the careful choice of learning rate and the amount of the noise in stochastic estimates of the gradients. In this paper, we propose a new adaptive learning rate algorithm, which utilizes curvature information for automatically tuning the learning rates. The information about the element-wise curvature of the loss function is estimated from the local statistics of the stochastic first order gradients. We further propose a new variance reduction technique to speed up the convergence. In our experiments with deep neural networks, we obtained better performance compared to the popular stochastic gradient algorithms.", "title": "" }, { "docid": "5b1c38fccbd591e6ab00a66ef636eb5d", "text": "There is a great thrust in industry toward the development of more feasible and viable tools for storing fast-growing volume, velocity, and diversity of data, termed ‘big data’. The structural shift of the storage mechanism from traditional data management systems to NoSQL technology is due to the intention of fulfilling big data storage requirements. However, the available big data storage technologies are inefficient to provide consistent, scalable, and available solutions for continuously growing heterogeneous data. 
Storage is the preliminary process of big data analytics for real-world applications such as scientific experiments, healthcare, social networks, and e-business. So far, Amazon, Google, and Apache are some of the industry standards in providing big data storage solutions, yet the literature does not report an in-depth survey of storage technologies available for big data, investigating the performance and magnitude gains of these technologies. The primary objective of this paper is to conduct a comprehensive investigation of state-of-the-art storage technologies available for big data. A well-defined taxonomy of big data storage technologies is presented to assist data analysts and researchers in understanding and selecting a storage mechanism that better fits their needs. To evaluate the performance of different storage architectures, we compare and analyze the existing approaches using Brewer’s CAP theorem. The significance and applications of storage technologies and support to other categories are discussed. Several future research challenges are highlighted with the intention to expedite the deployment of a reliable and scalable storage system.", "title": "" }, { "docid": "68f0bdda44beba9203a785b8be1035bb", "text": "Nasal mucociliary clearance is one of the most important factors affecting nasal delivery of drugs and vaccines. This is also the most important physiological defense mechanism inside the nasal cavity. It removes inhaled (and delivered) particles, microbes and substances trapped in the mucus. Almost all inhaled particles are trapped in the mucus carpet and transported with a rate of 8-10 mm/h toward the pharynx. This transport is conducted by the ciliated cells, which contain about 100-250 motile cellular appendages called cilia, 0.3 µm wide and 5 µm in length that beat about 1000 times every minute or 12-15 Hz. For efficient mucociliary clearance, the interaction between the cilia and the nasal mucus needs to be well structured, where the mucus layer is a tri-layer: an upper gel layer that floats on the lower, more aqueous solution, called the periciliary liquid layer and a third layer of surfactants between these two main layers. Pharmacokinetic calculations of the mucociliary clearance show that this mechanism may account for a substantial difference in bioavailability following nasal delivery. If the formulation irritates the nasal mucosa, this mechanism will cause the irritant to be rapidly diluted, followed by increased clearance, and swallowed. The result is a much shorter duration inside the nasal cavity and therefore less nasal bioavailability.", "title": "" }, { "docid": "b2ad81e0c7e352dac4caea559ac675bb", "text": "A linearly polarized miniaturized printed dipole antenna with novel half bowtie radiating arm is presented for wireless applications including the 2.4 GHz ISM band. This design is approximately 0.363 λ in length at central frequency of 2.97 GHz. An integrated balun with inductive transitions is employed for wideband impedance matching without changing the geometry of radiating arms. This half bowtie dipole antenna displays 47% bandwidth, and a simulated efficiency of over 90% with miniature size. The radiation patterns are largely omnidirectional and display a useful level of measured gain across the impedance bandwidth. 
The size and performance of the miniaturized half bowtie dipole antenna is compared with similar reduced size antennas with respect to their overall footprint, substrate dielectric constant, frequency of operation and impedance bandwidth. This half bowtie design in this communication outperforms the reference antennas in virtually all categories.", "title": "" }, { "docid": "86a3a5f09181567c5b66d926b0f9d240", "text": "Indigenous \"First Nations\" communities have consistently associated their disproportionate rates of psychiatric distress with historical experiences of European colonization. This emphasis on the socio-psychological legacy of colonization within tribal communities has occasioned increasingly widespread consideration of what has been termed historical trauma within First Nations contexts. In contrast to personal experiences of a traumatic nature, the concept of historical trauma calls attention to the complex, collective, cumulative, and intergenerational psychosocial impacts that resulted from the depredations of past colonial subjugation. One oft-cited exemplar of this subjugation--particularly in Canada--is the Indian residential school. Such schools were overtly designed to \"kill the Indian and save the man.\" This was institutionally achieved by sequestering First Nations children from family and community while forbidding participation in Native cultural practices in order to assimilate them into the lower strata of mainstream society. The case of a residential school \"survivor\" from an indigenous community treatment program on a Manitoba First Nations reserve is presented to illustrate the significance of participation in traditional cultural practices for therapeutic recovery from historical trauma. An indigenous rationale for the postulated efficacy of \"culture as treatment\" is explored with attention to plausible therapeutic mechanisms that might account for such recovery. To the degree that a return to indigenous tradition might benefit distressed First Nations clients, redressing the socio-psychological ravages of colonization in this manner seems a promising approach worthy of further research investigation.", "title": "" }, { "docid": "ef925e9d448cf4ca9a889b5634b685cf", "text": "This paper proposes an ameliorated wheel-based cable inspection robot, which is able to climb up a vertical cylindrical cable on the cable-stayed bridge. The newly-designed robot in this paper is composed of two equally spaced modules, which are joined by connecting bars to form a closed hexagonal body to clasp on the cable. Another amelioration is the newly-designed electric circuit, which is employed to limit the descending speed of the robot during its sliding down along the cable. For the safe landing in case of electricity broken-down, a gas damper with a slider-crank mechanism is introduced to exhaust the energy generated by the gravity when the robot is slipping down. For the present design, with payloads below 3.5 kg, the robot can climb up a cable with diameters varying from 65 mm to 205 mm. The landing system is tested experimentally and a simplified mathematical model is analyzed. Several climbing experiments performed on real cables show the capability of the proposed robot.", "title": "" }, { "docid": "c3e4ef9e9fd5b6301cb0a07ced5c02fc", "text": "The classification problem of assigning several observations into different disjoint groups plays an important role in business decision making and many other areas. 
Developing more accurate and widely applicable classification models has significant implications in these areas. This is why, despite the numerous classification models available, research on improving the effectiveness of these models has never stopped. Combining several models or using hybrid models has become a common practice in order to overcome the deficiencies of single models and can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. In this paper, a novel hybridization of artificial neural networks (ANNs) is proposed using multiple linear regression models in order to yield a more general and more accurate model than traditional artificial neural networks for solving classification problems. Empirical results indicate that the proposed hybrid model exhibits effectively improved classification accuracy in comparison with traditional artificial neural networks and also some other classification models such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor (KNN), and support vector machines (SVMs) using benchmark and real-world application data sets. These data sets vary in the number of classes (two versus multiple) and the source of the data (synthetic versus real-world). Therefore, it can be applied as an appropriate alternative approach for solving classification problems, specifically when higher forecasting accuracy is needed.", "title": "" } ]
scidocsrr
bfc22d978100eb5b81880d8850ca33a6
An optical neural interface: in vivo control of rodent motor cortex with integrated fiberoptic and optogenetic technology.
[ { "docid": "4287db8deb3c4de5d7f2f5695c3e2e70", "text": "The brain is complex and dynamic. The spatial scales of interest to the neurobiologist range from individual synapses (approximately 1 microm) to neural circuits (centimeters); the timescales range from the flickering of channels (less than a millisecond) to long-term memory (years). Remarkably, fluorescence microscopy has the potential to revolutionize research on all of these spatial and temporal scales. Two-photon excitation (2PE) laser scanning microscopy allows high-resolution and high-sensitivity fluorescence microscopy in intact neural tissue, which is hostile to traditional forms of microscopy. Over the last 10 years, applications of 2PE, including microscopy and photostimulation, have contributed to our understanding of a broad array of neurobiological phenomena, including the dynamics of single channels in individual synapses and the functional organization of cortical maps. Here we review the principles of 2PE microscopy, highlight recent applications, discuss its limitations, and point to areas for future research and development.", "title": "" } ]
[ { "docid": "bfcb1fd882a328daab503a7dd6b6d0a6", "text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several non-trivial examples.", "title": "" }, { "docid": "c8dae180aae646bf00e202bd24f15f59", "text": "Massively Multiplayer Online Games (MMOGs) continue to be a popular and lucrative sector of the gaming market. Project Massive was created to assess MMOG players' social experiences both inside and outside of their gaming environments and the impact of these activities on their everyday lives. The focus of Project Massive has been on the persistent player groups or \"guilds\" that form in MMOGs. The survey has been completed online by 1836 players, who reported on their play patterns, commitment to their player organizations, and personality traits like sociability, extraversion and depression. Here we report our cross-sectional findings and describe our future longitudinal work as we track players and their guilds across the evolving landscape of the MMOG product space.", "title": "" }, { "docid": "f613a2ed6f64c469cf1180d1e8fe9e4a", "text": "We describe an estimation technique which, given a measurement of the depth of a target from a wide-fieldof-view (WFOV) stereo camera pair, produces a minimax risk fixed-size confidence interval estimate for the target depth. This work constitutes the first application to the computer vision domain of optimal fixed-size confidenceinterval decision theory. The approach is evaluated in terms of theoretical capture probability and empirical cap ture frequency during actual experiments with a target on an optical bench. The method is compared to several other procedures including the Kalman Filter. The minimax approach is found to dominate all the other methods in performance. In particular, for the minimax approach, a very close agreement is achieved between theoreticalcapture probability andempiricalcapture frequency. This allows performance to be accurately predicted, greatly facilitating the system design, and delineating the tasks that may be performed with a given system.", "title": "" }, { "docid": "e69acc779b3bd736c0e5bd6962c8d459", "text": "The genome-wide transcriptome profiling of cancerous and normal tissue samples can provide insights into the molecular mechanisms of cancer initiation and progression. RNA Sequencing (RNA-Seq) is a revolutionary tool that has been used extensively in cancer research. However, no existing RNA-Seq database provides all of the following features: (i) large-scale and comprehensive data archives and analyses, including coding-transcript profiling, long non-coding RNA (lncRNA) profiling and coexpression networks; (ii) phenotype-oriented data organization and searching and (iii) the visualization of expression profiles, differential expression and regulatory networks. 
We have constructed the first public database that meets these criteria, the Cancer RNA-Seq Nexus (CRN, http://syslab4.nchu.edu.tw/CRN). CRN has a user-friendly web interface designed to facilitate cancer research and personalized medicine. It is an open resource for intuitive data exploration, providing coding-transcript/lncRNA expression profiles to support researchers generating new hypotheses in cancer research and personalized medicine.", "title": "" }, { "docid": "da1990ef0bb7ca5e184c32f33a0a8799", "text": "Deconvolutional layers have been widely used in a variety of deep models for up-sampling, including encoder-decoder networks for semantic segmentation and deep generative models for unsupervised learning. One of the key limitations of deconvolutional operations is that they result in the so-called checkerboard problem. This is caused by the fact that no direct relationship exists among adjacent pixels on the output feature map. To address this problem, we propose the pixel deconvolutional layer (PixelDCL) to establish direct relationships among adjacent pixels on the up-sampled feature map. Our method is based on a fresh interpretation of the regular deconvolution operation. The resulting PixelDCL can be used to replace any deconvolutional layer in a plug-and-play manner without compromising the fully trainable capabilities of original models. The proposed PixelDCL may result in slight decrease in efficiency, but this can be overcome by an implementation trick. Experimental results on semantic segmentation demonstrate that PixelDCL can consider spatial features such as edges and shapes and yields more accurate segmentation outputs than deconvolutional layers. When used in image generation tasks, our PixelDCL can largely overcome the checkerboard problem suffered by regular deconvolution operations.", "title": "" }, { "docid": "cd12564b6875ddc972334f45bbf41ab9", "text": "Purpose – The purpose of this paper is to review the literature on Total Productive Maintenance (TPM) and to present an overview of TPM implementation practices adopted by the manufacturing organizations. It also seeks to highlight appropriate enablers and success factors for eliminating barriers in successful TPM implementation. Design/methodology/approach – The paper systematically categorizes the published literature and then analyzes and reviews it methodically. Findings – The paper reveals the important issues in Total Productive Maintenance ranging from maintenance techniques, framework of TPM, overall equipment effectiveness (OEE), TPM implementation practices, barriers and success factors in TPM implementation, etc. The contributions of strategic TPM programmes towards improving manufacturing competencies of the organizations have also been highlighted here. Practical implications – The literature on classification of Total Productive Maintenance has so far been very limited. The paper reviews a large number of papers in this field and presents the overview of various TPM implementation practices demonstrated by manufacturing organizations globally. It also highlights the approaches suggested by various researchers and practitioners and critically evaluates the reasons behind failure of TPM programmes in the organizations. Further, the enablers and success factors for TPM implementation have also been highlighted for ensuring smooth and effective TPM implementation in the organizations. 
Originality/value – The paper contains a comprehensive listing of publications on the field in question and their classification according to various attributes. It will be useful to researchers, maintenance professionals and others concerned with maintenance to understand the significance of TPM.", "title": "" }, { "docid": "3d0b50111f6c9168b8a269a7d99d8fbc", "text": "Detecting lies is crucial in many areas, such as airport security, police investigations, counter-terrorism, etc. One technique to detect lies is through the identification of facial micro-expressions, which are brief, involuntary expressions shown on the face of humans when they are trying to conceal or repress emotions. Manual measurement of micro-expressions is hard labor, time consuming, and inaccurate. This paper presents the Design and Development of a Lie Detection System using Facial Micro-Expressions. It is an automated vision system designed and implemented using LabVIEW. An Embedded Vision System (EVS) is used to capture the subject's interview. Then, a LabVIEW program converts the video into series of frames and processes the frames, each at a time, in four consecutive stages. The first two stages deal with color conversion and filtering. The third stage applies geometric-based dynamic templates on each frame to specify key features of the facial structure. The fourth stage extracts the needed measurements in order to detect facial micro-expressions to determine whether the subject is lying or not. Testing results show that this system can be used for interpreting eight facial expressions: happiness, sadness, joy, anger, fear, surprise, disgust, and contempt, and detecting facial micro-expressions. It extracts accurate output that can be employed in other fields of studies such as psychological assessment. The results indicate high precision that allows future development of applications that respond to spontaneous facial expressions in real time.", "title": "" }, { "docid": "d94a4f07939c0f420787b099336f426b", "text": "A next generation of AESA antennas will be challenged with the need for lower size, weight, power and cost (SWAP-C). This leads to enhanced demands especially with regard to the integration density of the RF-part inside a T/R module. The semiconductor material GaN has proven its capacity for high power amplifiers, robust receive components as well as switch components for separation of transmit and receive mode. This paper will describe the design and measurement results of a GaN-based single-chip T/R module frontend (HPA, LNA and SPDT) using UMS GH25 technology and covering the frequency range from 8 GHz to 12 GHz. Key performance parameters of the frontend are 13 W minimum transmit (TX) output power over the whole frequency range with peak power up to 17 W. The frontend in receive (RX) mode has a noise figure below 3.2 dB over the whole frequency range, and can survive more than 5 W input power. The large signal insertion loss of the used SPDT is below 0.9 dB at 43 dBm input power level.", "title": "" }, { "docid": "92d047856fdf20b41c4f673aae2ced66", "text": "This paper presents Merlin, a new framework for managing resources in software-defined networks. With Merlin, administrators express high-level policies using programs in a declarative language. The language includes logical predicates to identify sets of packets, regular expressions to encode forwarding paths, and arithmetic formulas to specify bandwidth constraints. 
The Merlin compiler maps these policies into a constraint problem that determines bandwidth allocations using parameterizable heuristics. It then generates code that can be executed on the network elements to enforce the policies. To allow network tenants to dynamically adapt policies to their needs, Merlin provides mechanisms for delegating control of sub-policies and for verifying that modifications made to sub-policies do not violate global constraints. Experiments demonstrate the expressiveness and effectiveness of Merlin on real-world topologies and applications. Overall, Merlin simplifies network administration by providing high-level abstractions for specifying network policies that provision network resources.", "title": "" }, { "docid": "cd863a82161f4b28cc43eeda21e01a65", "text": "Face aging, which renders aging faces for an input face, has attracted extensive attention in the multimedia research. Recently, several conditional Generative Adversarial Nets (GANs) based methods have achieved great success. They can generate images fitting the real face distributions conditioned on each individual age group. However, these methods fail to capture the transition patterns, e.g., the gradual shape and texture changes between adjacent age groups. In this paper, we propose a novel Contextual Generative Adversarial Nets (C-GANs) to specifically take it into consideration. The C-GANs consists of a conditional transformation network and two discriminative networks. The conditional transformation network imitates the aging procedure with several specially designed residual blocks. The age discriminative network guides the synthesized face to fit the real conditional distribution. The transition pattern discriminative network is novel, aiming to distinguish the real transition patterns with the fake ones. It serves as an extra regularization term for the conditional transformation network, ensuring the generated image pairs to fit the corresponding real transition pattern distribution. Experimental results demonstrate the proposed framework produces appealing results by comparing with the state-of-the-art and ground truth. We also observe performance gain for cross-age face verification.", "title": "" }, { "docid": "7c2960e9fd059e57b5a0172e1d458250", "text": "The main goal of this research is to discover the structure of home appliances usage patterns, hence providing more intelligence in smart metering systems by taking into account the usage of selected home appliances and the time of their usage. In particular, we present and apply a set of unsupervised machine learning techniques to reveal specific usage patterns observed at an individual household. The work delivers the solutions applicable in smart metering systems that might: (1) contribute to higher energy awareness; (2) support accurate usage forecasting; and (3) provide the input for demand response systems in homes with timely energy saving recommendations for users. The results provided in this paper show that determining household characteristics from smart meter data is feasible and allows for quickly grasping general trends in data.", "title": "" }, { "docid": "2903e8be6b9a3f8dc818a57197ec1bee", "text": "A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. 
Generative models are widely viewed to be robust to such mistaken confidence as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flows to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood.", "title": "" }, { "docid": "e32c8589a92a92ab8fd876bb760fb98e", "text": "The importance of the social sciences for medical informatics is increasingly recognized. As ICT requires inter-action with people and thereby inevitably affects them, understanding ICT requires a focus on the interrelation between technology and its social environment. Sociotechnical approaches increase our understanding of how ICT applications are developed, introduced and become a part of social practices. Socio-technical approaches share several starting points: 1) they see health care work as a social, 'real life' phenomenon, which may seem 'messy' at first, but which is guided by a practical rationality that can only be overlooked at a high price (i.e. failed systems). 2) They see technological innovation as a social process, in which organizations are deeply affected. 3) Through in-depth, formative evaluation, they can help improve system design and implementation.", "title": "" }, { "docid": "0ff27e119ec045674b9111bb5a9e5d29", "text": "Description: This book provides an introduction to the complex field of ubiquitous computing Ubiquitous Computing (also commonly referred to as Pervasive Computing) describes the ways in which current technological models, based upon three base designs: smart (mobile, wireless, service) devices, smart environments (of embedded system devices) and smart interaction (between devices), relate to and support a computing vision for a greater range of computer devices, used in a greater range of (human, ICT and physical) environments and activities. The author details the rich potential of ubiquitous computing, the challenges involved in making it a reality, and the prerequisite technological infrastructure. 
Additionally, the book discusses the application and convergence of several current major and future computing trends.-Provides an introduction to the complex field of ubiquitous computing-Describes how current technology models based upon six different technology form factors which have varying degrees of mobility wireless connectivity and service volatility: tabs, pads, boards, dust, skins and clay, enable the vision of ubiquitous computing-Describes and explores how the three core designs (smart devices, environments and interaction) based upon current technology models can be applied to, and can evolve to, support a vision of ubiquitous computing and computing for the future-Covers the principles of the following current technology models, including mobile wireless networks, service-oriented computing, human computer interaction, artificial intelligence, context-awareness, autonomous systems, micro-electromechanical systems, sensors, embedded controllers and robots-Covers a range of interactions, between two or more UbiCom devices, between devices and people (HCI), between devices and the physical world.-Includes an accompanying website with PowerPoint slides, problems and solutions, exercises, bibliography and further reading Graduate students in computer science, electrical engineering and telecommunications courses will find this a fascinating and useful introduction to the subject. It will also be of interest to ICT professionals, software and network developers and others interested in future trends and models of computing and interaction over the next decades.", "title": "" }, { "docid": "cff0b5c06b322c887aed9620afeac668", "text": "In addition to providing substantial performance enhancements, future 5G networks will also change the mobile network ecosystem. Building on the network slicing concept, 5G allows to “slice” the network infrastructure into separate logical networks that may be operated independently and targeted at specific services. This opens the market to new players: the infrastructure provider, which is the owner of the infrastructure, and the tenants, which may acquire a network slice from the infrastructure provider to deliver a specific service to their customers. In this new context, we need new algorithms for the allocation of network resources that consider these new players. In this paper, we address this issue by designing an algorithm for the admission and allocation of network slices requests that (i) maximises the infrastructure provider's revenue and (ii) ensures that the service guarantees provided to tenants are satisfied. Our key contributions include: (i) an analytical model for the admissibility region of a network slicing-capable 5G Network, (ii) the analysis of the system (modelled as a Semi-Markov Decision Process) and the optimisation of the infrastructure provider's revenue, and (iii) the design of an adaptive algorithm (based on Q-learning) that achieves close to optimal performance.", "title": "" }, { "docid": "b9720d1350bf89c8a94bb30276329ce2", "text": "Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning. 
We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts whose data are sequences, structured data or graphs.", "title": "" }, { "docid": "adad5599122e63cde59322b7ba46461b", "text": "Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance i.e. they respond systematically to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning system significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in a disjoint domain.", "title": "" }, { "docid": "1be35b9562a428a7581541559dc16bd8", "text": "OBJECTIVE\nTo assess the effect of virtual reality training on an actual laparoscopic operation.\n\n\nDESIGN\nProspective randomised controlled and blinded trial.\n\n\nSETTING\nSeven gynaecological departments in the Zeeland region of Denmark.\n\n\nPARTICIPANTS\n24 first and second year registrars specialising in gynaecology and obstetrics.\n\n\nINTERVENTIONS\nProficiency based virtual reality simulator training in laparoscopic salpingectomy and standard clinical education (controls).\n\n\nMAIN OUTCOME MEASURE\nThe main outcome measure was technical performance assessed by two independent observers blinded to trainee and training status using a previously validated general and task specific rating scale. The secondary outcome measure was operation time in minutes.\n\n\nRESULTS\nThe simulator trained group (n=11) reached a median total score of 33 points (interquartile range 32-36 points), equivalent to the experience gained after 20-50 laparoscopic procedures, whereas the control group (n=10) reached a median total score of 23 (22-27) points, equivalent to the experience gained from fewer than five procedures (P<0.001). The median total operation time in the simulator trained group was 12 minutes (interquartile range 10-14 minutes) and in the control group was 24 (20-29) minutes (P<0.001). The observers' inter-rater agreement was 0.79.\n\n\nCONCLUSION\nSkills in laparoscopic surgery can be increased in a clinically relevant manner using proficiency based virtual reality simulator training. The performance level of novices was increased to that of intermediately experienced laparoscopists and operation time was halved. Simulator training should be considered before trainees carry out laparoscopic procedures.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT00311792.", "title": "" }, { "docid": "a7accee00559a544a3715acacffdd37d", "text": "Engagement is complex and multifaceted, but crucial to learning. Computerized learning environments can provide a superior learning experience for students by automatically detecting student engagement (and, thus also disengagement) and adapting to it. 
This paper describes results from several previous studies that utilized facial features to automatically detect student engagement, and proposes new methods to expand and improve results. Videos of students will be annotated by third-party observers as mind wandering (disengaged) or not mind wandering (engaged). Automatic detectors will also be trained to classify the same videos based on students' facial features, and compared to the machine predictions. These detectors will then be improved by engineering features to capture facial expressions noted by observers and more heavily weighting training instances that were exceptionally-well classified by observers. Finally, implications of previous results and proposed work are discussed.", "title": "" }, { "docid": "c1338abb3ddd4acb1ba7ed7ac0c4452c", "text": "Defect prediction models that are trained on class imbalanced datasets (i.e., the proportion of defective and clean modules is not equally represented) are highly susceptible to produce inaccurate prediction models. Prior research compares the impact of class rebalancing techniques on the performance of defect prediction models. Prior research efforts arrive at contradictory conclusions due to the use of different choice of datasets, classification techniques, and performance measures. Such contradictory conclusions make it hard to derive practical guidelines for whether class rebalancing techniques should be applied in the context of defect prediction models. In this paper, we investigate the impact of 4 popularly-used class rebalancing techniques on 10 commonly-used performance measures and the interpretation of defect prediction models. We also construct statistical models to better understand in which experimental design settings that class rebalancing techniques are beneficial for defect prediction models. Through a case study of 101 datasets that span across proprietary and open-source systems, we recommend that class rebalancing techniques are necessary when quality assurance teams wish to increase the completeness of identifying software defects (i.e., Recall). However, class rebalancing techniques should be avoided when interpreting defect prediction models. We also find that class rebalancing techniques do not impact the AUC measure. Hence, AUC should be used as a standard measure when comparing defect prediction models.", "title": "" } ]
scidocsrr
ff57c158d0058d8f5b16f4049ec0210d
Supply Chain Contracting Under Competition: Bilateral Bargaining vs. Stackelberg
[ { "docid": "6559d77de48d153153ce77b0e2969793", "text": "1 This paper is an invited chapter to be published in the Handbooks in Operations Research and Management Science: Supply Chain Management, edited by Steve Graves and Ton de Kok and published by North-Holland. I would like to thank the many people that carefully read and commented on the ...rst draft of this manuscript: Ravi Anupindi, Fangruo Chen, Charles Corbett, James Dana, Ananth Iyer, Ton de Kok, Yigal Gerchak, Mark Ferguson, Marty Lariviere, Serguei Netessine, Ediel Pinker, Nils Rudi, Sridhar Seshadri, Terry Taylor and Kevin Weng. I am, of course, responsible for all remaining errors. Comments, of course, are still quite welcomed.", "title": "" } ]
[ { "docid": "d0c5d24a5f68eb5448b45feeca098b87", "text": "Age estimation has wide applications in video surveillance, social networking, and human-computer interaction. Many of the published approaches simply treat age estimation as an exact age regression problem, and thus do not leverage a distribution's robustness in representing labels with ambiguity such as ages. In this paper, we propose a new loss function, called mean-variance loss, for robust age estimation via distribution learning. Specifically, the mean-variance loss consists of a mean loss, which penalizes difference between the mean of the estimated age distribution and the ground-truth age, and a variance loss, which penalizes the variance of the estimated age distribution to ensure a concentrated distribution. The proposed mean-variance loss and softmax loss are jointly embedded into Convolutional Neural Networks (CNNs) for age estimation. Experimental results on the FG-NET, MORPH Album II, CLAP2016, and AADB databases show that the proposed approach outperforms the state-of-the-art age estimation methods by a large margin, and generalizes well to image aesthetics assessment.", "title": "" }, { "docid": "211b858db72c962efaedf66f2ed9479d", "text": "Along with the rapid development of information and communication technologies, educators are trying to keep up with the dramatic changes in our electronic environment. These days mobile technology, with popular devices such as iPhones, Android phones, and iPads, is steering our learning environment towards increasingly focusing on mobile learning or m-Learning. Currently, most interfaces employ keyboards, mouse or touch technology, but some emerging input-interfaces use voiceor marker-based gesture recognition. In the future, one of the cutting-edge technologies likely to be used is robotics. Robots are already being used in some classrooms and are receiving an increasing level of attention. Robots today are developed for special purposes, quite similar to personal computers in their early days. However, in the future, when mass production lowers prices, robots will bring about big changes in our society. In this column, the author focuses on educational service robots. Educational service robots for language learning and robot-assisted language learning (RALL) will be introduced, and the hardware and software platforms for RALL will be explored, as well as implications for future research.", "title": "" }, { "docid": "0241cef84d46b942ee32fc7345874b90", "text": "A total of eight appendices (Appendix 1 through Appendix 8) and an associated reference for these appendices have been placed here. In addition, there is currently a search engine located at to assist users in identifying BPR techniques and tools.", "title": "" }, { "docid": "f3f4cb6e7e33f54fca58c14ce82d6b46", "text": "In this letter, a novel slot array antenna with a substrate-integrated coaxial line (SICL) technique is proposed. The proposed antenna has radiation slots etched homolaterally along the mean line in the top metallic layer of SICL and achieves a compact transverse dimension. A prototype with 5 <inline-formula><tex-math notation=\"LaTeX\">$\\times$ </tex-math></inline-formula> 10 longitudinal slots is designed and fabricated with a multilayer liquid crystal polymer (LCP) process. 
A maximum gain of 15.0 dBi is measured at 35.25 GHz with sidelobe levels of <inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math></inline-formula> 28.2 dB (<italic>E</italic>-plane) and <inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math></inline-formula> 33.1 dB (<italic>H</italic>-plane). The close correspondence between experimental results and designed predictions on radiation patterns has validated the proposed excogitation in the end.", "title": "" }, { "docid": "dea6ad0e1985260dbe7b70cef1c5da54", "text": "The commonest mitochondrial diseases are probably those impairing the function of complex I of the respiratory electron transport chain. Such complex I impairment may contribute to various neurodegenerative disorders e.g. Parkinson's disease. In the following, using hepatocytes as a model cell, we have shown for the first time that the cytotoxicity caused by complex I inhibition by rotenone but not that caused by complex III inhibition by antimycin can be prevented by coenzyme Q (CoQ1) or menadione. Furthermore, complex I inhibitor cytotoxicity was associated with the collapse of the mitochondrial membrane potential and reactive oxygen species (ROS) formation. ROS scavengers or inhibitors of the mitochondrial permeability transition prevented cytotoxicity. The CoQ1 cytoprotective mechanism required CoQ1 reduction by DT-diaphorase (NQO1). Furthermore, the mitochondrial membrane potential and ATP levels were restored at low CoQ1 concentrations (5 microM). This suggests that the CoQ1H2 formed by NQO1 reduced complex III and acted as an electron bypass of the rotenone block. However cytoprotection still occurred at higher CoQ1 concentrations (>10 microM), which were less effective at restoring ATP levels but readily restored the cellular cytosolic redox potential (i.e. lactate: pyruvate ratio) and prevented ROS formation. This suggests that CoQ1 or menadione cytoprotection also involves the NQO1 catalysed reoxidation of NADH that accumulates as a result of complex I inhibition. The CoQ1H2 formed would then also act as a ROS scavenger.", "title": "" }, { "docid": "579536fe3f52f4ed244f06210a9c2cd1", "text": "OBJECTIVE\nThis review integrates recent advances in attachment theory, affective neuroscience, developmental stress research, and infant psychiatry in order to delineate the developmental precursors of posttraumatic stress disorder.\n\n\nMETHOD\nExisting attachment, stress physiology, trauma, and neuroscience literatures were collected using Index Medicus/Medline and Psychological Abstracts. This converging interdisciplinary data was used as a theoretical base for modelling the effects of early relational trauma on the developing central and autonomic nervous system activities that drive attachment functions.\n\n\nRESULTS\nCurrent trends that integrate neuropsychiatry, infant psychiatry, and clinical psychiatry are generating more powerful models of the early genesis of a predisposition to psychiatric disorders, including PTSD. Data are presented which suggest that traumatic attachments, expressed in episodes of hyperarousal and dissociation, are imprinted into the developing limbic and autonomic nervous systems of the early maturing right brain. 
These enduring structural changes lead to the inefficient stress coping mechanisms that lie at the core of infant, child, and adult posttraumatic stress disorders.\n\n\nCONCLUSIONS\nDisorganised-disoriented insecure attachment, a pattern common in infants abused in the first 2 years of life, is psychologically manifest as an inability to generate a coherent strategy for coping with relational stress. Early abuse negatively impacts the developmental trajectory of the right brain, dominant for attachment, affect regulation, and stress modulation, thereby setting a template for the coping deficits of both mind and body that characterise PTSD symptomatology. These data suggest that early intervention programs can significantly alter the intergenerational transmission of posttraumatic stress disorders.", "title": "" }, { "docid": "793d41551a918a113f52481ff3df087e", "text": "In this paper, we propose a novel deep captioning framework called Attention-based multimodal recurrent neural network with Visual Concept Transfer Mechanism (A-VCTM). There are three advantages of the proposed A-VCTM. (1) A multimodal layer is used to integrate the visual representation and context representation together, building a bridge that connects context information with visual information directly. (2) An attention mechanism is introduced to lead the model to focus on the regions corresponding to the next word to be generated (3) We propose a visual concept transfer mechanism to generate novel visual concepts and enrich the description sentences. Qualitative and quantitative results on two standard benchmarks, MSCOCO and Flickr30K show the effectiveness and practicability of the proposed A-VCTM framework.", "title": "" }, { "docid": "ba75caedb1c9e65f14c2764157682bdf", "text": "Data augmentation is usually adopted to increase the amount of training data, prevent overfitting and improve the performance of deep models. However, in practice, the effect of regular data augmentation, such as random image crop, is limited since it might introduce much uncontrolled background noise. In this paper, we propose WeaklySupervised Data Augmentation Network (WS-DAN) to explore the potential of data augmentation. Specifically, for each training image, we first generate attention maps to represent the object’s discriminative parts by weakly supervised Learning. Next, we randomly choose one attention map to augment this image, including attention crop and attention drop. Weakly-supervised data augmentation network improves the classification accuracy in two folds. On the one hand, images can be seen better since multiple object parts can be activated. On the other hand, attention regions provide spatial information of objects, which can make images be looked closer to further improve the performance. Comprehensive experiments in common fine-grained visual classification datasets show that our method surpasses the state-of-the-art methods by a large margin, which demonstrated the effectiveness of the proposed method.", "title": "" }, { "docid": "5ca75490c015685a1fc670b2ee5103ff", "text": "The motion of the hand is the result of a complex interaction of extrinsic and intrinsic muscles of the forearm and hand. Whereas the origin of the extrinsic hand muscles is mainly located in the forearm, the origin (and insertion) of the intrinsic muscles is located within the hand itself. 
The intrinsic muscles of the hand include the lumbrical muscles I to IV, the dorsal and palmar interosseous muscles, the muscles of the thenar eminence (the flexor pollicis brevis, the abductor pollicis brevis, the adductor pollicis, and the opponens pollicis), as well as the hypothenar muscles (the abductor digiti minimi, flexor digiti minimi, and opponens digiti minimi). The thenar muscles control the motion of the thumb, and the hypothenar muscles control the motion of the little finger.1,2 The intrinsic muscles of the hand have not received much attention in the radiologic literature, despite their importance in moving the hand.3–7 Prospective studies on magnetic resonance (MR) imaging of the intrinsic muscles of the hand are rare, especially with a focus on new imaging techniques.6–8 However, similar to the other skeletal muscles, the intrinsic muscles of the hand can be affected by many conditions with resultant alterations in MR signal intensity ormorphology (e.g., with congenital abnormalities, inflammation, infection, trauma, neurologic disorders, and neoplastic conditions).1,9–12 MR imaging plays an important role in the evaluation of skeletal muscle disorders. Considered the most reliable diagnostic imaging tool, it can show subtle changes of signal and morphology, allow reliable detection and documentation of abnormalities, as well as provide a clear baseline for follow-up studies.13 It is also observer independent and allows second-opinion evaluation that is sometimes necessary, for example before a multidisciplinary discussion. Few studies exist on the clinical impact of MR imaging of the intrinsic muscles of the hand. A study by Andreisek et al in 19 patients with clinically evident or suspected intrinsic hand muscle abnormalities showed that MR imaging of the hand is useful and correlates well with clinical findings in patients with posttraumatic syndromes, peripheral neuropathies, myositis, and tumorous lesions, as well as congenital abnormalities.14,15 Because there is sparse literature on the intrinsic muscles of the hand, this review article offers a comprehensive review of muscle function and anatomy, describes normal MR imaging anatomy, and shows a spectrum of abnormal imaging findings.", "title": "" }, { "docid": "c3b6d46a9e1490c720056682328586d5", "text": "BACKGROUND\nBirth preparedness and complication preparedness (BPACR) is a key component of globally accepted safe motherhood programs, which helps ensure women to reach professional delivery care when labor begins and to reduce delays that occur when mothers in labor experience obstetric complications.\n\n\nOBJECTIVE\nThis study was conducted to assess practice and factors associated with BPACR among pregnant women in Aleta Wondo district in Sidama Zone, South Ethiopia.\n\n\nMETHODS\nA community based cross sectional study was conducted in 2007, on a sample of 812 pregnant women. Data were collected using pre-tested and structured questionnaire. The collected data were analyzed by SPSS for windows version 12.0.1. The women were asked whether they followed the desired five steps while pregnant: identified a trained birth attendant, identified a health facility, arranged for transport, identified blood donor and saved money for emergency. Taking at least two steps was considered being well-prepared.\n\n\nRESULTS\nAmong 743 pregnant women only a quarter (20.5%) of pregnant women identified skilled provider. Only 8.1% identified health facility for delivery and/or for obstetric emergencies. 
Preparedness for transportation was found to be very low (7.7%). Considerable (34.5%) number of families saved money for incurred costs of delivery and emergency if needed. Only few (2.3%) identified potential blood donor in case of emergency. Majority (87.9%) of the respondents reported that they intended to deliver at home, and only 60(8%) planned to deliver at health facilities. Overall only 17% of pregnant women were well prepared. The adjusted multivariate model showed that significant predictors for being well-prepared were maternal availing of antenatal services (OR = 1.91 95% CI; 1.21-3.01) and being pregnant for the first time (OR = 6.82, 95% CI; 1.27-36.55).\n\n\nCONCLUSION\nBPACR practice in the study area was found to be low. Effort to increase BPACR should focus on availing antenatal care services.", "title": "" }, { "docid": "d8b2294b650274fc0269545296504432", "text": "The multidisciplinary nature of information privacy research poses great challenges, since many concepts of information privacy have only been considered and developed through the lens of a particular discipline. It was our goal to conduct a multidisciplinary literature review. Following the three-stage approach proposed by Webster and Watson (2002), our methodology for identifying information privacy publications proceeded in three stages.", "title": "" }, { "docid": "52ebf28afd8ae56816fb81c19e8890b6", "text": "In this paper we aim to model the relationship between the text of a political blog post and the comment volume—that is, the total amount of response—that a post will receive. We seek to accurately identify which posts will attract a high-volume response, and also to gain insight about the community of readers and their interests. We design and evaluate variations on a latentvariable topic model that links text to comment volume. Introduction What makes a blog post noteworthy? One measure of the popularity or breadth of interest of a blog post is the extent to which readers of the blog are inspired to leave comments on the post. In this paper, we study the relationship between the text contents of a blog post and the volume of response it will receive from blog readers. Modeling this relationship has the potential to reveal the interests of a blog’s readership community to its authors, readers, advertisers, and scientists studying the blogosphere, but it may also be useful in improving technologies for blog search, recommendation, summarization, and so on. There are many ways to define “popularity” in blogging. In this study, we focus exclusively on the aggregate volume of comments. Commenting is an important activity in the political blogosphere, giving a blog site the potential to become a discussion forum. For a given blog post, we treat comment volume as a target output variable, and use generative probabilistic models to learn from past data the relationship between a blog post’s text contents and its comment volume. While many clues might be useful in predicting comment volume (e.g., the post’s author, the time the post appears, the length of the post, etc.) here we focus solely on the text contents of the post. We first describe the data and experimental framework, including a simple baseline. We then explore how latentvariable topic models can be used to make better predictions about comment volume. These models reveal that part of the variation in comment volume can be explained by the topic of the blog post, and elucidate the relative degrees to which readers find each topic comment-worthy. 
∗The authors acknowledge research support from HP Labs and helpful comments from the reviewers and Jacob Eisenstein. Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Predicting Comment Volume Our goal is to predict some measure of the volume of comments on a new blog post.1 Volume might be measured as the number of words in the comment section, the number of comments, the number of distinct users who leave comments, or a variety of other ways. Any of these can be affected by uninteresting factors—the time of day the post appears, a side conversation, a surge in spammer activity—but these quantities are easily measured. In research on blog data, comments are often ignored, and it is easy to see why: comments are very noisy, full of non-standard grammar and spelling, usually unedited, often cryptic and uninformative, at least to those outside the blog’s community. A few studies have focused on information in comments. Mishe and Glance (2006) showed the value of comments in characterizing the social repercussions of a post, including popularity and controversy. Their largescale user study correlated popularity and comment activity. Yano et al. (2009) sought to predict which members of blog’s community would leave comments, and in some cases used the text contents of the comments themselves to discover topics related to both words and user comment behavior. This work is similar, but we seek to predict the aggregate behavior of the blog post’s readers: given a new blog post, how much will the community comment on it?", "title": "" }, { "docid": "40479536efec6311cd735f2bd34605d7", "text": "The vast quantity of information brought by big data as well as the evolving computer hardware encourages success stories in the machine learning community. In the meanwhile, it poses challenges for the Gaussian process (GP), a well-known non-parametric and interpretable Bayesian model, which suffers from cubic complexity to training size. To improve the scalability while retaining the desirable prediction quality, a variety of scalable GPs have been presented. But they have not yet been comprehensively reviewed and discussed in a unifying way in order to be well understood by both academia and industry. To this end, this paper devotes to reviewing state-of-theart scalable GPs involving two main categories: global approximations which distillate the entire data and local approximations which divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations comprising prior approximations which modify the prior but perform exact inference, and posterior approximations which retain exact prior but perform approximate inference; for local approximations, we highlight the mixture/product of experts that conducts model averaging from multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and model capability of scalable GPs are reviewed. Finally, the extensions and open issues regarding the implementation of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.", "title": "" }, { "docid": "68d8834770c34450adc96ed96299ae48", "text": "This thesis presents a current-mode CMOS image sensor using lateral bipolar phototransistors (LPTs). 
The objective of this design is to improve the photosensitivity of the image sensor, and to provide photocurrent amplification at the circuit level. Lateral bipolar phototransistors can be implemented using a standard CMOS technology with no process modification. Under illumination, photogenerated carriers contribute to the base current, and the output emitter current is amplified through the transistor action of the bipolar device. Our analysis and simulation results suggest that the LPT output characteristics are strongly dependent on process parameters including base and emitter doping concentrations, as well as the device geometry such as the base width. For high current gain, a minimized base width is desired. The 2D effect of current crowding has also been discussed. Photocurrent can be further increased using amplifying current mirrors in the pixel and column structures. A prototype image sensor has been designed and fabricated in a standard 0.18μm CMOS technology. This design includes a photodiode image array and a LPT image array, each 70× 48 in dimension. For both arrays, amplifying current mirrors are included in the pixel readout structure and at the column level. Test results show improvements in both photosensitivity and conversion efficiency. The LPT also exhibits a better spectral response in the red region of the spectrum, because of the nwell/p-substrate depletion region. On the other hand, dark current, fixed pattern noise (FPN), and power consumption also increase due to current amplification. This thesis has demonstrated that the use of lateral bipolar phototransistors and amplifying current mirrors can help to overcome low photosensitivity and other deterioration imposed by technology scaling. The current-mode readout scheme with LPT-based photodetectors can be used as a front end to additional image processing circuits.", "title": "" }, { "docid": "335220bbad7798a19403d393bcbbf7fb", "text": "In today’s computerized and information-based society, text data is rich but messy. People are soaked with vast amounts of natural-language text data, ranging from news articles, social media post, advertisements, to a wide range of textual information from various domains (medical records, corporate reports). To turn such massive unstructured text data into actionable knowledge, one of the grand challenges is to gain an understanding of the factual information (e.g., entities, attributes, relations, events) in the text. In this tutorial, we introduce data-driven methods to construct structured information networks (where nodes are different types of entities attached with attributes, and edges are different relations between entities) for text corpora of different kinds (especially for massive, domain-specific text corpora) to represent their factual information. We focus on methods that are minimally-supervised, domain-independent, and languageindependent for fast network construction across various application domains (news, web, biomedical, reviews). We demonstrate on real datasets including news articles, scientific publications, tweets and reviews how these constructed networks aid in text analytics and knowledge discovery at a large scale.", "title": "" }, { "docid": "d8eab1f244bd5f9e05eb706bb814d299", "text": "Private participation in road projects is increasing around the world. The most popular franchising mechanism is a concession contract, which allows a private firm to charge tolls to road users during a pre-determined period in order to recover its investments. 
Concessionaires are usually selected through auctions at which candidates submit bids for tolls, payments to the government, or minimum term to hold the contract. This paper discusses, in the context of road franchising, how this mechanism does not generally yield optimal outcomes and it induces the frequent contract renegotiations observed in road projects. A new franchising mechanism is proposed, based on flexible-term contracts and auctions with bids for total net revenue and maintenance costs. This new mechanism improves outcomes compared to fixed-term concessions, by eliminating traffic risk and promoting the selection of efficient concessionaires.", "title": "" }, { "docid": "155de33977b33d2f785fd86af0aa334f", "text": "Model-based analysis tools, built on assumptions and simplifications, are difficult to handle smart grids with data characterized by volume, velocity, variety, and veracity (i.e., 4Vs data). This paper, using random matrix theory (RMT), motivates data-driven tools to perceive the complex grids in high-dimension; meanwhile, an architecture with detailed procedures is proposed. In algorithm perspective, the architecture performs a high-dimensional analysis and compares the findings with RMT predictions to conduct anomaly detections. Mean spectral radius (MSR), as a statistical indicator, is defined to reflect the correlations of system data in different dimensions. In management mode perspective, a group-work mode is discussed for smart grids operation. This mode breaks through regional limitations for energy flows and data flows, and makes advanced big data analyses possible. For a specific large-scale zone-dividing system with multiple connected utilities, each site, operating under the group-work mode, is able to work out the regional MSR only with its own measured/simulated data. The large-scale interconnected system, in this way, is naturally decoupled from statistical parameters perspective, rather than from engineering models perspective. Furthermore, a comparative analysis of these distributed MSRs, even with imperceptible different raw data, will produce a contour line to detect the event and locate the source. It demonstrates that the architecture is compatible with the block calculation only using the regional small database; beyond that, this architecture, as a data-driven solution, is sensitive to system situation awareness, and practical for real large-scale interconnected systems. Five case studies and their visualizations validate the designed architecture in various fields of power systems. To our best knowledge, this paper is the first attempt to apply big data technology into smart grids.", "title": "" }, { "docid": "e75f830b902ca7d0e8d9e9fa03a62440", "text": "Changes in synaptic connections are considered essential for learning and memory formation. However, it is unknown how neural circuits undergo continuous synaptic changes during learning while maintaining lifelong memories. Here we show, by following postsynaptic dendritic spines over time in the mouse cortex, that learning and novel sensory experience lead to spine formation and elimination by a protracted process. The extent of spine remodelling correlates with behavioural improvement after learning, suggesting a crucial role of synaptic structural plasticity in memory formation. 
Importantly, a small fraction of new spines induced by novel experience, together with most spines formed early during development and surviving experience-dependent elimination, are preserved and provide a structural basis for memory retention throughout the entire life of an animal. These studies indicate that learning and daily sensory experience leave minute but permanent marks on cortical connections and suggest that lifelong memories are stored in largely stably connected synaptic networks.", "title": "" }, { "docid": "f96098449988c433fe8af20be0c468a5", "text": "Programmatic assessment is an integral approach to the design of an assessment program with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.", "title": "" }, { "docid": "546296aecaee9963ee7495c9fbf76fd4", "text": "In this paper, we propose text summarization method that creates text summary by definition of the relevance score of each sentence and extracting sentences from the original documents. While summarization this method takes into account weight of each sentence in the document. The essence of the method suggested is in preliminary identification of every sentence in the document with characteristic vector of words, which appear in the document, and calculation of relevance score for each sentence. The relevance score of sentence is determined through its comparison with all the other sentences in the document and with the document title by cosine measure. Prior to application of this method the scope of features is defined and then the weight of each word in the sentence is calculated with account of those features. The weights of features, influencing relevance of words, are determined using genetic algorithms.", "title": "" } ]
scidocsrr
7a4fcb24bbaec04b6699f8dd33a65836
Mental Health Problems in University Students: A Prevalence Study
[ { "docid": "1497e47ada570797e879bbc4aba432a1", "text": "The mental health of university students is an area of increasing concern worldwide. The objective of this study is to examine the prevalence of depression, anxiety and stress among a group of Turkish university students. Depression Anxiety and Stress Scale (DASS-42) completed anonymously in the students’ respective classrooms by 1,617 students. Depression, anxiety and stress levels of moderate severity or above were found in 27.1, 47.1 and 27% of our respondents, respectively. Anxiety and stress scores were higher among female students. First- and second-year students had higher depression, anxiety and stress scores than the others. Students who were satisfied with their education had lower depression, anxiety and stress scores than those who were not satisfied. The high prevalence of depression, anxiety and stress symptoms among university students is alarming. This shows the need for primary and secondary prevention measures, with the development of adequate and appropriate support services for this group.", "title": "" } ]
[ { "docid": "0ef6e54d7190dde80ee7a30c5ecae0c3", "text": "Games have been an important tool for motivating undergraduate students majoring in computer science and engineering. However, it is difficult to build an entire game for education from scratch, because the task requires high-level programming skills and expertise to understand the graphics and physics. Recently, there have been many different game artificial intelligence (AI) competitions, ranging from board games to the state-of-the-art video games (car racing, mobile games, first-person shooting games, real-time strategy games, and so on). The competitions have been designed such that participants develop their own AI module on top of public/commercial games. Because the materials are open to the public, it is quite useful to adopt them for an undergraduate course project. In this paper, we report our experiences using the Angry Birds AI Competition for such a project-based course. In the course, teams of students consider computer vision, strategic decision-making, resource management, and bug-free coding for their outcome. To promote understanding of game contents generation and extensive testing on the generalization abilities of the student's AI program, we developed software to help them create user-created levels. Students actively participated in the project and the final outcome was comparable with that of successful entries in the 2013 International Angry Birds AI Competition. Furthermore, it leads to the development of a new parallelized Angry Birds AI Competition platform with undergraduate students aiming to use advanced optimization algorithms for their controllers.", "title": "" }, { "docid": "0fba05a38cb601a1b08e6105e6b949c1", "text": "This paper discusses how to implement Paillier homomorphic encryption (HE) scheme in Java as an API. We first analyze existing Pailler HE libraries and discuss their limitations. We then design a comparatively accomplished and efficient Pailler HE Java library. As a proof of concept, we applied our Pailler HE library in an electronic voting system that allows the voting server to sum up the candidates' votes in the encrypted form with voters remain anonymous. Our library records an average of only 2766ms for each vote placement through HTTP POST request.", "title": "" }, { "docid": "f1df8b69dfec944b474b9b26de135f55", "text": "Background:There are currently two million cancer survivors in the United Kingdom, and in recent years this number has grown by 3% per annum. The aim of this paper is to provide long-term projections of cancer prevalence in the United Kingdom.Methods:National cancer registry data for England were used to estimate cancer prevalence in the United Kingdom in 2009. Using a model of prevalence as a function of incidence, survival and population demographics, projections were made to 2040. Different scenarios of future incidence and survival, and their effects on cancer prevalence, were also considered. Colorectal, lung, prostate, female breast and all cancers combined (excluding non-melanoma skin cancer) were analysed separately.Results:Assuming that existing trends in incidence and survival continue, the number of cancer survivors in the United Kingdom is projected to increase by approximately one million per decade from 2010 to 2040. Particularly large increases are anticipated in the oldest age groups, and in the number of long-term survivors. 
By 2040, almost a quarter of people aged at least 65 will be cancer survivors.Conclusion:Increasing cancer survival and the growing/ageing population of the United Kingdom mean that the population of survivors is likely to grow substantially in the coming decades, as are the related demands upon the health service. Plans must, therefore, be laid to ensure that the varied needs of cancer survivors can be met in the future.", "title": "" }, { "docid": "28d19824a598ae20039f2ed5d8885234", "text": "Soft-tissue augmentation of the face is an increasingly popular cosmetic procedure. In recent years, the number of available filling agents has also increased dramatically, improving the range of options available to physicians and patients. Understanding the different characteristics, capabilities, risks, and limitations of the available dermal and subdermal fillers can help physicians improve patient outcomes and reduce the risk of complications. The most popular fillers are those made from cross-linked hyaluronic acid (HA). A major and unique advantage of HA fillers is that they can be quickly and easily reversed by the injection of hyaluronidase into areas in which elimination of the filler is desired, either because there is excess HA in the area or to accelerate the resolution of an adverse reaction to treatment or to the product. In general, a lower incidence of complications (especially late-occurring or long-lasting effects) has been reported with HA fillers compared with the semi-permanent and permanent fillers. The implantation of nonreversible fillers requires more and different expertise on the part of the physician than does injection of HA fillers, and may produce effects and complications that are more difficult or impossible to manage even by the use of corrective surgery. Most practitioners use HA fillers as the foundation of their filler practices because they have found that HA fillers produce excellent aesthetic outcomes with high patient satisfaction, and a low incidence and severity of complications. Only limited subsets of physicians and patients have been able to justify the higher complexity and risks associated with the use of nonreversible fillers.", "title": "" }, { "docid": "a574355d46c6e26efe67aefe2869a0cb", "text": "The continuously increasing cost of the US healthcare system has received significant attention. Central to the ideas aimed at curbing this trend is the use of technology in the form of the mandate to implement electronic health records (EHRs). EHRs consist of patient information such as demographics, medications, laboratory test results, diagnosis codes, and procedures. Mining EHRs could lead to improvement in patient health management as EHRs contain detailed information related to disease prognosis for large patient populations. In this article, we provide a structured and comprehensive overview of data mining techniques for modeling EHRs. We first provide a detailed understanding of the major application areas to which EHR mining has been applied and then discuss the nature of EHR data and its accompanying challenges. Next, we describe major approaches used for EHR mining, the metrics associated with EHRs, and the various study designs. 
With this foundation, we then provide a systematic and methodological organization of existing data mining techniques used to model EHRs and discuss ideas for future research.", "title": "" }, { "docid": "02e63f2279dbd980c6689bec5ea18411", "text": "Reflection photoplethysmography (PPG) using 530 nm (green) wavelength light has the potential to be a superior method for monitoring heart rate (HR) during normal daily life due to its relative freedom from artifacts. However, little is known about the accuracy of pulse rate (PR) measured by 530 nm light PPG during motion. Therefore, we compared the HR measured by electrocadiography (ECG) as a reference with PR measured by 530, 645 (red), and 470 nm (blue) wavelength light PPG during baseline and while performing hand waving in 12 participants. In addition, we examined the change of signal-to-noise ratio (SNR) by motion for each of the three wavelengths used for the PPG. The results showed that the limit of agreement in Bland-Altman plots between the HR measured by ECG and PR measured by 530 nm light PPG (±0.61 bpm) was smaller than that achieved when using 645 and 470 nm light PPG (±3.20 bpm and ±2.23 bpm, respectively). The ΔSNR (the difference between baseline and task values) of 530 and 470nm light PPG was significantly smaller than ΔSNR for red light PPG. In conclusion, 530 nm light PPG could be a more suitable method than 645 and 470nm light PPG for monitoring HR in normal daily life.", "title": "" }, { "docid": "5ccf0b3f871f8362fccd4dbd35a05555", "text": "Recent evidence suggests a positive impact of bilingualism on cognition, including later onset of dementia. However, monolinguals and bilinguals might have different baseline cognitive ability. We present the first study examining the effect of bilingualism on later-life cognition controlling for childhood intelligence. We studied 853 participants, first tested in 1947 (age = 11 years), and retested in 2008-2010. Bilinguals performed significantly better than predicted from their baseline cognitive abilities, with strongest effects on general intelligence and reading. Our results suggest a positive effect of bilingualism on later-life cognition, including in those who acquired their second language in adulthood.", "title": "" }, { "docid": "736ee2bed70510d77b1f9bb13b3bee68", "text": "Yes, they do. This work investigates a perspective for deep learning: whether different normalization layers in a ConvNet require different normalizers. This is the first step towards understanding this phenomenon. We allow each convolutional layer to be stacked before a switchable normalization (SN) that learns to choose a normalizer from a pool of normalization methods. Through systematic experiments in ImageNet, COCO, Cityscapes, and ADE20K, we answer three questions: (a) Is it useful to allow each normalization layer to select its own normalizer? (b) What impacts the choices of normalizers? (c) Do different tasks and datasets prefer different normalizers? 
Our results suggest that (1) using distinct normalizers improves both learning and generalization of a ConvNet; (2) the choices of normalizers are more related to depth and batch size, but less relevant to parameter initialization, learning rate decay, and solver; (3) different tasks and datasets have different behaviors when learning to select normalizers.", "title": "" }, { "docid": "c60c83c93577377bad43ed1972079603", "text": "In this contribution, a set of robust GaN MMIC T/R switches and low-noise amplifiers, all based on the same GaN process, is presented. The target operating bandwidths are the X-band and the 2-18 GHz bandwidth. Several robustness tests on the fabricated MMICs demonstrate state-ofthe-art survivability to CW input power levels. The development of high-power amplifiers, robust low-noise amplifiers and T/R switches on the same GaN monolithic process will bring to the next generation of fully-integrated T/R module", "title": "" }, { "docid": "57e9467bfbc4e891acd00dcdac498e0e", "text": "Cross-cultural perspectives have brought renewed interest in the social aspects of the self and the extent to which individuals define themselves in terms of their relationships to others and to social groups. This article provides a conceptual review of research and theory of the social self, arguing that the personal, relational, and collective levels of self-definition represent distinct forms of selfrepresentation with different origins, sources of self-worth, and social motivations. A set of 3 experiments illustrates haw priming of the interpersonal or collective \"we\" can alter spontaneous judgments of similarity and self-descriptions.", "title": "" }, { "docid": "e50c921d664f970daa8050bad282e066", "text": "In the complex decision-environments that characterize e-business settings, it is important to permit decision-makers to proactively manage data quality. In this paper we propose a decision-support framework that permits decision-makers to gauge quality both in an objective (context-independent) and in a context-dependent manner. The framework is based on the information product approach and uses the Information Product Map (IPMAP). We illustrate its application in evaluating data quality using completeness—a data quality dimension that is acknowledged as important. A decision-support tool (IPView) for managing data quality that incorporates the proposed framework is also described. D 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "01c267fbce494fcfabeabd38f18c19a3", "text": "New insights in the programming physics of silicided polysilicon fuses integrated in 90 nm CMOS have led to a programming time of 100 ns, while achieving a resistance increase of 107. This is an order of magnitude better than any previously published result for the programming time and resistance increase individually. Simple calculations and TEM-analyses substantiate the proposed programming mechanism. The advantage of a rectangular fuse head over a tapered fuse head is shown and explained", "title": "" }, { "docid": "e875d4a88e73984e37f5ce9ffe543791", "text": "A set of face stimuli called the NimStim Set of Facial Expressions is described. The goal in creating this set was to provide facial expressions that untrained individuals, characteristic of research participants, would recognize. This set is large in number, multiracial, and available to the scientific community online. The results of psychometric evaluations of these stimuli are presented. 
The results lend empirical support for the validity and reliability of this set of facial expressions as determined by accurate identification of expressions and high intra-participant agreement across two testing sessions, respectively.", "title": "" }, { "docid": "829eafadf393a66308db452eeef617d5", "text": "The goal of creating non-biological intelligence has been with us for a long time, predating the nominal 1956 establishment of the field of artificial intelligence by centuries or, under some definitions, even by millennia. For much of this history it was reasonable to recast the goal of “creating” intelligence as that of “designing” intelligence. For example, it would have been reasonable in the 17th century, as Leibnitz was writing about reasoning as a form of calculation, to think that the process of creating artificial intelligence would have to be something like the process of creating a waterwheel or a pocket watch: first understand the principles, then use human intelligence to devise a design based on the principles, and finally build a system in accordance with the design. At the dawn of the 19th century William Paley made such assumptions explicit, arguing that intelligent designers are necessary for the production of complex adaptive systems. And then, of course, Paley was soundly refuted by Charles Darwin in 1859. Darwin showed how complex and adaptive systems can arise naturally from a process of selection acting on random variation. That is, he showed that complex and adaptive design could be created without an intelligent designer. On the basis of evidence from paleontology, molecular biology, and evolutionary theory we now understand that nearly all of the interesting features of biological agents, including intelligence, have arisen through roughly Darwinian evolutionary processes (with a few important refinements, some of which are mentioned below). But there are still some holdouts for the pre-Darwinian view. A recent survey in the United States found that 42% of respondents expressed a belief that “Life on Earth has existed in its present form since the beginning of time” [7], and these views are supported by powerful political forces including a stridently anti-science President. These shocking political realities are, however, beyond the scope of the present essay. This essay addresses a more subtle form of pre-Darwinian thinking that occurs even among the scientifically literate, and indeed even among highly trained scientists conducting advanced AI research. Those who engage in this form of pre-Darwinian thinking accept the evidence for the evolution of terrestrial life but ignore or even explicitly deny the power of evolutionary processes to produce adaptive complexity in other contexts. Within the artificial intelligence research community those who engage in this form of thinking ignore or deny the power of evolutionary processes to create machine intelligence. Before exploring this complaint further it is worth asking whether an evolved artificial intelligence would even serve the broader goals of AI as a field. Every AI text opens by defining the field, and some of the proffered definitions are explicitly oriented toward design—presumably design by intelligent humans. For example Dean et al. define AI as “the design and study of computer programs that behave intelligently” [2, p. 1]. Would the field, so defined, be served by the demonstration of an evolved artificial intelligence? 
It would insofar as we could study the evolved system and particularly if we could use our resulting understanding as the basis for future designs. So even the most design-oriented AI researchers should be interested in evolved artificial intelligence if it can in fact be created.", "title": "" }, { "docid": "8d176debd26505d424dcbf8f5cfdb4d1", "text": "We present a system for training deep neural networks for object detection using synthetic images. To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator-such as lighting, pose, object textures, etc.-are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest. We explore the importance of these parameters, showing that it is possible to produce a network with compelling performance using only non-artistically-generated synthetic data. With additional fine-tuning on real data, the network yields better performance than using real data alone. This result opens up the possibility of using inexpensive synthetic data for training neural networks while avoiding the need to collect large amounts of hand-annotated real-world data or to generate high-fidelity synthetic worlds-both of which remain bottlenecks for many applications. The approach is evaluated on bounding box detection of cars on the KITTI dataset.", "title": "" }, { "docid": "97b578720957155514ca9fbe68c03eed", "text": "Autonomous navigation in unstructured environments like forest or country roads with dynamic objects remains a challenging task, particularly with respect to the perception of the environment using multiple different sensors.", "title": "" }, { "docid": "52c1300a818340065ca16d02343f13fe", "text": "Article history: Received 9 September 2014 Received in revised form 25 January 2015 Accepted 9 February 2015 Available online xxxx", "title": "" }, { "docid": "419499ced8902a00909c32db352ea7f5", "text": "Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of control plane may not readily translate into that of data plane. To bridge this gap, we present VeriDP, which can monitor \"whether actual forwarding behaviors are complying with network configurations\". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with headers to the verification server before the packets leave the network. The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane fault including black holes and access violations, with a minimal impact on the data plane.", "title": "" }, { "docid": "186d9fc899fdd92c7e74615a2a054a03", "text": "In this paper, we propose an illumination-robust face recognition system via local directional pattern images. 
Usually, local pattern descriptors including local binary pattern and local directional pattern have been used in the field of face recognition and facial expression recognition, since local pattern descriptors have important properties such as robustness against illumination changes and computational simplicity. Thus, this paper presents a face recognition approach that employs the local directional pattern descriptor and two-dimensional principal component analysis algorithms to achieve enhanced recognition accuracy. In particular, we propose a novel methodology that utilizes the transformed image obtained from the local directional pattern descriptor as the direct input image of two-dimensional principal component analysis algorithms, unlike most previous works, which employed the local pattern descriptors to acquire histogram features. The performance evaluation of the proposed system was performed using well-known approaches such as principal component analysis and Gabor-wavelets based on local binary pattern, and publicly available databases including the Yale B database and the CMU-PIE database were employed. Through experimental results, the proposed system showed the best recognition accuracy compared to different approaches, and we confirmed the effectiveness of the proposed method under varying lighting conditions.", "title": "" }, { "docid": "6fc870c703611e07519ce5fe956c15d1", "text": "Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions, thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect the performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance in the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.", "title": "" } ]
scidocsrr
e1640b20b57f2db83b41db76947416dc
Data Mining in the Dark : Darknet Intelligence Automation
[ { "docid": "22bdd2c36ef72da312eb992b17302fbe", "text": "In this paper, we present an operational system for cyber threat intelligence gathering from various social platforms on the Internet particularly sites on the darknet and deepnet. We focus our attention to collecting information from hacker forum discussions and marketplaces offering products and services focusing on malicious hacking. We have developed an operational system for obtaining information from these sites for the purposes of identifying emerging cyber threats. Currently, this system collects on average 305 high-quality cyber threat warnings each week. These threat warnings include information on newly developed malware and exploits that have not yet been deployed in a cyber-attack. This provides a significant service to cyber-defenders. The system is significantly augmented through the use of various data mining and machine learning techniques. With the use of machine learning models, we are able to recall 92% of products in marketplaces and 80% of discussions on forums relating to malicious hacking with high precision. We perform preliminary analysis on the data collected, demonstrating its application to aid a security expert for better threat analysis.", "title": "" }, { "docid": "6d31ee4b0ad91e6500c5b8c7e3eaa0ca", "text": "A host of tools and techniques are now available for data mining on the Internet. The explosion in social media usage and people reporting brings a new range of problems related to trust and credibility. Traditional media monitoring systems have now reached such sophistication that real time situation monitoring is possible. The challenge though is deciding what reports to believe, how to index them and how to process the data. Vested interests allow groups to exploit both social media and traditional media reports for propaganda purposes. The importance of collecting reports from all sides in a conflict and of balancing claims and counter-claims becomes more important as ease of publishing increases. Today the challenge is no longer accessing open source information but in the tagging, indexing, archiving and analysis of the information. This requires the development of general-purpose and domain specific knowledge bases. Intelligence tools are needed which allow an analyst to rapidly access relevant data covering an evolving situation, ranking sources covering both facts and opinions.", "title": "" } ]
[ { "docid": "a854ee8cf82c4bd107e93ed0e70ee543", "text": "Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness of mediators and to shift from less effective to more effective mediators. Across a series of experiments, participants used a keyword encoding strategy to learn word pairs with test-restudy practice or restudy only. Robust testing effects were obtained in all experiments, and results supported predictions of the mediator shift hypothesis. First, a greater proportion of keyword shifts occurred during test-restudy practice versus restudy practice. Second, a greater proportion of keyword shifts occurred after retrieval failure trials versus retrieval success trials during test-restudy practice. Third, a greater proportion of keywords were recalled on a final keyword recall test after test-restudy versus restudy practice.", "title": "" }, { "docid": "bc6877a5a83531a794ac1c8f7a4c7362", "text": "A number of times when using cross-validation (CV) while trying to do classification/probability estimation we have observed surprisingly low AUC's on real data with very few positive examples. AUC is the area under the ROC and measures the ranking ability and corresponds to the probability that a positive example receives a higher model score than a negative example. Intuition seems to suggest that no reasonable methodology should ever result in a model with an AUC significantly below 0.5. The focus of this paper is not on the estimator properties of CV (bias/variance/significance), but rather on the properties of the 'holdout' predictions based on which the CV performance of a model is calculated. We show that CV creates predictions that have an 'inverse' ranking with AUC well below 0.25 using features that were initially entirely unpredictive and models that can only perform monotonic transformations. In the extreme, combining CV with bagging (repeated averaging of out-of-sample predictions) generates 'holdout' predictions with perfectly opposite rankings on random data. While this would raise immediate suspicion upon inspection, we would like to caution the data mining community against using CV for stacking or in currently popular ensemble methods. They can reverse the predictions by assigning negative weights and produce in the end a model that appears to have close to perfect predictability while in reality the data was random.", "title": "" }, { "docid": "a33486dfec199cd51e885d6163082a96", "text": "In this study, the aim is to examine the most popular eSport applications at a global scale. In this context, the App Store and Google Play Store application platforms which have the highest number of users at a global scale were focused on. For this reason, the eSport applications included in these two platforms constituted the sampling of the present study. A data collection form was developed by the researcher of the study in order to collect the data in the study. This form included the number of the countries, the popularity ratings of the application, the name of the application, the type of it, the age limit, the rating of the likes, the company that developed it, the version and the first appearance date. 
The study was conducted with the Qualitative Research Method, and the Case Study design was made use of in this process; and the Descriptive Analysis Method was used to analyze the data. As a result of the study, it was determined that the most popular eSport applications at a global scale were football, which ranked the first, basketball, billiards, badminton, skateboarding, golf and dart. It was also determined that the popularity of the mobile eSport applications changed according to countries and according to being free or paid. It was determined that the popularity of these applications differed according to the individuals using the App Store and Google Play Store application markets. As a result, it is possible to claim that mobile eSport applications have a wide usage area at a global scale and are accepted widely. In addition, it was observed that the interest in eSport applications was similar to that in traditional sports. However, in the present study, a certain date was set, and the interest in mobile eSport applications was analyzed according to this specific date. In future studies, different dates and different fields like educational sciences may be set to analyze the interest in mobile eSport applications. In this way, findings may be obtained on the change of the interest in mobile eSport applications according to time. The findings of the present study and similar studies may have the quality of guiding researchers and system/software developers in terms of showing the present status of the topic and revealing the relevant needs.", "title": "" }, { "docid": "7394f3000da8af0d4a2b33fed4f05264", "text": "We often base our decisions on uncertain data - for instance, when consulting the weather forecast before deciding what to wear. Due to their uncertainty, such forecasts can differ by provider. To make an informed decision, many people compare several forecasts, which is a time-consuming and cumbersome task. To facilitate comparison, we identified three aggregation mechanisms for forecasts: manual comparison and two mechanisms of computational aggregation. In a survey, we compared the mechanisms using different representations. We then developed a weather application to evaluate the most promising candidates in a real-world study. Our results show that aggregation increases users' confidence in uncertain data, independent of the type of representation. Further, we find that for daily events, users prefer to use computationally aggregated forecasts. However, for high-stakes events, they prefer manual comparison. We discuss how our findings inform the design of improved interfaces for comparison of uncertain data, including non-weather purposes.", "title": "" }, { "docid": "2216f853543186e73b1149bb5a0de297", "text": "Scaffolds have been utilized in tissue regeneration to facilitate the formation and maturation of new tissues or organs where a balance between temporary mechanical support and mass transport (degradation and cell growth) is ideally achieved. Polymers have been widely chosen as tissue scaffolding material having a good combination of biodegradability, biocompatibility, and porous structure. Metals that can degrade in physiological environment, namely, biodegradable metals, are proposed as potential materials for hard tissue scaffolding where biodegradable polymers are often considered as having poor mechanical properties. 
Biodegradable metal scaffolds have shown interesting mechanical properties that were close to those of human bone, with tailored degradation behaviour. The current promising fabrication technique for making scaffolds, such as computation-aided solid free-form method, can be easily applied to metals. With further optimization in topologically ordered porosity design exploiting material property and fabrication technique, porous biodegradable metals could be the potential materials for making hard tissue scaffolds.", "title": "" }, { "docid": "501f9cb511e820c881c389171487f0b4", "text": "An omnidirectional circularly polarized (CP) antenna array is proposed. The antenna array is composed of four identical CP antenna elements and one parallel strip-line feeding network. Each of the CP antenna elements comprises a dipole and a zero-phase-shift (ZPS) line loop. The in-phase fed dipole and the ZPS line loop generate vertically and horizontally polarized omnidirectional radiation, respectively. Furthermore, the vertically polarized dipole is positioned in the center of the horizontally polarized ZPS line loop. The size of the loop is designed such that a 90° phase difference is realized between the two orthogonal components because of the spatial difference and, therefore, generates CP omnidirectional radiation. A 1 × 4 antenna array at 900 MHz is prototyped and targeted to ultra-high frequency (UHF) radio frequency identification (RFID) applications. The measurement results show that the antenna array achieves a 10-dB return loss over a frequency range of 900-935 MHz and 3-dB axial-ratio (AR) from 890 to 930 MHz. At the frequency of 915 MHz, the measured maximum AR of 1.53 dB, maximum gain of 5.4 dBic, and an omnidirectionality of ±1 dB are achieved.", "title": "" }, { "docid": "58d19a5460ce1f830f7a5e2cb1c5ebca", "text": "In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways. This topic has been thoroughly studied on recurrent architectures. In this paper, we extend the previous work to the encoder-decoder attention in the Transformer architecture. We propose four different input combination strategies for the encoder-decoder attention: serial, parallel, flat, and hierarchical. We evaluate our methods on tasks of multimodal translation and translation with multiple source languages. The experiments show that the models are able to use multiple sources and improve over single source baselines.", "title": "" }, { "docid": "54bdabea83e86d21213801c990c60f4d", "text": "A method of depicting crew climate using a group diagram based on behavioral ratings is described. Behavioral ratings were made of twelve three-person professional airline cockpit crews in full-mission simulations. These crews had been part of an earlier study in which captains had been grouped into three personality types, based on pencil and paper pre-tests. We found that low error rates were related to group climate variables as well as positive captain behaviors.", "title": "" }, { "docid": "b5babae9b9bcae4f87f5fe02459936de", "text": "The study evaluated the effects of formocresol (FC), ferric sulphate (FS), calcium hydroxide (Ca[OH](2)), and mineral trioxide aggregate (MTA) as pulp dressing agents in pulpotomized primary molars. Sixteen children each with at least four primary molars requiring pulpotomy were selected. Eighty selected teeth were divided into four groups and treated with one of the pulpotomy agents. 
The children were recalled for clinical and radiographic examination every 6 months during 2 years of follow-up. Eleven children with 56 teeth arrived for clinical and radiographic follow-up evaluation at 24 months. The follow-up evaluations revealed that the success rate was 76.9% for FC, 73.3% for FS, 46.1% for Ca(OH)(2), and 66.6% for MTA. In conclusion, Ca(OH)(2) is less appropriate for primary teeth pulpotomies than the other pulpotomy agents. FC and FS appeared to be superior to the other agents. However, there was no statistically significant difference between the groups.", "title": "" }, { "docid": "19b8acf4e5c68842a02e3250c346d09b", "text": "A dual-band dual-polarized microstrip antenna array for an advanced multi-function radio function concept (AMRFC) radar application operating at S and X-bands is proposed. Two stacked planar arrays with three different thin substrates (RT/Duroid 5880 substrates with εr=2.2 and three different thicknesses of 0.253 mm, 0.508 mm and 0.762 mm) are integrated to provide simultaneous operation at S band (3~3.3 GHz) and X band (9~11 GHz). To allow similar scan ranges for both bands, the S-band elements are selected as perforated patches to enable the placement of the X-band elements within them. Square patches are used as the radiating elements for the X-band. Good agreement exists between the simulated and the measured results. The measured impedance bandwidth (VSWR≤2) of the prototype array reaches 9.5% and 25% for the S- and X-bands, respectively. The measured isolation between the two orthogonal polarizations for both bands is better than 15 dB. The measured cross-polarization level is ≤ -21 dB for the S-band and ≤ -20 dB for the X-band.", "title": "" }, { "docid": "fe903498e0c3345d7e5ebc8bf3407c2f", "text": "This paper describes a general continuous-time framework for visual-inertial simultaneous localization and mapping and calibration. We show how to use a spline parameterization that closely matches the torque-minimal motion of the sensor. Compared to traditional discrete-time solutions, the continuous-time formulation is particularly useful for solving problems with high-frame rate sensors and multiple unsynchronized devices. We demonstrate the applicability of the method for multi-sensor visual-inertial SLAM and calibration by accurately establishing the relative pose and internal parameters of multiple unsynchronized devices. We also show the advantages of the approach through evaluation and uniform treatment of both global and rolling shutter cameras within visual and visual-inertial SLAM systems.", "title": "" }, { "docid": "07a6de40826f4c5bab4a8b8c51aba080", "text": "Prior studies on alternative work schedules have focused primarily on the main effects of compressed work weeks and shift work on individual outcomes. This study explores the combined effects of alternative and preferred work schedules on nurses' satisfaction with their work schedules, perceived patient care quality, and interferences with their personal lives.", "title": "" }, { "docid": "62ff5888ad0c8065097603da8ff79cd6", "text": "Modern Internet systems often combine different applications (e.g., DNS, web, and database), span different administrative domains, and function in the context of network mechanisms like tunnels, VPNs, NATs, and overlays. Diagnosing these complex systems is a daunting challenge. 
Although many diagnostic tools exist, they are typically designed for a specific layer (e.g., traceroute) or application, and there is currently no tool for reconstructing a comprehensive view of service behavior. In this paper we propose X-Trace, a tracing framework that provides such a comprehensive view for systems that adopt it. We have implemented X-Trace in several protocols and software systems, and we discuss how it works in three deployed scenarios: DNS resolution, a three-tiered photo-hosting website, and a service accessed through an overlay network.", "title": "" }, { "docid": "3910a3317ea9ff4ea6c621e562b1accc", "text": "Compaction of agricultural soils is a concern for many agricultural soil scientists and farmers since soil compaction, due to heavy field traffic, has resulted in yield reduction of most agronomic crops throughout the world. Soil compaction is a physical form of soil degradation that alters soil structure, limits water and air infiltration, and reduces root penetration in the soil. Consequences of soil compaction are still underestimated. A complete understanding of processes involved in soil compaction is necessary to meet the future global challenge of food security. We review here the advances in understanding, quantification, and prediction of the effects of soil compaction. We found the following major points: (1) When a soil is exposed to a vehicular traffic load, soil water contents, soil texture and structure, and soil organic matter are the three main factors which determine the degree of compactness in that soil. (2) Soil compaction has direct effects on soil physical properties such as bulk density, strength, and porosity; therefore, these parameters can be used to quantify the soil compactness. (3) Modified soil physical properties due to soil compaction can alter elements mobility and change nitrogen and carbon cycles in favour of more emissions of greenhouse gases under wet conditions. (4) Severe soil compaction induces root deformation, stunted shoot growth, late germination, low germination rate, and high mortality rate. (5) Soil compaction decreases soil biodiversity by decreasing microbial biomass, enzymatic activity, soil fauna, and ground flora. (6) Boussinesq equations and finite element method models, that predict the effects of the soil compaction, are restricted to elastic domain and do not consider existence of preferential paths of stress propagation and localization of deformation in compacted soils. (7) Recent advances in physics of granular media and soil mechanics relevant to soil compaction should be used to progress in modelling soil compaction.", "title": "" }, { "docid": "263c04402cfe80649b1d3f4a8578e99b", "text": "This paper presents M3Express (Modular-Mobile-Multirobot), a new design for a low-cost modular robot. The robot is self-mobile, with three independently driven wheels that also serve as connectors. The new connectors can be automatically operated, and are based on stationary magnets coupled to mechanically actuated ferromagnetic yoke pieces. Extensive use is made of plastic castings, laser cut plastic sheets, and low-cost motors and electronic components. Modules interface with a host PC via Bluetooth® radio. An off-board camera, along with a set of modules and a control PC form a convenient, low-cost system for rapidly developing and testing control algorithms for modular reconfigurable robots. 
Experimental results demonstrate mechanical docking, connector strength, and accuracy of dead reckoning locomotion.", "title": "" }, { "docid": "06755f8680ee8b43e0b3d512b4435de4", "text": "Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have been recently proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. As hidden nodes in SAEs have to deal simultaneously with hundreds of features from hypercubes as inputs, this increases the complexity of the process and leads to limited abstraction and performance. As such, segmented SAE (S-SAE) is proposed by confronting the original features into smaller data segments, which are separately processed by different smaller SAEs. This has resulted in reduced complexity but improved efficacy of data abstraction and accuracy of data classification.", "title": "" }, { "docid": "cc9f566eb8ef891d76c1c4eee7e22d47", "text": "In this study, a hybrid artificial intelligent (AI) system integrating neural network and expert system is proposed to support foreign exchange (forex) trading decisions. In this system, a neural network is used to predict the forex price in terms of quantitative data, while an expert system is used to handle qualitative factors and to provide forex trading decision suggestions for traders incorporating experts' knowledge and the neural network's results. The effectiveness of the proposed hybrid AI system is illustrated by simulation experiments.", "title": "" }, { "docid": "3b5340113d583b138834119614046151", "text": "This paper presents the recent advancements in the control of multiple-degree-of-freedom hydraulic robotic manipulators. A literature review is performed on their control, covering both free-space and constrained motions of serial and parallel manipulators. Stability-guaranteed control system design is the primary requirement for all control systems. Thus, this paper pays special attention to such systems. An objective evaluation of the effectiveness of different methods and the state of the art in a given field is one of the cornerstones of scientific research and progress. For this purpose, the maximum position tracking error $|e|_{\rm max}$ and a performance indicator $\rho$ (the ratio of $|e|_{\rm max}$ with respect to the maximum velocity) are used to evaluate and benchmark different free-space control methods in the literature. These indicators showed that stability-guaranteed nonlinear model based control designs have resulted in the most advanced control performance. In addition to stable closed-loop control, lack of energy efficiency is another significant challenge in hydraulic robotic systems. This paper pays special attention to these challenges in hydraulic robotic systems and discusses their reciprocal contradiction. Potential solutions to improve the system energy efficiency without control performance deterioration are discussed. 
Finally, for hydraulic robotic systems, open problems are defined and future trends are projected.", "title": "" }, { "docid": "3ea021309fd2e729ffced7657e3a6038", "text": "Physiological and pharmacological research undertaken on sloths during the past 30 years is comprehensively reviewed. This includes the numerous studies carried out upon the respiratory and cardiovascular systems, anesthesia, blood chemistry, neuromuscular responses, the brain and spinal cord, vision, sleeping and waking, water balance and kidney function and reproduction. Similarities and differences between the physiology of sloths and that of other mammals are discussed in detail.", "title": "" }, { "docid": "637e73416c1a6412eeeae63e1c73c2c3", "text": "Disgust, an emotion related to avoiding harmful substances, has been linked to moral judgments in many behavioral studies. However, the fact that participants report feelings of disgust when thinking about feces and a heinous crime does not necessarily indicate that the same mechanisms mediate these reactions. Humans might instead have separate neural and physiological systems guiding aversive behaviors and judgments across different domains. The present interdisciplinary study used functional magnetic resonance imaging (n = 50) and behavioral assessment to investigate the biological homology of pathogen-related and moral disgust. We provide evidence that pathogen-related and sociomoral acts entrain many common as well as unique brain networks. We also investigated whether morality itself is composed of distinct neural and behavioral subdomains. We provide evidence that, despite their tendency to elicit similar ratings of moral wrongness, incestuous and nonsexual immoral acts entrain dramatically separate, while still overlapping, brain networks. These results (i) provide support for the view that the biological response of disgust is intimately tied to immorality, (ii) demonstrate that there are at least three separate domains of disgust, and (iii) suggest strongly that morality, like disgust, is not a unified psychological or neurological phenomenon.", "title": "" } ]
scidocsrr
02ce80dc277237d28e5b16de1f8a14d3
Mobile-D: an agile approach for mobile application development
[ { "docid": "67d704317471c71842a1dfe74ddd324a", "text": "Agile software development methods have caught the attention of software engineers and researchers worldwide. Scientific research is yet scarce. This paper reports results from a study, which aims to organize, analyze and make sense out of the dispersed field of agile software development methods. The comparative analysis is performed using the method's life-cycle coverage, project management support, type of practical guidance, fitness-for-use and empirical evidence as the analytical lenses. The results show that agile software development methods, without rationalization, cover certain/different phases of the software development life-cycle and most of the them do not offer adequate support for project management. Yet, many methods still attempt to strive for universal solutions (as opposed to situation appropriate) and the empirical evidence is still very limited Based on the results, new directions are suggested In principal it is suggested to place emphasis on methodological quality -- not method quantity.", "title": "" } ]
[ { "docid": "3f06fc0b50a1de5efd7682b4ae9f5a46", "text": "We present ShadowDraw, a system for guiding the freeform drawing of objects. As the user draws, ShadowDraw dynamically updates a shadow image underlying the user's strokes. The shadows are suggestive of object contours that guide the user as they continue drawing. This paradigm is similar to tracing, with two major differences. First, we do not provide a single image from which the user can trace; rather ShadowDraw automatically blends relevant images from a large database to construct the shadows. Second, the system dynamically adapts to the user's drawings in real-time and produces suggestions accordingly. ShadowDraw works by efficiently matching local edge patches between the query, constructed from the current drawing, and a database of images. A hashing technique enforces both local and global similarity and provides sufficient speed for interactive feedback. Shadows are created by aggregating the edge maps from the best database matches, spatially weighted by their match scores. We test our approach with human subjects and show comparisons between the drawings that were produced with and without the system. The results show that our system produces more realistically proportioned line drawings.", "title": "" }, { "docid": "74972989924aef7d8923d3297d221e23", "text": "Emerging evidence suggests that a traumatic brain injury (TBI) in childhood may disrupt the ability to abstract the central meaning or gist-based memory from connected language (discourse). The current study adopts a novel approach to elucidate the role of immediate and working memory processes in producing a cohesive and coherent gist-based text in the form of a summary in children with mild and severe TBI as compared to typically developing children, ages 8-14 years at test. Both TBI groups showed decreased performance on a summary production task as well as retrieval of specific content from a long narrative. Working memory on n-back tasks was also impaired in children with severe TBI, whereas immediate memory performance for recall of a simple word list in both TBI groups was comparable to controls. Interestingly, working memory, but not simple immediate memory for a word list, was significantly correlated with summarization ability and ability to recall discourse content.", "title": "" }, { "docid": "54df0e1a435d673053f9264a4c58e602", "text": "Next location prediction anticipates a person’s movement based on the history of previous sojourns. It is useful for proactive actions taken to assist the person in an ubiquitous environment. This paper evaluates next location prediction methods: dynamic Bayesian network, multi-layer perceptron, Elman net, Markov predictor, and state predictor. For the Markov and state predictor we use additionally an optimization, the confidence counter. The criterions for the comparison are the prediction accuracy, the quantity of useful predictions, the stability, the learning, the relearning, the memory and computing costs, the modelling costs, the expandability, and the ability to predict the time of entering the next location. 
For evaluation we use the same benchmarks containing movement sequences of real persons within an office building.", "title": "" }, { "docid": "919d86270951a89a14398ee796b4e542", "text": "The role of the circadian clock in skin and the identity of genes participating in its chronobiology remain largely unknown, leading us to define the circadian transcriptome of mouse skin at two different stages of the hair cycle, telogen and anagen. The circadian transcriptomes of telogen and anagen skin are largely distinct, with the former dominated by genes involved in cell proliferation and metabolism. The expression of many metabolic genes is antiphasic to cell cycle-related genes, the former peaking during the day and the latter at night. Consistently, accumulation of reactive oxygen species, a byproduct of oxidative phosphorylation, and S-phase are antiphasic to each other in telogen skin. Furthermore, the circadian variation in S-phase is controlled by BMAL1 intrinsic to keratinocytes, because keratinocyte-specific deletion of Bmal1 obliterates time-of-day-dependent synchronicity of cell division in the epidermis leading to a constitutively elevated cell proliferation. In agreement with higher cellular susceptibility to UV-induced DNA damage during S-phase, we found that mice are most sensitive to UVB-induced DNA damage in the epidermis at night. Because in the human epidermis maximum numbers of keratinocytes go through S-phase in the late afternoon, we speculate that in humans the circadian clock imposes regulation of epidermal cell proliferation so that skin is at a particularly vulnerable stage during times of maximum UV exposure, thus contributing to the high incidence of human skin cancers.", "title": "" }, { "docid": "0cfac94bf56f39386802571ecd45cd3b", "text": "Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using the Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.", "title": "" }, { "docid": "76b081d26dc339218652cd6d7e0dfe4c", "text": "Software developers working on change tasks commonly experience a broad range of emotions, ranging from happiness all the way to frustration and anger. Research, primarily in psychology, has shown that for certain kinds of tasks, emotions correlate with progress and that biometric measures, such as electro-dermal activity and electroencephalography data, might be used to distinguish between emotions. In our research, we are building on this work and investigate developers' emotions, progress and the use of biometric measures to classify them in the context of software change tasks. We conducted a lab study with 17 participants working on two change tasks each. Participants were wearing three biometric sensors and had to periodically assess their emotions and progress. The results show that the wide range of emotions experienced by developers is correlated with their perceived progress on the change tasks. 
Our analysis also shows that we can build a classifier to distinguish between positive and negative emotions in 71.36% and between low and high progress in 67.70% of all cases. These results open up opportunities for improving a developer's productivity. For instance, one could use such a classifier for providing recommendations at opportune moments when a developer is stuck and making no progress.", "title": "" }, { "docid": "abd026e3f71c7e2a2b8d4fc8900b800f", "text": "Text Summarization aims to generate concise and compressed form of original documents. The techniques used for text summarization may be categorized as extractive summarization and abstractive summarization. We consider extractive techniques which are based on selection of important sentences within a document. A major issue in extractive summarization is how to select important sentences, i.e., what criteria should be defined for selection of sentences which are eventually part of the summary. We examine this issue using rough sets notion of reducts. A reduct is an attribute subset which essentially contains the same information as the original attribute set. In particular, we defined and examined three types of matrices based on an information table, namely, discernibility matrix, indiscernibility matrix and equal to one matrix. Each of these matrices represents a certain type of relationship between the objects of an information table. Three types of reducts are determined based on these matrices. The reducts are used to select sentences and consequently generate text summaries. Experimental results and comparisons with existing approaches advocates for the use of the proposed approach in generating text summaries.", "title": "" }, { "docid": "7cf8e2555cfccc1fc091272559ad78d7", "text": "This paper presents a multimodal emotion recognition method that uses a feature-level combination of three-dimensional (3D) geometric features (coordinates, distance and angle of joints), kinematic features such as velocity and displacement of joints, and features extracted from daily behavioral patterns such as frequency of head nod, hand wave, and body gestures that represent specific emotions. Head, face, hand, body, and speech data were captured from 15 participants using an infrared sensor (Microsoft Kinect). The 3D geometric and kinematic features were developed using raw feature data from the visual channel. Human emotional behavior-based features were developed using inter-annotator agreement and commonly observed expressions, movements and postures associated to specific emotions. The features from each modality and the behavioral pattern-based features (head shake, arm retraction, body forward movement depicting anger) were combined to train the multimodal classifier for the emotion recognition system. The classifier was trained using 10-fold cross validation and support vector machine (SVM) to predict six basic emotions. The results showed improvement in emotion recognition accuracy (The precision increased by 3.28% and the recall rate by 3.17%) when the 3D geometric, kinematic, and human behavioral pattern-based features were combined for multimodal emotion recognition using supervised classification.", "title": "" }, { "docid": "bf2f9a0387de2b2aa3136a2879a07e83", "text": "Rich representations in reinforcement learning have been studied for the purpose of enabling generalization and making learning feasible in large state spaces. 
We introduce Object-Oriented MDPs (OO-MDPs), a representation based on objects and their interactions, which is a natural way of modeling environments and offers important generalization opportunities. We introduce a learning algorithm for deterministic OO-MDPs and prove a polynomial bound on its sample complexity. We illustrate the performance gains of our representation and algorithm in the well-known Taxi domain, plus a real-life videogame.", "title": "" }, { "docid": "25c80c2fe20576ca6f94d5abac795521", "text": "BACKGROUND\nIntelligence theory research has illustrated that people hold either \"fixed\" (intelligence is immutable) or \"growth\" (intelligence can be improved) mindsets and that these views may affect how people learn throughout their lifetime. Little is known about the mindsets of physicians, and how mindset may affect their lifetime learning and integration of feedback. Our objective was to determine if pediatric physicians are of the \"fixed\" or \"growth\" mindset and whether individual mindset affects perception of medical error reporting. \n\n\nMETHODS\nWe sent an anonymous electronic survey to pediatric residents and attending pediatricians at a tertiary care pediatric hospital. Respondents completed the \"Theories of Intelligence Inventory\" which classifies individuals on a 6-point scale ranging from 1 (Fixed Mindset) to 6 (Growth Mindset). Subsequent questions collected data on respondents' recall of medical errors by self or others.\n\n\nRESULTS\nWe received 176/349 responses (50 %). Participants were equally distributed between mindsets with 84 (49 %) classified as \"fixed\" and 86 (51 %) as \"growth\". Residents, fellows and attendings did not differ in terms of mindset. Mindset did not correlate with the small number of reported medical errors.\n\n\nCONCLUSIONS\nThere is no dominant theory of intelligence (mindset) amongst pediatric physicians. The distribution is similar to that seen in the general population. Mindset did not correlate with error reports.", "title": "" }, { "docid": "082a077db6f8b0d41c613f9a50934239", "text": "Traceability is recognized to be important for supporting agile development processes. However, after analyzing many of the existing traceability approaches it can be concluded that they strongly depend on traditional development process characteristics. Within this paper it is justified that this is a drawback to support adequately agile processes. As it is discussed, some concepts do not have the same semantics for traditional and agile methodologies. This paper proposes three features that traceability models should support to be less dependent on a specific development process: (1) user-definable traceability links, (2) roles, and (3) linkage rules. To present how these features can be applied, an emerging traceability metamodel (TmM) will be used within this paper. TmM supports the definition of traceability methodologies adapted to the needs of each project. As it is shown, after introducing these three features into traceability models, two main advantages are obtained: 1) the support they can provide to agile process stakeholders is significantly more extensive, and 2) it will be possible to achieve a higher degree of automation. 
In this sense it will be feasible to have a methodical trace acquisition and maintenance process adapted to agile processes.", "title": "" }, { "docid": "2d7963a209ec1c7f38c206a0945a1a7e", "text": "We present a system which enables a user to remove a file from both the file system and all the backup tapes on which the file is stored. The ability to remove files from all backup tapes is desirable in many cases. Our system erases information from the backup tape without actually writing on the tape. This is achieved by applying cryptography in a new way: a block cipher is used to enable the system to \"forget\" information rather than protect it. Our system is easy to install and is transparent to the end user. Further, it introduces no slowdown in system performance and little slowdown in the backup procedure.", "title": "" }, { "docid": "de8045598fe808788aca455eee4a1126", "text": "This paper presents an efficient and practical approach for automatic, unsupervised object detection and segmentation in two-texture images based on the concept of Gabor filter optimization. The entire process occurs within a hierarchical framework and consists of the steps of detection, coarse segmentation, and fine segmentation. In the object detection step, the image is first processed using a Gabor filter bank. Then, the histograms of the filtered responses are analyzed using the scale-space approach to predict the presence/absence of an object in the target image. If the presence of an object is reported, the proposed approach proceeds to the coarse segmentation stage, wherein the best Gabor filter (among the bank of filters) is automatically chosen, and used to segment the image into two distinct regions. Finally, in the fine segmentation step, the coefficients of the best Gabor filter (output from the previous stage) are iteratively refined in order to further fine-tune and improve the segmentation map produced by the coarse segmentation step. In the validation study, the proposed approach is applied as part of a machine vision scheme with the goal of quantifying the stain-release property of fabrics. To that end, the presented hierarchical scheme is used to detect and segment stains on a sizeable set of digitized fabric images, and the performance evaluation of the detection, coarse segmentation, and fine segmentation steps is conducted using appropriate metrics. The promising nature of these results bears testimony to the efficacy of the proposed approach.", "title": "" }, { "docid": "72d75ebfc728d3b287bcaf429a6b2ee5", "text": "We present a fully integrated 7nm CMOS platform featuring a 3rd generation finFET architecture, SAQP for fin formation, and SADP for BEOL metallization. This technology reflects an improvement of 2.8X routed logic density and >40% performance over the 14nm reference technology described in [1-3]. A full range of Vts is enabled on-chip through a unique multi-workfunction process. This enables both excellent low voltage SRAM response and highly scaled memory area simultaneously. The HD 6-T bitcell size is 0.0269um2. This 7nm technology is fully enabled by immersion lithography and advanced optical patterning techniques (like SAQP and SADP). However, the technology platform is also designed to leverage EUV insertion for specific multi-patterned (MP) levels for cycle time benefit and manufacturing efficiency. 
A complete set of foundation and complex IP is available in this advanced CMOS platform to enable both High Performance Compute (HPC) and mobile applications.", "title": "" }, { "docid": "83637dc7109acc342d50366f498c141a", "text": "With the further development of computer technology, the software development process has some new goals and requirements. In order to adapt to these changes, people have optimized and improved the previous methods. At the same time, some of the traditional software development methods have been unable to adapt to the requirements of people. Therefore, in recent years there have been some new lightweight software process development methods, that is, agile software development, which is widely used and promoted. In this paper the author will first introduce the background and development of agile software development, as well as a comparison to traditional software development. Then the second chapter gives the definition of agile software development and its characteristics, principles and values. In the third chapter the author will highlight several different agile software development methods, and the characteristics of each method. In the fourth chapter the author will cite a specific example of how agile software development is applied in specific areas. Finally, the author will conclude with his opinion. This article aims to give readers an overview of agile software development and how people use it in practice.", "title": "" }, { "docid": "deedf390faeef304bf0479a844297113", "text": "A compact 24-GHz Doppler radar module is developed in this paper for non-contact human vital-sign detection. The 24-GHz radar transceiver chip, transmitting and receiving antennas, baseband circuits, microcontroller, and Bluetooth transmission module have been integrated and implemented on a printed circuit board. For a measurement range of 1.5 m, the developed radar module can successfully detect the respiration and heartbeat of a human adult.", "title": "" }, { "docid": "f15a7d48f3c42ccc97480204dc5c8622", "text": "We have developed a wearable upper limb support system (ULSS) for support during heavy overhead tasks. The purpose of this study is to develop the voluntary motion support algorithm for the ULSS, and to confirm the effectiveness of the ULSS with the developed algorithm through dynamic evaluation experiments. The algorithm estimates the motor intention of the wearer based on a bioelectrical signal (BES). The ULSS measures the BES via electrodes attached onto the triceps brachii, deltoid, and clavicle. The BES changes in synchronization with the motion of the wearer's upper limbs. The algorithm changes a control phase by comparing the BES and threshold values. The algorithm achieves voluntary motion support for dynamic tasks by changing support torques of the ULSS in synchronization with the control phase. Five healthy adult males moved heavy loads vertically overhead in the evaluation experiments. In a random instruction experiment, the volunteers moved in synchronization with random instructions, and we confirmed that the control phase changes in synchronization with the random instructions. In a motion support experiment, we confirmed that the average number of vertical motions with the ULSS increased 2.3 times compared to the average number without the ULSS. As a result, the ULSS with the algorithm supports the motion voluntarily, and it has a positive effect on the support. 
In conclusion, we could develop the novel voluntary motion support algorithm of the ULSS.", "title": "" }, { "docid": "b2470ecd83971aa877d8a38a5b88a6dc", "text": "In this paper, we improve the attention or alignment accuracy of neural machine translation by utilizing the alignments of training sentence pairs. We simply compute the distance between the machine attentions and the “true” alignments, and minimize this cost in the training procedure. Our experiments on large-scale Chinese-to-English task show that our model improves both translation and alignment qualities significantly over the large-vocabulary neural machine translation system, and even beats a state-of-the-art traditional syntax-based system.", "title": "" }, { "docid": "e9b3ddc114998e25932819e3281e2e0c", "text": "We study the problem of jointly aligning sentence constituents and predicting their similarities. While extensive sentence similarity data exists, manually generating reference alignments and labeling the similarities of the aligned chunks is comparatively onerous. This prompts the natural question of whether we can exploit easy-to-create sentence level data to train better aligners. In this paper, we present a model that learns to jointly align constituents of two sentences and also predict their similarities. By taking advantage of both sentence and constituent level data, we show that our model achieves state-of-the-art performance at predicting alignments and constituent similarities.", "title": "" }, { "docid": "bffbc725b52468b41c53b156f6eadedb", "text": "This paper presents the design and experimental evaluation of an underwater robot that is propelled by a pair of lateral undulatory fins, inspired by the locomotion of rays and cuttlefish. Each fin mechanism is comprised of three individually actuated fin rays, which are interconnected by an elastic membrane. An on-board microcontroller generates the rays’ motion pattern that result in the fins’ undulations, through which propulsion is generated. The prototype, which is fully untethered and energetically autonomous, also integrates an Inertial Measurement Unit for navigation purposes, a wireless communication module, and a video camera for recording underwater footage. Due to its small size and low manufacturing cost, the developed prototype can also serve as an educational platform for underwater robotics.", "title": "" } ]
scidocsrr
de58318e961209968774fcda1d76bc73
Forecasting of ozone concentration in smart city using deep learning
[ { "docid": "961348dd7afbc1802d179256606bdbb8", "text": "Class imbalance is among the most persistent complications which may confront the traditional supervised learning task in real-world applications. The problem occurs, in the binary case, when the number of instances in one class significantly outnumbers the number of instances in the other class. This situation is a handicap when trying to identify the minority class, as the learning algorithms are not usually adapted to such characteristics. The approaches to deal with the problem of imbalanced datasets fall into two major categories: data sampling and algorithmic modification. Cost-sensitive learning solutions incorporating both the data and algorithm level approaches assume higher misclassification costs with samples in the minority class and seek to minimize high cost errors. Nevertheless, there is not a full exhaustive comparison between those models which can help us to determine the most appropriate one under different scenarios. The main objective of this work is to analyze the performance of data level proposals against algorithm level proposals focusing in cost-sensitive models and versus a hybrid procedure that combines those two approaches. We will show, by means of a statistical comparative analysis, that we cannot highlight an unique approach among the rest. This will lead to a discussion about the data intrinsic characteristics of the imbalanced classification problem which will help to follow new paths that can lead to the improvement of current models mainly focusing on class overlap and dataset shift in imbalanced classification. 2011 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "4e9b1776436950ed25353a8731eda76a", "text": "This paper presents the design and implementation of VibeBin, a low-cost, non-intrusive and easy-to-install waste bin level detection system. Recent popularity of Internet-of-Things (IoT) sensors has brought us unprecedented opportunities to enable a variety of new services for monitoring and controlling smart buildings. Indoor waste management is crucial to a healthy environment in smart buildings. Measuring the waste bin fill-level helps building operators schedule garbage collection more responsively and optimize the quantity and location of waste bins. Existing systems focus on directly and intrusively measuring the physical quantities of the garbage (weight, height, volume, etc.) or its appearance (image), and therefore require careful installation, laborious calibration or labeling, and can be costly. Our system indirectly measures fill-level by sensing the changes in motor-induced vibration characteristics on the outside surface of waste bins. VibeBin exploits the physical nature of vibration resonance of the waste bin and the garbage within, and learns the vibration features of different fill-levels through a few garbage collection (emptying) cycles in a completely unsupervised manner. VibeBin identifies vibration features of different fill-levels by clustering historical vibration samples based on a custom distance metric which measures the dissimilarity between two samples. We deploy our system on eight waste bins of different types and sizes, and show that under normal usage and real waste, it can deliver accurate level measurements after just 3 garbage collection cycles. The average F-score (harmonic mean of precision and recall) of measuring empty, half, and full levels achieves 0.912. A two-week deployment also shows that the false positive and false negative events are satisfactorily rare.", "title": "" }, { "docid": "91a56dbdefc08d28ff74883ec10a5d6e", "text": "A truly autonomous guided vehicle (AGV) must sense its surrounding environment and react accordingly. In order to maneuver an AGV autonomously, it has to overcome navigational and collision avoidance problems. Previous AGV control systems have relied on hand-coded algorithms for processing sensor information. An intelligent distributed fuzzy logic control system (IDFLCS) has been implemented in a mecanum wheeled AGV system in order to achieve improved reliability and to reduce complexity of the development of control systems. Fuzzy logic controllers have been used to achieve robust control of mechatronic systems by fusing multiple signals from noisy sensors, integrating the representation of human knowledge and implementing behaviour-based control using if-then rules. This paper presents an intelligent distributed controller that implements fuzzy logic on an AGV that uses four independently driven mecanum wheels, incorporating laser, inertial and ultrasound sensors. Distributed control system, fuzzy control strategy, navigation and motion control of such an AGV are presented.", "title": "" }, { "docid": "1c94dec13517bedf7a8140e207e0a6d9", "text": "Art and anatomy were particularly closely intertwined during the Renaissance period and numerous painters and sculptors expressed themselves in both fields. Among them was Michelangelo Buonarroti (1475-1564), who is renowned for having produced some of the most famous of all works of art, the frescoes on the ceiling and on the wall behind the altar of the Sistine Chapel in Rome. 
Recently, a unique association was discovered between one of Michelangelo's most celebrated works (The Creation of Adam fresco) and the Divine Proportion/Golden Ratio (GR) (1.6). The GR can be found not only in natural phenomena but also in a variety of human-made objects and works of art. Here, using Image-Pro Plus 6.0 software, we present mathematical evidence that Michelangelo also used the GR when he painted Saint Bartholomew in the fresco of The Last Judgment, which is on the wall behind the altar. This discovery will add a new dimension to understanding the great works of Michelangelo Buonarroti.", "title": "" }, { "docid": "a1f93bedbddefb63cd7ab7d030b4f3ee", "text": "This paper presents a novel fitness and preventive health care system with a flexible and easy to deploy platform. By using embedded wearable sensors in combination with a smartphone as an aggregator, both daily activities as well as specific gym exercises and their counts are recognized and logged. The detection is achieved with minimal impact on the system’s resources through the use of customized 3D inertial sensors embedded in fitness accessories with built-in pre-processing of the initial 100Hz data. It provides a flexible re-training of the classifiers on the phone which allows deploying the system swiftly. A set of evaluations shows a classification performance that is comparable to that of state of the art activity recognition, and that the whole setup is suitable for daily usage with minimal impact on the phone’s resources.", "title": "" }, { "docid": "ddb66de70b76427f30fae713f176bc64", "text": "Identifying whether an utterance is a statement, question, greeting, and so forth is integral to effective automatic understanding of natural dialog. Little is known, however, about how such dialog acts (DAs) can be automatically classified in truly natural conversation. This study asks whether current approaches, which use mainly word information, could be improved by adding prosodic information. The study is based on more than 1000 conversations from the Switchboard corpus. DAs were hand-annotated, and prosodic features (duration, pause, F0, energy, and speaking rate) were automatically extracted for each DA. In training, decision trees based on these features were inferred; trees were then applied to unseen test data to evaluate performance. Performance was evaluated for prosody models alone, and after combining the prosody models with word information--either from true words or from the output of an automatic speech recognizer. For an overall classification task, as well as three subtasks, prosody made significant contributions to classification. Feature-specific analyses further revealed that although canonical features (such as F0 for questions) were important, less obvious features could compensate if canonical features were removed. Finally, in each task, integrating the prosodic model with a DA-specific statistical language model improved performance over that of the language model alone, especially for the case of recognized words. Results suggest that DAs are redundantly marked in natural conversation, and that a variety of automatically extractable prosodic features could aid dialog processing in speech applications.", "title": "" }, { "docid": "a774567d957ed0ea209b470b8eced563", "text": "The vulnerability of the nervous system to advancing age is all too often manifest in neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. 
In this review article we describe evidence suggesting that two dietary interventions, caloric restriction (CR) and intermittent fasting (IF), can prolong the health-span of the nervous system by impinging upon fundamental metabolic and cellular signaling pathways that regulate life-span. CR and IF affect energy and oxygen radical metabolism, and cellular stress response systems, in ways that protect neurons against genetic and environmental factors to which they would otherwise succumb during aging. There are multiple interactive pathways and molecular mechanisms by which CR and IF benefit neurons including those involving insulin-like signaling, FoxO transcription factors, sirtuins and peroxisome proliferator-activated receptors. These pathways stimulate the production of protein chaperones, neurotrophic factors and antioxidant enzymes, all of which help cells cope with stress and resist disease. A better understanding of the impact of CR and IF on the aging nervous system will likely lead to novel approaches for preventing and treating neurodegenerative disorders.", "title": "" }, { "docid": "d8253659de704969cd9c30b3ea7543c5", "text": "Frequent itemset mining is an important step of association rules mining. Traditional frequent itemset mining algorithms have certain limitations. For example Apriori algorithm has to scan the input data repeatedly, which leads to high I/O load and low performance, and the FP-Growth algorithm is limited by the capacity of computer's inner stores because it needs to build a FP-tree and mine frequent itemset on the basis of the FP-tree in memory. With the coming of the Big Data era, these limitations are becoming more prominent when confronted with mining large-scale data. In this paper, DPBM, a distributed matrix-based pruning algorithm based on Spark, is proposed to deal with frequent itemset mining. DPBM can greatly reduce the amount of candidate itemset by introducing a novel pruning technique for matrix-based frequent itemset mining algorithm, an improved Apriori algorithm which only needs to scan the input data once. In addition, each computer node reduces greatly the memory usage by implementing DPBM under a latest distributed environment-Spark, which is a lightning-fast distributed computing. The experimental results show that DPBM have better performance than MapReduce-based algorithms for frequent itemset mining in terms of speed and scalability.", "title": "" }, { "docid": "d8c64128c89f3a291b410eefbf00dab2", "text": "We review the prospects of using yeasts and microalgae as sources of cheap oils that could be used for biodiesel. We conclude that yeast oils, the cheapest of the oils producible by heterotrophic microorganisms, are too expensive to be viable alternatives to the major commodity plant oils. Algal oils are similarly unlikely to be economic; the cheapest form of cultivation is in open ponds which then requires a robust, fast-growing alga that can withstand adventitious predatory protozoa or contaminating bacteria and, at the same time, attain an oil content of at least 40% of the biomass. No such alga has yet been identified. However, we note that if the prices of the major plant oils and crude oil continue to rise in the future, as they have done over the past 12 months, then algal lipids might just become a realistic alternative within the next 10 to 15 years. 
Better prospects would, however, be to focus on algae as sources of polyunsaturated fatty acids.", "title": "" }, { "docid": "227d8ad4000e6e1d9fd1aa6bff8ed64c", "text": "Recently, speed sensorless control of Induction Motor (IM) drives received great attention to avoid the different problems associated with direct speed sensors. Among different rotor speed estimation techniques, Model Reference Adaptive System (MRAS) schemes are the most common strategies employed due to their relative simplicity and low computational effort. In this paper a novel adaptation mechanism is proposed which replaces normally used conventional Proportional-Integral (PI) controller in MRAS adaptation mechanism by a Fractional Order PI (FOPI) controller. The performance of two adaptation mechanism controllers has been verified through simulation results using MATLAB/SIMULINK software. It is seen that the performance of the induction motor has improved when FOPI controller is used in place of classical PI controller.", "title": "" }, { "docid": "4a4a868d64a653fac864b5a7a531f404", "text": "Metropolitan areas have come under intense pressure to respond to federal mandates to link planning of land use, transportation, and environmental quality; and from citizen concerns about managing the side effects of growth such as sprawl, congestion, housing affordability, and loss of open space. The planning models used by Metropolitan Planning Organizations (MPOs) were generally not designed to address these questions, creating a gap in the ability of planners to systematically assess these issues. UrbanSim is a new model system that has been developed to respond to these emerging requirements, and has now been applied in three metropolitan areas. This paper describes the model system and its application to Eugene-Springfield, Oregon.", "title": "" }, { "docid": "2d78a4c914c844a3f28e8f3b9f65339f", "text": "The availability of abundant data posts a challenge to integrate static customer data and longitudinal behavioral data to improve performance in customer churn prediction. Usually, longitudinal behavioral data are transformed into static data before being included in a prediction model. In this study, a framework with ensemble techniques is presented for customer churn prediction directly using longitudinal behavioral data. A novel approach called the hierarchical multiple kernel support vector machine (H-MK-SVM) is formulated. A three phase training algorithm for the H-MK-SVM is developed, implemented and tested. The H-MK-SVM constructs a classification function by estimating the coefficients of both static and longitudinal behavioral variables in the training process without transformation of the longitudinal behavioral data. The training process of the H-MK-SVM is also a feature selection and time subsequence selection process because the sparse non-zero coefficients correspond to the variables selected. Computational experiments using three real-world databases were conducted. Computational results using multiple criteria measuring performance show that the H-MK-SVM directly using longitudinal behavioral data performs better than currently available classifiers.", "title": "" }, { "docid": "ce9345c367db70de1dec07cad0343f71", "text": "Techniques for digital image tampering are becoming widespread for the availability of low cost technology in which the image could be easily manipulated. Copy-move forgery is one of the tampering techniques that are frequently used and has recently received significant attention. 
But the existing methods, including block-matching and key point matching based methods, are not able to be used to solve the problem of detecting image forgery in both flat region and non-flat region. In this paper, combining the thinking of these two types of methods, we develop a SURF-based method to tackle this problem. In addition to the determination of forgeries in non-flat region through key point features, our method can be used to detect flat region in images in an effective way, and extract FMT features after blocking the region. By using matching algorithms of similar blocked images, image forgeries in flat region can be determined, which results in the completing of the entire image tamper detection. Experimental results are presented to demonstrate the effectiveness of the proposed method.", "title": "" }, { "docid": "ffe6edef11daef1db0c4aac77bed7a23", "text": "MPI is a well-established technology that is used widely in high-performance computing environment. However, setting up an MPI cluster can be challenging and time-consuming. This paper tackles this challenge by using modern containerization technology, which is Docker, and container orchestration technology, which is Docker Swarm mode, to automate the MPI cluster setup and deployment. We created a ready-to-use solution for developing and deploying MPI programs in a cluster of Docker containers running on multiple machines, orchestrated with Docker Swarm mode, to perform high computation tasks. We explain the considerations when creating Docker image that will be instantiated as MPI nodes, and we describe the steps needed to set up a fully connected MPI cluster as Docker containers running in a Docker Swarm mode. Our goal is to give the rationale behind our solution so that others can adapt to different system requirements. All pre-built Docker images, source code, documentation, and screencasts are publicly available.", "title": "" }, { "docid": "02ad9bef7d38af14c01ceb6efec8078b", "text": "Weakness of the will may lead to ineffective goal striving in the sense that people lacking willpower fail to get started, to stay on track, to select instrumental means, and to act efficiently. However, using a simple self-regulation strategy (i.e., forming implementation intentions or making if–then plans) can get around this problem by drastically improving goal striving on the spot. After an overview of research investigating how implementation intentions work, I will discuss how people can use implementation intentions to overcome potential hindrances to successful goal attainment. Extensive empirical research shows that implementation intentions help people to meet their goals no matter whether these hindrances originate from within (e.g., lack of cognitive capabilities) or outside the person (i.e., difficult social situations). Moreover, I will report recent research demonstrating that implementation intentions can even be used to control impulsive cognitive, affective, and behavioral responses that interfere with one’s focal goal striving. In ending, I will present various new lines of implementation intention research, and raise a host of open questions that still deserve further empirical and theoretical analysis.", "title": "" }, { "docid": "aa70864ca9d2285eebe5b46f7c283ebe", "text": "The centerpiece of this thesis is a new processing paradigm for exploiting instruction level parallelism. 
This paradigm, called the multiscalar paradigm, splits the program into many smaller tasks, and exploits fine-grain parallelism by executing multiple, possibly (control and/or data) dependent tasks in parallel using multiple processing elements. Splitting the instruction stream at statically determined boundaries allows the compiler to pass substantial information about the tasks to the hardware. The processing paradigm can be viewed as extensions of the superscalar and multiprocessing paradigms, and shares a number of properties of the sequential processing model and the dataflow processing model. The multiscalar paradigm is easily realizable, and we describe an implementation of the multiscalar paradigm, called the multiscalar processor. The central idea here is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. The multiscalar processor supports speculative execution, allows arbitrary dynamic code motion (facilitated by an efficient hardware memory disambiguation mechanism), exploits communication localities, and does all of these with hardware that is fairly straightforward to build. Other desirable aspects of the implementation include decentralization of the critical resources, absence of wide associative searches, and absence of wide interconnection/data paths.", "title": "" }, { "docid": "000652922defcc1d500a604d43c8f77b", "text": "The problem of object recognition has not yet been solved in its general form. The most successful approach to it so far relies on object models obtained by training a statistical method on visual features obtained from camera images. The images must necessarily come from huge visual datasets, in order to circumvent all problems related to changing illumination, point of view, etc. We hereby propose to also consider, in an object model, a simple model of how a human being would grasp that object (its affordance). This knowledge is represented as a function mapping visual features of an object to the kinematic features of a hand while grasping it. The function is practically enforced via regression on a human grasping database. After describing the database (which is publicly available) and the proposed method, we experimentally evaluate it, showing that a standard object classifier working on both sets of features (visual and motor) has a significantly better recognition rate than that of a visual-only classifier.", "title": "" }, { "docid": "6162ad3612b885add014bd09baa5f07a", "text": "The Neural Bag-of-Words (NBOW) model performs classification with an average of the input word vectors and achieves an impressive performance. While the NBOW model learns word vectors targeted for the classification task it does not explicitly model which words are important for given task. In this paper we propose an improved NBOW model with this ability to learn task specific word importance weights. The word importance weights are learned by introducing a new weighted sum composition of the word vectors. With experiments on standard topic and sentiment classification tasks, we show that (a) our proposed model learns meaningful word importance for a given task (b) our model gives best accuracies among the BOW approaches. 
We also show that the learned word importance weights are comparable to tf-idf based word weights when used as features in a BOW SVM classifier.", "title": "" }, { "docid": "29d1502c7edea13ce67aa1e283dc8488", "text": "An explosive growth in the volume, velocity, and variety of the data available on the Internet has been witnessed recently. The data originated frommultiple types of sources including mobile devices, sensors, individual archives, social networks, Internet of Things, enterprises, cameras, software logs, health data has led to one of the most challenging research issues of the big data era. In this paper, Knowle—an online news management system upon semantic link network model is introduced. Knowle is a news event centrality data management system. The core elements of Knowle are news events on the Web, which are linked by their semantic relations. Knowle is a hierarchical data system, which has three different layers including the bottom layer (concepts), the middle layer (resources), and the top layer (events). The basic blocks of the Knowle system—news collection, resources representation, semantic relations mining, semantic linking news events are given. Knowle does not require data providers to follow semantic standards such as RDF or OWL, which is a semantics-rich self-organized network. It reflects various semantic relations of concepts, news, and events. Moreover, in the case study, Knowle is used for organizing andmining health news, which shows the potential on forming the basis of designing and developing big data analytics based innovation framework in the health domain. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b16407fc67058110b334b047bcfea9ac", "text": "In Educational Psychology (1997/1926), Vygotsky pleaded for a realistic approach to children’s literature. He is, among other things, critical of Chukovsky’s story “Crocodile” and maintains that this story deals with nonsense and gibberish, without social relevance. This approach Vygotsky would leave soon, and, in Psychology of Art (1971/1925), in which he develops his theory of art, he talks about connections between nursery rhymes and children’s play, exactly as the story of Chukovsky had done with the following argument: By dragging a child into a topsy-turvy world, we help his intellect work and his perception of reality. In his book Imagination and Creativity in Childhood (1995/1930), Vygotsky goes further and develops his theory of creativity. The book describes how Vygotsky regards the creative process of the human consciousness, the link between emotion and thought, and the role of the imagination. To Vygotsky, this brings to the fore the issue of the link between reality and imagination, and he discusses the issue of reproduction and creativity, both of which relate to the entire scope of human activity. Interpretations of Vygotsky in the 1990s have stressed the role of literature and the development of a cultural approach to psychology and education. It has been overlooked that Vygotsky started his career with work on the psychology of art. In this article, I want to describe Vygotsky’s theory of creativity and how he developed it. He started with a realistic approach to imagination, and he ended with a dialectical attitude to imagination. Criticism of Chukovsky’s “Crocodile” In 1928, the “Crocodile” story was forbidden. It was written by Korney Chukovsky (1882–1969). 
In his book From Two to Five Years, there is a chapter with the title “Struggle for the Fairy-Tale,” in which he attacks his antagonists, the pedologists, whom he described as a miserable group of theoreticans who studied children’s reading and maintained that the children of the proletarians needed neither “fairy-tales nor toys, or songs” (Chukovsky, 1975, p. 129). He describes how the pedologists let the word imagination become an abuse and how several stories were forbidden, for example, “Crocodile.” One of the slogans of the antagonists of fantasy literature was chukovskies, a term meaning of anthropomorphism and being bourgeois. In 1928, Krupskaja criticized Chukovky, the same year as Stalin was in power. Krupskaja maintained that the content of children’s literature ought to be concrete and realistic to inspire the children to be conscious communists. As an atheist, she was against everything that smelled of mysticism and religion. She pointed out, in an article in Pravda, that “Crocodile” did not live up to the demands that one could make on children’s literature. Many authors, however, came to Chukovsky’s defense, among them A. Tolstoy (Chukovsky, 1975). Ten years earlier in 1918, only a few months after the October Revolution, the first demands were made that children’s literature should be put in the service of communist ideology. It was necessary to replace old bourgeois books, and new writers were needed. In the first attempts to create a new children’s literature, a significant role was played by Maksim Gorky. His ideal was realistic literature with such moral ideals as heroism and optimism. Creativity Research Journal Copyright 2003 by 2003, Vol. 15, Nos. 2 & 3, 245–251 Lawrence Erlbaum Associates, Inc. Vygotsky’s Theory of Creativity Gunilla Lindqvist University of Karlstad Correspondence and requests for reprints should be sent to Gunilla Lindqvist, Department of Educational Sciences, University of Karlstad, 65188 Karlstad, Sweden. E-mail: gunilla.lindqvist@", "title": "" }, { "docid": "c684de3eb8a370e3444aee3a37319b46", "text": "We present an extended version of our work on the design and implementation of a reference model of the human body, the Master Motor Map (MMM) which should serve as a unifying framework for capturing human motions, their representation in standard data structures and formats as well as their reproduction on humanoid robots. The MMM combines the definition of a comprehensive kinematics and dynamics model of the human body with 104 DoF including hands and feet with procedures and tools for unified capturing of human motions. We present online motion converters for the mapping of human and object motions to the MMM model while taking into account subject specific anthropométrie data as well as for the mapping of MMM motion to a target robot kinematics. Experimental evaluation of the approach performed on VICON motion recordings demonstrate the benefits of the MMM as an important step towards standardized human motion representation and mapping to humanoid robots.", "title": "" } ]
scidocsrr
e270440b45d2810de5d62df97acdea83
Subjective and Objective Quality-of-Experience of Adaptive Video Streaming
[ { "docid": "a4f3bb1e91fb996858ff438487476217", "text": "Digital video data, stored in video databases and distributed through communication networks, is subject to various kinds of distortions during acquisition, compression, processing, transmission, and reproduction. For example, lossy video compression techniques, which are almost always used to reduce the bandwidth needed to store or transmit video data, may degrade the quality during the quantization process. For another instance, the digital video bitstreams delivered over error-prone channels, such as wireless channels, may be received imperfectly due to the impairment occurred during transmission. Package-switched communication networks, such as the Internet, can cause loss or severe delay of received data packages, depending on the network conditions and the quality of services. All these transmission errors may result in distortions in the received video data. It is therefore imperative for a video service system to be able to realize and quantify the video quality degradations that occur in the system, so that it can maintain, control and possibly enhance the quality of the video data. An effective image and video quality metric is crucial for this purpose.", "title": "" } ]
[ { "docid": "9ce3f1a67d23425e3920670ac5a1f9b4", "text": "We examine the limits of consistency in highly available and fault-tolerant distributed storage systems. We introduce a new property—convergence—to explore the these limits in a useful manner. Like consistency and availability, convergence formalizes a fundamental requirement of a storage system: writes by one correct node must eventually become observable to other connected correct nodes. Using convergence as our driving force, we make two additional contributions. First, we close the gap between what is known to be impossible (i.e. the consistency, availability, and partition-tolerance theorem) and known systems that are highly-available but that provide weaker consistency such as causal. Specifically, in an asynchronous system, we show that natural causal consistency, a strengthening of causal consistency that respects the real-time ordering of operations, provides a tight bound on consistency semantics that can be enforced without compromising availability and convergence. In an asynchronous system with Byzantine-failures, we show that it is impossible to implement many of the recently introduced forking-based consistency semantics without sacrificing either availability or convergence. Finally, we show that it is not necessary to compromise availability or convergence by showing that there exist practically useful semantics that are enforceable by available, convergent, and Byzantine-fault tolerant systems.", "title": "" }, { "docid": "ad868d09ec203c2080e0f8458daccf91", "text": "We present empirical measurements of the packet delivery performance of the latest sensor platforms: Micaz and Telos motes. In this article, we present observations that have implications to a set of common assumptions protocol designers make while designing sensornet protocols—specifically—the MAC and network layer protocols. We first distill these common assumptions in to a conceptual model and show how our observations support or dispute these assumptions. We also present case studies of protocols that do not make these assumptions. Understanding the implications of these observations to the conceptual model can improve future protocol designs.", "title": "" }, { "docid": "f8330ca9f2f4c05c26d679906f65de04", "text": "In recent years, VDSL2 standard has been gaining popularity as a high speed network access technology to deliver triple play services of video, voice and data. These services require strict quality-of-experience (QoE) and quality-of-services (QoS) on DSL systems operating in an impulse noise environment. The DSL systems, in-turn, are affected severely in the presence of impulse noise in the telephone line. Therefore to improve upon the requirements of IPTV under the impulse noise conditions the standard body has been evaluating various proposals to mitigate and reduce the error rates. This paper lists and qualitatively compares various initiatives that have been suggested in the VDSL2 standard body to improve the protection of VDSL2 services against impulse noise.", "title": "" }, { "docid": "c6c4edf88c38275e82aa73a11ef3a006", "text": "In this paper, we propose a new concept for understanding the role of algorithms in daily life: algorithmic authority. Algorithmic authority is the legitimate power of algorithms to direct human action and to impact which information is considered true. We use this concept to examine the culture of users of Bit coin, a crypto-currency and payment platform. 
Through Bitcoin, we explore what it means to trust in algorithms. Our study utilizes interview and survey data. We found that Bitcoin users prefer algorithmic authority to the authority of conventional institutions, which they see as untrustworthy. However, we argue that Bitcoin users do not have blind faith in algorithms; rather, they acknowledge the need for mediating algorithmic authority with human judgment. We examine the tension between members of the Bitcoin community who would prefer to integrate Bitcoin with existing institutions and those who would prefer to resist integration.", "title": "" }, { "docid": "72eceddfa08e73739022df7c0dc89a3a", "text": "The empirical mode decomposition (EMD) proposed by Huang et al. in 1998 is remarkably effective in analyzing nonlinear signals. It adaptively represents nonstationary signals as sums of zero-mean amplitude modulation-frequency modulation (AM-FM) components by iteratively conducting the sifting process. How to determine the boundary conditions of the cubic spline when constructing the envelopes of data is the critical issue of the sifting process. A simple bound hit process technique is presented in this paper which constructs two periodic series from the original data by even and odd extension and then builds the envelopes using cubic spline with periodic boundary condition. The EMD is conducted fluently without any assumptions about the processed data by this approach. An example is presented to pick out the weak modulation of internal waves from an Envisat ASAR image by EMD with the boundary process technique.", "title": "" }, { "docid": "535934dc80c666e0d10651f024560d12", "text": "Breast hypertrophy is a common medical condition whose morbidity has increased over recent decades. Symptoms of breast hypertrophy often include musculoskeletal pain in the neck, back and shoulders, and numerous psychosocial health burdens. To date, reduction mammaplasty (RM) is the only treatment shown to significantly reduce the severity of the symptoms associated with breast hypertrophy. However, due to a lack of scientific evidence in the medical literature justifying the medical necessity of RM, insurance companies often deny requests for coverage of this procedure. 
Therefore, the purpose of this study is to investigate biomechanical differences in the upper body of women with larger breast sizes in order to provide scientific evidence of the musculoskeletal burdens of breast hypertrophy to the medical community Twenty-two female subjects (average age 25.90, ± 5.47 years) who had never undergone or been approved for breast augmentation surgery, were recruited to participate in this study. Kinematic data of the head, thorax, pelvis and scapula was collected during static trials and during each of four different tasks of daily living. Surface electromyography (sEMG) data from the Midcervical (C-4) Paraspinal, Upper Trapezius, Lower Trapezius, Serratus Anterior, and Erector Spinae muscles were recorded in the same activities. Maximum voluntary contractions (MVC) were used to normalize the sEMG data, and %MVC during each task in the protocol was analyzed. Kinematic data from the tasks of daily living were normalized to average static posture data for each subject. Subjects were …", "title": "" }, { "docid": "76d22feb7da3dbc14688b0d999631169", "text": "Guilt proneness is a personality trait indicative of a predisposition to experience negative feelings about personal wrongdoing, even when the wrongdoing is private. It is characterized by the anticipation of feeling bad about committing transgressions rather than by guilty feelings in a particular moment or generalized guilty feelings that occur without an eliciting event. Our research has revealed that guilt proneness is an important character trait because knowing a person’s level of guilt proneness helps us to predict the likelihood that they will behave unethically. For example, online studies of adults across the U.S. have shown that people who score high in guilt proneness (compared to low scorers) make fewer unethical business decisions, commit fewer delinquent behaviors, and behave more honestly when they make economic decisions. In the workplace, guilt-prone employees are less likely to engage in counterproductive behaviors that harm their organization.", "title": "" }, { "docid": "61615f5aefb0aa6de2dd1ab207a966d5", "text": "Wikipedia provides an enormous amount of background knowledge to reason about the semantic relatedness between two entities. We propose Wikipedia-based Distributional Semantics for Entity Relatedness (DiSER), which represents the semantics of an entity by its distribution in the high dimensional concept space derived from Wikipedia. DiSER measures the semantic relatedness between two entities by quantifying the distance between the corresponding high-dimensional vectors. DiSER builds the model by taking the annotated entities only, therefore it improves over existing approaches, which do not distinguish between an entity and its surface form. We evaluate the approach on a benchmark that contains the relative entity relatedness scores for 420 entity pairs. Our approach improves the accuracy by 12% on state of the art methods for computing entity relatedness. We also show an evaluation of DiSER in the Entity Disambiguation task on a dataset of 50 sentences with highly ambiguous entity mentions. It shows an improvement of 10% in precision over the best performing methods. In order to provide the resource that can be used to find out all the related entities for a given entity, a graph is constructed, where the nodes represent Wikipedia entities and the relatedness scores are reflected by the edges. 
Wikipedia contains more than 4.1 millions entities, which required efficient computation of the relatedness scores between the corresponding 17 trillions of entity-pairs.", "title": "" }, { "docid": "b6dbccc6b04c282ca366eddea77d0107", "text": "Current methods for annotating and interpreting human genetic variation tend to exploit a single information type (for example, conservation) and/or are restricted in scope (for example, to missense changes). Here we describe Combined Annotation–Dependent Depletion (CADD), a method for objectively integrating many diverse annotations into a single measure (C score) for each variant. We implement CADD as a support vector machine trained to differentiate 14.7 million high-frequency human-derived alleles from 14.7 million simulated variants. We precompute C scores for all 8.6 billion possible human single-nucleotide variants and enable scoring of short insertions-deletions. C scores correlate with allelic diversity, annotations of functionality, pathogenicity, disease severity, experimentally measured regulatory effects and complex trait associations, and they highly rank known pathogenic variants within individual genomes. The ability of CADD to prioritize functional, deleterious and pathogenic variants across many functional categories, effect sizes and genetic architectures is unmatched by any current single-annotation method.", "title": "" }, { "docid": "424239765383edd8079d90f63b3fde1d", "text": "The availability of huge amounts of medical data leads to the need for powerful data analysis tools to extract useful knowledge. Researchers have long been concerned with applying statistical and data mining tools to improve data analysis on large data sets. Disease diagnosis is one of the applications where data mining tools are proving successful results. Heart disease is the leading cause of death all over the world in the past ten years. Several researchers are using statistical and data mining tools to help health care professionals in the diagnosis of heart disease. Using single data mining technique in the diagnosis of heart disease has been comprehensively investigated showing acceptable levels of accuracy. Recently, researchers have been investigating the effect of hybridizing more than one technique showing enhanced results in the diagnosis of heart disease. However, using data mining techniques to identify a suitable treatment for heart disease patients has received less attention. This paper identifies gaps in the research on heart disease diagnosis and treatment and proposes a model to systematically close those gaps to discover if applying data mining techniques to heart disease treatment data can provide as reliable performance as that achieved in diagnosing heart disease.", "title": "" }, { "docid": "bb49674d0a1f36e318d27525b693e51d", "text": "prevent attackers from gaining control of the system using well established techniques such as; perimeter-based fire walls, redundancy and replications, and encryption. However, given sufficient time and resources, all these methods can be defeated. Moving Target Defense (MTD), is a defensive strategy that aims to reduce the need to continuously fight against attacks by disrupting attackers gain-loss balance. We present Mayflies, a bio-inspired generic MTD framework for distributed systems on virtualized cloud platforms. The framework enables systems designed to defend against attacks for their entire runtime to systems that avoid attacks in time intervals. 
We discuss the design, algorithms and the implementation of the framework prototype. We illustrate the prototype with a quorum-based Byzantine Fault Tolerant system and report the preliminary results.", "title": "" }, { "docid": "b847446c0babb9e8ebb8e8d4c50a7023", "text": "This paper introduces a general technique, called LABurst, for identifying key moments, or moments of high impact, in social media streams without the need for domain-specific information or seed keywords. We leverage machine learning to model temporal patterns around bursts in Twitter's unfiltered public sample stream and build a classifier to identify tokens experiencing these bursts. We show LABurst performs competitively with existing burst detection techniques while simultaneously providing insight into and detection of unanticipated moments. To demonstrate our approach's potential, we compare two baseline event-detection algorithms with our language-agnostic algorithm to detect key moments across three major sporting competitions: 2013 World Series, 2014 Super Bowl, and 2014 World Cup. Our results show LABurst outperforms a time series analysis baseline and is competitive with a domain-specific baseline even though we operate without any domain knowledge. We then go further by transferring LABurst's models learned in the sports domain to the task of identifying earthquakes in Japan and show our method detects large spikes in earthquake-related tokens within two minutes of the actual event.", "title": "" }, { "docid": "d3214d24911a5e42855fd1a53516d30b", "text": "This paper extends the face detection framework proposed by Viola and Jones 2001 to handle profile views and rotated faces. As in the work of Rowley et al. 1998 and Schneiderman et al. 2000, we build different detectors for different views of the face. A decision tree is then trained to determine the viewpoint class (such as right profile or rotated 60 degrees) for a given window of the image being examined. This is similar to the approach of Rowley et al. 1998. The appropriate detector for that viewpoint can then be run instead of running all detectors on all windows. This technique yields good results and maintains the speed advantage of the Viola-Jones detector.
", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "c726dc2218fa4d286aa10d827b427871", "text": "Acquisition of the intestinal microbiota begins at birth, and a stable microbial community develops from a succession of key organisms. Disruption of the microbiota during maturation by low-dose antibiotic exposure can alter host metabolism and adiposity. We now show that low-dose penicillin (LDP), delivered from birth, induces metabolic alterations and affects ileal expression of genes involved in immunity. LDP that is limited to early life transiently perturbs the microbiota, which is sufficient to induce sustained effects on body composition, indicating that microbiota interactions in infancy may be critical determinants of long-term host metabolic effects. In addition, LDP enhances the effect of high-fat diet induced obesity. The growth promotion phenotype is transferrable to germ-free hosts by LDP-selected microbiota, showing that the altered microbiota, not antibiotics per se, play a causal role. These studies characterize important variables in early-life microbe-host metabolic interaction and identify several taxa consistently linked with metabolic alterations.", "title": "" }, { "docid": "a4b123705dda7ae3ac7e9e88a50bd64a", "text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. 
Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.", "title": "" }, { "docid": "9b7ff8a7dec29de5334f3de8d1a70cc3", "text": "The paper introduces a complete offline programming toolbox for remote laser welding (RLW) which provides a semi-automated method for computing close-to-optimal robot programs. A workflow is proposed for the complete planning process, and new models and algorithms are presented for solving the optimization problems related to each step of the workflow: the sequencing of the welding tasks, path planning, workpiece placement, calculation of inverse kinematics and the robot trajectory, as well as for generating the robot program code. The paper summarizes the results of an industrial case study on the assembly of a car door using RLW technology, which illustrates the feasibility and the efficiency of the proposed approach.", "title": "" }, { "docid": "178ba744f5e9df6c5a7a704949ad8ac1", "text": "This software paper describes ‘Stylometry with R’ (stylo), a flexible R package for the highlevel analysis of writing style in stylometry. Stylometry (computational stylistics) is concerned with the quantitative study of writing style, e.g. authorship verification, an application which has considerable potential in forensic contexts, as well as historical research. In this paper we introduce the possibilities of stylo for computational text analysis, via a number of dummy case studies from English and French literature. We demonstrate how the package is particularly useful in the exploratory statistical analysis of texts, e.g. with respect to authorial writing style. Because stylo provides an attractive graphical user interface for high-level exploratory analyses, it is especially suited for an audience of novices, without programming skills (e.g. from the Digital Humanities). More experienced users can benefit from our implementation of a series of standard pipelines for text processing, as well as a number of similarity metrics.", "title": "" }, { "docid": "81bbacc372c1f67e218895bcb046651d", "text": "Sensor-based activity recognition seeks the profound high-level knowledge about human activities from multitudes of low-level sensor readings. Conventional pattern recognition approaches have made tremendous progress in the past years. However, those methods often heavily rely on heuristic hand-crafted feature extraction, which could hinder their generalization performance. Additionally, existing methods are undermined for unsupervised and incremental learning tasks. Recently, the recent advancement of deep learning makes it possible to perform automatic high-level feature extraction thus achieves promising performance in many areas. Since then, deep learning based methods have been widely adopted for the sensor-based activity recognition tasks. This paper surveys the recent advance of deep learning based sensor-based activity recognition. We summarize existing literature from three aspects: sensor modality, deep model, and application. We also present detailed insights on existing work and propose grand challenges for future research.", "title": "" }, { "docid": "e658507a3ed6c52d27c5db618f9fa8cb", "text": "Accident prediction is one of the most critical aspects of road safety, whereby an accident can be predicted before it actually occurs and precautionary measures taken to avoid it. For this purpose, accident prediction models are popular in road safety analysis. 
Artificial intelligence (AI) is used in many real world applications, especially where outcomes and data are not same all the time and are influenced by occurrence of random changes. This paper presents a study on the existing approaches for the detection of unsafe driving patterns of a vehicle used to predict accidents. The literature covered in this paper is from the past 10 years, from 2004 to 2014. AI techniques are surveyed for the detection of unsafe driving style and crash prediction. A number of statistical methods which are used to predict the accidents by using different vehicle and driving features are also covered in this paper. The approaches studied in this paper are compared in terms of datasets and prediction performance. We also provide a list of datasets and simulators available for the scientific community to conduct research in the subject domain. The paper also identifies some of the critical open questions that need to be addressed for road safety using AI techniques.", "title": "" } ]
scidocsrr
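Each row in this dump pairs one query with a list of relevant passages and a larger list of non-relevant passages (each carrying docid, text, and title fields), plus a subset tag such as scidocsrr. The sketch below shows one way such rows might be flattened into (query, passage text, relevance label) pairs for training or evaluating a reranker. It assumes the rows are serialized as JSON Lines and that the fields are named query, positive_passages, negative_passages, and subset; the file name and these field names are assumptions about the export format, not something stated in the dump itself.

```python
import json

def iter_pairs(path):
    """Yield (query, passage_text, label) triples from a JSON Lines export.

    Assumes one record per line with 'query', 'positive_passages',
    'negative_passages' (lists of objects with 'docid', 'text', 'title')
    and 'subset' fields; these names and the JSONL layout are assumptions,
    not facts taken from the dump.
    """
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            record = json.loads(line)
            for passage in record.get("positive_passages", []):
                yield record["query"], passage["text"], 1
            for passage in record.get("negative_passages", []):
                yield record["query"], passage["text"], 0

if __name__ == "__main__":
    # Example usage: count relevant vs. non-relevant pairs in one subset file.
    counts = {0: 0, 1: 0}
    for _query, _text, label in iter_pairs("scidocsrr.jsonl"):
        counts[label] += 1
    print(counts)
```

With pairs in this form, a pointwise reranker can be trained directly; alternatively, the positives and negatives of a single record can be kept together when computing listwise metrics such as mean average precision.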
7fbe1e066bf607663234d89602f0666e
A multi-case study on Industry 4.0 for SMEs in Brandenburg, Germany
[ { "docid": "1857eb0d2d592961bd7c1c2f226df616", "text": "The increasing integration of the Internet of Everything into the industrial value chain has built the foundation for the next industrial revolution called Industrie 4.0. Although Industrie 4.0 is currently a top priority for many companies, research centers, and universities, a generally accepted understanding of the term does not exist. As a result, discussing the topic on an academic level is difficult, and so is implementing Industrie 4.0 scenarios. Based on a quantitative text analysis and a qualitative literature review, the paper identifies design principles of Industrie 4.0. Taking into account these principles, academics may be enabled to further investigate on the topic, while practitioners may find assistance in identifying appropriate scenarios. A case study illustrates how the identified design principles support practitioners in identifying Industrie 4.0 scenarios.", "title": "" } ]
[ { "docid": "7ddc7a3fffc582f7eee1d0c29914ba1a", "text": "Cyclic neutropenia is an uncommon hematologic disorder characterized by a marked decrease in the number of neutrophils in the peripheral blood occurring at regular intervals. The neutropenic phase is characteristically associated with clinical symptoms such as recurrent fever, malaise, headaches, anorexia, pharyngitis, ulcers of the oral mucous membrane, and gingival inflammation. This case report describes a Japanese girl who has this disease and suffers from periodontitis and oral ulceration. Her case has been followed up for the past 5 years from age 7 to 12. The importance of regular oral hygiene, careful removal of subgingival plaque and calculus, and periodic and thorough professional mechanical tooth cleaning was emphasized to arrest the progress of periodontal breakdown. Local antibiotic application with minocycline ointment in periodontal pockets was beneficial as an ancillary treatment, especially during neutropenic periods.", "title": "" }, { "docid": "d94f4df63ac621d9a8dec1c22b720abb", "text": "Automatically selecting an appropriate set of materialized views and indexes for SQL databases is a non-trivial task. A judicious choice must be cost-driven and influenced by the workload experienced by the system. Although there has been work in materialized view selection in the context of multidimensional (OLAP) databases, no past work has looked at the problem of building an industry-strength tool for automated selection of materialized views and indexes for SQL workloads. In this paper, we present an end-to-end solution to the problem of selecting materialized views and indexes. We describe results of extensive experimental evaluation that demonstrate the effectiveness of our techniques. Our solution is implemented as part of a tuning wizard that ships with Microsoft SQL Server 2000.", "title": "" }, { "docid": "95bb07e57d9bd2b7e9a9a59c29806b66", "text": "Breast cancer is one of the most common cancers and the second most responsible for cancer mortality worldwide. In 2014, in Portugal approximately 27,200 people died of cancer, of which 1,791 were women with breast cancer. Flaxseed has been one of the most studied foods, regarding possible relations to breast cancer, though mainly in experimental studies in animals, yet in few clinical trials. It is rich in omega-3 fatty acids, α-linolenic acid, lignan, and fibers. One of the main components of flaxseed is the lignans, of which 95% are made of the predominant secoisolariciresinol diglucoside (SDG). SDG is converted into enterolactone and enterodiol, both with antiestrogen activity and structurally similar to estrogen; they can bind to cell receptors, decreasing cell growth. Some studies have shown that the intake of omega-3 fatty acids is related to the reduction of breast cancer risk. In animal studies, α-linolenic acids have been shown to be able to suppress growth, size, and proliferation of cancer cells and also to promote breast cancer cell death. Other animal studies found that the intake of flaxseed combined with tamoxifen can reduce tumor size to a greater extent than taking tamoxifen alone. Additionally, some clinical trials showed that flaxseed can have an important role in decreasing breast cancer risk, mainly in postmenopausal women. 
Further studies are needed, specifically clinical trials that may demonstrate the potential benefits of flaxseed in breast cancer.", "title": "" }, { "docid": "c12d27988e70e9b3e6987ca2f0ca8bca", "text": "In this tutorial, we introduce the basic theory behind Steganography and Steganalysis, and present some recent algorithms and developments of these fields. We show how the existing techniques used nowadays are related to Image Processing and Computer Vision, point out several trendy applications of Steganography and Steganalysis, and list a few great research opportunities just waiting to be addressed.", "title": "" }, { "docid": "ea596b23af4b34fdb6a9986a03730d99", "text": "In the past few years, recommender systems and semantic web technologies have become main subjects of interest in the research community. In this paper, we present a domain independent semantic similarity measure that can be used in the recommendation process. This semantic similarity is based on the relations between the individuals of an ontology. The assessment can be done offline, which saves time and enables real-time recommendations. The measure has been experimented on two different domains: movies and research papers. Moreover, the recommendations generated by the semantic similarity have been evaluated by a set of volunteers and the results have been promising.", "title": "" }, { "docid": "0a981597279b2fb1792b5d1a00f0c9ec", "text": "With billions of people using smartphones and the exponential growth of smartphone apps, it is prohibitive for app marketplaces, such as Google App Store, to thoroughly verify if an app is legitimate or malicious. As a result, mobile users are left to decide for themselves whether an app is safe to use. Even worse, recent studies have shown that over 70% of apps in markets request to collect data irrelevant to the main functions of the apps, which could cause leaking of private information or inefficient use of mobile resources. It is worth mentioning that since resource management mechanism of mobile devices is different from PC machines, existing security solutions in PC malware area are not quite compatible with mobile devices. Therefore, academic researchers and commercial anti-malware companies have proposed many security mechanisms to address the security issues of the Android devices. Considering the mechanisms and techniques which are different in nature and used in proposed works, they can be classified into different categories. In this survey, we discuss the existing Android security threats and existing security enforcement solutions between 2010−2015 and try to classify works and review their functionalities. We review a few works of each class. The survey also reviews the strength and weak points of the solutions.", "title": "" }, { "docid": "5bfc5768cf41643a870e3f3dddbbd741", "text": "Homomorphic encryption has progressed rapidly in both efficiency and versatility since its emergence in 2009. Meanwhile, a multitude of pressing privacy needs — ranging from cloud computing to healthcare management to the handling of shared databases such as those containing genomics data — call for immediate solutions that apply fully homomorphic encryption (FHE) and somewhat homomorphic encryption (SHE) technologies. Further progress towards these ends requires new ideas for the efficient implementation of algebraic operations on word-based (as opposed to bit-wise) encrypted data. 
Whereas handling data encrypted at the bit level leads to prohibitively slow algorithms for the arithmetic operations that are essential for cloud computing, the word-based approach hits its bottleneck when operations such as integer comparison are needed. In this work, we tackle this challenging problem, proposing solutions to problems — including comparison and division — in word-based encryption via a leveled FHE scheme. We present concrete performance figures for all proposed primitives.", "title": "" }, { "docid": "ec5095df6250a8f6cdf088f730dfbd5e", "text": "Canine atopic dermatitis (CAD) is a multifaceted disease associated with exposure to various offending agents such as environmental and food allergens. The diagnosis of this condition is difficult because none of the typical signs are pathognomonic. Sets of criteria have been proposed but are mainly used to include dogs in clinical studies. The goals of the present study were to characterize the clinical features and signs of a large population of dogs with CAD, to identify which of these characteristics could be different in food-induced atopic dermatitis (FIAD) and non-food-induced atopic dermatitis (NFIAD) and to develop criteria for the diagnosis of this condition. Using simulated annealing, selected criteria were tested on a large and geographically widespread population of pruritic dogs. The study first described the signalment, history and clinical features of a large population of CAD dogs, compared FIAD and NFIAD dogs and confirmed that both conditions are clinically indistinguishable. Correlations of numerous clinical features with the diagnosis of CAD are subsequently calculated, and two sets of criteria associated with sensitivity and specificity ranging from 80% to 85% and from 79% to 85%, respectively, are proposed. It is finally demonstrated that these new sets of criteria provide better sensitivity and specificity, when compared to Willemse and Prélaud criteria. These criteria can be applied to both FIAD and NFIAD dogs.", "title": "" }, { "docid": "31add593ce5597c24666d9662b3db89d", "text": "Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of dressed human body scans. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2, 18] as statistical model.", "title": "" }, { "docid": "ef6160d304908ea87287f2071dea5f6d", "text": "The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. 
Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89%, on compressed data.", "title": "" }, { "docid": "eb6675c6a37aa6839fa16fe5d5220cfb", "text": "In this paper, we propose an efficient method to detect the underlying structures in data. The same as RANSAC, we randomly sample MSSs (minimal size samples) and generate hypotheses. Instead of analyzing each hypothesis separately, the consensus information in all hypotheses is naturally fused into a hypergraph, called random consensus graph, with real structures corresponding to its dense subgraphs. The sampling process is essentially a progressive refinement procedure of the random consensus graph. Due to the huge number of hyperedges, it is generally inefficient to detect dense subgraphs on random consensus graphs. To overcome this issue, we construct a pairwise graph which approximately retains the dense subgraphs of the random consensus graph. The underlying structures are then revealed by detecting the dense subgraphs of the pair-wise graph. Since our method fuses information from all hypotheses, it can robustly detect structures even under a small number of MSSs. The graph framework enables our method to simultaneously discover multiple structures. Besides, our method is very efficient, and scales well for large scale problems. Extensive experiments illustrate the superiority of our proposed method over previous approaches, achieving several orders of magnitude speedup along with satisfactory accuracy and robustness.", "title": "" }, { "docid": "bd1a13c94d0e12b4ba9f14fef47d2564", "text": "Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f = u+ η, and η is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle’s projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation. Source Code ANSI C source code to produce the same results as the demo is accessible at the IPOL web page of this article1.", "title": "" }, { "docid": "8c46f24d8e710c5fb4e25be76fc5b060", "text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. 
The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm3 which shows a measured impedance matching band of 840–1150MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz meeting and exceeding the target worldwide RFID UHF band.", "title": "" }, { "docid": "701be9375bb7c019710f7887a0074d15", "text": "A blockchain powered health information exchange (HIE) can unlock the true value of interoperability and cyber security. This system has the potential to eliminate the friction and costs of current third party intermediaries, when considering population health management. There are promises of improved data integrity, reduced transaction costs, decentralization and disintermediation of trust. Being able to coordinate patient care via a blockchain HIE essentially alleviates unnecessary services and duplicate tests with lowering costs and improvements in efficiencies of the continuum care cycle, while adhering to all HIPAA rules and standards. A patient-centered protocol supported by blockchain technology, Patientory is changing the way healthcare stakeholders manage electronic medical data and interact with clinical care teams.", "title": "" }, { "docid": "3647b5e0185c0120500fff8061265abd", "text": "Human and machine visual sensing is enhanced when surface properties of objects in scenes, including color, can be reliably estimated despite changes in the ambient lighting conditions. We describe a computational method for estimating surface spectral reflectance when the spectral power distribution of the ambient light is not known.", "title": "" }, { "docid": "dc42ffc3d9a5833f285bac114e8a8b37", "text": "In this paper, we present a recursive algorithm for extracting classification rules from feedforward neural networks (NNs) that have been trained on data sets having both discrete and continuous attributes. The novelty of this algorithm lies in the conditions of the extracted rules: the rule conditions involving discrete attributes are disjoint from those involving continuous attributes. The algorithm starts by first generating rules with discrete attributes only to explain the classification process of the NN. If the accuracy of a rule with only discrete attributes is not satisfactory, the algorithm refines this rule by recursively generating more rules with discrete attributes not already present in the rule condition, or by generating a hyperplane involving only the continuous attributes. We show that for three real-life credit scoring data sets, the algorithm generates rules that are not only more accurate but also more comprehensible than those generated by other NN rule extraction methods.", "title": "" }, { "docid": "062839e72c6bdc6c6bf2ba1d1041d07b", "text": "Students’ increasing use of text messaging language has prompted concern that textisms (e.g., 2 for to, dont for don’t, ☺) will intrude into their formal written work. 
Eighty-six Australian and 150 Canadian undergraduates were asked to rate the appropriateness of textism use in various situations. Students distinguished between the appropriateness of using textisms in different writing modalities and to different recipients, rating textism use as inappropriate in formal exams and assignments, but appropriate in text messages, online chat and emails with friends and siblings. In a second study, we checked the examination papers of a separate sample of 153 Australian undergraduates for the presence of textisms. Only a negligible number were found. We conclude that, overall, university students recognise the different requirements of different recipients and modalities when considering textism use and that students are able to avoid textism use in exams despite media reports to the contrary.", "title": "" }, { "docid": "a458f16b84f40dc0906658a93d4b2efa", "text": "We investigated the usefulness of Sonazoid contrast-enhanced ultrasonography (Sonazoid-CEUS) in the diagnosis of hepatocellular carcinoma (HCC). The examination was performed by comparing the images during the Kupffer phase of Sonazoid-CEUS with superparamagnetic iron oxide magnetic resonance (SPIO-MRI). The subjects were 48 HCC nodules which were histologically diagnosed (well-differentiated HCC, n = 13; moderately differentiated HCC, n = 30; poorly differentiated HCC, n = 5). We performed Sonazoid-CEUS and SPIO-MRI on all subjects. In the Kupffer phase of Sonazoid-CEUS, the differences in the contrast agent uptake between the tumorous and non-tumorous areas were quantified as the Kupffer phase ratio and compared. In the SPIO-MRI, it was quantified as the SPIO-intensity index. We then compared these results with the histological differentiation of HCCs. The Kupffer phase ratio decreased as the HCCs became less differentiated (P < 0.0001; Kruskal–Wallis test). The SPIO-intensity index also decreased as HCCs became less differentiated (P < 0.0001). A positive correlation was found between the Kupffer phase ratio and the SPIO-MRI index (r = 0.839). In the Kupffer phase of Sonazoid-CEUS, all of the moderately and poorly differentiated HCCs appeared hypoechoic and were detected as a perfusion defect, whereas the majority (9 of 13 cases, 69.2%) of the well-differentiated HCCs had an isoechoic pattern. The Kupffer phase images of Sonazoid-CEUS and SPIO-MRI matched perfectly (100%) in all of the moderately and poorly differentiated HCCs. Sonazoid-CEUS is useful for estimating histological grading of HCCs. It is a modality that could potentially replace SPIO-MRI.", "title": "" }, { "docid": "9f1441bc10d7b0234a3736ce83d5c14b", "text": "Conservation of genetic diversity, one of the three main forms of biodiversity, is a fundamental concern in conservation biology as it provides the raw material for evolutionary change and thus the potential to adapt to changing environments. By means of meta-analyses, we tested the generality of the hypotheses that habitat fragmentation affects genetic diversity of plant populations and that certain life history and ecological traits of plants can determine differential susceptibility to genetic erosion in fragmented habitats. Additionally, we assessed whether certain methodological approaches used by authors influence the ability to detect fragmentation effects on plant genetic diversity. We found overall large and negative effects of fragmentation on genetic diversity and outcrossing rates but no effects on inbreeding coefficients. 
Significant increases in inbreeding coefficient in fragmented habitats were only observed in studies analyzing progenies. The mating system and the rarity status of plants explained the highest proportion of variation in the effect sizes among species. The age of the fragment was also decisive in explaining variability among effect sizes: the larger the number of generations elapsed in fragmentation conditions, the larger the negative magnitude of effect sizes on heterozygosity. Our results also suggest that fragmentation is shifting mating patterns towards increased selfing. We conclude that current conservation efforts in fragmented habitats should be focused on common or recently rare species and mainly outcrossing species and outline important issues that need to be addressed in future research on this area.", "title": "" } ]
scidocsrr
e982cf99edeaf681206fcf5daaff79f7
Lip reading using a dynamic feature of lip images and convolutional neural networks
[ { "docid": "d5c4e44514186fa1d82545a107e87c94", "text": "Recent research in computer vision has increasingly focused on building systems for observing humans and understanding their look, activities, and behavior providing advanced interfaces for interacting with humans, and creating sensible models of humans for various purposes. This paper presents a new algorithm for detecting moving objects from a static background scene based on frame difference. Firstly, the first frame is captured through the static camera and after that sequence of frames is captured at regular intervals. Secondly, the absolute difference is calculated between the consecutive frames and the difference image is stored in the system. Thirdly, the difference image is converted into gray image and then translated into binary image. Finally, morphological filtering is done to remove noise.", "title": "" } ]
[ { "docid": "adb02577e7fba530c2406fbf53571d14", "text": "Event-related potentials (ERPs) recorded from the human scalp can provide important information about how the human brain normally processes information and about how this processing may go awry in neurological or psychiatric disorders. Scientists using or studying ERPs must strive to overcome the many technical problems that can occur in the recording and analysis of these potentials. The methods and the results of these ERP studies must be published in a way that allows other scientists to understand exactly what was done so that they can, if necessary, replicate the experiments. The data must then be analyzed and presented in a way that allows different studies to be compared readily. This paper presents guidelines for recording ERPs and criteria for publishing the results.", "title": "" }, { "docid": "720a3d65af4905cbffe74ab21d21dd3f", "text": "Fluorescent carbon nanoparticles or carbon quantum dots (CQDs) are a new class of carbon nanomaterials that have emerged recently and have garnered much interest as potential competitors to conventional semiconductor quantum dots. In addition to their comparable optical properties, CQDs have the desired advantages of low toxicity, environmental friendliness low cost and simple synthetic routes. Moreover, surface passivation and functionalization of CQDs allow for the control of their physicochemical properties. Since their discovery, CQDs have found many applications in the fields of chemical sensing, biosensing, bioimaging, nanomedicine, photocatalysis and electrocatalysis. This article reviews the progress in the research and development of CQDs with an emphasis on their synthesis, functionalization and technical applications along with some discussion on challenges and perspectives in this exciting and promising field.", "title": "" }, { "docid": "e86ad4e9b61df587d9e9e96ab4eb3978", "text": "This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.", "title": "" }, { "docid": "e85b5115a489835bc58a48eaa727447a", "text": "State-of-the art machine learning methods such as deep learning rely on large sets of hand-labeled training data. Collecting training data is prohibitively slow and expensive, especially when technical domain expertise is required; even the largest technology companies struggle with this challenge. We address this critical bottleneck with Snorkel, a new system for quickly creating, managing, and modeling training sets. Snorkel enables users to generate large volumes of training data by writing labeling functions, which are simple functions that express heuristics and other weak supervision strategies. These user-authored labeling functions may have low accuracies and may overlap and conflict, but Snorkel automatically learns their accuracies and synthesizes their output labels. Experiments and theory show that surprisingly, by modeling the labeling process in this way, we can train high-accuracy machine learning models even using potentially lower-accuracy inputs. 
Snorkel is currently used in production at top technology and consulting companies, and used by researchers to extract information from electronic health records, after-action combat reports, and the scientific literature. In this demonstration, we focus on the challenging task of information extraction, a common application of Snorkel in practice. Using the task of extracting corporate employment relationships from news articles, we will demonstrate and build intuition for a radically different way of developing machine learning systems which allows us to effectively bypass the bottleneck of hand-labeling training data.", "title": "" }, { "docid": "4eec5be6b29425e025f9e1b23b742639", "text": "There is increasing interest in sharing the experience of products and services on the web platform, and social media has opened a way for product and service providers to understand their consumers' needs and expectations. This paper explores reviews by cloud consumers that reflect consumers' experiences with cloud services. The reviews of around 6,000 cloud service users were analysed using sentiment analysis to identify the attitude of each review, and to determine whether the opinion expressed was positive, negative, or neutral. The analysis used two data mining tools, KNIME and RapidMiner, and the results were compared. We developed four prediction models in this study to predict the sentiment of users' reviews. The proposed model is based on four supervised machine learning algorithms: K-Nearest Neighbour (k-NN), Naive Bayes, Random Tree, and Random Forest. The results show that the Random Forest predictions achieve 97.06% accuracy, which makes this model a better prediction model than the other three.", "title": "" }, { "docid": "b988525d515588da8becc18c2aa21e82", "text": "Numerical optimization has been used as an extension of vehicle dynamics simulation in order to reproduce trajectories and driving techniques used by expert race drivers and investigate the effects of several vehicle parameters in the stability limit operation of the vehicle. In this work we investigate how different race-driving techniques may be reproduced by considering different optimization cost functions. We introduce a bicycle model with suspension dynamics and study the role of the longitudinal load transfer in limit vehicle operation, i.e., when the tires operate at the adhesion limit. Finally we demonstrate that for certain vehicle configurations the optimal trajectory may include large slip angles (drifting), which matches the techniques used by rally-race drivers.", "title": "" }, { "docid": "73d3f51bdb913749665674ae8aea3a41", "text": "Extracting and validating emotional cues through analysis of users' facial expressions is of high importance for improving the level of interaction in man machine communication systems. Extraction of appropriate facial features and consequent recognition of the user's emotional state that can be robust to facial expression variations among different users is the topic of this paper. Facial animation parameters (FAPs) defined according to the ISO MPEG-4 standard are extracted by a robust facial analysis system, accompanied by appropriate confidence measures of the estimation accuracy. A novel neurofuzzy system is then created, based on rules that have been defined through analysis of FAP variations both at the discrete emotional space, as well as in the 2D continuous activation-evaluation one.
The neurofuzzy system allows for further learning and adaptation to specific users' facial expression characteristics, measured though FAP estimation in real life application of the system, using analysis by clustering of the obtained FAP values. Experimental studies with emotionally expressive datasets, generated in the EC IST ERMIS project indicate the good performance and potential of the developed technologies.", "title": "" }, { "docid": "d59c6a2dd4b6bf7229d71f3ae036328a", "text": "Community search over large graphs is a fundamental problem in graph analysis. Recent studies propose to compute top-k influential communities, where each reported community not only is a cohesive subgraph but also has a high influence value. The existing approaches to the problem of top-k influential community search can be categorized as index-based algorithms and online search algorithms without indexes. The index-based algorithms, although being very efficient in conducting community searches, need to pre-compute a specialpurpose index and only work for one built-in vertex weight vector. In this paper, we investigate online search approaches and propose an instance-optimal algorithm LocalSearch whose time complexity is linearly proportional to the size of the smallest subgraph that a correct algorithm needs to access without indexes. In addition, we also propose techniques to make LocalSearch progressively compute and report the communities in decreasing influence value order such that k does not need to be specified. Moreover, we extend our framework to the general case of top-k influential community search regarding other cohesiveness measures. Extensive empirical studies on real graphs demonstrate that our algorithms outperform the existing online search algorithms by several orders of magnitude.", "title": "" }, { "docid": "fc09e1c012016c75418ec33dfe5868d5", "text": "Big data is the word used to describe structured and unstructured data. The term big data is originated from the web search companies who had to query loosely structured very large", "title": "" }, { "docid": "36787667e41db8d9c164e39a89f0c533", "text": "This paper presents an improvement of the well-known conventional three-phase diode bridge rectifier with dc output capacitor. The proposed circuit increases the power factor (PF) at the ac input and reduces the ripple current stress on the smoothing capacitor. The basic concept is the arrangement of an active voltage source between the output of the diode bridge and the smoothing capacitor which is controlled in a way that it emulates an ideal smoothing inductor. With this the input currents of the diode bridge which usually show high peak amplitudes are converted into a 120/spl deg/ rectangular shape which ideally results in a total PF of 0.955. The active voltage source mentioned before is realized by a low-voltage switch-mode converter stage of small power rating as compared to the output power of the rectifier. Starting with a brief discussion of basic three-phase rectifier techniques and of the drawbacks of three-phase diode bridge rectifiers with capacitive smoothing, the concept of the proposed active smoothing is described and the stationary operation is analyzed. Furthermore, control concepts as well as design considerations and analyses of the dynamic systems behavior are given. 
Finally, measurements taken from a laboratory model are presented.", "title": "" }, { "docid": "1d1cec012f9f78b40a0931ae5dea53d0", "text": "Recursive subdivision using interval arithmetic allows us to render CSG combinations of implicit function surfaces with or without anti-aliasing. Related algorithms will solve the collision detection problem for dynamic simulation, and allow us to compute mass, center of gravity, angular moments and other integral properties required for Newtonian dynamics. Our hidden surface algorithms run in ‘constant time.’ Their running times are nearly independent of the number of primitives in a scene, for scenes in which the visible details are not much smaller than the pixels. The collision detection and integration algorithms are utterly robust — collisions are never missed due to numerical error and we can provide guaranteed bounds on the values of integrals.", "title": "" }, { "docid": "c24bd4156e65d57eda0add458304988c", "text": "Graphene is enabling a plethora of applications in a wide range of fields due to its unique electrical, mechanical, and optical properties. Among them, graphene-based plasmonic miniaturized antennas (or shortly named, graphennas) are garnering growing interest in the field of communications. In light of their reduced size, in the micrometric range, and an expected radiation frequency of a few terahertz, graphennas offer means for the implementation of ultra-short-range wireless communications. Motivated by their high radiation frequency and potentially wideband nature, this paper presents a methodology for the time-domain characterization and evaluation of graphennas. The proposed framework is highly vertical, as it aims to build a bridge between technological aspects, antenna design, and communications. Using this approach, qualitative and quantitative analyses of a particular case of graphenna are carried out as a function of two critical design parameters, namely, chemical potential and carrier mobility. The results are then compared to the performance of equivalent metallic antennas. Finally, the suitability of graphennas for ultra-short-range communications is briefly discussed.", "title": "" }, { "docid": "ed509de8786ee7b4ba0febf32d0c87f7", "text": "Threat detection and analysis are indispensable processes in today's cyberspace, but current state of the art threat detection is still limited to specific aspects of modern malicious activities due to the lack of information to analyze. By measuring and collecting various types of data, from traffic information to human behavior, at different vantage points for a long duration, the viewpoint seems to be helpful to deeply inspect threats, but faces scalability issues as the amount of collected data grows, since more computational resources are required for the analysis. In this paper, we report our experience from operating the Hadoop platform, called MATATABI, for threat detections, and present the micro-benchmarks with four different backends of data processing in typical use cases such as log data and packet trace analysis. The benchmarks demonstrate the advantages of distributed computation in terms of performance. Our extensive use cases of analysis modules showcase the potential benefit of deploying our threat analysis platform.", "title": "" }, { "docid": "90f188c1f021c16ad7c8515f1244c08a", "text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion.
The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.", "title": "" }, { "docid": "895d5b01e984ef072b834976e0dfe378", "text": "Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning. Recently, purely unsupervised methods operating on monolingual embeddings have become effective alignment tools. Current state-of-theart methods, however, involve multiple steps, including heuristic post-hoc refinement strategies. In this paper, we cast the correspondence problem directly as an optimal transport (OT) problem, building on the idea that word embeddings arise from metric recovery algorithms. Indeed, we exploit the GromovWasserstein distance that measures how similarities between pairs of words relate across languages. We show that our OT objective can be estimated efficiently, requires little or no tuning, and results in performance comparable with the state-of-the-art in various unsupervised word translation tasks.", "title": "" }, { "docid": "caf866341ad9f74b1ac1dc8572f6e95c", "text": "One important but often overlooked aspect of human contexts of ubiquitous computing environment is human’s emotional status. And, there are no realistic and robust humancentric contents services so far, because there are few considers about combining context awareness computing with wearable computing for improving suitability of contents to each user’s needs. In this paper, we discuss combining context awareness computing with wearable computing to develop more effective personalized services. And we propose new algorithms to develop efficiently personalized emotion based content service system.", "title": "" }, { "docid": "ec26505d813ed98ac3f840ea54358873", "text": "In this paper we address cardinality estimation problem which is an important subproblem in query optimization. Query optimization is a part of every relational DBMS responsible for finding the best way of the execution for the given query. These ways are called plans. The execution time of different plans may differ by several orders, so query optimizer has a great influence on the whole DBMS performance. We consider cost-based query optimization approach as the most popular one. It was observed that costbased optimization quality depends much on cardinality estimation quality. Cardinality of the plan node is the number of tuples returned by it. In the paper we propose a novel cardinality estimation approach with the use of machine learning methods. The main point of the approach is using query execution statistics of the previously executed queries to improve cardinality estimations. We called this approach adaptive cardinality estimation to reflect this point. The approach is general, flexible, and easy to implement. 
The experimental evaluation shows that this approach significantly increases the quality of cardinality estimation, and therefore increases the DBMS performance for some queries by several times or even by several dozens of times.", "title": "" }, { "docid": "06ba0cd00209a7f4f200395b1662003e", "text": "Changes in human DNA methylation patterns are an important feature of cancer development and progression and a potential role in other conditions such as atherosclerosis and autoimmune diseases (e.g., multiple sclerosis and lupus) is being recognised. The cancer genome is frequently characterised by hypermethylation of specific genes concurrently with an overall decrease in the level of 5 methyl cytosine. This hypomethylation of the genome largely affects the intergenic and intronic regions of the DNA, particularly repeat sequences and transposable elements, and is believed to result in chromosomal instability and increased mutation events. This review examines our understanding of the patterns of cancer-associated hypomethylation, and how recent advances in understanding of chromatin biology may help elucidate the mechanisms underlying repeat sequence demethylation. It also considers how global demethylation of repeat sequences including transposable elements and the site-specific hypomethylation of certain genes might contribute to the deleterious effects that ultimately result in the initiation and progression of cancer and other diseases. The use of hypomethylation of interspersed repeat sequences and genes as potential biomarkers in the early detection of tumors and their prognostic use in monitoring disease progression are also examined.", "title": "" }, { "docid": "ff08d2e0d53f2d9a7d49f0fdd820ec7a", "text": "Milk contains numerous nutrients. The content of n-3 fatty acids, the n-6/n-3 ratio, and short- and medium-chain fatty acids may promote positive health effects. In Western societies, cow’s milk fat is perceived as a risk factor for health because it is a source of a high fraction of saturated fatty acids. Recently, there has been increasing interest in donkey’s milk. In this work, the fat and energetic value and acidic composition of donkey’s milk, with reference to human nutrition, and their variations during lactation, were investigated. We also discuss the implications of the acidic profile of donkey’s milk on human nutrition. Individual milk samples from lactating jennies were collected 15, 30, 45, 60, 90, 120, 150, 180 and 210days after foaling, for the analysis of fat, proteins and lactose, which was achieved using an infrared milk analyser, and fatty acids composition by gas chromatography. The donkey’s milk was characterised by low fat and energetic (1719.2kJ·kg-1) values, a high polyunsaturated fatty acids (PUFA) content of mainly α-linolenic acid (ALA) and linoleic acid (LA), a low n-6 to n-3 FA ratio or LA/ALA ratio, and advantageous values of atherogenic and thrombogenic indices. Among the minor PUFA, docosahesaenoic (DHA), eicosapentanoic (EPA), and arachidonic (AA) acids were present in very small amounts (<1%). In addition, the AA/EPA ratio was low (0.18). The fat and energetic values decreased (P < 0.01) during lactation. The fatty acid patterns were affected by the lactation stage and showed a decrease (P < 0.01) in saturated fatty acids content and an increase (P < 0.01) in the unsaturated fatty acids content. 
The n-6 to n-3 ratio and the LA/ALA ratio were approximately 2:1, with values <1 during the last period of lactation, suggesting the more optimal use of milk during this period. The high level of unsaturated/saturated fatty acids and PUFA-n3 content and the low n-6/n-3 ratio suggest the use of donkey’s milk as a functional food for human nutrition and its potential utilisation for infant nutrition as well as adult diets, particular for the elderly.", "title": "" }, { "docid": "5daeccb1a01df4f68f23c775828be41d", "text": "This article surveys the research and development of Engineered Cementitious Composites (ECC) over the last decade since its invention in the early 1990’s. The importance of micromechanics in the materials design strategy is emphasized. Observations of unique characteristics of ECC based on a broad range of theoretical and experimental research are examined. The advantageous use of ECC in certain categories of structural, and repair and retrofit applications is reviewed. While reflecting on past advances, future challenges for continued development and deployment of ECC are noted. This article is based on a keynote address given at the International Workshop on Ductile Fiber Reinforced Cementitious Composites (DFRCC) – Applications and Evaluations, sponsored by the Japan Concrete Institute, and held in October 2002 at Takayama, Japan.", "title": "" } ]
scidocsrr
d956c35ab4e217a8c4517f565197d4a9
Pressure ulcer prevention and healing using alternating pressure mattress at home: the PARESTRY project.
[ { "docid": "511c90eadbbd4129fdf3ee9e9b2187d3", "text": "BACKGROUND\nPressure ulcers are associated with substantial health burdens but may be preventable.\n\n\nPURPOSE\nTo review the clinical utility of pressure ulcer risk assessment instruments and the comparative effectiveness of preventive interventions in persons at higher risk.\n\n\nDATA SOURCES\nMEDLINE (1946 through November 2012), CINAHL, the Cochrane Library, grant databases, clinical trial registries, and reference lists.\n\n\nSTUDY SELECTION\nRandomized trials and observational studies on effects of using risk assessment on clinical outcomes and randomized trials of preventive interventions on clinical outcomes.\n\n\nDATA EXTRACTION\nMultiple investigators abstracted and checked study details and quality using predefined criteria.\n\n\nDATA SYNTHESIS\nOne good-quality trial found no evidence that use of a pressure ulcer risk assessment instrument, with or without a protocolized intervention strategy based on assessed risk, reduces risk for incident pressure ulcers compared with less standardized risk assessment based on nurses' clinical judgment. In higher-risk populations, 1 good-quality and 4 fair-quality randomized trials found that more advanced static support surfaces were associated with lower risk for pressure ulcers compared with standard mattresses (relative risk range, 0.20 to 0.60). Evidence on the effectiveness of low-air-loss and alternating-air mattresses was limited, with some trials showing no clear differences from advanced static support surfaces. Evidence on the effectiveness of nutritional supplementation, repositioning, and skin care interventions versus usual care was limited and had methodological shortcomings, precluding strong conclusions.\n\n\nLIMITATION\nOnly English-language articles were included, publication bias could not be formally assessed, and most studies had methodological shortcomings.\n\n\nCONCLUSION\nMore advanced static support surfaces are more effective than standard mattresses for preventing ulcers in higher-risk populations. The effectiveness of formal risk assessment instruments and associated intervention protocols compared with less standardized assessment methods and the effectiveness of other preventive interventions compared with usual care have not been clearly established.", "title": "" }, { "docid": "df5c384e9fb6ba57a5bbd7fef44ce5f0", "text": "CONTEXT\nPressure ulcers are common in a variety of patient settings and are associated with adverse health outcomes and high treatment costs.\n\n\nOBJECTIVE\nTo systematically review the evidence examining interventions to prevent pressure ulcers.\n\n\nDATA SOURCES AND STUDY SELECTION\nMEDLINE, EMBASE, and CINAHL (from inception through June 2006) and Cochrane databases (through issue 1, 2006) were searched to identify relevant randomized controlled trials (RCTs). UMI Proquest Digital Dissertations, ISI Web of Science, and Cambridge Scientific Abstracts were also searched. All searches used the terms pressure ulcer, pressure sore, decubitus, bedsore, prevention, prophylactic, reduction, randomized, and clinical trials. Bibliographies of identified articles were further reviewed.\n\n\nDATA SYNTHESIS\nFifty-nine RCTs were selected. Interventions assessed in these studies were grouped into 3 categories, ie, those addressing impairments in mobility, nutrition, or skin health. Methodological quality for the RCTs was variable and generally suboptimal. 
Effective strategies that addressed impaired mobility included the use of support surfaces, mattress overlays on operating tables, and specialized foam and specialized sheepskin overlays. While repositioning is a mainstay in most pressure ulcer prevention protocols, there is insufficient evidence to recommend specific turning regimens for patients with impaired mobility. In patients with nutritional impairments, dietary supplements may be beneficial. The incremental benefit of specific topical agents over simple moisturizers for patients with impaired skin health is unclear.\n\n\nCONCLUSIONS\nGiven current evidence, using support surfaces, repositioning the patient, optimizing nutritional status, and moisturizing sacral skin are appropriate strategies to prevent pressure ulcers. Although a number of RCTs have evaluated preventive strategies for pressure ulcers, many of them had important methodological limitations. There is a need for well-designed RCTs that follow standard criteria for reporting nonpharmacological interventions and that provide data on cost-effectiveness for these interventions.", "title": "" } ]
[ { "docid": "0e60cb8f9147f5334c3cfca2880c2241", "text": "The quest for automatic Programming is the holy grail of artificial intelligence. The dream of having computer programs write other useful computer programs has haunted researchers since the nineteen fifties. In Genetic Progvamming III Darwinian Invention and Problem Solving (GP?) by John R. Koza, Forest H. Bennet 111, David Andre, and Martin A. Keane, the authors claim that the first inscription on this trophy should be the name Genetic Programming (GP). GP is about applying evolutionary algorithms to search the space of computer programs. The authors paraphrase Arthur Samuel of 1959 and argue that with this method it is possible to tell the computer what to do without telling it explicitly how t o do it.", "title": "" }, { "docid": "9001f640ae3340586f809ab801f78ec0", "text": "A correct perception of road signalizations is required for autonomous cars to follow the traffic codes. Road marking is a signalization present on road surfaces and commonly used to inform the correct lane cars must keep. Cameras have been widely used for road marking detection, however they are sensible to environment illumination. Some LIDAR sensors return infrared reflective intensity information which is insensible to illumination condition. Existing road marking detectors that analyzes reflective intensity data focus only on lane markings and ignores other types of signalization. We propose a road marking detector based on Otsu thresholding method that make possible segment LIDAR point clouds into asphalt and road marking. The results show the possibility of detecting any road marking (crosswalks, continuous lines, dashed lines). The road marking detector has also been integrated with Monte Carlo localization method so that its performance could be validated. According to the results, adding road markings onto curb maps lead to a lateral localization error of 0.3119 m.", "title": "" }, { "docid": "6a15a0a0b9b8abc0e66fa9702cc3a573", "text": "Knowledge Graphs have proven to be extremely valuable to recommender systems, as they enable hybrid graph-based recommendation models encompassing both collaborative and content information. Leveraging this wealth of heterogeneous information for top-N item recommendation is a challenging task, as it requires the ability of effectively encoding a diversity of semantic relations and connectivity patterns. In this work, we propose entity2rec, a novel approach to learning user-item relatedness from knowledge graphs for top-N item recommendation. We start from a knowledge graph modeling user-item and item-item relations and we learn property-specific vector representations of users and items applying neural language models on the network. These representations are used to create property-specific user-item relatedness features, which are in turn fed into learning to rank algorithms to learn a global relatedness model that optimizes top-N item recommendations. 
We evaluate the proposed approach in terms of ranking quality on the MovieLens 1M dataset, outperforming a number of state-of-the-art recommender systems, and we assess the importance of property-specific relatedness scores on the overall ranking quality.", "title": "" }, { "docid": "dae877409dca88fc6fed5cf6536e65ad", "text": "My 1971 Turing Award Lecture was entitled \"Generality in Artificial Intelligence.\" The topic turned out to have been overambitious in that I discovered I was unable to put my thoughts on the subject in a satisfactory written form at that time. It would have been better to have reviewed my previous work rather than attempt something new, but such was not my custom at that time.\nI am grateful to ACM for the opportunity to try again. Unfortunately for our science, although perhaps fortunately for this project, the problem of generality in artificial intelligence (AI) is almost as unsolved as ever, although we now have many ideas not available in 1971. This paper relies heavily on such ideas, but it is far from a full 1987 survey of approaches for achieving generality. Ideas are therefore discussed at a length proportional to my familiarity with them rather than according to some objective criterion.\nIt was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality. It is still obvious; there are many more details. The first gross symptom is that a small addition to the idea of a program often involves a complete rewrite beginning with the data structures. Some progress has been made in modularizing data structures, but small modifications of the search strategies are even less likely to be accomplished without rewriting.\nAnother symptom is no one knows how to make a general database of commonsense knowledge that could be used by any program that needed the knowledge. Along with other information, such a database would contain what a robot would need to know about the effects of moving objects around, what a person can be expected to know about his family, and the facts about buying and selling. This does not depend on whether the knowledge is to be expressed in a logical language or in some other formalism. When we take the logic approach to AI, lack of generality shows up in that the axioms we devise to express commonsense knowledge are too restricted in their applicability for a general commonsense database. In my opinion, getting a language for expressing general commonsense knowledge for inclusion in a general database is the key problem of generality in AI.\nHere are some ideas for achieving generality proposed both before and after 1971. I repeat my disclaimer of comprehensiveness.", "title": "" }, { "docid": "a5f17126a90b45921f70439ff96a0091", "text": "Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. 
We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.", "title": "" }, { "docid": "4cdef79370abcd380357c8be92253fa5", "text": "In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures. Experiments using data from the Prague Dependency Treebank show that the combined system can handle nonprojective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy. This leads to the best reported performance for robust non-projective parsing of Czech.", "title": "" }, { "docid": "cc90d1ac6aa63532282568f66ecd25fd", "text": "Melphalan has been used in the treatment of various hematologic malignancies for almost 60 years. Today it is part of standard therapy for multiple myeloma and also as part of myeloablative regimens in association with autologous allogenic stem cell transplantation. Melflufen (melphalan flufenamide ethyl ester, previously called J1) is an optimized derivative of melphalan providing targeted delivery of active metabolites to cells expressing aminopeptidases. The activity of melflufen has compared favorably with that of melphalan in a series of in vitro and in vivo experiments performed preferentially on different solid tumor models and multiple myeloma. Melflufen is currently being evaluated in a clinical phase I/II trial in relapsed or relapsed and refractory multiple myeloma. Cytotoxicity of melflufen was assayed in lymphoma cell lines and in primary tumor cells with the Fluorometric Microculture Cytotoxicity Assay and cell cycle analyses was performed in two of the cell lines. Melflufen was also investigated in a xenograft model with subcutaneous lymphoma cells inoculated in mice. Melflufen showed activity with cytotoxic IC50-values in the submicromolar range (0.011-0.92 μM) in the cell lines, corresponding to a mean of 49-fold superiority (p < 0.001) in potency vs. melphalan. In the primary cultures melflufen yielded slightly lower IC50-values (2.7 nM to 0.55 μM) and an increased ratio vs. melphalan (range 13–455, average 108, p < 0.001). Treated cell lines exhibited a clear accumulation in the G2/M-phase of the cell cycle. Melflufen also showed significant activity and no, or minimal side effects in the xenografted animals. This study confirms previous reports of a targeting related potency superiority of melflufen compared to that of melphalan. Melflufen was active in cell lines and primary cultures of lymphoma cells, as well as in a xenograft model in mice and appears to be a candidate for further evaluation in the treatment of this group of malignant diseases.", "title": "" }, { "docid": "b3f5176f49b467413d172134b1734ed8", "text": "Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset [1]. In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. 
Key to our method is the use of language models, trained on a massive amount of unlabled data, to score multiple choice questions posed by commonsense reasoning tests. On both Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task and show that diversity of training data plays an important role in test performance. Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.", "title": "" }, { "docid": "1768ecf6a2d8a42ea701d7f242edb472", "text": "Satisfaction prediction is one of the prime concerns in search performance evaluation. It is a non-trivial task for two major reasons: (1) The definition of satisfaction is rather subjective and different users may have different opinions in satisfaction judgement. (2) Most existing studies on satisfaction prediction mainly rely on users' click-through or query reformulation behaviors but there are many sessions without such kind of interactions. To shed light on these research questions, we construct an experimental search engine that could collect users' satisfaction feedback as well as mouse click-through/movement data. Different from existing studies, we compare for the first time search users' and external assessors' opinions on satisfaction. We find that search users pay more attention to the utility of results while external assessors emphasize on the efforts spent in search sessions. Inspired by recent studies in predicting result relevance based on mouse movement patterns (namely motifs), we propose to estimate the utilities of search results and the efforts in search sessions with motifs extracted from mouse movement data on search result pages (SERPs). Besides the existing frequency-based motif selection method, two novel selection strategies (distance-based and distribution-based) are also adopted to extract high quality motifs for satisfaction prediction. Experimental results on over 1,000 user sessions show that the proposed strategies outperform existing methods and also have promising generalization capability for different users and queries.", "title": "" }, { "docid": "be9971903bf3d754ed18cc89cf254bd1", "text": "This paper presents a semi-supervised learning method for improving the performance of AUC-optimized classifiers by using both labeled and unlabeled samples. In actual binary classification tasks, there is often an imbalance between the numbers of positive and negative samples. For such imbalanced tasks, the area under the ROC curve (AUC) is an effective measure with which to evaluate binary classifiers. The proposed method utilizes generative models to assist the incorporation of unlabeled samples in AUC-optimized classifiers. The generative models provide prior knowledge that helps learn the distribution of unlabeled samples. To evaluate the proposed method in text classification, we employed naive Bayes models as the generative models. 
Our experimental results using three test collections confirmed that the proposed method provided better classifiers for imbalanced tasks than supervised AUC-optimized classifiers and semi-supervised classifiers trained to maximize the classification accuracy of labeled samples. Moreover, the proposed method improved the effect of using unlabeled samples for AUC optimization especially when we used appropriate generative models.", "title": "" }, { "docid": "43233e45f07b80b8367ac1561356888d", "text": "Current Zero-Shot Learning (ZSL) approaches are restricted to recognition of a single dominant unseen object category in a test image. We hypothesize that this setting is ill-suited for real-world applications where unseen objects appear only as a part of a complex scene, warranting both the ‘recognition’ and ‘localization’ of an unseen category. To address this limitation, we introduce a new ‘Zero-Shot Detection’ (ZSD) problem setting, which aims at simultaneously recognizing and locating object instances belonging to novel categories without any training examples. We also propose a new experimental protocol for ZSD based on the highly challenging ILSVRC dataset, adhering to practical issues, e.g., the rarity of unseen objects. To the best of our knowledge, this is the first end-to-end deep network for ZSD that jointly models the interplay between visual and semantic domain information. To overcome the noise in the automatically derived semantic descriptions, we utilize the concept of meta-classes to design an original loss function that achieves synergy between max-margin class separation and semantic space clustering. Furthermore, we present a baseline approach extended from recognition to detection setting. Our extensive experiments show significant performance boost over the baseline on the imperative yet difficult ZSD problem.", "title": "" }, { "docid": "65b2d6ea5e1089c52378b4fd6386224c", "text": "In traffic environment, conventional FMCW radar with triangular transmit waveform may bring out many false targets in multi-target situations and result in a high false alarm rate. An improved FMCW waveform and multi-target detection algorithm for vehicular applications is presented. The designed waveform in each small cycle is composed of two-segment: LFM section and constant frequency section. They have the same duration, yet in two adjacent small cycles the two LFM slopes are opposite sign and different size. Then the two adjacent LFM bandwidths are unequal. Within a determinate frequency range, the constant frequencies are modulated by a unique PN code sequence for different automotive radar in a big period. Corresponding to the improved waveform, which combines the advantages of both FSK and FMCW formats, a judgment algorithm is used in the continuous small cycle to further eliminate the false targets. The combination of unambiguous ranges and relative velocities can confirm and cancel most false targets in two adjacent small cycles.", "title": "" }, { "docid": "ffa5ae359807884c2218b92d2db2a584", "text": "We present a method for automatically classifying consumer health questions. Our thirteen question types are designed to aid in the automatic retrieval of medical answers from consumer health resources. To our knowledge, this is the first machine learning-based method specifically for classifying consumer health questions. We demonstrate how previous approaches to medical question classification are insufficient to achieve high accuracy on this task. 
Additionally, we describe, manually annotate, and automatically classify three important question elements that improve question classification over previous techniques. Our results and analysis illustrate the difficulty of the task and the future directions that are necessary to achieve high-performing consumer health question classification.", "title": "" }, { "docid": "9bce495ed14617fe05086f06be8279e0", "text": "In previous chapters we reviewed Bayesian neural networks (BNNs) and historical techniques for approximate inference in these, as well as more recent approaches. We discussed the advantages and disadvantages of different techniques, examining their practicality. This, perhaps, is the most important aspect of modern techniques for approximate inference in BNNs. The field of deep learning is pushed forward by practitioners, working on real-world problems. Techniques which cannot scale to complex models with potentially millions of parameters, scale well with large amounts of data, need well studied models to be radically changed, or are not accessible to engineers, will simply perish. In this chapter we will develop on the strand of work of [Graves, 2011; Hinton and Van Camp, 1993], but will do so from the Bayesian perspective rather than the information theory one. Developing Bayesian approaches to deep learning, we will tie approximate BNN inference together with deep learning stochastic regularisation techniques (SRTs) such as dropout. These regularisation techniques are used in many modern deep learning tools, allowing us to offer a practical inference technique. We will start by reviewing in detail the tools used by [Graves, 2011]. We extend on these with recent research, commenting and analysing the variance of several stochastic estimators in variational inference (VI). Following that we will tie these derivations to SRTs, and propose practical techniques to obtain model uncertainty, even from existing models. We finish the chapter by developing specific examples for image based models (CNNs) and sequence based models (RNNs). These will be demonstrated in chapter 5, where we will survey recent research making use of the suggested tools in real-world problems.", "title": "" }, { "docid": "87b67f9ed23c27a71b6597c94ccd6147", "text": "Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal tran-sitions between frame chunks with different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. 
Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks.", "title": "" }, { "docid": "56ff9c1be08569b6a881b070b0173797", "text": "This paper examines a set of commercially representative embedded programs and compares them to an existing benchmark suite, SPEC2000. A new version of SimpleScalar that has been adapted to the ARM instruction set is used to characterize the performance of the benchmarks using configurations similar to current and next generation embedded processors. Several characteristics distinguish the representative embedded programs from the existing SPEC benchmarks including instruction distribution, memory behavior, and available parallelism. The embedded benchmarks, called MiBench, are freely available to all researchers.", "title": "" }, { "docid": "ef598ba4f9a4df1f42debc0eabd1ead8", "text": "Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.", "title": "" }, { "docid": "1ff5526e4a18c1e59b63a3de17101b11", "text": "Plug-in electric vehicles (PEVs) are equipped with onboard level-1 or level-2 chargers for home overnight or office daytime charging. In addition, off-board chargers can provide fast charging for traveling long distances. However, off-board high-power chargers are bulky, expensive, and require comprehensive evolution of charging infrastructures. An integrated onboard charger capable of fast charging of PEVs will combine the benefits of both the conventional onboard and off-board chargers, without additional weight, volume, and cost. In this paper, an innovative single-phase integrated charger, using the PEV propulsion machine and its traction converter, is introduced. The charger topology is capable of power factor correction and battery voltage/current regulation without any bulky add-on components. Ac machine windings are utilized as mutually coupled inductors, to construct a two-channel interleaved boost converter. The circuit analyses of the proposed technology, based on a permanent magnet synchronous machine (PMSM), are discussed in details. Experimental results of a 3-kW proof-of-concept prototype are carried out using a ${\\textrm{220-V}}_{{\\rm{rms}}}$, 3-phase, 8-pole PMSM. A nearly unity power factor and 3.96% total harmonic distortion of input ac current are acquired with a maximum efficiency of 93.1%.", "title": "" }, { "docid": "fb89fd2d9bf526b8bc7f1433274859a6", "text": "In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. 
The main goals of segmentation research for such situations ought to be (i) to provide effective control to the user on the segmentation process while it is being executed, and (ii) to minimize the total user’s time required in the process. With these goals in mind, we present in this paper two paradigms, referred to as live wire and live lane, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its “boundariness,” and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (livewire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes", "title": "" }, { "docid": "8cb5659bdbe9d376e2a3b0147264d664", "text": "Group brainstorming is widely adopted as a design method in the domain of software development. However, existing brainstorming literature has consistently proven group brainstorming to be ineffective under the controlled laboratory settings. Yet, electronic brainstorming systems informed by the results of these prior laboratory studies have failed to gain adoption in the field because of the lack of support for group well-being and member support. Therefore, there is a need to better understand brainstorming in the field. In this work, we seek to understand why and how brainstorming is actually practiced, rather than how brainstorming practices deviate from formal brainstorming rules, by observing brainstorming meetings at Microsoft. The results of this work show that, contrary to the conventional brainstorming practices, software teams at Microsoft engage heavily in the constraint discovery process in their brainstorming meetings. We identified two types of constraints that occur in brainstorming meetings. Functional constraints are requirements and criteria that define the idea space, whereas practical constraints are limitations that prioritize the proposed solutions.", "title": "" } ]
scidocsrr
9ed69e982cc40429518a3be5270ec540
Population validity for educational data mining models: A case study in affect detection
[ { "docid": "892c75c6b719deb961acfe8b67b982bb", "text": "Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. Two research communities -- Educational Data Mining (EDM) and Learning Analytics and Knowledge (LAK) have developed separately to address this need. This paper argues for increased and formal communication and collaboration between these communities in order to share research, methods, and tools for data mining and analysis in the service of developing both LAK and EDM fields.", "title": "" } ]
[ { "docid": "ffd45fa5cd9c2ce6b4dc7c5433864fd4", "text": "AIM\nTo evaluate validity of the Greek version of a global measure of perceived stress PSS-14 (Perceived Stress Scale - 14 item).\n\n\nMATERIALS AND METHODS\nThe original PSS-14 (theoretical range 0-56) was translated into Greek and then back-translated. One hundred men and women (39 +/- 10 years old, 40 men) participated in the validation process. Firstly, participants completed the Greek PSS-14 and, then they were interviewed by a psychologist specializing in stress management. Cronbach's alpha (a) evaluated internal consistency of the measurement, whereas Kendall's tau-b and Bland & Altman methods assessed consistency with the clinical evaluation. Exploratory and Confirmatory Factor analyses were conducted to reveal hidden factors within the data and to confirm the two-dimensional character of the scale.\n\n\nRESULTS\nMean (SD) PSS-14 score was 25(7.9). Strong internal consistency (Cronbach's alpha = 0.847) as well as moderate-to-good concordance between clinical assessment and PSS-14 (Kendall's tau-b = 0.43, p < 0.01) were observed. Two factors were extracted. Factor one explained 34.7% of variability and was heavily laden by positive items, and factor two that explained 10.6% of the variability by negative items. Confirmatory factor analysis revealed that the model with 2 factors had chi-square equal to 241.23 (p < 0.001), absolute fix indexes were good (i.e. GFI = 0.733, AGFI = 0.529), and incremental fix indexes were also adequate (i.e. NFI = 0.89 and CFI = 0.92).\n\n\nCONCLUSION\nThe developed Greek version of PSS-14 seems to be a valid instrument for the assessment of perceived stress in the Greek adult population living in urban areas; a finding that supports its local use in research settings as an evaluation tool measuring perceived stress, mainly as a risk factor but without diagnostic properties.", "title": "" }, { "docid": "340a2fd43f494bb1eba58629802a738c", "text": "A new image decomposition scheme, called the adaptive directional total variation (ADTV) model, is proposed to achieve effective segmentation and enhancement for latent fingerprint images in this work. The proposed model is inspired by the classical total variation models, but it differentiates itself by integrating two unique features of fingerprints; namely, scale and orientation. The proposed ADTV model decomposes a latent fingerprint image into two layers: cartoon and texture. The cartoon layer contains unwanted components (e.g., structured noise) while the texture layer mainly consists of the latent fingerprint. This cartoon-texture decomposition facilitates the process of segmentation, as the region of interest can be easily detected from the texture layer using traditional segmentation methods. The effectiveness of the proposed scheme is validated through experimental results on the entire NIST SD27 latent fingerprint database. The proposed scheme achieves accurate segmentation and enhancement results, leading to improved feature detection and latent matching performance.", "title": "" }, { "docid": "f70bd0a47eac274a1bb3b964f34e0a63", "text": "Although deep neural network (DNN) has achieved many state-of-the-art results, estimating the uncertainty presented in the DNN model and the data is a challenging task. 
Problems related to uncertainty such as classifying unknown classes (class which does not appear in the training data) data as known class with high confidence, is critically concerned in the safety domain area (e.g, autonomous driving, medical diagnosis). In this paper, we show that applying current Bayesian Neural Network (BNN) techniques alone does not effectively capture the uncertainty. To tackle this problem, we introduce a simple way to improve the BNN by using one class classification (in this paper, we use the term ”set classification” instead). We empirically show the result of our method on an experiment which involves three datasets: MNIST, notMNIST and FMNIST.", "title": "" }, { "docid": "2e93d2ba94e0c468634bf99be76706bb", "text": "Entheses are sites where tendons, ligaments, joint capsules or fascia attach to bone. Inflammation of the entheses (enthesitis) is a well-known hallmark of spondyloarthritis (SpA). As entheses are associated with adjacent, functionally related structures, the concepts of an enthesis organ and functional entheses have been proposed. This is important in interpreting imaging findings in entheseal-related diseases. Conventional radiographs and CT are able to depict the chronic changes associated with enthesitis but are of very limited use in early disease. In contrast, MRI is sensitive for detecting early signs of enthesitis and can evaluate both soft-tissue changes and intraosseous abnormalities of active enthesitis. It is therefore useful for the early diagnosis of enthesitis-related arthropathies and monitoring therapy. Current knowledge and typical MRI features of the most commonly involved entheses of the appendicular skeleton in patients with SpA are reviewed. The MRI appearances of inflammatory and degenerative enthesopathy are described. New options for imaging enthesitis, including whole-body MRI and high-resolution microscopy MRI, are briefly discussed.", "title": "" }, { "docid": "6f13d2d8e511f13f6979859a32e68fdd", "text": "As an innovative measurement technique, the so-called Fiber Bragg Grating (FBG) sensors are used to measure local and global strains in a growing number of application scenarios. FBGs facilitate a reliable method to sense strain over large distances and in explosive atmospheres. Currently, there is only little knowledge available concerning mechanical properties of FGBs, e.g. under quasi-static, cyclic and thermal loads. To address this issue, this work quantifies typical loads on FGB sensors in operating state and moreover aims to determine their mechanical response resulting from certain load cases. Copyright © 2013 IFSA.", "title": "" }, { "docid": "2dde173faac8d5cbb63aed8d379308fa", "text": "Delineating infarcted tissue in ischemic stroke lesions is crucial to determine the extend of damage and optimal treatment for this life-threatening condition. However, this problem remains challenging due to high variability of ischemic strokes’ location and shape. Recently, fully-convolutional neural networks (CNN), in particular those based on U-Net [27], have led to improved performances for this task [7]. In this work, we propose a novel architecture that improves standard U-Net based methods in three important ways. First, instead of combining the available image modalities at the input, each of them is processed in a different path to better exploit their unique information. 
Moreover, the network is densely-connected (i.e., each layer is connected to all following layers), both within each path and across different paths, similar to HyperDenseNet [11]. This gives our model the freedom to learn the scale at which modalities should be processed and combined. Finally, inspired by the Inception architecture [32], we improve standard U-Net modules by extending inception modules with two convolutional blocks with dilated convolutions of different scale. This helps handling the variability in lesion sizes. We split the 93 stroke datasets into training and validation sets containing 83 and 9 examples respectively. Our network was trained on a NVidia TITAN XP GPU with 16 GBs RAM, using ADAM as optimizer and a learning rate of 1×10−5 during 200 epochs. Training took around 5 hours and segmentation of a whole volume took between 0.2 and 2 seconds, as average. The performance on the test set obtained by our method is compared to several baselines, to demonstrate the effectiveness of our architecture, and to a state-of-art architecture that employs factorized dilated convolutions, i.e., ERFNet [26].", "title": "" }, { "docid": "ed0f70e6e53666a6f5562cfb082a9a9a", "text": "Biometrics aims at reliable and robust identification of humans from their personal traits, mainly for security and authentication purposes, but also for identifying and tracking the users of smarter applications. Frequently considered modalities are fingerprint, face, iris, palmprint and voice, but there are many other possible biometrics, including gait, ear image, retina, DNA, and even behaviours. This chapter presents a survey of machine learning methods used for biometrics applications, and identifies relevant research issues. We focus on three areas of interest: offline methods for biometric template construction and recognition, information fusion methods for integrating multiple biometrics to obtain robust results, and methods for dealing with temporal information. By introducing exemplary and influential machine learning approaches in the context of specific biometrics applications, we hope to provide the reader with the means to create novel machine learning solutions to challenging biometrics problems.", "title": "" }, { "docid": "4b051e3908eabb5f550094ebabf6583d", "text": "This paper presents a review of modern cooling system employed for the thermal management of power traction machines. Various solutions for heat extractions are described: high thermal conductivity insulation materials, spray cooling, high thermal conductivity fluids, combined liquid and air forced convection, and loss mitigation techniques.", "title": "" }, { "docid": "9cad66a6f3cfb1112a4072de71c6de3e", "text": "This paper presents a novel method for position sensorless control of high-speed brushless DC motors with low inductance and nonideal back electromotive force (EMF) in order to improve the reliability of the motor system of a magnetically suspended control moment gyro for space application. The commutation angle error of the traditional line-to-line voltage zero-crossing points detection method is analyzed. Based on the characteristics measurement of the nonideal back EMF, a two-stage commutation error compensation method is proposed to achieve the high-reliable and high-accurate commutation in the operating speed region of the proposed sensorless control process. 
The commutation angle error is compensated by the transformative line voltages, the hysteresis comparators, and the appropriate design of the low-pass filters in the low-speed and high-speed region, respectively. High-precision commutations are achieved especially in the high-speed region to decrease the motor loss in steady state. The simulated and experimental results show that the proposed method can achieve an effective compensation effect in the whole operating speed region.", "title": "" }, { "docid": "beba751220fc4f8df7be8d8e546150d0", "text": "Theoretical analysis and implementation of autonomous staircase detection and stair climbing algorithms on a novel rescue mobile robot are presented in this paper. The main goals are to find the staircase during navigation and to implement a fast, safe and smooth autonomous stair climbing algorithm. Silver is used here as the experimental platform. This tracked mobile robot is a tele-operative rescue mobile robot with great capabilities in climbing obstacles in destructed areas. Its performance has been demonstrated in rescue robot league of international RoboCup competitions. A fuzzy controller is applied to direct the robot during stair climbing. Controller inputs are generated by processing the range data from two LASER range finders which scan the environment one horizontally and the other vertically. The experimental results of stair detection algorithm and stair climbing controller are demonstrated at the end.", "title": "" }, { "docid": "817f9509afcdbafc60ecac2d0b8ef02d", "text": "Abstract—In most regards, the twenty-first century may not bring revolutionary changes in electronic messaging technology in terms of applications or protocols. Security issues that have long been a concern in messaging application are finally being solved using a variety of products. Web-based messaging systems are rapidly evolving the text-based conversation. The users have the right to protect their privacy from the eavesdropper, or other parties which interferes the privacy of the users for such purpose. The chatters most probably use the instant messages to chat with others for personal issue; in which no one has the right eavesdrop the conversation channel and interfere this privacy. This is considered as a non-ethical manner and the privacy of the users should be protected. The author seeks to identify the security features for most public instant messaging services used over the internet and suggest some solutions in order to encrypt the instant messaging over the conversation channel. The aim of this research is to investigate through forensics and sniffing techniques, the possibilities of hiding communication using encryption to protect the integrity of messages exchanged. Authors used different tools and methods to run the investigations. Such tools include Wireshark packet sniffer, Forensics Tool Kit (FTK) and viaForensic mobile forensic toolkit. Finally, authors will report their findings on the level of security that encryption could provide to instant messaging services.", "title": "" }, { "docid": "90dd589be3f8f78877367486e0f66e11", "text": "Patch-level descriptors underlie several important computer vision tasks, such as stereo-matching or content-based image retrieval. We introduce a deep convolutional architecture that yields patch-level descriptors, as an alternative to the popular SIFT descriptor for image retrieval. 
The proposed family of descriptors, called Patch-CKN, adapt the recently introduced Convolutional Kernel Network (CKN), an unsupervised framework to learn convolutional architectures. We present a comparison framework to benchmark current deep convolutional approaches along with Patch-CKN for both patch and image retrieval, including our novel \"RomePatches\" dataset. Patch-CKN descriptors yield competitive results compared to supervised CNN alternatives on patch and image retrieval.", "title": "" }, { "docid": "29a2c5082cf4db4f4dde40f18c88ca85", "text": "Human astrocytes are larger and more complex than those of infraprimate mammals, suggesting that their role in neural processing has expanded with evolution. To assess the cell-autonomous and species-selective properties of human glia, we engrafted human glial progenitor cells (GPCs) into neonatal immunodeficient mice. Upon maturation, the recipient brains exhibited large numbers and high proportions of both human glial progenitors and astrocytes. The engrafted human glia were gap-junction-coupled to host astroglia, yet retained the size and pleomorphism of hominid astroglia, and propagated Ca2+ signals 3-fold faster than their hosts. Long-term potentiation (LTP) was sharply enhanced in the human glial chimeric mice, as was their learning, as assessed by Barnes maze navigation, object-location memory, and both contextual and tone fear conditioning. Mice allografted with murine GPCs showed no enhancement of either LTP or learning. These findings indicate that human glia differentially enhance both activity-dependent plasticity and learning in mice.", "title": "" }, { "docid": "f4cbdcdb55e2bf49bcc62a79293f19b7", "text": "Network slicing for 5G provides Network-as-a-Service (NaaS) for different use cases, allowing network operators to build multiple virtual networks on a shared infrastructure. With network slicing, service providers can deploy their applications and services flexibly and quickly to accommodate diverse services’ specific requirements. As an emerging technology with a number of advantages, network slicing has raised many issues for the industry and academia alike. Here, the authors discuss this technology’s background and propose a framework. They also discuss remaining challenges and future research directions.", "title": "" }, { "docid": "029c5753adfbdcbfc38b92fbcc7f7e5c", "text": "The Internet of Things (IoT) is the latest evolution of the Internet, encompassing an enormous number of connected physical \"things.\" The access-control oriented (ACO) architecture was recently proposed for cloud-enabled IoT, with virtual objects (VOs) and cloud services in the middle layers. A central aspect of ACO is to control communication among VOs. This paper develops operational and administrative access control models for this purpose, assuming topic-based publishsubscribe interaction among VOs. Operational models are developed using (i) access control lists for topics and capabilities for virtual objects and (ii) attribute-based access control, and it is argued that role-based access control is not suitable for this purpose. Administrative models for these two operational models are developed using (i) access control lists, (ii) role-based access control, and (iii) attribute-based access control. A use case illustrates the details of these access control models for VO communication, and their differences. 
An assessment of these models with respect to security and privacy preserving objectives of IoT is also provided.", "title": "" }, { "docid": "9fd56a2261ade748404fcd0c6302771a", "text": "Despite limited scientific knowledge, stretching of human skeletal muscle to improve flexibility is a widespread practice among athletes. This article reviews recent findings regarding passive properties of the hamstring muscle group during stretch based on a model that was developed which could synchronously and continuously measure passive hamstring resistance and electromyographic activity, while the velocity and angle of stretch was controlled. Resistance to stretch was defined as passive torque (Nm) offered by the hamstring muscle group during passive knee extension using an isokinetic dynamometer with a modified thigh pad. To simulate a clinical static stretch, the knee was passively extended to a pre-determined final position (0.0875 rad/s, dynamic phase) where it remained stationary for 90 s (static phase). Alternatively, the knee was extended to the point of discomfort (stretch tolerance). From the torque-angle curve of the dynamic phase of the static stretch, and in the stretch tolerance protocol, passive energy and stiffness were calculated. Torque decline in the static phase was considered to represent viscoelastic stress relaxation. Using the model, studies were conducted which demonstrated that a single static stretch resulted in a 30% viscoelastic stress relaxation. With repeated stretches muscle stiffness declined, but returned to baseline values within 1 h. Long-term stretching (3 weeks) increased joint range of motion as a result of a change in stretch tolerance rather than in the passive properties. Strength training resulted in increased muscle stiffness, which was unaffected by daily stretching. The effectiveness of different stretching techniques was attributed to a change in stretch tolerance rather than passive properties. Inflexible and older subjects have increased muscle stiffness, but a lower stretch tolerance compared to subjects with normal flexibility and younger subjects, respectively. Although far from all questions regarding the passive properties of humans skeletal muscle have been answered in these studies, the measurement technique permitted some initial important examinations of vicoelastic behavior of human skeletal muscle.", "title": "" }, { "docid": "2d94f76a2c79b36c3fa8aeaf3f574bbd", "text": "In this paper I discuss the role of Machine Learning (ML) in sound design. I focus on the modelling of a particular aspect of human intelligence which is believed to play an important role in musical creativity: the Generalisation of Perceptual Attributes (GPA). By GPA I mean the process by which a listener tries to find common sound attributes when confronted with a series of sounds. The paper introduces the basics of GPA and ML in the context of ARTIST, a prototype case study system. ARTIST (Artificial Intelligence Sound Tools) is a sound design system that works in co-operation with the user, providing useful levels of automated reasoning to render the synthesis tasks less laborious (tasks such as calculating an appropriate stream of synthesis parameters for each single sound) and to enable the user to explore alternatives when designing a certain sound. The system synthesises sounds from input requests in a relatively high-level language; for instance, using attribute-value expressions such as \"normal vibrato\", \"high openness\" and \"sharp attack\". 
ARTIST stores information about sounds as clusters of attribute-value expressions and has the ability to interpret these expressions in the lower-level terms of sound synthesis algorithms. The user may, however, be interested in producing a sound which is \"unknown\" to the system. In this case, the system will attempt to compute the attribute values for this yet unknown sound by making analogies with other known sounds which have similar constituents. ARTIST uses ML to infer which sound attributes should be considered to make the analogies.", "title": "" }, { "docid": "20f6a794edae8857a04036afc84f532e", "text": "Genetic algorithms play a significant role, as search techniques forhandling complex spaces, in many fields such as artificial intelligence, engineering, robotic, etc. Genetic algorithms are based on the underlying genetic process in biological organisms and on the naturalevolution principles of populations. These algorithms process apopulation of chromosomes, which represent search space solutions,with three operations: selection, crossover and mutation. Under its initial formulation, the search space solutions are coded using the binary alphabet. However, the good properties related with these algorithms do not stem from the use of this alphabet; other coding types have been considered for the representation issue, such as real coding, which would seem particularly natural when tackling optimization problems of parameters with variables in continuous domains. In this paper we review the features of real-coded genetic algorithms. Different models of genetic operators and some mechanisms available for studying the behaviour of this type of genetic algorithms are revised and compared.", "title": "" }, { "docid": "91713d85bdccb2c06d7c50365bd7022c", "text": "A 1 Mbit MRAM, a nonvolatile memory that uses magnetic tunnel junction (MJT) storage elements, has been characterized for total ionizing dose (TID) and single event latchup (SEL). Our results indicate that these devices show no single event latchup up to an effective LET of 84 MeV-cm2/mg (where our testing ended) and no bit failures to a TID of 75 krad (Si).", "title": "" }, { "docid": "503756888df43d745e4fb5051f8855fb", "text": "The widespread use of email has raised serious privacy concerns. A critical issue is how to prevent email information leaks, i.e., when a message is accidentally addressed to non-desired recipients. This is an increasingly common problem that can severely harm individuals and corporations — for instance, a single email leak can potentially cause expensive law suits, brand reputation damage, negotiation setbacks and severe financial losses. In this paper we present the first attempt to solve this problem. We begin by redefining it as an outlier detection task, where the unintended recipients are the outliers. Then we combine real email examples (from the Enron Corpus) with carefully simulated leak-recipients to learn textual and network patterns associated with email leaks. This method was able to detect email leaks in almost 82% of the test cases, significantly outperforming all other baselines. More importantly, in a separate set of experiments we applied the proposed method to the task of finding real cases of email leaks. The result was encouraging: a variation of the proposed technique was consistently successful in finding two real cases of email leaks. 
Not only does this paper introduce the important problem of email leak detection, but also presents an effective solution that can be easily implemented in any email client — with no changes in the email server side.", "title": "" } ]
scidocsrr
d2948c21194cbc2254fd8603d3702a81
RaptorX-Property: a web server for protein structure property prediction
[ { "docid": "44bd234a8999260420bb2a07934887af", "text": "The purpose of this review is to assess the nature and magnitudes of the dominant forces in protein folding. Since proteins are only marginally stable at room temperature, no type of molecular interaction is unimportant, and even small interactions can contribute significantly (positively or negatively) to stability (Alber, 1989a,b; Matthews, 1987a,b). However, the present review aims to identify only the largest forces that lead to the structural features of globular proteins: their extraordinary compactness, their core of nonpolar residues, and their considerable amounts of internal architecture. This review explores contributions to the free energy of folding arising from electrostatics (classical charge repulsions and ion pairing), hydrogen-bonding and van der Waals interactions, intrinsic propensities, and hydrophobic interactions. An earlier review by Kauzmann (1959) introduced the importance of hydrophobic interactions. His insights were particularly remarkable considering that he did not have the benefit of known protein structures, model studies, high-resolution calorimetry, mutational methods, or force-field or statistical mechanical results. The present review aims to provide a reassessment of the factors important for folding in light of current knowledge. Also considered here are the opposing forces, conformational entropy and electrostatics. The process of protein folding has been known for about 60 years. In 1902, Emil Fischer and Franz Hofmeister independently concluded that proteins were chains of covalently linked amino acids (Haschemeyer & Haschemeyer, 1973) but deeper understanding of protein structure and conformational change was hindered because of the difficulty in finding conditions for solubilization. Chick and Martin (1911) were the first to discover the process of denaturation and to distinguish it from the process of aggregation. By 1925, the denaturation process was considered to be either hydrolysis of the peptide bond (Wu & Wu, 1925; Anson & Mirsky, 1925) or dehydration of the protein (Robertson, 1918). The view that protein denaturation was an unfolding process was", "title": "" }, { "docid": "5a1f4efc96538c1355a2742f323b7a0e", "text": "A great challenge in the proteomics and structural genomics era is to predict protein structure and function, including identification of those proteins that are partially or wholly unstructured. Disordered regions in proteins often contain short linear peptide motifs (e.g., SH3 ligands and targeting signals) that are important for protein function. We present here DisEMBL, a computational tool for prediction of disordered/unstructured regions within a protein sequence. As no clear definition of disorder exists, we have developed parameters based on several alternative definitions and introduced a new one based on the concept of \"hot loops,\" i.e., coils with high temperature factors. Avoiding potentially disordered segments in protein expression constructs can increase expression, foldability, and stability of the expressed protein. DisEMBL is thus useful for target selection and the design of constructs as needed for many biochemical studies, particularly structural biology and structural genomics projects. The tool is freely available via a web interface (http://dis.embl.de) and can be downloaded for use in large-scale studies.", "title": "" } ]
[ { "docid": "f1e5e00fe3a0610c47918de526e87dc6", "text": "The current paper reviews research that has explored the intergenerational effects of the Indian Residential School (IRS) system in Canada, in which Aboriginal children were forced to live at schools where various forms of neglect and abuse were common. Intergenerational IRS trauma continues to undermine the well-being of today's Aboriginal population, and having a familial history of IRS attendance has also been linked with more frequent contemporary stressor experiences and relatively greater effects of stressors on well-being. It is also suggested that familial IRS attendance across several generations within a family appears to have cumulative effects. Together, these findings provide empirical support for the concept of historical trauma, which takes the perspective that the consequences of numerous and sustained attacks against a group may accumulate over generations and interact with proximal stressors to undermine collective well-being. As much as historical trauma might be linked to pathology, it is not possible to go back in time to assess how previous traumas endured by Aboriginal peoples might be related to subsequent responses to IRS trauma. Nonetheless, the currently available research demonstrating the intergenerational effects of IRSs provides support for the enduring negative consequences of these experiences and the role of historical trauma in contributing to present day disparities in well-being.", "title": "" }, { "docid": "c38dc288a59e39785dfa87f46d2371e5", "text": "Silver molybdate (Ag2MoO4) and silver tungstate (Ag2WO4) nanomaterials were prepared using two complementary methods, microwave assisted hydrothermal synthesis (MAH) (pH 7, 140 °C) and coprecipitation (pH 4, 70 °C), and were then used to prepare two core/shell composites, namely α-Ag2WO4/β-Ag2MoO4 (MAH, pH 4, 140 °C) and β-Ag2MoO4/β-Ag2WO4 (coprecipitation, pH 4, 70 °C). The shape and size of the microcrystals were observed by field emission scanning electron microscopy (FE-SEM), different morphologies such as balls and nanorods. These powders were characterized by X-ray powder diffraction and UV-vis (diffuse reflectance and photoluminescence). X-ray diffraction patterns showed that the Ag2MoO4 samples obtained by the two methods were single-phased and belonged to the β-Ag2MoO4 structure (spinel type). In contrast, the Ag2WO4 obtained in the two syntheses were structurally different: MAH exhibited the well-known tetrameric stable structure α-Ag2WO4, while coprecipitation afforded the metastable β-Ag2WO4 allotrope, coexisting with a weak amount of the α-phase. The optical gap of β-Ag2WO4 (3.3 eV) was evaluated for the first time. In contrast to β-Ag2MoO4/β-Ag2WO4, the αAg2WO4/β-Ag2MoO4 exhibited strongly-enhanced photoluminescence in the low-energy band (650 nm), tentatively explained by the creation of a large density of local defects (distortions) at the core-shell interface, due to the presence of two different types of MOx polyhedra in the two structures.", "title": "" }, { "docid": "d8938884a61e7c353d719dbbb65d00d0", "text": "Image encryption plays an important role to ensure confidential transmission and storage of image over internet. However, a real–time image encryption faces a greater challenge due to large amount of data involved. 
This paper presents a review on image encryption techniques of both full encryption and partial encryption schemes in spatial, frequency and hybrid domains.", "title": "" }, { "docid": "ce63aad5288d118eb6ca9d99b96e9cac", "text": "Unknown malware has increased dramatically, but the existing security software cannot identify them effectively. In this paper, we propose a new malware detection and classification method based on n-grams attribute similarity. We extract all n-grams of byte codes from training samples and select the most relevant as attributes. After calculating the average value of attributes in malware and benign separately, we determine a test sample is malware or benign by attribute similarity between attributes of the test sample and the two average attributes of malware and benign. We compare our method with a variety of machine learning methods, including Naïve Bayes, Bayesian Networks, Support Vector Machine and C4.5 Decision Tree. Experimental results on public (Open Malware Benchmark) and private (self-collected) datasets both reveal that our method outperforms the other four methods.", "title": "" }, { "docid": "c00c6539b78ed195224063bcff16fb12", "text": "Information Retrieval (IR) systems assist users in finding information from the myriad of information resources available on the Web. A traditional characteristic of IR systems is that if different users submit the same query, the system would yield the same list of results, regardless of the user. Personalised Information Retrieval (PIR) systems take a step further to better satisfy the user’s specific information needs by providing search results that are not only of relevance to the query but are also of particular relevance to the user who submitted the query. PIR has thereby attracted increasing research and commercial attention as information portals aim at achieving user loyalty by improving their performance in terms of effectiveness and user satisfaction. In order to provide a personalised service, a PIR system maintains information about the users and the history of their interactions with the system. This information is then used to adapt the users’ queries or the results so that information that is more relevant to the users is retrieved and presented. This survey paper features a critical review of PIR systems, with a focus on personalised search. The survey provides an insight into the stages involved in building and evaluating PIR systems, namely: information gathering, information representation, personalisation execution, and system evaluation. Moreover, the survey provides an analysis of PIR systems with respect to the scope of personalisation addressed. The survey proposes a classification of PIR systems into three scopes: individualised systems, community-based systems, and aggregate-level systems. Based on the conducted survey, the paper concludes by highlighting challenges and future research directions in the field of PIR.", "title": "" }, { "docid": "d6707c10e68dcbb5cde0920631bdaf8b", "text": "Game playing has been an important testbed for artificial intelligence. Board games, first-person shooters, and real-time strategy games have well-defined win conditions and rely on strong feedback from a simulated environment. Text adventures require natural language understanding to progress through the game but still have an underlying simulated environment. 
In this paper, we propose tabletop roleplaying games as a challenge due to an infinite action space, multiple (collaborative) players and models of the world, and no explicit reward signal. We present an approach for reinforcement learning agents that can play tabletop roleplaying games.", "title": "" }, { "docid": "5411326f95abd20a141ad9e9d3ff72bf", "text": "media files and almost universal use of email, information sharing is almost instantaneous anywhere in the world. Because many of the procedures performed in dentistry represent established protocols that should be read, learned and then practiced, it becomes clear that photography aids us in teaching or explaining to our patients what we think are common, but to them are complex and mysterious procedures. Clinical digital photography. Part 1: Equipment and basic documentation", "title": "" }, { "docid": "ce174b6dce6e2dee62abca03b4a95112", "text": "This article proposes a novel framework for representing and measuring local coherence. Central to this approach is the entity-grid representation of discourse, which captures patterns of entity distribution in a text. The algorithm introduced in the article automatically abstracts a text into a set of entity transition sequences and records distributional, syntactic, and referential information about discourse entities. We re-conceptualize coherence assessment as a learning task and show that our entity-based representation is well-suited for ranking-based generation and text classification tasks. Using the proposed representation, we achieve good performance on text ordering, summary coherence evaluation, and readability assessment.", "title": "" }, { "docid": "3f33882e4bece06e7a553eb9133f8aa9", "text": "Research on the relationship between affect and cognition in Artificial Intelligence in Education (AIEd) brings an important dimension to our understanding of how learning occurs and how it can be facilitated. Emotions are crucial to learning, but their nature, the conditions under which they occur, and their exact impact on learning for different learners in diverse contexts still needs to be mapped out. The study of affect during learning can be challenging, because emotions are subjective, fleeting phenomena that are often difficult for learners to report accurately and for observers to perceive reliably. Context forms an integral part of learners’ affect and the study thereof. This review provides a synthesis of the current knowledge elicitation methods that are used to aid the study of learners’ affect and to inform the design of intelligent technologies for learning. Advantages and disadvantages of the specific methods are discussed along with their respective potential for enhancing research in this area, and issues related to the interpretation of data that emerges as the result of their use. References to related research are also provided together with illustrative examples of where the individual methods have been used in the past. Therefore, this review is intended as a resource for methodological decision making for those who want to study emotions and their antecedents in AIEd contexts, i.e. where the aim is to inform the design and implementation of an intelligent learning environment or to evaluate its use and educational efficacy.", "title": "" }, { "docid": "cd877197b06304b379d5caf9b5b89d30", "text": "Research is now required on factors influencing adults' sedentary behaviors, and effective approaches to behavioral-change intervention must be identified. 
The strategies for influencing sedentary behavior will need to be informed by evidence on the most important modifiable behavioral determinants. However, much of the available evidence relevant to understanding the determinants of sedentary behaviors is from cross-sectional studies, which are limited in that they identify only behavioral \"correlates.\" As is the case for physical activity, a behavior- and context-specific approach is needed to understand the multiple determinants operating in the different settings within which these behaviors are most prevalent. To this end, an ecologic model of sedentary behaviors is described, highlighting the behavior settings construct. The behaviors and contexts of primary concern are TV viewing and other screen-focused behaviors in domestic environments, prolonged sitting in the workplace, and time spent sitting in automobiles. Research is needed to clarify the multiple levels of determinants of prolonged sitting time, which are likely to operate in distinct ways in these different contexts. Controlled trials on the feasibility and efficacy of interventions to reduce and break up sedentary behaviors among adults in domestic, workplace, and transportation environments are particularly required. It would be informative for the field to have evidence on the outcomes of \"natural experiments,\" such as the introduction of nonseated working options in occupational environments or new transportation infrastructure in communities.", "title": "" }, { "docid": "0e521af53f9faf4fee38843a22ec2185", "text": "Steering of main beam of radiation at fixed millimeter wave frequency in a Substrate Integrated Waveguide (SIW) Leaky Wave Antenna (LWA) has not been investigated so far in literature. In this paper a Half-Mode Substrate Integrated Waveguide (HMSIW) LWA is proposed which has the capability to steer its main beam at fixed millimeter wave frequency of 24GHz. Beam steering is made feasible by changing the capacitance of the capacitors, connected at the dielectric side of HMSIW. The full wave EM simulations show that the main beam scans from 36° to 57° in the first quadrant.", "title": "" }, { "docid": "fb4630a6b558ac9b8d8444275e1978e3", "text": "Relational graphs are widely used in modeling large scale networks such as biological networks and social networks. In this kind of graph, connectivity becomes critical in identifying highly associated groups and clusters. In this paper, we investigate the issues of mining closed frequent graphs with connectivity constraints in massive relational graphs where each graph has around 10K nodes and 1M edges. We adopt the concept of edge connectivity and apply the results from graph theory, to speed up the mining process. Two approaches are developed to handle different mining requests: CloseCut, a pattern-growth approach, and splat, a pattern-reduction approach. We have applied these methods in biological datasets and found the discovered patterns interesting.", "title": "" }, { "docid": "12a8d007ca4dce21675ddead705c7b62", "text": "This paper presents an ethnographic account of the implementation of Lean service redesign methodologies in one UK NHS hospital operating department. It is suggested that this popular management 'technology', with its emphasis on creating value streams and reducing waste, has the potential to transform the social organisation of healthcare work. 
The paper locates Lean healthcare within wider debates related to the standardisation of clinical practice, the re-configuration of occupational boundaries and the stratification of clinical communities. Drawing on the 'technologies-in-practice' perspective the study is attentive to the interaction of both the intent to transform work and the response of clinicians to this intent as an ongoing and situated social practice. In developing this analysis this article explores three dimensions of social practice to consider the way Lean is interpreted and articulated (rhetoric), enacted in social practice (ritual), and experienced in the context of prevailing lines of power (resistance). Through these interlinked analytical lenses the paper suggests the interaction of Lean and clinical practice remains contingent and open to negotiation. In particular, Lean follows in a line of service improvements that bring to the fore tensions between clinicians and service leaders around the social organisation of healthcare work. The paper concludes that Lean might not be the easy remedy for making both efficiency and effectiveness improvements in healthcare.", "title": "" }, { "docid": "cb70ab2056242ca739adde4751fbca2c", "text": "In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-ofwords and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations. 1", "title": "" }, { "docid": "b81b29c232fb9cb5dcb2dd7e31003d77", "text": "Attendance and academic success are directly related in educational institutions. The continual absence of students in lecture, practical and tutorial is one of the major problems of decadence in the performance of academic. The authorized person needs to prohibit truancy for solving the problem. In existing system, the attendance is recorded by calling of the students’ name, signing on paper, using smart card and so on. These methods are easy to fake and to give proxy for the absence student. For solving inconvenience, fingerprint based attendance system with notification to guardian is proposed. The attendance is recorded using fingerprint module and stored it to the database via SD card. This system can calculate the percentage of attendance record monthly and store the attendance record in database for one year or more. In this system, attendance is recorded two times for one day and then it will also send alert message using GSM module if the attendance of students don’t have eight times for one week. By sending the alert message to the respective individuals every week, necessary actions can be done early. It can also reduce the cost of SMS charge and also have more attention for guardians. The main components of this system are Fingerprint module, Microcontroller, GSM module and SD card with SD card module. 
This system has been developed using Arduino IDE, Eclipse and MySQL Server.", "title": "" }, { "docid": "545509f9e3aa65921a7d6faa41247ae6", "text": "BACKGROUND\nPenicillins inhibit cell wall synthesis; therefore, Helicobacter pylori must be dividing for this class of antibiotics to be effective in eradication therapy. Identifying growth responses to varying medium pH may allow design of more effective treatment regimens.\n\n\nAIM\nTo determine the effects of acidity on bacterial growth and the bactericidal efficacy of ampicillin.\n\n\nMETHODS\nH. pylori were incubated in dialysis chambers suspended in 1.5-L of media at various pHs with 5 mM urea, with or without ampicillin, for 4, 8 or 16 h, thus mimicking unbuffered gastric juice. Changes in gene expression, viability and survival were determined.\n\n\nRESULTS\nAt pH 3.0, but not at pH 4.5 or 7.4, there was decreased expression of ~400 genes, including many cell envelope biosynthesis, cell division and penicillin-binding protein genes. Ampicillin was bactericidal at pH 4.5 and 7.4, but not at pH 3.0.\n\n\nCONCLUSIONS\nAmpicillin is bactericidal at pH 4.5 and 7.4, but not at pH 3.0, due to decreased expression of cell envelope and division genes with loss of cell division at pH 3.0. Therefore, at pH 3.0, the likely pH at the gastric surface, the bacteria are nondividing and persist with ampicillin treatment. A more effective inhibitor of acid secretion that maintains gastric pH near neutrality for 24 h/day should enhance the efficacy of amoxicillin, improving triple therapy and likely even allowing dual amoxicillin-based therapy for H. pylori eradication.", "title": "" }, { "docid": "38f289b085f2c6e2d010005f096d8fd7", "text": "We present easy-to-use TensorFlow Hub sentence embedding models having good task transfer performance. Model variants allow for trade-offs between accuracy and compute resources. We report the relationship between model complexity, resources, and transfer performance. Comparisons are made with baselines without transfer learning and to baselines that incorporate word-level transfer. Transfer learning using sentence-level embeddings is shown to outperform models without transfer learning and often those that use only word-level transfer. We show good transfer task performance with minimal training data and obtain encouraging results on word embedding association tests (WEAT) of model bias.", "title": "" }, { "docid": "7d14bd767964cba3cfc152ee20c7ffbc", "text": "Most typical statistical and machine learning approaches to time series modeling optimize a singlestep prediction error. In multiple-step simulation, the learned model is iteratively applied, feeding through the previous output as its new input. Any such predictor however, inevitably introduces errors, and these compounding errors change the input distribution for future prediction steps, breaking the train-test i.i.d assumption common in supervised learning. We present an approach that reuses training data to make a no-regret learner robust to errors made during multi-step prediction. Our insight is to formulate the problem as imitation learning; the training data serves as a “demonstrator” by providing corrections for the errors made during multi-step prediction. By this reduction of multistep time series prediction to imitation learning, we establish theoretically a strong performance guarantee on the relation between training error and the multi-step prediction error. 
We present experimental results of our method, DAD, and show significant improvement over the traditional approach in two notably different domains, dynamic system modeling and video texture prediction. Determining models for time series data is important in applications ranging from market prediction to the simulation of chemical processes and robotic systems. Many supervised learning approaches have been proposed for this task, such as neural networks (Narendra and Parthasarathy 1990), Expectation-Maximization (Ghahramani and Roweis 1999; Coates, Abbeel, and Ng 2008), Support Vector Regression (Müller, Smola, and Rätsch 1997), Gaussian process regression (Wang, Hertzmann, and Blei 2005; Ko et al. 2007), Nadaraya-Watson kernel regression (Basharat and Shah 2009), Gaussian mixture models (Khansari-Zadeh and Billard 2011), and Kernel PCA (Ralaivola and D’Alche-Buc 2004). Common to most of these methods is that the objective being optimized is the single-step prediction loss. However, this criterion does not guarantee accurate multiple-step simulation accuracy in which the output of a prediction step is used as input for the next inference. The prevalence of single-step modeling approaches is a result of the difficulty in directly optimizing the multipleCopyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. step prediction error. As an example, consider fitting a simple linear dynamical system model for the multi-step error over the time horizon T from an initial condition x0,", "title": "" }, { "docid": "dd3781fe97c7dd935948c55584313931", "text": "The radiation of RFID antitheft gate system has been simulated in FEKO. The obtained numerical results for the electric field and magnetic field have been compared to the exposure limits proposed by the ICNIRP Guidelines. No significant violation of limits, regarding both occupational and public exposure, has been shown.", "title": "" }, { "docid": "53b32cdb6c3d511180d8cb194c286ef5", "text": "Silymarin, a C25 containing flavonoid from the plant Silybum marianum, has been the gold standard drug to treat liver disorders associated with alcohol consumption, acute and chronic viral hepatitis, and toxin-induced hepatic failures since its discovery in 1960. Apart from the hepatoprotective nature, which is mainly due to its antioxidant and tissue regenerative properties, Silymarin has recently been reported to be a putative neuroprotective agent against many neurologic diseases including Alzheimer's and Parkinson's diseases, and cerebral ischemia. Although the underlying neuroprotective mechanism of Silymarin is believed to be due to its capacity to inhibit oxidative stress in the brain, it also confers additional advantages by influencing pathways such as β-amyloid aggregation, inflammatory mechanisms, cellular apoptotic machinery, and estrogenic receptor mediation. In this review, we have elucidated the possible neuroprotective effects of Silymarin and the underlying molecular events, and suggested future courses of action for its acceptance as a CNS drug for the treatment of neurodegenerative diseases.", "title": "" } ]
scidocsrr
381a180ecd74e87262ec5c5be0ccbe97
Facial Action Coding System
[ { "docid": "6b6285cd8512a2376ae331fda3fedf20", "text": "The Facial Action Coding System (FACS) (Ekman & Friesen, 1978) is a comprehensive and widely used method of objectively describing facial activity. Little is known, however, about inter-observer reliability in coding the occurrence, intensity, and timing of individual FACS action units. The present study evaluated the reliability of these measures. Observational data came from three independent laboratory studies designed to elicit a wide range of spontaneous expressions of emotion. Emotion challenges included olfactory stimulation, social stress, and cues related to nicotine craving. Facial behavior was video-recorded and independently scored by two FACS-certified coders. Overall, we found good to excellent reliability for the occurrence, intensity, and timing of individual action units and for corresponding measures of more global emotion-specified combinations.", "title": "" } ]
[ { "docid": "a65d1881f5869f35844064d38b684ac8", "text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.", "title": "" }, { "docid": "8fc758632346ce45e8f984018cde5ece", "text": "Today Recommendation systems [3] have become indispensible because of the sheer bulk of information made available to a user from web-services(Netflix, IMDB, Amazon and many others) and the need for personalized suggestions. Recommendation systems are a well studied research area. In the following work, we present our study on the Netflix Challenge [1]. The Neflix Challenge can be summarized in the following way: ”Given a movie, predict the rating of a particular user based on the user’s prior ratings”. The performance of all such approaches is measured using the RMSE (root mean-squared error) of the submitted ratings from the actual ratings. Currently, the best system has an RMSE of 0.8616 [2]. We obtained ratings from the following approaches:", "title": "" }, { "docid": "c197198ca45acec2575d5be26fc61f36", "text": "General systems theory has been proposed as a basis for the unification of science. The open systems model has stimulated many new conceptualizations in organization theory and management practice. However, experience in utilizing these concepts suggests many unresolved dilemmas. Contingency views represent a step toward less abstraction, more explicit patterns of relationships, and more applicable theory. Sophistication will come when we have a more complete understanding of organizations as total systems (configurations of subsystems) so that we can prescribe more appropriate organizational designs and managerial systems. Ultimately, organization theory should serve as the foundation for more effective management practice.", "title": "" }, { "docid": "12eff845ccb6e5cc2b2fbe74935aff46", "text": "The study of this paper presents a new technique to use automatic number plate detection and recognition. This system plays a significant role throughout this busy world, owing to rise in use of vehicles day-by-day. Some of the applications of this software are automatic toll tax collection, unmanned parking slots, safety, and security. The current scenario happening in India is, people, break the rules of the toll and move away which can cause many serious issues like accidents. This system uses efficient algorithms to detect the vehicle number from real-time images. The system detects the license plate on the vehicle first and then captures the image of it. Vehicle number plate is localized and characters are segmented and further recognized with help of neural network. The system is designed for grayscale images so it detects the number plate regardless of its color. 
The resulting vehicle number plate is then compared with the available database of all vehicles which have been already registered by the users so as to come up with information about vehicle type and charge accordingly. The vehicle information such as date, toll amount is stored in the database to maintain the record.", "title": "" }, { "docid": "5f20ed750fc260f40d01e8ac5ddb633d", "text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii CHAPTER", "title": "" }, { "docid": "f1cfd3980bb7dc78309074012be3cf03", "text": "A chatbot is a conversational agent that interacts with users using natural language. Multi chatbots are available to serve in different domains. However, the knowledge base of chatbots is hand coded in its brain. This paper presents an overview of ALICE chatbot, its AIML format, and our experiments to generate different prototypes of ALICE automatically based on a corpus approach. A description of developed software which converts readable text (corpus) into AIML format is presented alongside with describing the different corpora we used. Our trials revealed the possibility of generating useful prototypes without the need for sophisticated natural language processing or complex machine learning techniques. These prototypes were used as tools to practice different languages, to visualize corpus, and to provide answers for questions.", "title": "" }, { "docid": "22ad4568fbf424592c24783fb3037f62", "text": "We propose an unsupervised learning technique for extracting information about authors and topics from large text collections. We model documents as if they were generated by a two-stage stochastic process. An author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words. The probability distribution over topics in a multi-author paper is a mixture of the distributions associated with the authors. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to three large text corpora: 150,000 abstracts from the CiteSeer digital library, 1740 papers from the Neural Information Processing Systems (NIPS) Conferences, and 121,000 emails from the Enron corporation. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, parsing of abstracts by topics and authors, and detection of unusual papers by specific authors. Experiments based on perplexity scores for test documents and precision-recall for document retrieval are used to illustrate systematic differences between the proposed author-topic model and a number of alternatives. Extensions to the model, allowing for example, generalizations of the notion of an author, are also briefly discussed.", "title": "" }, { "docid": "34bfec0f1f7eb748b3632bbf288be3bd", "text": "An omnidirectional mobile robot is able, kinematically, to move in any direction regardless of current pose. To date, nearly all designs and analyses of omnidirectional mobile robots have considered the case of motion on flat, smooth terrain. In this paper, an investigation of the design and control of an omnidirectional mobile robot for use in rough terrain is presented. Kinematic and geometric properties of the active split offset caster drive mechanism are investigated along with system and subsystem design guidelines. 
An optimization method is implemented to explore the design space. The use of this method results in a robot that has higher mobility than a robot designed using engineering judgment. A simple kinematic controller that considers the effects of terrain unevenness via an estimate of the wheel-terrain contact angles is also presented. It is shown in simulation that under the proposed control method, near-omnidirectional tracking performance is possible even in rough, uneven terrain. DOI: 10.1115/1.4000214", "title": "" }, { "docid": "e364db9141c85b1f260eb3a9c1d42c5b", "text": "Ten US presidential elections ago in Chapel Hill, North Carolina, the agenda of issues that a small group of undecided voters regarded as the most important ones of the day was compared with the news coverage of public issues in the news media these voters used to follow the campaign (McCombs and Shaw, 1972). Since that election, the principal finding in Chapel Hill (those aspects of public affairs that are prominent in the news become prominent among the public) has been replicated in hundreds of studies worldwide. These replications include both election and non-election settings for a broad range of public issues and other aspects of political communication and extend beyond the United States to Europe, Asia, Latin America and Australia. Recently, as the news media have expanded to include online newspapers available on the Web, agenda-setting effects have been documented for these new media. All in all, this research has grown far beyond its original domain (the transfer of salience from the media agenda to the public agenda) and now encompasses five distinct stages of theoretical attention. Until very recently, the ideas and findings that detail these five stages of agenda-setting theory have been scattered in a wide variety of research journals, book chapters and books published in many different countries. As a result, knowledge of agenda setting has been very unevenly distributed. Scholars designing new studies often had incomplete knowledge of previous research, and graduate students entering the field of mass communication had difficulty learning in detail what we know about the agenda-setting role of the mass media. This situation was my incentive to write Setting the Agenda: the mass media and public opinion, which was published in England in late 2004 and in the United States early in 2005. My primary goal was to gather the principal ideas and empirical findings about agenda setting in one place. John Pavlik has described this integrated presentation as the Gray’s Anatomy of agenda setting (McCombs, 2004, p. xii). Shortly after the US publication of Setting the Agenda, I received an invitation from Journalism Studies to prepare an overview of agenda setting. The timing was wonderfully fortuitous because a book-length presentation of what we have learned in the years since Chapel Hill could be coupled with a detailed discussion in a major journal of current trends and future likely directions in agenda-setting research. Journals are the best venue for advancing the step-by-step accretion of knowledge because they typically reach larger audiences than books, generate more widespread discussion and offer more space for the focused presentation of a particular aspect of a research area. Books can then periodically distill this knowledge.
Given the availability of a detailed overview in Setting the Agenda , the presentation here of the five stages of agenda-setting theory emphasizes current and near-future research questions in these areas. Moving beyond these specific Journalism Studies, Volume 6, Number 4, 2005, pp. 543 557", "title": "" }, { "docid": "abdffec5ea2b05b61006cc7b6b295976", "text": "Making recommendation requires predicting what is of interest to a user at a specific time. Even the same user may have different desires at different times. It is important to extract the aggregate interest of a user from his or her navigational path through the site in a session. This paper concentrates on the discovery and modelling of the user’s aggregate interest in a session. This approach relies on the premise that the visiting time of a page is an indicator of the user’s interest in that page. The proportion of times spent in a set of pages requested by the user within a single session forms the aggregate interest of that user in that session. We first partition user sessions into clusters such that only sessions which represent similar aggregate interest of users are placed in the same cluster. We employ a model-based clustering approach and partition user sessions according to similar amount of time in similar pages. In particular, we cluster sessions by learning a mixture of Poisson models using Expectation Maximization algorithm. The resulting clusters are then used to recommend pages to a user that are most likely contain the information which is of interest to that user at that time. Although the approach does not use the sequential patterns of transactions, experimental evaluation shows that the approach is quite effective in capturing a Web user’s access pattern. The model has an advantage over previous proposals in terms of speed and memory usage.", "title": "" }, { "docid": "53b48550158b06dfbdb8c44a4f7241c6", "text": "The primary aim of the study was to examine the relationship between media exposure and body image in adolescent girls, with a particular focus on the ‘new’ and as yet unstudied medium of the Internet. A sample of 156 Australian female high school students (mean age= 14.9 years) completed questionnaire measures of media consumption and body image. Internet appearance exposure and magazine reading, but not television exposure, were found to be correlated with greater internalization of thin ideals, appearance comparison, weight dissatisfaction, and drive for thinness. Regression analyses indicated that the effects of magazines and Internet exposure were mediated by internalization and appearance comparison. It was concluded that the Internet represents a powerful sociocultural influence on young women’s lives.", "title": "" }, { "docid": "f3b0bace6028b3d607618e2e53294704", "text": "State-of-the art spoken language understanding models that automatically capture user intents in human to machine dialogs are trained with manually annotated data, which is cumbersome and time-consuming to prepare. For bootstrapping the learning algorithm that detects relations in natural language queries to a conversational system, one can rely on publicly available knowledge graphs, such as Freebase, and mine corresponding data from the web. In this paper, we present an unsupervised approach to discover new user intents using a novel Bayesian hierarchical graphical model. Our model employs search query click logs to enrich the information extracted from bootstrapped models. 
We use the clicked URLs as implicit supervision and extend the knowledge graph based on the relational information discovered from this model. The posteriors from the graphical model relate the newly discovered intents with the search queries. These queries are then used as additional training examples to complement the bootstrapped relation detection models. The experimental results demonstrate the effectiveness of this approach, showing extended coverage to new intents without impacting the known intents.", "title": "" }, { "docid": "6efdf43a454ce7da51927c07f1449695", "text": "We investigate efficient representations of functions that can be written as outputs of so-called sum-product networks, that alternate layers of product and sum operations (see Fig 1 for a simple sum-product network). We find that there exist families of such functions that can be represented much more efficiently by deep sum-product networks (i.e. allowing multiple hidden layers), compared to shallow sum-product networks (constrained to using a single hidden layer). For instance, there is a family of functions fn where n is the number of input variables, such that fn can be computed with a deep sum-product network of log 2 n layers and n−1 units, while a shallow sum-product network (two layers) requires 2 √ n−1 units. These mathematical results are in the same spirit as those by H̊astad and Goldmann (1991) on the limitations of small depth computational circuits. They motivate using deep networks to be able to model complex functions more efficiently than with shallow networks. Exponential gains in terms of the number of parameters are quite significant in the context of statistical machine learning. Indeed, the number of training samples required to optimize a model’s parameters without suffering from overfitting typically increases with the number of parameters. Deep networks thus offer a promising way to learn complex functions from limited data, even though parameter optimization may still be challenging.", "title": "" }, { "docid": "296025d4851569031f0ebe36d792fadc", "text": "In this paper we present the first, to the best of our knowledge, discourse parser that is able to predict non-tree DAG structures. We use Integer Linear Programming (ILP) to encode both the objective function and the constraints as global decoding over local scores. Our underlying data come from multi-party chat dialogues, which require the prediction of DAGs. We use the dependency parsing paradigm, as has been done in the past (Muller et al., 2012; Li et al., 2014; Afantenos et al., 2015), but we use the underlying formal framework of SDRT and exploit SDRT’s notions of left and right distributive relations. We achieve an Fmeasure of 0.531 for fully labeled structures which beats the previous state of the art.", "title": "" }, { "docid": "496ba5ee48281afe48b5afce02cc4dbf", "text": "OBJECTIVE\nThis study examined the relationship between reported exposure to child abuse and a history of parental substance abuse (alcohol and drugs) in a community sample in Ontario, Canada.\n\n\nMETHOD\nThe sample consisted of 8472 respondents to the Ontario Mental Health Supplement (OHSUP), a comprehensive population survey of mental health. The association of self-reported retrospective childhood physical and sexual abuse and parental histories of drug or alcohol abuse was examined.\n\n\nRESULTS\nRates of physical and sexual abuse were significantly higher, with a more than twofold increased risk among those reporting parental substance abuse histories. 
The rates did not differ significantly by type or severity of abuse. Successively increasing rates of abuse were found for those respondents who reported that their fathers, mothers or both parents had substance abuse problems; this risk was significantly elevated when both parents, rather than the father only, had a substance abuse problem.\n\n\nCONCLUSIONS\nParental substance abuse is associated with a more than twofold increase in the risk of exposure to both childhood physical and sexual abuse. While the mechanism for this association remains unclear, agencies involved in child protection or in treatment of parents with substance abuse problems must be cognizant of this relationship and focus on the development of interventions to serve these families.", "title": "" }, { "docid": "461ec14463eb20962ef168de781ac2a2", "text": "Local descriptors based on the image noise residual have proven extremely effective for a number of forensic applications, like forgery detection and localization. Nonetheless, motivated by promising results in computer vision, the focus of the research community is now shifting to deep learning. In this paper we show that a class of residual-based descriptors can actually be regarded as a simple constrained convolutional neural network (CNN). Then, by relaxing the constraints, and fine-tuning the net on a relatively small training set, we obtain a significant performance improvement with respect to the conventional detector.", "title": "" }, { "docid": "eae289c213d5b67d91bb0f461edae7af", "text": "China has made remarkable progress in its war against poverty since the launching of economic reform in the late 1970s. This paper examines some of the major driving forces of poverty reduction in China. Based on time series and cross-sectional provincial data, the determinants of rural poverty incidence are estimated. The results show that economic growth is an essential and necessary condition for nationwide poverty reduction. It is not, however, a sufficient condition. While economic growth played a dominant role in reducing poverty through the mid-1990s, its impact has diminished since that time. Beyond general economic growth, growth in specific sectors of the economy is also found to reduce poverty. For example, the growth of the agricultural sector and other pro-rural (vs urban-biased) development efforts can also have significant impacts on rural poverty. Notwithstanding the record of the past, our paper is consistent with the idea that poverty reduction in the future will need to rely on more than broad-based growth and instead be dependent on pro-poor policy interventions (such as national poverty alleviation programs) that can be targeted at the poor, trying to directly help the poor to increase their human capital and incomes. Determinants of Rural Poverty Reduction and Pro-poor Economic Growth in China", "title": "" }, { "docid": "0562b3b1692f07060cf4eeb500ea6cca", "text": "As the volume of medical information stored electronically increases, so does the need to enhance how it is secured. The inaccessibility of patient records at the right time can lead to loss of life and can also degrade the level of health care services rendered by medical professionals. Criminal attacks in healthcare have increased by 125% since 2010 and are now the leading cause of medical data breaches. This study therefore presents the combination of 3DES and LSB to improve the security measures applied to medical data.
The Java programming language was used to develop a simulation program for the experiment. The results show that medical data can be stored, shared, and managed in a reliable and secure manner using the combined model. Keywords: Information Security; Health Care; 3DES; LSB; Cryptography; Steganography 1.0 INTRODUCTION In the health industry, the storing, sharing and management of patient information have been influenced by current technology. That is, medical centres employ electronic means to support their mode of service in order to deliver quality health services. The importance of the patient record cannot be overemphasised, as it contributes to when, where, and how lives can be saved. About 91% of health care organizations have encountered at least one data breach, costing more than $2 million on average per organization [1-3]. Reports also show that medical records are of higher value to criminals than Mastercard information because they yield more cash, based on the fact that bank", "title": "" }, { "docid": "fcdde2f5b55b6d8133e6dea63d61b2c8", "text": "It has been observed by many people that a striking number of quite diverse mathematical problems can be formulated as problems in integer programming, that is, linear programming problems in which some or all of the variables are required to assume integral values. This fact is rendered quite interesting by recent research on such problems, notably by R. E. Gomory [2, 3], which gives promise of yielding efficient computational techniques for their solution. The present paper provides yet another example of the versatility of integer programming as a mathematical modeling device by representing a generalization of the well-known “Travelling Salesman Problem” in integer programming terms. The authors have developed several such models, of which the one presented here is the most efficient in terms of generality, number of variables, and number of constraints. This model is due to the second author [4] and was presented briefly at the Symposium on Combinatorial Problems held at Princeton University, April 1960, sponsored by SIAM and IBM. The problem treated is: (1) A salesman is required to visit each of n cities, indexed by 1, ..., n. He leaves from a “base city” indexed by 0, visits each of the n other cities exactly once, and returns to city 0. During his travels he must return to 0 exactly t times, including his final return (here t may be allowed to vary), and he must visit no more than p cities in one tour. (By a tour we mean a succession of visits to cities without stopping at city 0.) It is required to find such an itinerary which minimizes the total distance traveled by the salesman.\n Note that if t is fixed, then for the problem to have a solution we must have tp ≥ n. For t = 1, p ≥ n, we have the standard traveling salesman problem.\nLet d_ij (i ≠ j = 0, 1, ..., n) be the distance covered in traveling from city i to city j.
The following integer programming problem will be shown to be equivalent to (1): (2) Minimize the linear form ∑∑_{0≤i≠j≤n} d_ij x_ij over the set determined by the relations ∑_{i=0, i≠j}^{n} x_ij = 1 (j = 1, ..., n), ∑_{j=0, j≠i}^{n} x_ij = 1 (i = 1, ..., n), u_i - u_j + p x_ij ≤ p - 1 (1 ≤ i ≠ j ≤ n), where the x_ij are non-negative integers and the u_i (i = 1, ..., n) are arbitrary real numbers. (We shall see that it is permissible to restrict the u_i to be non-negative integers as well.)\n If t is fixed it is necessary to add the additional relation: ∑_{i=1}^{n} x_i0 = t. Note that the constraints require that x_ij = 0 or 1, so that a natural correspondence between these two problems exists if the x_ij are interpreted as follows: the salesman proceeds from city i to city j if and only if x_ij = 1. Under this correspondence the form to be minimized in (2) is the total distance to be traveled by the salesman in (1), so the burden of proof is to show that the two feasible sets correspond; i.e., a feasible solution to (2) has x_ij which do define a legitimate itinerary in (1), and, conversely, a legitimate itinerary in (1) defines x_ij which, together with appropriate u_i, satisfy the constraints of (2).\nConsider a feasible solution to (2).\n The number of returns to city 0 is given by ∑_{i=1}^{n} x_i0. The constraints of the form ∑ x_ij = 1, all x_ij non-negative integers, represent the conditions that each city (other than zero) is visited exactly once. The u_i play a role similar to node potentials in a network and the inequalities involving them serve to eliminate tours that do not begin and end at city 0 and tours that visit more than p cities. Consider any x_{r_0 r_1} = 1 (r_1 ≠ 0). There exists a unique r_2 such that x_{r_1 r_2} = 1.
Unless r_2 = 0, there is a unique r_3 with x_{r_2 r_3} = 1. We proceed in this fashion until some r_j = 0. This must happen since the alternative is that at some point we reach an r_k = r_j, j + 1 < k. \n Since none of the r's are zero we have u_{r_i} - u_{r_{i+1}} + p x_{r_i r_{i+1}} ≤ p - 1, or u_{r_i} - u_{r_{i+1}} ≤ -1. Summing from i = j to k - 1, we have u_{r_j} - u_{r_k} = 0 ≤ j + 1 - k, which is a contradiction. Thus all tours include city 0. It remains to observe that no tour is of length greater than p. Suppose such a tour exists: x_{0 r_1}, x_{r_1 r_2}, ..., x_{r_p r_{p+1}} = 1 with all r_i ≠ 0.
Then, as before, u_{r_1} - u_{r_{p+1}} ≤ -p, or u_{r_{p+1}} - u_{r_1} ≥ p.\n But we have u_{r_{p+1}} - u_{r_1} + p x_{r_{p+1} r_1} ≤ p - 1, or u_{r_{p+1}} - u_{r_1} ≤ p (1 - x_{r_{p+1} r_1}) - 1 ≤ p - 1, which is a contradiction.\nConversely, if the x_ij correspond to a legitimate itinerary, it is clear that the u_i can be adjusted so that u_i = j if city i is the jth city visited in the tour which includes city i, for we then have u_i - u_j = -1 if x_ij = 1, and always u_i - u_j ≤ p - 1.\n The above integer program involves n² + n constraints (if t is not fixed) in n² + 2n variables. Since the inequality form of constraint is fundamental for integer programming calculations, one may eliminate 2n variables, say the x_i0 and x_0j, by means of the equation constraints and produce", "title": "" }, { "docid": "05cea038adce7f5ae2a09a7fd5e024a7", "text": "The paper describes the use of a TMS320C5402 DSP for single-channel active noise cancellation (ANC) in a duct system. The canceller uses a feedback control topology and is designed to cancel narrowband periodic tones. The signal is processed with the well-known filtered-X least mean square (filtered-X LMS) algorithm. The paper describes the hardware and the use of chip support libraries for data streaming. The FXLMS algorithm is written in assembly language callable from the C main program. The results obtained are comparable to the expected results in the available literature. The paper highlights the features of cancellation and analyzes its performance at different gains and frequencies.", "title": "" } ]
scidocsrr
b35e238b5c76fec76d33eb3e0dae3c06
Using trust for collaborative filtering in eCommerce
[ { "docid": "6c3f320eda59626bedb2aad4e527c196", "text": "Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user provides personal trust values for a small number of other users. We compose these trusts to compute the trust a user should place in any other user in the network. A user is not assigned a single trust rank. Instead, different users may have different trust values for the same user. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.", "title": "" }, { "docid": "da63c4d9cc2f3278126490de54c34ce5", "text": "The growth of Web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences. Our research looks particularly at assigning trust in Web-based social networks and investigates how trust information can be mined and integrated into applications. This article introduces a definition of trust suitable for use in Web-based social networks with a discussion of the properties that will influence its use in computation. We then present two algorithms for inferring trust relationships between individuals that are not directly connected in the network. Both algorithms are shown theoretically and through simulation to produce calculated trust values that are highly accurate.. We then present TrustMail, a prototype email client that uses variations on these algorithms to score email messages in the user's inbox based on the user's participation and ratings in a trust network.", "title": "" } ]
[ { "docid": "c077231164a8a58f339f80b83e5b4025", "text": "It is widely believed that refactoring improves software quality and developer productivity. However, few empirical studies quantitatively assess refactoring benefits or investigate developers' perception towards these benefits. This paper presents a field study of refactoring benefits and challenges at Microsoft through three complementary study methods: a survey, semi-structured interviews with professional software engineers, and quantitative analysis of version history data. Our survey finds that the refactoring definition in practice is not confined to a rigorous definition of semantics-preserving code transformations and that developers perceive that refactoring involves substantial cost and risks. We also report on interviews with a designated refactoring team that has led a multi-year, centralized effort on refactoring Windows. The quantitative analysis of Windows 7 version history finds that the binary modules refactored by this team experienced significant reduction in the number of inter-module dependencies and post-release defects, indicating a visible benefit of refactoring.", "title": "" }, { "docid": "6a5abcabca3d4bb0696a9f19dd5e358f", "text": "Distributional models of meaning (see Turney and Pantel (2010) for an overview) are based on the pragmatic hypothesis that meanings of words are deducible from the contexts in which they are often used. This hypothesis is formalized using vector spaces, wherein a word is represented as a vector of cooccurrence statistics with a set of context dimensions. With the increasing availability of large corpora of text, these models constitute a well-established NLP technique for evaluating semantic similarities. Their methods however do not scale up to larger text constituents (i.e. phrases and sentences), since the uniqueness of multi-word expressions would inevitably lead to data sparsity problems, hence to unreliable vectorial representations. The problem is usually addressed by the provision of a compositional function, the purpose of which is to prepare a vector for a phrase or sentence by combining the vectors of the words therein. This line of research has led to the field of compositional distributional models of meaning (CDMs), where reliable semantic representations are provided for phrases, sentences, and discourse units such as dialogue utterances and even paragraphs or documents. As a result, these models have found applications in various NLP tasks, for example paraphrase detection; sentiment analysis; dialogue act tagging; machine translation; textual entailment; and so on, in many cases presenting stateof-the-art performance. Being the natural evolution of the traditional and well-studied distributional models at the word level, CDMs are steadily evolving to a popular and active area of NLP. The topic has inspired a number of workshops and tutorials in top CL conferences such as ACL and EMNLP, special issues at high-profile journals, and it attracts a substantial amount of submissions in annual NLP conferences. The approaches employed by CDMs are as much as diverse as statistical machine leaning (Baroni and Zamparelli, 2010), linear algebra (Mitchell and Lapata, 2010), simple category theory (Coecke et al., 2010), or complex deep learning architectures based on neural networks and borrowing ideas from image processing (Socher et al., 2012; Kalchbrenner et al., 2014; Cheng and Kartsaklis, 2015). 
Furthermore, they create opportunities for interesting novel research, related for example to efficient methods for creating tensors for relational words such as verbs and adjectives (Grefenstette and Sadrzadeh, 2011), the treatment of logical and functional words in a distributional setting (Sadrzadeh et al., 2013; Sadrzadeh et al., 2014), or the role of polysemy and the way it affects composition (Kartsaklis and Sadrzadeh, 2013; Cheng and Kartsaklis, 2015). The purpose of this tutorial is to provide a concise introduction to this emerging field, presenting the different classes of CDMs and the various issues related to them in sufficient detail. The goal is to allow the student to understand the general philosophy of each approach, as well as its advantages and limitations with regard to the other alternatives.", "title": "" }, { "docid": "6ae4be7a85f7702ae76649d052d7c37d", "text": "information technologies as “the ability to reformulate knowledge, to express oneself creatively and appropriately, and to produce and generate information (rather than simply to comprehend it).” Fluency, according to the report, “goes beyond traditional notions of computer literacy...[It] requires a deeper, more essential understanding and mastery of information technology for information processing, communication, and problem solving than does computer literacy as traditionally defined.” Scratch is a networked, media-rich programming environment designed to enhance the development of technological fluency at after-school centers in economically-disadvantaged communities. Just as the LEGO MindStorms robotics kit added programmability to an activity deeply rooted in youth culture (building with LEGO bricks), Scratch adds programmability to the media-rich and network-based activities that are most popular among youth at afterschool computer centers. Taking advantage of the extraordinary processing power of current computers, Scratch supports new programming paradigms and activities that were previously infeasible, making it better positioned to succeed than previous attempts to introduce programming to youth. In the past, most initiatives to improve technological fluency have focused on school classrooms. But there is a growing recognition that after-school centers and other informal learning settings can play an important role, especially in economicallydisadvantaged communities, where schools typically have few technological resources and many young people are alienated from the formal education system. Our working hypothesis is that, as kids work on personally meaningful Scratch projects such as animated stories, games, and interactive art, they will develop technological fluency, mathematical and problem solving skills, and a justifiable selfconfidence that will serve them well in the wider spheres of their lives. During the past decade, more than 2000 community technology centers (CTCs) opened in the United States, specifically to provide better access to technology in economically-disadvantaged communities. But most CTCs support only the most basic computer activities such as word processing, email, and Web browsing, so participants do not gain the type of fluency described in the NRC report. 
Similarly, many after-school centers (which, unlike CTCs, focus exclusively on youth) have begun to introduce computers, but they too tend to offer only introductory computer activities, sometimes augmented by educational games.", "title": "" }, { "docid": "6c018b35bf2172f239b2620abab8fd2f", "text": "Cloud computing is quickly becoming the platform of choice for many web services. Virtualization is the key underlying technology enabling cloud providers to host services for a large number of customers. Unfortunately, virtualization software is large, complex, and has a considerable attack surface. As such, it is prone to bugs and vulnerabilities that a malicious virtual machine (VM) can exploit to attack or obstruct other VMs -- a major concern for organizations wishing to move to the cloud. In contrast to previous work on hardening or minimizing the virtualization software, we eliminate the hypervisor attack surface by enabling the guest VMs to run natively on the underlying hardware while maintaining the ability to run multiple VMs concurrently. Our NoHype system embodies four key ideas: (i) pre-allocation of processor cores and memory resources, (ii) use of virtualized I/O devices, (iii) minor modifications to the guest OS to perform all system discovery during bootup, and (iv) avoiding indirection by bringing the guest virtual machine in more direct contact with the underlying hardware. Hence, no hypervisor is needed to allocate resources dynamically, emulate I/O devices, support system discovery after bootup, or map interrupts and other identifiers. NoHype capitalizes on the unique use model in cloud computing, where customers specify resource requirements ahead of time and providers offer a suite of guest OS kernels. Our system supports multiple tenants and capabilities commonly found in hosted cloud infrastructures. Our prototype utilizes Xen 4.0 to prepare the environment for guest VMs, and a slightly modified version of Linux 2.6 for the guest OS. Our evaluation with both SPEC and Apache benchmarks shows a roughly 1% performance gain when running applications on NoHype compared to running them on top of Xen 4.0. Our security analysis shows that, while there are some minor limitations with current commodity hardware, NoHype is a significant advance in the security of cloud computing.", "title": "" }, { "docid": "1ebb333d5a72c649cd7d7986f5bf6975", "text": "\"Of what a strange nature is knowledge! It clings to the mind, when it has once seized on it, like a lichen on the rock,\" Abstract We describe a theoretical system intended to facilitate the use of knowledge in an understanding system. The notion of script is introduced to account for knowledge about mundane situations. A program, SAM, is capable of using scripts to understand. The notion of plans is introduced to account for general knowledge about novel situations. I. Preface In an attempt to provide theory where there have been mostly unrelated systems, Minsky (1974) recently described the as fitting into the notion of \"frames.\" Minsky attempted to relate this work, in what is essentially language processing, to areas of vision research that conform to the same notion. Minsky's frames paper has created quite a stir in AI and some immediate spinoff research along the lines of developing frames manipulators (e.g. Bobrow, 1975; Winograd, 1975). We find that we agree with much of what Minsky said about frames and with his characterization of our own work.
The frames idea is so general, however, that it does not lend itself to applications without further specialization. This paper is an attempt to develop further the lines of thought set out in Schank (1975a) and Abelson (1973; 1975a). The ideas presented here can be viewed as a specialization of the frame idea. We shall refer to our central constructs as "scripts." II. The Problem Researchers in natural language understanding have felt for some time that the eventual limit on the solution of our problem will be our ability to characterize world knowledge. Various researchers have approached world knowledge in various ways. Winograd (1972) dealt with the problem by severely restricting the world. This approach had the positive effect of producing a working system and the negative effect of producing one that was only minimally extendable. Charniak (1972) approached the problem from the other end entirely and has made some interesting first steps, but because his work is not grounded in any representational system or any working computational system the restriction of world knowledge need not critically concern him. Our feeling is that an effective characterization of knowledge can result in a real understanding system in the not too distant future. We expect that programs based on the theory we out- …", "title": "" }, { "docid": "8a5bbfcb8084c0b331e18dcf64cdf915", "text": "This paper describes wildcards, a new language construct designed to increase the flexibility of object-oriented type systems with parameterized classes. Based on the notion of use-site variance, wildcards provide a type safe abstraction over different instantiations of parameterized classes, by using '?' to denote unspecified type arguments. Thus they essentially unify the distinct families of classes often introduced by parametric polymorphism. Wildcards are implemented as part of the upcoming addition of generics to the Java™ programming language, and will thus be deployed world-wide as part of the reference implementation of the Java compiler javac available from Sun Microsystems, Inc. By providing a richer type system, wildcards allow for an improved type inference scheme for polymorphic method calls. Moreover, by means of a novel notion of wildcard capture, polymorphic methods can be used to give symbolic names to unspecified types, in a manner similar to the \"open\" construct known from existential types. Wildcards show up in numerous places in the Java Platform APIs of the upcoming release, and some of the examples in this paper are taken from these APIs.", "title": "" }, { "docid": "1912f9ad509e446d3e34e3c6dccd4c78", "text": "Lumbar disc herniation is a common male disease. In the past, more academic attention was directed to its relationship with lumbago and leg pain than to its association with andrological diseases. Studies show that central lumbar intervertebral disc herniation may cause cauda equina injury and result in premature ejaculation, erectile dysfunction, chronic pelvic pain syndrome, priapism, and emission.
This article presents an overview on the correlation between central lumbar intervertebral disc herniation and andrological diseases, focusing on the aspects of etiology, pathology, and clinical progress, hoping to invite more attention from andrological and osteological clinicians.", "title": "" }, { "docid": "55b88b38dbde4d57fddb18d487099fc6", "text": "The evaluation of algorithms and techniques to implement intrusion detection systems heavily rely on the existence of well designed datasets. In the last years, a lot of efforts have been done toward building these datasets. Yet, there is still room to improve. In this paper, a comprehensive review of existing datasets is first done, making emphasis on their main shortcomings. Then, we present a new dataset that is built with real traffic and up-to-date attacks. The main advantage of this dataset over previous ones is its usefulness for evaluating IDSs that consider long-term evolution and traffic periodicity. Models that consider differences in daytime/nighttime or weekdays/weekends can also be trained and evaluated with it. We discuss all the requirements for a modern IDS evaluation dataset and analyze how the one presented here meets the different needs. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f82a57baca9a0381c9b2af0368a5531e", "text": "We tested the hypothesis derived from eye blink literature that when liars experience cognitive demand, their lies would be associated with a decrease in eye blinks, directly followed by an increase in eye blinks when the demand has ceased after the lie is told. A total of 13 liars and 13 truth tellers lied or told the truth in a target period; liars and truth tellers both told the truth in two baseline periods. Their eye blinks during the target and baseline periods and directly after the target period (target offset period) were recorded. The predicted pattern (compared to the baseline periods, a decrease in eye blinks during the target period and an increase in eye blinks during the target offset period) was found in liars and was strikingly different from the pattern obtained in truth tellers. They showed an increase in eye blinks during the target period compared to the baseline periods, whereas their pattern of eye blinks in the target offset period did not differ from baseline periods. The implications for lie detection are discussed.", "title": "" }, { "docid": "e4a74019c34413f8ace000512ab26da0", "text": "Scaling the transaction throughput of decentralized blockchain ledgers such as Bitcoin and Ethereum has been an ongoing challenge. Two-party duplex payment channels have been designed and used as building blocks to construct linked payment networks, which allow atomic and trust-free payments between parties without exhausting the resources of the blockchain.\n Once a payment channel, however, is depleted (e.g., because transactions were mostly unidirectional) the channel would need to be closed and re-funded to allow for new transactions. Users are envisioned to entertain multiple payment channels with different entities, and as such, instead of refunding a channel (which incurs costly on-chain transactions), a user should be able to leverage his existing channels to rebalance a poorly funded channel.\n To the best of our knowledge, we present the first solution that allows an arbitrary set of users in a payment channel network to securely rebalance their channels, according to the preferences of the channel owners. 
Except in the case of disputes (similar to conventional payment channels), our solution does not require on-chain transactions and therefore increases the scalability of existing blockchains. In our security analysis, we show that an honest participant cannot lose any of its funds while rebalancing. We finally provide a proof of concept implementation and evaluation for the Ethereum network.", "title": "" }, { "docid": "fc3283b1d81de45772ec730c1f5185f1", "text": "In this paper, three different techniques which can be used for control of three phase PWM Rectifier are discussed. Those three control techniques are Direct Power Control, Indirect Power Control or Voltage Oriented Control and Hysteresis Control. The main aim of this paper is to compare and establish the merits and demerits of each technique in various aspects mainly regarding switching frequency hence switching loss, computation and transient state behavior. Each control method is studied in detail and simulated using Matlab/Simulink in order to make the comparison.", "title": "" }, { "docid": "ee045772d55000b6f2d3f7469a4161b1", "text": "Although prior research has addressed the influence of corporate social responsibility (CSR) on perceived customer responses, it is not clear whether CSR affects market value of the firm. This study develops and tests a conceptual framework, which predicts that (1) customer satisfaction partially mediates the relationship between CSR and firm market value (i.e., Tobin’s q and stock return), (2) corporate abilities (innovativeness capability and product quality) moderate the financial returns to CSR, and (3) these moderated relationships are mediated by customer satisfaction. Based on a large-scale secondary dataset, the results show support for this framework. Interestingly, it is found that in firms with low innovativeness capability, CSR actually reduces customer satisfaction levels and, through the lowered satisfaction, harms market value. The uncovered mediated and asymmetrically moderated results offer important implications for marketing theory and practice. In today’s competitive market environment, corporate social responsibility (CSR) represents a high-profile notion that has strategic importance to many companies. As many as 90% of the Fortune 500 companies now have explicit CSR initiatives (Kotler and Lee 2004; Lichtenstein et al. 2004). According to a recent special report by BusinessWeek (2005a, p.72), large companies disclosed substantial investments in CSR initiatives (i.e., Target’s donation of $107.8 million in CSR represents 3.6% of its pretax profits, with GM $51.2 million at 2.7%, General Mills $60.3 million at 3.2%, Merck $921million at 11.3%, HCA $926 million at 43.3%). By dedicating everincreasing amounts to cash donations, in-kind contributions, cause marketing, and employee volunteerism programs, companies are acting on the premise that CSR is not merely the “right thing to do,” but also “the smart thing to do” (Smith 2003). Importantly, along with increasing media coverage of CSR issues, companies themselves are also taking direct and visible steps to communicate their CSR initiatives to various stakeholders including consumers. A decade ago, Drumwright (1996) observed that advertising with a social dimension was on the rise. The trend seems to continue. Many companies, including the likes of Target and Walmart, have funded large national ad campaigns promoting their good works. The October 2005 issue of In Style magazine alone carried more than 25 “cause” ads. 
Indeed, consumers seem to be taking notice: whereas in 1993 only 26% of individuals surveyed by Cone Communications could name a company as a strong corporate citizen, by 2004, the percentage surged to as high as 80% (BusinessWeek 2005a). Motivated, in part, by this mounting importance of CSR in practice, several marketing studies have found that social responsibility programs have a significant influence on a number of customer-related outcomes (Bhattacharya and Sen 2004). More specifically, based on lab experiments, CSR is reported to directly or indirectly impact consumer product responses", "title": "" }, { "docid": "f9c938a98621f901c404d69a402647c7", "text": "The growing popularity of virtual machines is pushing the demand for high performance communication between them. Past solutions have seen the use of hardware assistance, in the form of \"PCI passthrough\" (dedicating parts of physical NICs to each virtual machine) and even bouncing traffic through physical switches to handle data forwarding and replication.\n In this paper we show that, with a proper design, very high speed communication between virtual machines can be achieved completely in software. Our architecture, called VALE, implements a Virtual Local Ethernet that can be used by virtual machines, such as QEMU, KVM and others, as well as by regular processes. VALE achieves a throughput of over 17 million packets per second (Mpps) between host processes, and over 2 Mpps between QEMU instances, without any hardware assistance.\n VALE is available for both FreeBSD and Linux hosts, and is implemented as a kernel module that extends our recently proposed netmap framework, and uses similar techniques to achieve high packet rates.", "title": "" }, { "docid": "16d2e0605d45c69302c71b8434b7a23a", "text": "Emotions play an important role in human cognition, perception, decision making, and interaction. This paper presents a six-layer biologically inspired feedforward neural network to discriminate human emotions from EEG. The neural network comprises a shift register memory after spectral filtering for the input layer, and the estimation of coherence between each pair of input signals for the hidden layer. EEG data are collected from 57 healthy participants from eight locations while subjected to audio-visual stimuli. Discrimination of emotions from EEG is investigated based on valence and arousal levels. The accuracy of the proposed neural network is compared with various feature extraction methods and feedforward learning algorithms. The results showed that the highest accuracy is achieved when using the proposed neural network with a type of radial basis function.", "title": "" }, { "docid": "a18da0c7d655fee44eebdf61c7371022", "text": "This paper describes and compares a set of no-reference quality assessment algorithms for H.264/AVC encoded video sequences. These algorithms have in common a module that estimates the error due to lossy encoding of the video signals, using only information available on the compressed bitstream. In order to obtain perceived quality scores from the estimated error, three methods are presented: i) to weight the error estimates according to a perceptual model; ii) to linearly combine the mean squared error (MSE) estimates with additional video features; iii) to use MSE estimates as the input of a logistic function. 
The performances of the algorithms are evaluated using cross-validation procedures and the one showing the best performance is also used in a preliminary study of quality assessment in the presence of transmission losses.", "title": "" }, { "docid": "550e19033cb00938aed89eb3cce50a76", "text": "This paper presents a high gain wide band 2×2 microstrip array antenna. The microstrip array antenna (MSA) is fabricated on inexpensive FR4 substrate and placed 1mm above ground plane to improve the bandwidth and efficiency of the antenna. A reactive impedance surface (RIS) consisting of 13×13 array of 4 mm square patches with inter-element spacing of 1 mm is fabricated on the bottom side of FR4 substrate. RIS reduces the coupling between the ground plane and MSA array and therefore increases the efficiency of the antenna. It enhances the bandwidth and gain of the antenna. RIS also helps in reduction of SLL and cross polarization. This MSA array with RIS is placed in a Fabry Perot cavity (FPC) resonator to enhance the gain of the antenna. 2×2 and 4×4 arrays of square parasitic patches are fed by the MSA array fabricated on a FR4 superstrate which forms the partially reflecting surface of the FPC. The FR4 superstrate layer is supported with help of dielectric rods at the edges with air at about λ0/2 from ground plane. A microstrip feed line network is designed and the printed MSA array is fed by a 50 Ω coaxial probe. A VSWR < 2 is obtained over 5.725-6.4 GHz, which covers the 5.725-5.875 GHz ISM WLAN frequency band and the 5.9-6.4 GHz satellite uplink C band. The antenna gain increases from 12 dB to 15.8 dB as 4×4 square parasitic patches are fabricated on the superstrate layer. The gain variation is less than 2 dB over the entire band. The antenna structure provides SLL and cross polarization less than -20 dB, front to back lobe ratio higher than 20 dB and more than 70 % antenna efficiency. A prototype structure is realized and tested. The measured results agree with the simulation results. The antenna can be a suitable candidate for access point, satellite communication, mobile base station antenna and terrestrial communication systems.", "title": "" }, { "docid": "1615e93f027c6f6f400ce1cc7a1bb8aa", "text": "In recent years, we have witnessed the rapid adoption of social media platforms, such as Twitter, Facebook and YouTube, and their use as part of the everyday life of billions of people worldwide. Given the habit of people to use these platforms to share thoughts, daily activities and experiences, it is not surprising that the amount of user generated content has reached unprecedented levels, with a substantial part of that content being related to real-world events, i.e. actions or occurrences taking place at a certain time and location. Figure 1 illustrates three main categories of events along with characteristic photos from Flickr for each of them: a) news-related events, e.g. demonstrations, riots, public speeches, natural disasters, terrorist attacks, b) entertainment events, e.g. sports, music, live shows, exhibitions, festivals, and c) personal events, e.g. wedding, birthday, graduation ceremonies, vacations, and going out. Depending on the event, different types of multimedia and social media platforms are more popular. 
For instance, news-related events are extensively published in the form of text updates, images and videos on Twitter and YouTube, entertainment and social events are often captured in the form of images and videos and shared on Flickr and YouTube, while personal events are mostly represented by images that are shared on Facebook and Instagram. Given the key role of events in our life, the task of annotating and organizing social media content around them is of crucial importance for ensuring real-time and future access to multimedia content about an event of interest. However, the vast amount of noisy and non-informative social media posts, in conjunction with their large scale, makes that task very challenging. For instance, in the case of popular events that are covered live on Twitter, there are often millions of posts referring to a single event, as in the case of the World Cup Final 2014 between Brazil and Germany, which produced approximately 32.1 million tweets with a rate of 618,725 tweets per minute. Processing, aggregating and selecting the most informative, entertaining and representative tweets among such a large dataset is a very challenging multimedia retrieval problem. In other", "title": "" }, { "docid": "82fdd14f7766e8afe9b11a255073b3ce", "text": "We develop a stochastic model of a simple protocol for the self-configuration of IP network interfaces. We describe the mean cost that incurs during a selfconfiguration phase and describe a trade-off between reliability and speed. We derive a cost function which we use to derive optimal parameters. We show that optimal cost and optimal reliability are qualities that cannot be achieved at the same time. Keywords—Embedded control software; IP; zeroconf protocol; cost optimisation", "title": "" }, { "docid": "7a62e5a78eabbcbc567d5538a2f35434", "text": "This paper presents a system for a design and implementation of Optical Arabic Braille Recognition(OBR) with voice and text conversion. The implemented algorithm based on a comparison of Braille dot position extraction in each cell with the database generated for each Braille cell. Many digital image processing have been performed on the Braille scanned document like binary conversion, edge detection, holes filling and finally image filtering before dot extraction. The work in this paper also involved a unique decimal code generation for each Braille cell used as a base for word reconstruction with the corresponding voice and text conversion database. The implemented algorithm achieve expected result through letter and words recognition and transcription accuracy over 99% and average processing time around 32.6 sec per page. using matlab environmemt", "title": "" } ]
scidocsrr
20c49ce8a94be9f93d4a86ed7e1f84b6
Context-Aware Correlation Filter Tracking
[ { "docid": "d349cf385434027b4532080819d5745f", "text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.", "title": "" }, { "docid": "aee250663a05106c4c0fad9d0f72828c", "text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.", "title": "" } ]
[ { "docid": "49736d49ee7b777523064efcd99c5cbb", "text": "Immune checkpoint antagonists (CTLA-4 and PD-1/PD-L1) and CAR T-cell therapies generate unparalleled durable responses in several cancers and have firmly established immunotherapy as a new pillar of cancer therapy. To extend the impact of immunotherapy to more patients and a broader range of cancers, targeting additional mechanisms of tumor immune evasion will be critical. Adenosine signaling has emerged as a key metabolic pathway that regulates tumor immunity. Adenosine is an immunosuppressive metabolite produced at high levels within the tumor microenvironment. Hypoxia, high cell turnover, and expression of CD39 and CD73 are important factors in adenosine production. Adenosine signaling through the A2a receptor expressed on immune cells potently dampens immune responses in inflamed tissues. In this article, we will describe the role of adenosine signaling in regulating tumor immunity, highlighting potential therapeutic targets in the pathway. We will also review preclinical data for each target and provide an update of current clinical activity within the field. Together, current data suggest that rational combination immunotherapy strategies that incorporate inhibitors of the hypoxia-CD39-CD73-A2aR pathway have great promise for further improving clinical outcomes in cancer patients.", "title": "" }, { "docid": "721ff703dfafad6b1b330226c36ed641", "text": "In the Narrowband Internet-of-Things (NB-IoT) LTE systems, the device shall be able to blindly lock to a cell within 200-KHz bandwidth and with only one receive antenna. In addition, the device is required to setup a call at a signal-to-noise ratio (SNR) of −12.6 dB in the extended coverage mode. A new set of synchronization signals have been introduced to provide data-aided synchronization and cell search. In this letter, we present a procedure for NB-IoT cell search and initial synchronization subject to the new challenges given the new specifications. Simulation results show that this method not only provides the required performance at very low SNRs, but also can be quickly camped on a cell, if any.", "title": "" }, { "docid": "6420f394cb02e9415b574720a9c64e7f", "text": "Interleaved power converter topologies have received increasing attention in recent years for high power and high performance applications. The advantages of interleaved boost converters include increased efficiency, reduced size, reduced electromagnetic emission, faster transient response, and improved reliability. The front end inductors in an interleaved boost converter are magnetically coupled to improve electrical performance and reduce size and weight. Compared to a direct coupled configuration, inverse coupling provides the advantages of lower inductor ripple current and negligible dc flux levels in the core. In this paper, we explore the possible advantages of core geometry on core losses and converter efficiency. Analysis of FEA simulation and empirical characterization data indicates a potential superiority of a square core, with symmetric 45deg energy storage corner gaps, for providing both ac flux balance and maximum dc flux cancellation when wound in an inverse coupled configuration.", "title": "" }, { "docid": "9a2d79d9df9e596e26f8481697833041", "text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. 
The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.", "title": "" }, { "docid": "9ed5fdb991edd5de57ffa7f13121f047", "text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 5 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.", "title": "" }, { "docid": "8c0588538b1b04193e80ef5ce5ad55a7", "text": "Unlike traditional bipolar constrained liners, the Osteonics Omnifit constrained acetabular insert is a tripolar device, consisting of an inner bipolar bearing articulating within an outer, true liner. Every reported failure of the Omnifit tripolar implant has been by failure at the shell-bone interface (Type I failure), failure at the shell-liner interface (Type II failure), or failure of the locking mechanism resulting in dislocation of the bipolar-liner interface (Type III failure). In this report we present two cases of failure of the Omnifit tripolar at the bipolar-femoral head interface. To our knowledge, these are the first reported cases of failure at the bipolar-femoral head interface (Type IV failure). In addition, we described the first successful closed reduction of a Type IV failure.", "title": "" }, { "docid": "536c739e6f0690580568a242e1d65ef3", "text": "Intrusion Detection Systems (IDS) are key components for securing critical infrastructures, capable of detecting malicious activities on networks or hosts. However, the efficiency of an IDS depends primarily on both its configuration and its precision. The large amount of network traffic that needs to be analyzed, in addition to the increase in attacks’ sophistication, renders the optimization of intrusion detection an important requirement for infrastructure security, and a very active research subject. 
In the state of the art, a number of approaches have been proposed to improve the efficiency of intrusion detection and response systems. In this article, we review the works relying on decision-making techniques focused on game theory and Markov decision processes to analyze the interactions between the attacker and the defender, and classify them according to the type of the optimization problem they address. While these works provide valuable insights for decision-making, we discuss the limitations of these solutions as a whole, in particular regarding the hypotheses in the models and the validation methods. We also propose future research directions to improve the integration of game-theoretic approaches into IDS optimization techniques.", "title": "" }, { "docid": "048cc782baeec3a7f46ef5ee7abf0219", "text": "Autoerotic asphyxiation is an unusual but increasingly more frequently occurring phenomenon, with >1000 fatalities in the United States per year. Understanding of this manner of death is likewise increasing, as noted by the growing number of cases reported in the literature. However, this form of accidental death is much less frequently seen in females (male:female ratio >50:1), and there is correspondingly less literature on female victims of autoerotic asphyxiation. The authors present the case of a 31-year-old woman who died of an autoerotic ligature strangulation and review the current literature on the subject. The forensic examiner must be able to discern this syndrome from similar forms of accidental and suicidal death, and from homicidal hanging/strangulation.", "title": "" }, { "docid": "a2f36e0f8abaa07124d446f6aa870491", "text": "We explore the capabilities of Auto-Encoders to fuse the information available from cameras and depth sensors, and to reconstruct missing data, for scene understanding tasks. In particular we consider three input modalities: RGB images; depth images; and semantic label information. We seek to generate complete scene segmentations and depth maps, given images and partial and/or noisy depth and semantic data. We formulate this objective of reconstructing one or more types of scene data using a Multi-modal stacked Auto-Encoder. We show that suitably designed Multi-modal Auto-Encoders can solve the depth estimation and the semantic segmentation problems simultaneously, in the partial or even complete absence of some of the input modalities. We demonstrate our method using the outdoor dataset KITTI that includes LIDAR and stereo cameras. Our results show that as a means to estimate depth from a single image, our method is comparable to the state-of-the-art, and can run in real time (i.e., less than 40ms per frame). But we also show that our method has a significant advantage over other methods in that it can seamlessly use additional data that may be available, such as a sparse point-cloud and/or incomplete coarse semantic labels.", "title": "" }, { "docid": "aa30fc0f921509b1f978aeda1140ffc0", "text": "Arithmetic coding provides an e ective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an e cient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible e ect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. 
We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.", "title": "" }, { "docid": "d7eb92756c8c3fb0ab49d7b101d96343", "text": "Pretraining with language modeling and related unsupervised tasks has recently been shown to be a very effective enabling technology for the development of neural network models for language understanding tasks. In this work, we show that although language model-style pretraining is extremely effective at teaching models about language, it does not yield an ideal starting point for efficient transfer learning. By supplementing language model-style pretraining with further training on data-rich supervised tasks, we are able to achieve substantial additional performance improvements across the nine target tasks in the GLUE benchmark. We obtain an overall score of 76.9 on GLUE—a 2.3 point improvement over our baseline system adapted from Radford et al. (2018) and a 4.1 point improvement over Radford et al.’s reported score. We further use training data downsampling to show that the benefits of this supplementary training are even more pronounced in data-constrained regimes.", "title": "" }, { "docid": "74ff09a1d3ca87a0934a1b9095c282c4", "text": "The cancer metastasis suppressor protein KAI1/CD82 is a member of the tetraspanin superfamily. Recent studies have demonstrated that tetraspanins are palmitoylated and that palmitoylation contributes to the organization of tetraspanin webs or tetraspanin-enriched microdomains. However, the effect of palmitoylation on tetraspanin-mediated cellular functions remains obscure. In this study, we found that tetraspanin KAI1/CD82 was palmitoylated when expressed in PC3 metastatic prostate cancer cells and that palmitoylation involved all of the cytoplasmic cysteine residues proximal to the plasma membrane. Notably, the palmitoylation-deficient KAI1/CD82 mutant largely reversed the wild-type KAI1/CD82's inhibitory effects on migration and invasion of PC3 cells. Also, palmitoylation regulates the subcellular distribution of KAI1/CD82 and its association with other tetraspanins, suggesting that the localized interaction of KAI1/CD82 with tetraspanin webs or tetraspanin-enriched microdomains is important for KAI1/CD82's motility-inhibitory activity. Moreover, we found that KAI1/CD82 palmitoylation affected motility-related subcellular events such as lamellipodia formation and actin cytoskeleton organization and that the alteration of these processes likely contributes to KAI1/CD82's inhibition of motility. Finally, the reversal of cell motility seen in the palmitoylation-deficient KAI1/CD82 mutant correlates with regaining of p130(CAS)-CrkII coupling, a signaling step important for KAI1/CD82's activity. Taken together, our results indicate that palmitoylation is crucial for the functional integrity of tetraspanin KAI1/CD82 during the suppression of cancer cell migration and invasion.", "title": "" }, { "docid": "136a2f401b3af00f0f79b991ab65658f", "text": "Usage of online social business networks like LinkedIn and XING have become commonplace in today’s workplace. This research addresses the question of what factors drive the intention to use online social business networks. Theoretical frame of the study is the Technology Acceptance Model (TAM) and its extensions, most importantly the TAM2 model. Data has been collected via a Web Survey among users of LinkedIn and XING from January to April 2010. Of 541 initial responders 321 finished the questionnaire. 
Operationalization was tested using confirmatory factor analyses and causal hypotheses were evaluated by means of structural equation modeling. Core result is that the TAM2 model generally holds in the case of online social business network usage behavior, explaining 73% of the observed usage intention. This intention is most importantly driven by perceived usefulness, attitude towards usage and social norm, with the latter effecting both directly and indirectly over perceived usefulness. However, perceived ease of use has—contrary to hypothesis—no direct effect on the attitude towards usage of online social business networks. Social norm has a strong indirect influence via perceived usefulness on attitude and intention, creating a network effect for peer users. The results of this research provide implications for online social business network design and marketing. Customers seem to evaluate ease of use as an integral part of the usefulness of such a service which leads to a situation where it cannot be dealt with separately by a service provider. Furthermore, the strong direct impact of social norm implies application of viral and peerto-peer marketing techniques while it’s also strong indirect effect implies the presence of a network effect which stabilizes the ecosystem of online social business service vendors.", "title": "" }, { "docid": "10423f367850761fd17cf1b146361f34", "text": "OBJECTIVE\nDetection and characterization of microcalcification clusters in mammograms is vital in daily clinical practice. The scope of this work is to present a novel computer-based automated method for the characterization of microcalcification clusters in digitized mammograms.\n\n\nMETHODS AND MATERIAL\nThe proposed method has been implemented in three stages: (a) the cluster detection stage to identify clusters of microcalcifications, (b) the feature extraction stage to compute the important features of each cluster and (c) the classification stage, which provides with the final characterization. In the classification stage, a rule-based system, an artificial neural network (ANN) and a support vector machine (SVM) have been implemented and evaluated using receiver operating characteristic (ROC) analysis. The proposed method was evaluated using the Nijmegen and Mammographic Image Analysis Society (MIAS) mammographic databases. The original feature set was enhanced by the addition of four rule-based features.\n\n\nRESULTS AND CONCLUSIONS\nIn the case of Nijmegen dataset, the performance of the SVM was Az=0.79 and 0.77 for the original and enhanced feature set, respectively, while for the MIAS dataset the corresponding characterization scores were Az=0.81 and 0.80. Utilizing neural network classification methodology, the corresponding performance for the Nijmegen dataset was Az=0.70 and 0.76 while for the MIAS dataset it was Az=0.73 and 0.78. Although the obtained high classification performance can be successfully applied to microcalcification clusters characterization, further studies must be carried out for the clinical evaluation of the system using larger datasets. The use of additional features originating either from the image itself (such as cluster location and orientation) or from the patient data may further improve the diagnostic value of the system.", "title": "" }, { "docid": "813a0d47405d133263deba0da6da27a8", "text": "The demands on dielectric material measurements have increased over the years as electrical components have been miniaturized and device frequency bands have increased. 
Well-characterized dielectric measurements on thin materials are needed for circuit design, minimization of crosstalk, and characterization of signal-propagation speed. Bulk material applications have also increased. For accurate dielectric measurements, each measurement band and material geometry requires specific fixtures. Engineers and researchers must carefully match their material system and uncertainty requirements to the best available measurement system. Broadband measurements require transmission-line methods, and accurate measurements on low-loss materials are performed in resonators. The development of the most accurate methods for each application requires accurate fixture selection in terms of field geometry, accurate field models, and precise measurement apparatus.", "title": "" }, { "docid": "e59b203f3b104553a84603240ea467eb", "text": "Experimental art deployed in the Augmented Reality (AR) medium is contributing to a reconfiguration of traditional perceptions of interface, audience participation, and perceptual experience. Artists, critical engineers, and programmers, have developed AR in an experimental topology that diverges from both industrial and commercial uses of the medium. In a general technical sense, AR is considered as primarily an information overlay, a datafied window that situates virtual information in the physical world. In contradistinction, AR as experimental art practice activates critical inquiry, collective participation, and multimodal perception. As an emergent hybrid form that challenges and extends already established 'fine art' categories, augmented reality art deployed on Portable Media Devices (PMD’s) such as tablets & smartphones fundamentally eschews models found in the conventional 'art world.' It should not, however, be considered as inscribing a new 'model:' rather, this paper posits that the unique hybrids advanced by mobile augmented reality art–– also known as AR(t)–– are closely related to the notion of the 'machinic assemblage' ( Deleuze & Guattari 1987), where a deep capacity to re-assemble marks each new artevent. This paper develops a new formulation, the 'software assemblage,’ to explore some of the unique mixed reality situations that AR(t) has set in motion.", "title": "" }, { "docid": "06c3f32f07418575c700e2f0925f4398", "text": "The spacing of a fixed amount of study time across multiple sessions usually increases subsequent test performance*a finding known as the spacing effect. In the spacing experiment reported here, subjects completed multiple learning trials, and each included a study phase and a test. Once a subject achieved a perfect test, the remaining learning trials within that session comprised what is known as overlearning. The number of these overlearning trials was reduced when learning trials were spaced across multiple sessions rather than massed in a single session. In addition, the degree to which spacing reduced overlearning predicted the size of the spacing effect, which is consistent with the possibility that spacing increases subsequent recall by reducing the occurrence of overlearning. By this account, overlearning is an inefficient use of study time, and the efficacy of spacing depends at least partly on the degree to which it reduces the occurrence of overlearning.", "title": "" }, { "docid": "a636f977eb29b870cefe040f3089de44", "text": "We consider the network implications of virtual reality (VR) and augmented reality (AR). 
While there are intrinsic challenges for AR/VR applications to deliver on their promise, their impact on the underlying infrastructure will be undeniable. We look at augmented and virtual reality and consider a few use cases where they could be deployed. These use cases define a set of requirements for the underlying network. We take a brief look at potential network architectures. We then make the case for Information-centric networks as a potential architecture to assist the deployment of AR/VR and draw a list of challenges and future research directions for next generation networks to better support AR/VR.", "title": "" }, { "docid": "3550dbe913466a675b621d476baba219", "text": "Successful implementing and managing of change is urgently necessary for each adult educational organization. During the process, leading of the staff is becoming a key condition and the most significant factor. Beside certain personal traits of the leader, change management demands also certain leadership knowledges, skills, versatilities and behaviour which may even border on changing the organizational culture. The paper finds the significance of certain values and of organizational climate and above all the significance of leadership style which a leader will adjust to the staff and to the circumstances. The author presents a multiple qualitative case study of managing change in three adult educational organizations. The paper finds that factors of successful leading of change exist which represent an adequate approach to leading the staff during the introduction of changes in educational organizations. Its originality/value is in providing information on the important relationship between culture, leadership styles and leader’s behaviour as preconditions for successful implementing and managing of strategic change.", "title": "" }, { "docid": "be079999e630df22254e7aa8a9ecdcae", "text": "Strokes are one of the leading causes of death and disability in the UK. There are two main types of stroke: ischemic and hemorrhagic, with the majority of stroke patients suffering from the former. During an ischemic stroke, parts of the brain lose blood supply, and if not treated immediately, can lead to irreversible tissue damage and even death. Ischemic lesions can be detected by diffusion weighted magnetic resonance imaging (DWI), but localising and quantifying these lesions can be a time consuming task for clinicians. Work has already been done in training neural networks to segment these lesions, but these frameworks require a large amount of manually segmented 3D images, which are very time consuming to create. We instead propose to use past examinations of stroke patients which consist of DWIs, corresponding radiological reports and diagnoses in order to develop a learning framework capable of localising lesions. This is motivated by the fact that the reports summarise the presence, type and location of the ischemic lesion for each patient, and thereby provide more context than a single diagnostic label. Acute lesions prediction is aided by an attention mechanism which implicitly learns which regions within the DWI are most relevant to the classification.", "title": "" } ]
scidocsrr
0b590d5f3bc41286db3de0ab3bf48308
Neural Models for Key Phrase Extraction and Question Generation
[ { "docid": "8f916f7be3048ae2a367096f4f82207d", "text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.", "title": "" }, { "docid": "86d58f4196ceb48e29cb143e6a157c22", "text": "In this paper, we challenge a form of paragraph-to-question generation task. We propose a question generation system which can generate a set of comprehensive questions from a body of text. Besides the tree kernel functions to assess the grammatically of the generated questions, our goal is to rank them by using community-based question answering systems to calculate the importance of the generated questions. The main assumption behind our work is that each body of text is related to a topic of interest and it has a comprehensive information about the topic.", "title": "" } ]
[ { "docid": "cdb937def5a92e3843a761f57278783e", "text": "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73 x communication expansion for 210 users and 220-dimensional vectors, and 1.98 x expansion for 214 users and 224-dimensional vectors over sending data in the clear.", "title": "" }, { "docid": "5cd3abebf4d990bb9196b7019b29c568", "text": "Wearing comfort of clothing is dependent on air permeability, moisture absorbency and wicking properties of fabric, which are related to the porosity of fabric. In this work, a plug-in is developed using Python script and incorporated in Abaqus/CAE for the prediction of porosity of plain weft knitted fabrics. The Plug-in is able to automatically generate 3D solid and multifilament weft knitted fabric models and accurately determine the porosity of fabrics in two steps. In this work, plain weft knitted fabrics made of monofilament, multifilament and spun yarn made of staple fibers were used to evaluate the effectiveness of the developed plug-in. In the case of staple fiber yarn, intra yarn porosity was considered in the calculation of porosity. The first step is to develop a 3D geometrical model of plain weft knitted fabric and the second step is to calculate the porosity of the fabric by using the geometrical parameter of 3D weft knitted fabric model generated in step one. The predicted porosity of plain weft knitted fabric is extracted in the second step and is displayed in the message area. The predicted results obtained from the plug-in have been compared with the experimental results obtained from previously developed models; they agreed well.", "title": "" }, { "docid": "3f96a3cd2e3f795072567a3f3c8ccc46", "text": "Good corporate reputations are critical because of their potential for value creation, but also because their intangible character makes replication by competing firms considerably more difficult. Existing empirical research confirms that there is a positive relationship between reputation and financial performance. This paper complements these findings by showing that firms with relatively good reputations are better able to sustain superior profit outcomes over time. In particular, we undertake an analysis of the relationship between corporate reputation and the dynamics of financial performance using two complementary dynamic models. We also decompose overall reputation into a component that is predicted by previous financial performance, and that which is ‘left over’, and find that each (orthogonal) element supports the persistence of above-average profits over time. 
Copyright  2002 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "e9b5dc63f981cc101521d8bbda1847d5", "text": "The unsupervised image-to-image translation aims at finding a mapping between the source (A) and target (B) image domains, where in many applications aligned image pairs are not available at training. This is an ill-posed learning problem since it requires inferring the joint probability distribution from marginals. Joint learning of coupled mappings FAB : A → B and FBA : B → A is commonly used by the state-of-the-art methods, like CycleGAN (Zhu et al., 2017), to learn this translation by introducing cycle consistency requirement to the learning problem, i.e. FAB(FBA(B)) ≈ B and FBA(FAB(A)) ≈ A. Cycle consistency enforces the preservation of the mutual information between input and translated images. However, it does not explicitly enforce FBA to be an inverse operation to FAB. We propose a new deep architecture that we call invertible autoencoder (InvAuto) to explicitly enforce this relation. This is done by forcing an encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters. The mappings are constrained to be orthonormal. The resulting architecture leads to the reduction of the number of trainable parameters (up to 2 times). We present image translation results on benchmark data sets and demonstrate state-of-the art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that the videos converted with InvAuto have high quality and show that the NVIDIA neural-network-based end-toend learning system for autonomous driving, known as PilotNet, trained on real road videos performs well when tested on the converted ones.", "title": "" }, { "docid": "288845120cdf96a20850b3806be3d89a", "text": "DNA replicases are multicomponent machines that have evolved clever strategies to perform their function. Although the structure of DNA is elegant in its simplicity, the job of duplicating it is far from simple. At the heart of the replicase machinery is a heteropentameric AAA+ clamp-loading machine that couples ATP hydrolysis to load circular clamp proteins onto DNA. The clamps encircle DNA and hold polymerases to the template for processive action. Clamp-loader and sliding clamp structures have been solved in both prokaryotic and eukaryotic systems. The heteropentameric clamp loaders are circular oligomers, reflecting the circular shape of their respective clamp substrates. Clamps and clamp loaders also function in other DNA metabolic processes, including repair, checkpoint mechanisms, and cell cycle progression. Twin polymerases and clamps coordinate their actions with a clamp loader and yet other proteins to form a replisome machine that advances the replication fork.", "title": "" }, { "docid": "46ac5e994ca0bf0c3ea5dd110810b682", "text": "The Geosciences and Geography are not just yet another application area for semantic technologies. The vast heterogeneity of the involved disciplines ranging from the natural sciences to the social sciences introduces new challenges in terms of interoperability. Moreover, the inherent spatial and temporal information components also require distinct semantic approaches. For these reasons, geospatial semantics, geo-ontologies, and semantic interoperability have been active research areas over the last 20 years. 
The geospatial semantics community has been among the early adopters of the Semantic Web, contributing methods, ontologies, use cases, and datasets. Today, geographic information is a crucial part of many central hubs on the Linked Data Web. In this editorial, we outline the research field of geospatial semantics, highlight major research directions and trends, and glance at future challenges. We hope that this text will be valuable for geoscientists interested in semantics research as well as knowledge engineers interested in spatiotemporal data. Introduction and Motivation While the Web has changed with the advent of the Social Web from mostly authoritative content towards increasing amounts of user generated information, it is essentially still about linked documents. These documents provide structure and context for the described data and ease their interpretation. In contrast, the evolving Data Web is about linking data, not documents. Such datasets are not bound to a specific document but can be easily combined and used outside of their original creation context. With a growth rate of millions of new facts encoded as RDF-triples per month, the Linked Data cloud allows users to answer complex queries spanning multiple, heterogeneous data sources from different scientific domains. However, this uncoupling of data from its creation context makes the interpretation of data challenging. Thus, research on semantic interoperability and ontologies is crucial to ensure consistency and meaningful results. Space and time are fundamental ordering principles to structure such data and provide an implicit context for their interpretation. Hence, it is not surprising that many linked datasets either contain spatiotemporal identifiers themselves or link out to such datasets, making them central hubs of the Linked Data cloud. Prominent examples include Geonames.org as well as the Linked Geo Data project, which provides an RDF serialization of Points Of Interest from Open Street Map [103]. Besides such Voluntary Geographic Information (VGI), governments", "title": "" }, { "docid": "2aefddf5e19601c8338f852811cebdee", "text": "This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed which simply consists of the combination of a camera with a wide field of view lens and a wheeled computer mouse.", "title": "" }, { "docid": "37572963400c8a78cef3cd4a565b328e", "text": "The impressive performance of utilizing deep learning or neural network has attracted much attention in both the industry and research communities, especially towards computer vision aspect related applications. Despite its superior capability of learning, generalization and interpretation on various form of input, micro-expression analysis field is yet remains new in applying this kind of computing system in automated expression recognition system. 
A new feature extractor, BiVACNN is presented in this paper, where it first estimates the optical flow fields from the apex frame, then encode the flow fields features using CNN. Concretely, the proposed method consists of three stages: apex frame acquisition, multivariate features formation and feature learning using CNN. In the multivariate features formation stage, we attempt to derive six distinct features from the apex details, which include: the apex itself, difference between the apex and onset frames, horizontal optical flow, vertical optical flow, magnitude and orientation. It is demonstrated that utilizing the horizontal and vertical optical flow capable to achieve 80% recognition accuracy in CASME II and SMIC-HS databases.", "title": "" }, { "docid": "9d37260c493c40523c268f6e54c8b4ea", "text": "Social collaborative filtering recommender systems extend the traditional user-to-item interaction with explicit user-to-user relationships, thereby allowing for a wider exploration of correlations among users and items, that potentially lead to better recommendations. A number of methods have been proposed in the direction of exploring the social network, either locally (i.e. the vicinity of each user) or globally. In this paper, we propose a novel methodology for collaborative filtering social recommendation that tries to combine the merits of both the aforementioned approaches, based on the soft-clustering of the Friend-of-a-Friend (FoaF) network of each user. This task is accomplished by the non-negative factorization of the adjacency matrix of the FoaF graph, while the edge-centric logic of the factorization algorithm is ameliorated by incorporating more general structural properties of the graph, such as the number of edges and stars, through the introduction of the exponential random graph models. The preliminary results obtained reveal the potential of this idea.", "title": "" }, { "docid": "6604a90f21796895300d37cefed5b6fa", "text": "Distributed power system network is going to be complex, and it will require high-speed, reliable and secure communication systems for managing intermittent generation with coordination of centralised power generation, including load control. Cognitive Radio (CR) is highly favourable for providing communications in Smart Grid by using spectrum resources opportunistically. The IEEE 802.22 Wireless Regional Area Network (WRAN) having the capabilities of CR use vacant channels opportunistically in the frequency range of 54 MHz to 862 MHz occupied by TV band. A comprehensive review of using IEEE 802.22 for Field Area Network in power system network using spectrum sensing (CR based communication) is provided in this paper. The spectrum sensing technique(s) at Base Station (BS) and Customer Premises Equipment (CPE) for detecting the presence of incumbent in order to mitigate interferences is also studied. The availability of backup and candidate channels are updated during “Quite Period” for further use (spectrum switching and management) with geolocation capabilities. The use of IEEE 802.22 for (a) radio-scene analysis, (b) channel identification, and (c) dynamic spectrum management are examined for applications in power management.", "title": "" }, { "docid": "e8403145a3d4a8a75348075410683e28", "text": "This paper presents a current-reuse complementary-input (CRCI) telescopic-cascode chopper stabilized amplifier with low-noise low-power operation. 
The current-reuse complementary-input strategy doubles the amplifier's effective transconductance by full current-reuse between complementary inputs, which significantly improves the noise-power efficiency. A pseudo-resistor based integrator is used in the DC servo loop to generate a high-pass cutoff below 1 Hz. The proposed amplifier features a mid-band gain of 39.25 dB, bandwidth from 0.12 Hz to 7.6 kHz, and draws 2.57 μA from a 1.2-V supply and exhibits an input-referred noise of 3.57 μVrms integrated from 100 mHz to 100 kHz, corresponding to a noise efficiency factor (NEF) of 2.5. The amplifier is designed in 0.13 μm 8-metal CMOS process.", "title": "" }, { "docid": "6c92652aa5bab1b25910d16cca697d48", "text": "Intrusion detection has attracted a considerable interest from researchers and industries. The community, after many years of research, still faces the problem of building reliable and efficient IDS that are capable of handling large quantities of data, with changing patterns in real time situations. The work presented in this manuscript classifies intrusion detection systems (IDS). Moreover, a taxonomy and survey of shallow and deep networks intrusion detection systems is presented based on previous and current works. This taxonomy and survey reviews machine learning techniques and their performance in detecting anomalies. Feature selection which influences the effectiveness of machine learning (ML) IDS is discussed to explain the role of feature selection in the classification and training phase of ML IDS. Finally, a discussion of the false and true positive alarm rates is presented to help researchers model reliable and efficient machine learning based intrusion detection systems. Keywords— Shallow network, Deep networks, Intrusion detection, False positive alarm rates and True positive alarm rates 1.0 INTRODUCTION Computer networks have developed rapidly over the years contributing significantly to social and economic development. International trade, healthcare systems and military capabilities are examples of human activity that increasingly rely on networks. This has led to an increasing interest in the security of networks by industry and researchers. The importance of Intrusion Detection Systems (IDS) is critical as networks can become vulnerable to attacks from both internal and external intruders [1], [2]. An IDS is a detection system put in place to monitor computer networks. These have been in use since the 1980’s [3]. By analysing patterns of captured data from a network, IDS help to detect threats [4]. These threats can be devastating, for example, Denial of service (DoS) denies or prevents legitimate users resource on a network by introducing unwanted traffic [5]. Malware is another example, where attackers use malicious software to disrupt systems [6].", "title": "" }, { "docid": "27401a6fe6a1edb5ba116db4bbdc7bcc", "text": "Robot warehouse automation has attracted significant interest in recent years, perhaps most visibly in the Amazon Picking Challenge (APC) [1]. A fully autonomous warehouse pick-and-place system requires robust vision that reliably recognizes and locates objects amid cluttered environments, self-occlusions, sensor noise, and a large variety of objects. In this paper we present an approach that leverages multiview RGB-D data and self-supervised, data-driven learning to overcome those difficulties. The approach was part of the MIT-Princeton Team system that took 3rd- and 4th-place in the stowing and picking tasks, respectively at APC 2016. 
In the proposed approach, we segment and label multiple views of a scene with a fully convolutional neural network, and then fit pre-scanned 3D object models to the resulting segmentation to get the 6D object pose. Training a deep neural network for segmentation typically requires a large amount of training data. We propose a self-supervised method to generate a large labeled dataset without tedious manual segmentation. We demonstrate that our system can reliably estimate the 6D pose of objects under a variety of scenarios. All code, data, and benchmarks are available at http://apc.cs.princeton.edu/", "title": "" }, { "docid": "8e64738b0d21db1ec5ef0220507f3130", "text": "Automatic clothes search in consumer photos is not a trivial problem as photos are usually taken under completely uncontrolled realistic imaging conditions. In this paper, a novel framework is presented to tackle this issue by leveraging low-level features (e.g., color) and high-level features (attributes) of clothes. First, a content-based image retrieval (CBIR) approach based on the bag-of-visual-words (BOW) model is developed as our baseline system, in which a codebook is constructed from extracted dominant color patches. A reranking approach is then proposed to improve search quality by exploiting clothes attributes, including the type of clothes, sleeves, patterns, etc. The experiments on photo collections show that our approach is robust to large variations of images taken in unconstrained environments, and the reranking algorithm based on attribute learning significantly improves retrieval performance in combination with the proposed baseline.", "title": "" }, { "docid": "e82e44e851486b557948a63366486fef", "text": "Combinatorial and algorithmic aspects of identifying codes in graphs. Abstract: An identifying code is a set of vertices of a graph such that, on the one hand, each vertex out of the code has a neighbour in the code (the domination property), and, on the other hand, all vertices have a distinct neighbourhood within the code (the separation property). In this thesis, we investigate combinatorial and algorithmic aspects of identifying codes. For the combinatorial part, we first study extremal questions by giving a complete characterization of all finite undirected graphs having their order minus one as the minimum size of an identifying code. We also characterize finite directed graphs, infinite undirected graphs and infinite oriented graphs having their whole vertex set as the unique identifying code. These results answer open questions that were previously studied in the literature. We then study the relationship between the minimum size of an identifying code and the maximum degree of a graph. In particular, we give several upper bounds for this parameter as a function of the order and the maximum degree. These bounds are obtained using two techniques. The first one consists in the construction of independent sets satisfying certain properties, and the second one is the combination of two tools from the probabilistic method: the Lovász local lemma and a Chernoff bound. We also provide constructions of graph families related to this type of upper bounds, and we conjecture that they are optimal up to an additive constant. We also present new lower and upper bounds for the minimum cardinality of an identifying code in specific graph classes.
We study graphs of girth at least 5 and of given minimum degree by showing that the combination of these two parameters has a strong influence on the minimum size of an identifying code. We apply these results to random regular graphs. Then, we give lower bounds on the size of a minimum identifying code of interval and unit interval graphs. Finally, we prove several lower and upper bounds for this parameter when considering line graphs. The latter question is tackled using the new notion of an edge-identifying code. For the algorithmic part, it is known that the decision problem associated with the notion of an identifying code is NP-complete, even for restricted graph classes. We extend the known results to other classes such as split graphs, co-bipartite graphs, line graphs or interval graphs. To this end, we propose polynomial-time reductions from several classical hard algorithmic problems. These results show that in many graph classes, the identifying code problem is computationally more difficult than related problems (such as the dominating set problem). Furthermore, we extend the knowledge of the approximability of the optimization problem associated to identifying codes. We extend the known result of NP-hardness of approximating this problem within a sub-logarithmic factor (as a function of the instance graph) to bipartite, split and co-bipartite graphs, respectively. We also extend the known result of its APX-hardness for graphs of given maximum degree to a subclass of split graphs, bipartite graphs of maximum degree 4 and line graphs. Finally, we show the existence of a PTAS algorithm for unit interval graphs.", "title": "" }, { "docid": "bef317c450503a7f2c2147168b3dd51e", "text": "With the development of the Internet of Things (IoT) and the usage of low-powered devices (sensors and effectors), a large number of people are using IoT systems in their homes and businesses to have more control over their technology. However, a key challenge of IoT systems is data protection in case the IoT device is lost, stolen, or used by one of the owner's friends or family members. The problem studied here is how to protect the access to data of an IoT system. To solve the problem, an attribute-based access control (ABAC) mechanism is applied to give the system the ability to apply policies to detect any unauthorized entry. Finally, a prototype was built to test the proposed solution. The evaluation plan was applied on the proposed solution to test the performance of the system.", "title": "" }, { "docid": "3d2e82a0353d0b2803a579c413403338", "text": "In 1994, nutritional facts panels became mandatory for processed foods to improve consumer access to nutritional information and to promote healthy food choices. Recent applied work is reviewed here in terms of how consumers value and respond to nutritional labels. We first summarize the health and nutritional links found in the literature and frame this discussion in terms of the obesity policy debate. Second, we discuss several approaches that have been used to empirically investigate consumer responses to nutritional labels: (a) surveys, (b) nonexperimental approaches utilizing revealed preferences, and (c) experiment-based approaches. We conclude with a discussion and suggest avenues of future research. INTRODUCTION How the provision of nutritional information affects consumers' food choices and whether consumers value nutritional information are particularly pertinent questions in a country where obesity is pervasive. Firms typically have more information about the quality of their products than do consumers, creating a situation of asymmetric information. It is prohibitively costly for most consumers to acquire nutritional information independently of firms.
Firms can use this information to signal their quality and to receive quality premiums. However, firms that sell less nutritious products prefer to omit nutritional information. In this market setting, firms may not have an incentive to fully reveal their product quality, may try to highlight certain attributes in their advertising claims while shrouding others (Gabaix & Laibson 2006), or may provide information in a less salient fashion (Chetty et al. 2007). Mandatory nutritional labeling can fill this void of information provision by correcting asymmetric information and transforming an experience-good or a credence-good characteristic into search-good characteristics (Caswell & Mojduszka 1996). Golan et al. (2000) argue that the effectiveness of food labeling depends on firms' incentives for information provision, government information requirements, and the role of third-party entities in standardizing and certifying the accuracy of the information. Yet nutritional information is valuable only if consumers use it in some fashion. Early advances in consumer choice theory, such as market goods possessing desirable characteristics (Lancaster 1966) or market goods used in conjunction with time to produce desirable commodities (Becker 1965), set the theoretical foundation for studying how market prices, household characteristics, incomes, nutrient content, and taste considerations interact with and influence consumer choice. LaFrance (1983) develops a theoretical framework and estimates the marginal value of nutrient versus taste parameters in an analytical approach that imposes a sufficient degree of restrictions to generality to be empirically feasible. Real or perceived tradeoffs between nutritional and taste or pleasure considerations imply that consumers will not necessarily make healthier choices. Reduced search costs mean that consumers can more easily make choices that maximize their utility. Foster & Just (1989) provide a framework in which to analyze the effect of information on consumer choice and welfare in this context. They argue that when consumers are uncertain about product quality, the provision of information can help to better align choices with consumer preferences. However, consumers may not use nutritional labels because consumers still require time and effort to process the information. Reading a nutritional facts panel (NFP), for instance, necessitates that the consumer remove the product from the shelf and turn the product to read the nutritional information on the back or side. In addition, consumers often have difficulty evaluating the information provided on the NFP or how to relate it to a healthy diet. Berning et al. (2008) present a simple model of demand for nutritional information. The consumer chooses to consume goods and information to maximize utility subject to budget and time constraints, which include time to acquire and to process nutritional information. Consumers who have strong preferences for nutritional content will acquire more nutritional information.
Alternatively, other consumers may derive more utility from appearance or taste. Following Becker & Murphy (1993), Berning et al. show that nutritional information may act as a complement to the consumption of products with unknown nutritional quality, similar to the way advertisements complement advertised goods. From a policy perspective, the rise in the U.S. obesity rate coupled with the asymmetry of information have resulted in changes in the regulatory environment. The U.S. Food and Drug Administration (FDA) is currently considering a change to the format and content of nutritional labels, originally implemented in 1994 to promote increased label use. Consumers' general understanding of the link between food consumption and health, and widespread interest in the provision of nutritional information on food labels, is documented in the existing literature (e.g., Williams 2005, Grunert & Wills 2007). Yet only approximately half of consumers claim to use NFPs when making food purchasing decisions (Blitstein & Evans 2006). Moreover, self-reported consumer use of nutritional labels has declined from 1995 to 2006, with the largest decline for younger age groups (20–29 years) and less educated consumers (Todd & Variyam 2008). This decline supports research findings that consumers prefer for short front label claims over the NFP's lengthy back label explanations (e.g., Levy & Fein 1998, Wansink et al. 2004, Williams 2005, Grunert & Wills 2007). Furthermore, regulatory rules and enforcement policies may have induced firms to move away from reinforcing nutritional claims through advertising (e.g., Ippolito & Pappalardo 2002). Finally, critical media coverage of regulatory challenges (e.g., Nestle 2000) may have contributed to decreased labeling usage over time. Excellent review papers on this topic preceded and inspired this present review (e.g., Baltas 2001, Williams 2005, Drichoutis et al. 2006). In particular, Drichoutis et al. (2006) reviews the nutritional labeling literature and addresses specific issues regarding the determinants of label use, the debate on mandatory labeling, label formats preferred by consumers, and the effect of nutritional label use on purchase and dietary behavior. The current review article updates and complements these earlier reviews by focusing on recent work and highlighting major contributions in applied analyses on how consumers value, utilize, and respond to nutritional labels. We first cover the health and nutritional aspects of consumer food choices found in the literature to frame the discussion on nutritional labels in the context of the recent debate on obesity prevention policies. Second, we discuss the different empirical approaches that are utilized to investigate consumers' response to and valuation of nutritional labels, classifying existing work into three categories according to the empirical strategy and data sources. First, we present findings based on consumer surveys and stated consumer responses to labels.
The second set of articles reviewed utilizes nonexperimental data and focuses on estimating consumer valuation of labels on the basis of revealed preferences. Here, the empirical strategy is structural, using hedonic methods, structural demand analyses, or discrete choice models and allowing for estimation of consumers' willingness to pay (WTP) for nutritional information. The last set of empirical contributions discussed is based on experimental data, differentiating market-level and natural experiments from laboratory evidence. These studies employ mainly reduced-form approaches. Finally, we conclude with a discussion of avenues for future research. CONSUMER FOOD DEMAND, NUTRITIONAL LABELS, AND OBESITY PREVENTION The U.S. Department of Health and Human Services declared the reduction of obesity rates to less than 15% to be one of the national health objectives for 2010, yet in 2009 no state met these targets, with only two states reporting obesity rates less than 20% (CDC 2010). Researchers have studied and identified many contributing factors, such as the decreasing relative price of calorie-dense food (Chou et al. 2004) and marketing practices that took advantage of behavioral reactions to food (Smith 2004). Other researchers argue that an increased prevalence of fast food (Cutler et al. 2003) and increased portion sizes in restaurants and at home (Wansink & van Ittersum 2007) may be the driving factors of increased food consumption. In addition, food psychologists have focused on changes in the eating environment, pointing to distractions such as television, books, conversation with others, or preoccupation with work as leading to increased food intake (Wansink 2004). Although each of these factors potentially contributes to the obesity epidemic, they do not necessarily mean that consumers wi", "title": "" }, { "docid": "c3e2ceebd3868dd9fff2a87fdd339dce", "text": "Augmented Reality (AR) holds unique and promising potential to bridge between real-world activities and digital experiences, allowing users to engage their imagination and boost their creativity. We propose the concept of Augmented Creativity as employing AR on modern mobile devices to enhance real-world creative activities, support education, and open new interaction possibilities. We present six prototype applications that explore and develop Augmented Creativity in different ways, cultivating creativity through AR interactivity. Our coloring book app bridges coloring and computer-generated animation by allowing children to create their own character design in an AR setting. Our music apps provide a tangible way for children to explore different music styles and instruments in order to arrange their own version of popular songs. In the gaming domain, we show how to transform passive game interaction into active real-world movement that requires coordination and cooperation between players, and how AR can be applied to city-wide gaming concepts. We employ the concept of Augmented Creativity to author interactive narratives with an interactive storytelling framework. Finally, we examine how Augmented Creativity can provide a more compelling way to understand complex concepts, such as computer programming.", "title": "" }, { "docid": "583d2f754a399e8446855b165407f6ee", "text": "In this work, classification of cellular structures in high-resolution histopathological images and the discrimination of cellular and non-cellular structures have been investigated.
The cell classification is a very exhaustive and time-consuming process for pathologists in medicine. The development of digital imaging in histopathology has enabled the generation of reasonable and effective solutions to this problem. Moreover, the classification of digital data provides easier analysis of cell structures in histopathological data. Convolutional neural network (CNN), constituting the main theme of this study, has been proposed with different spatial window sizes in RGB color spaces. Hence, to improve the accuracies of classification results obtained by supervised learning methods, spatial information must also be considered. So, spatial dependencies of cell and non-cell pixels can be evaluated within different pixel neighborhoods in this study. In the experiments, the CNN performs better than other pixel classification methods, including SVM and k-Nearest Neighbour (k-NN). At the end of this paper, several possible directions for future research are also proposed.", "title": "" }, { "docid": "20e5855c2bab00b7f91cca5d7bd07245", "text": "The increase in the number and complexity of biological databases has raised the need for modern and powerful data analysis tools and techniques. In order to fulfill these requirements, the machine learning discipline has become an everyday tool in bio-laboratories. The use of machine learning techniques has been extended to a wide spectrum of bioinformatics applications. It is broadly used to investigate the underlying mechanisms and interactions between biological molecules in many diseases, and it is an essential tool in any biomarker discovery process. In this chapter, we provide a basic taxonomy of machine learning algorithms, and the characteristics of main data preprocessing, supervised classification, and clustering techniques are shown. Feature selection, classifier evaluation, and two supervised classification topics that have a deep impact on current bioinformatics are presented. We make the interested reader aware of a set of popular web resources, open source software tools, and benchmarking data repositories that are frequently used by the machine", "title": "" } ]
scidocsrr
5b64d5546765f7ad18ec9b4bda17a71f
Investigation of friction characteristics of a tendon driven wearable robotic hand
[ { "docid": "030b25a7c93ca38dec71b301843c7366", "text": "Simple grippers with one or two degrees of freedom are commercially available prosthetic hands; these pinch type devices cannot grasp small cylinders and spheres because of their small degree of freedom. This paper presents the design and prototyping of underactuated five-finger prosthetic hand for grasping various objects in daily life. Underactuated mechanism enables the prosthetic hand to move fifteen compliant joints only by one ultrasonic motor. The innovative design of this prosthetic hand is the underactuated mechanism optimized to distribute grasping force like those of humans who can grasp various objects robustly. Thanks to human like force distribution, the prototype of prosthetic hand could grasp various objects in daily life and heavy objects with the maximum ejection force of 50 N that is greater than other underactuated prosthetic hands.", "title": "" }, { "docid": "720eccb945faa357bc44c5aa33fe60a9", "text": "The evolution of an arm exoskeleton design for treating shoulder pathology is examined. Tradeoffs between various kinematics configurations are explored, and a device with five active degrees of freedom is proposed. Two rapid-prototype designs were built and fitted to several subjects to verify the kinematic design and determine passive link adjustments. Control modes are developed for exercise therapy and functional rehabilitation, and a distributed software architecture that incorporates computer safety monitoring is described. Although intended primarily for therapy, the exoskeleton is also used to monitor progress in strength, range of motion, and functional task performance", "title": "" } ]
[ { "docid": "fb2ff96dbfe584f450dd19f8d3cea980", "text": "[1] Nondestructive imaging methods such as X-ray computed tomography (CT) yield high-resolution, three-dimensional representations of pore space and fluid distribution within porous materials. Steadily increasing computational capabilities and easier access to X-ray CT facilities have contributed to a recent surge in microporous media research with objectives ranging from theoretical aspects of fluid and interfacial dynamics at the pore scale to practical applications such as dense nonaqueous phase liquid transport and dissolution. In recent years, significant efforts and resources have been devoted to improve CT technology, microscale analysis, and fluid dynamics simulations. However, the development of adequate image segmentation methods for conversion of gray scale CT volumes into a discrete form that permits quantitative characterization of pore space features and subsequent modeling of liquid distribution and flow processes seems to lag. In this paper we investigated the applicability of various thresholding and locally adaptive segmentation techniques for industrial and synchrotron X-ray CT images of natural and artificial porous media. A comparison between directly measured and image-derived porosities clearly demonstrates that the application of different segmentation methods as well as associated operator biases yield vastly differing results. This illustrates the importance of the segmentation step for quantitative pore space analysis and fluid dynamics modeling. Only a few of the tested methods showed promise for both industrial and synchrotron tomography. Utilization of local image information such as spatial correlation as well as the application of locally adaptive techniques yielded significantly better results.", "title": "" }, { "docid": "79a2cc561cd449d8abb51c162eb8933d", "text": "We introduce a new test of how well language models capture meaning in children’s books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lowerfrequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read. We show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words, although this advantage is not observed for syntactic function words. Interestingly, we find that the amount of text encoded in a single memory representation is highly influential to the performance: there is a sweet-spot, not too big and not too small, between single words and full sentences that allows the most meaningful information in a text to be effectively retained and recalled. Further, the attention over such window-based memories can be trained effectively through self-supervision. We then assess the generality of this principle by applying it to the CNN QA benchmark, which involves identifying named entities in paraphrased summaries of news articles, and achieve state-of-the-art performance.", "title": "" }, { "docid": "7f1eb105b7a435993767e4a4b40f7ed9", "text": "In the last two decades, organizations have recognized, indeed fixated upon, the impOrtance of quality and quality management One manifestation of this is the emergence of the total quality management (TQM) movement, which has been proclaimed as the latest and optimal way of managing organizations. 
Likewise, in the domain of human resource management, the concept of quality of work life (QWL) has also received much attention of late from theoreticians, researchers, and practitioners. However, little has been done to build a bridge between these two increasingly important concepts, QWL and TQM. The purpose of this research is to empirically examine the relationship between quality of work life (the internalized attitudes employees' have about their jobs) and an indicatorofTQM, customer service attitudes, CSA (the externalized signals employees' send to customers about their jobs). In addition, this study examines how job involvement and organizational commitment mediate the relationship between QWL and CSA. OWL and <:sA HlU.3 doc JJ a9t94 page 3 INTRODUCTION Quality and quality management have become increasingly important topics for both practitioners and researchers (Anderson, Rungtusanatham, & Schroeder, 1994). Among the many quality related activities that have arisen, the principle of total quality mana~ement (TQM) has been advanced as the optimal approach for managing people and processes. Indeed, it is considered by some to be the key to ensuring the long-term viability of organizations (Feigenbaum, 1982). Ofcourse, niany companies have invested heavily in total quality efforts in the form of capital expenditures on plant and equipment, and through various human resource management programs designed to spread the quality gospel. However, many still argue that there is insufficient theoretical development and empirical eviden~e for the determinants and consequences of quality management initiatives (Dean & Bowen, 1994). Mter reviewing the relevant research literatures, we find that three problems persist in the research on TQM. First, a definition of quality has not been agreed upon. Even more problematic is the fact that many of the definitions that do exist are continuously evolving. Not smprisingly, these variable definitions often lead to inconsistent and even conflicting conclusions, Second, very few studies have systematically examined these factors that influence: the quality of goods and services, the implementation of quality activities, or the performance of organizations subsequent to undertaking quality initiatives (Spencer, 1994). Certainly this has been true for quality-related human resource management interventions. Last, TQM has suffered from an \"implementation problem\" (Reger, Gustafson, Demarie, & Mullane, 1994, p. 565) which has prevented it from transitioning from the theoretical to the applied. In the domain of human resource management, quality of working life (QWL) has also received a fair amount of attention of late from theorists, researchers, and practitioners. The underlying, and mostimportant, principles of QWL capture an employee's satisfaction with and feelings about their: work, work environment, and organization. Most who study QWL, and TQM for that matter, tend to focus on the importance of employee systems and organizational performance, whereas researchers in the field ofHRM OWLmdCSA HlU.3doc 1J1l2f}4 pBgc4 usually emphasize individual attitudes and individual performance (Walden, 1994). 
Fmthennore, as Walden (1994) alludes to, there are significantly different managerial prescriptions and applied levels for routine human resource management processes, such as selection, performance appraisal, and compensation, than there are for TQM-driven processes, like teamwork, participative management, and shared decision-making (Deming, 1986, 1993; Juran, 1989; M. Walton, 1986; Dean & Bowen, 1994). To reiterate, these variations are attributable to the difference between a mico focus on employees as opposed to a more macrofocus on employee systems. These specific differences are but a few of the instances where the views of TQM and the views of traditional HRM are not aligned (Cardy & Dobbins, 1993). In summary, although TQM is a ubiquitous organizational phenomenon; it has been given little research attention, especially in the form ofempirical studies. Therefore, the goal of this study is to provide an empirical assessment of how one, internalized, indicator ofHRM effectiveness, QWL, is associated with one, externalized, indicator of TQM, customer service attitudes, CSA. In doing so, it bridges the gap between \"employee-focused\" H.RM outcoines and \"customer-focused\" TQM consequences. In addition, it examines the mediating effects of organizational commitment and job involvement on this relationship. QUALITY OF WORK LIFE AND CUSTOMER SERVICE AITITUDES In this section, we introduce and review the main principles of customer service attitudes, CSA, and discuss its measurement Thereafter, our extended conceptualization and measurement of QWL will be presented. Fmally, two variables hypothesized to function as mediators of the relationship between CSA and QWL, organization commitment and job involvement, will be· explored. Customer Service Attitudes (CSA) Despite all the ruminations about it in the business and trade press, TQM still remains an ambiguous notion, one that often gives rise to as many different definitions as there are observers. Some focus on the presence of organizational systems. Others, the importance of leadership. ., Many stress the need to reduce variation in organizational processes (Deming, 1986). A number · OWL and CSA mn.3 doc 11 fl9tlJ4 page 5 emphasize reducing costs through q~ty improvement (p.B. Crosby, 1979). Still others focus on quality planing, control, and improvement (Juran, 1989). Regardless of these differences, however, the most important, generally agreed upon principle is to be \"customer focused\" (Feigenbaum, 1982). The cornerstone for this principle is the belief that customer satisfaction and customer judgments about the organization and itsproducts are the most important determinants of long-term organizational viability (Oliva, Oliver & MacMillan, 1992). Not surprisingly, this belief is a prominent tenet in both the manufacturing and service sectors alike. Conventional wisdom holds that quality can best be evaluated from the customers' perspective. Certainly, customers can easily articulate how well a product or service meets their expectations. Therefore, managers and researchers must take into account subjective and cognitive factors that influence customers' judgments when trying to identify influential customer cues, rather than just relying on organizational presumptions. Recently, for example, Hannon & Sano (1994) described how customer-driven HR strategies and practices are pervasive in Japan. 
An example they cited was the practice of making the tOp graduates from the best schools work in low level, customer service jobs for their first 1-2 years so that they might better underst3nd customers and their needs. To be sure, defining quality in terms of whether a product or service meets the expectations ofcustomers is all-encompassing. As a result of the breadth of this issue, and the limited research on this topic, many importantquestions about the service relationship, particularly those penaining to exchanges between employees and customers, linger. Some include, \"What are the key dimensions of service quality?\" and \"What are the actions service employees might direct their efforts to in order to foster good relationships with customers?\" Arguably, the most readily obvious manifestations of quality for any customer are the service attitudes ofemployees. In fact, dming the employee-customer interaction, conventional wisdom holds that employees' customer service attitudes influence customer satisfaction, customer evaluations, and decisions to buy. . OWL and <:SA HJU.3,doc J J129m page 6 According to Rosander (1980), there are five dimensions of service quality: quality of employee performance, facility, data, decision, and outcome. Undoubtedly, the performance of the employee influences customer satisfaction. This phenomenon has been referred to as interactive quality (Lehtinen & Lehtinen, 1982). Parasuraman, Zeithaml, & Berry (1985) go so far as to suggest that service quality is ultimately a function of the relationship between the employee and the customer, not the product or the price. Sasser, Olsen, & Wyckoff (1987) echo the assertion that personnel performance is a critical factor in the satisfaction of customers. If all of them are right, the relationship between satisfaction with quality of work life and customer service attitudes cannot be understated. Measuring Customer Service Attitudes The challenge of measuring service quality has increasingly captured the attention of researchers (Teas, 1994; Cronin & Taylor, 1992). While the substance and determinants of quality may remain undefined, its importance to organizations is unquestionable. Nevertheless, numerous problems inherent in the measurement of customer service attitudes still exist (Reeves & Bednar, 1994). Perhaps the complexities involved in measuring this construct have deterred many researchers from attempting to define and model service quality. Maybe this is also the reason why many of the efforts to define and measure service quality have emanated primarily from manufacturing, rather than service, settings. When it has been measured, quality has sometimes been defined as a \"zero defect\" policy, a perspective the Japanese have embraced. Alternatively, P.B. Crosby (1979) quantifies quality as \"conformance to requirements.\" Garvin (1983; 1988), on the other hand, measures quality in terms ofcounting the incidence of \"internal failures\" and \"external failures.\" Other definitions include \"value\" (Abbot, 1955; Feigenbaum, 1982), \"concordance to specification'\" (Gilmo", "title": "" }, { "docid": "83187228617d62fb37f99cf107c7602a", "text": "A very important class of spatial queries consists of nearestneighbor (NN) query and its variations. Many studies in the past decade utilize R-trees as their underlying index structures to address NN queries efficiently. The general approach is to use R-tree in two phases. 
First, R-tree’s hierarchical structure is used to quickly arrive to the neighborhood of the result set. Second, the R-tree nodes intersecting with the local neighborhood (Search Region) of an initial answer are investigated to find all the members of the result set. While R-trees are very efficient for the first phase, they usually result in the unnecessary investigation of many nodes that none or only a small subset of their including points belongs to the actual result set. On the other hand, several recent studies showed that the Voronoi diagrams are extremely efficient in exploring an NN search region, while due to lack of an efficient access method, their arrival to this region is slow. In this paper, we propose a new index structure, termed VoR-Tree that incorporates Voronoi diagrams into R-tree, benefiting from the best of both worlds. The coarse granule rectangle nodes of R-tree enable us to get to the search region in logarithmic time while the fine granule polygons of Voronoi diagram allow us to efficiently tile or cover the region and find the result. Utilizing VoR-Tree, we propose efficient algorithms for various Nearest Neighbor queries, and show that our algorithms have better I/O complexity than their best competitors.", "title": "" }, { "docid": "90e6a1fa70ddec11248ba658623d2d6e", "text": "This paper proposes a new technique for grid synchronization under unbalanced and distorted conditions, i.e., the dual second order generalised integrator - frequency-locked loop (DSOGI-FLL). This grid synchronization system results from the application of the instantaneous symmetrical components method on the stationary and orthogonal alphabeta reference frame. The second order generalized integrator concept (SOGI) is exploited to generate in-quadrature signals used on the alphabeta reference frame. The frequency-adaptive characteristic is achieved by a simple control loop, without using either phase-angles or trigonometric functions. In this paper, the development of the DSOGI-FLL is plainly exposed and hypothesis and conclusions are verified by simulation and experimental results", "title": "" }, { "docid": "026408a6ad888ea0bcf298a23ef77177", "text": "The microwave power transmission is an approach for wireless power transmission. As an important component of a microwave wireless power transmission systems, microwave rectennas are widely studied. A rectenna based on a microstrip dipole antenna and a microwave rectifier with high conversion efficiency were designed at 2.45 GHz. The dipole antenna achieved a gain of 5.2 dBi, a return loss greater than 10 dB, and a bandwidth of 20%. The microwave to DC (MW-DC) conversion efficiency of the rectifier was measured as 83% with 20 dBm input power and 600 Ω load. There are 72 rectennas to form an array with an area of 50 cm by 50 cm. The measured results show that the arrangement of the rectenna connection is an effective way to improve the total conversion efficiency, when the microwave power distribution is not uniform on rectenna array. The experimental results show that the highest microwave power transmission efficiency reaches 67.6%.", "title": "" }, { "docid": "a0f20c2481aefc3b431f708ade0cc1aa", "text": "Objective Video game violence has become a highly politicized issue for scientists and the general public. There is continuing concern that playing violent video games may increase the risk of aggression in players. 
Less often discussed is the possibility that playing violent video games may promote certain positive developments, particularly related to visuospatial cognition. The objective of the current article was to conduct a meta-analytic review of studies that examine the impact of violent video games on both aggressive behavior and visuospatial cognition in order to understand the full impact of such games. Methods A detailed literature search was used to identify peer-reviewed articles addressing violent video game effects. Effect sizes r (a common measure of effect size based on the correlational coefficient) were calculated for all included studies. Effect sizes were adjusted for observed publication bias. Results Results indicated that publication bias was a problem for studies of both aggressive behavior and visuospatial cognition. Once corrected for publication bias, studies of video game violence provided no support for the hypothesis that violent video game playing is associated with higher aggression. However, playing violent video games remained related to higher visuospatial cognition (r x = 0.36). Conclusions Results from the current analysis did not support the conclusion that violent video game playing leads to aggressive behavior. However, violent video game playing was associated with higher visuospatial cognition. It may be advisable to reframe the violent video game debate in reference to potential costs and benefits of this medium.", "title": "" }, { "docid": "bf8f46e4c85f7e45879cee4282444f78", "text": "Influence of culture conditions such as light, temperature and C/N ratio was studied on growth of Haematococcus pluvialis and astaxanthin production. Light had a significant effect on astaxanthin production and it varied with its intensity and direction of illumination and effective culture ratio (ECR, volume of culture medium/volume of flask). A 6-fold increase in astaxanthin production (37 mg/L) was achieved with 5.1468·10⁷ erg·m⁻²·s⁻¹ light intensity (high light, HL) at an effective culture ratio of 0.13 compared to that at 0.52 ECR, while the difference in the astaxanthin production was less than 2-fold between the effective culture ratios at 1.6175·10⁷ erg·m⁻²·s⁻¹ light intensity (low light, LL). Multidirectional (three-directional) light illumination considerably enhanced the astaxanthin production (4-fold) compared to unidirectional illumination. Cell count was high at low temperature (25 °C) while astaxanthin content was high at 35 °C in both autotrophic and heterotrophic media. In a heterotrophic medium at low C/N ratio H. pluvialis growth was higher with prolonged vegetative phase, while high C/N ratio favoured early encystment and higher astaxanthin formation.", "title": "" }, { "docid": "5a85c72c5b9898b010f047ee99dba133", "text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures.
The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.", "title": "" }, { "docid": "4645d0d7b1dfae80657f75d3751ef72a", "text": "Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.", "title": "" }, { "docid": "6e05c3e76e87317db05c43a1f564724a", "text": "Data science or \"data-driven research\" is a research approach that uses real-life data to gain insight about the behavior of systems. It enables the analysis of small, simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks, advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware specific influences. These interactions can lead to a difference between real-world functioning and design time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; (v) provides the reader the necessary datasets and scripts to go through the tutorial steps themselves.", "title": "" }, { "docid": "9db779a5a77ac483bb1991060dca7c28", "text": "An Ambient Intelligence (AmI) environment is primary developed using intelligent agents and wireless sensor networks. The intelligent agents could automatically obtain contextual information in real time using Near Field Communication (NFC) technique and wireless ad-hoc networks. In this research, we propose a stock trading and recommendation system with mobile devices (Android platform) interface in the over-the-counter market (OTC) environments. The proposed system could obtain the real-time financial information of stock price through a multi-agent architecture with plenty of useful features. In addition, NFC is used to achieve a context-aware environment allowing for automatic acquisition and transmission of useful trading recommendations and relevant stock information for investors. 
Finally, AmI techniques are applied to successfully create smart investment spaces, providing investors with useful monitoring tools and investment recommendation.", "title": "" }, { "docid": "cbfdea54abb1e4c1234ca44ca6913220", "text": "Seeds of chickpea (Cicer arietinum L.) were exposed in batches to static magnetic fields of strength from 0 to 250 mT in steps of 50 mT for 1-4 h in steps of 1 h for all fields. Results showed that magnetic field application enhanced seed performance in terms of laboratory germination, speed of germination, seedling length and seedling dry weight significantly compared to unexposed control. However, the response varied with field strength and duration of exposure without any particular trend. Among the various combinations of field strength and duration, 50 mT for 2 h, 100 mT for 1 h and 150 mT for 2 h exposures gave best results. Exposure of seeds to these three magnetic fields improved seed coat membrane integrity as it reduced the electrical conductivity of seed leachate. In soil, seeds exposed to these three treatments produced significantly increased seedling dry weights of 1-month-old plants. The root characteristics of the plants showed dramatic increase in root length, root surface area and root volume. The improved functional root parameters suggest that magnetically treated chickpea seeds may perform better under rainfed (un-irrigated) conditions where there is a restrictive soil moisture regime.", "title": "" }, { "docid": "55b95e06bdf28ebd0b6a1e39875635e2", "text": "As the security landscape evolves over time, where thousands of species of malicious codes are seen every day, antivirus vendors strive to detect and classify malware families for efficient and effective responses against malware campaigns. To enrich this effort and by capitalizing on ideas from the social network analysis domain, we build a tool that can help classify malware families using features driven from the graph structure of their system calls. To achieve that, we first construct a system call graph that consists of system calls found in the execution of the individual malware families. To explore distinguishing features of various malware species, we study social network properties as applied to the call graph, including the degree distribution, degree centrality, average distance, clustering coefficient, network density, and component ratio. We utilize features driven from those properties to build a classifier for malware families. Our experimental results show that “influence-based” graph metrics such as the degree centrality are effective for classifying malware, whereas the general structural metrics of malware are less effective for classifying malware. Our experiments demonstrate that the proposed system performs well in detecting and classifying malware families within each malware class with accuracy greater than 96%.", "title": "" }, { "docid": "26f2b200bf22006ab54051c9288420e8", "text": "Emotion keyword spotting approach can detect emotion well for explicit emotional contents while it obviously cannot compare to supervised learning approaches for detecting emotional contents of particular events. In this paper, we target earthquake situations in Japan as the particular events for emotion analysis because the affected people often show their states and emotions towards the situations via social networking sites. 
Additionally, tracking crowd emotions on the Internet during the earthquakes can help authorities to quickly decide appropriate assistance policies without paying the cost of traditional public surveys. Our three main contributions in this paper are: a) the appropriate choice of emotions; b) the novel proposal of two classification methods for determining the earthquake-related tweets and automatically identifying the emotions in Twitter; c) tracking crowd emotions during different earthquake situations, a completely new application of emotion analysis research. Our main analysis results show that Twitter users show their Fear and Anxiety right after the earthquakes occurred, while Calm and Unpleasantness are not shown clearly during the small earthquakes but in the large tremor.", "title": "" }, { "docid": "417eff5fd6251c70790d69e2b8dae255", "text": "This paper is a report on the first trial of its kind in the development of the performance index of the autonomous mobile cleaning robot. The unique characteristic features of the cleaning robot have been identified as autonomous mobility, dust collection, and operation noise. Along with the identification of the performance indices, the standardized performance-evaluation methods including the corresponding performance evaluation platform for each index have been developed as well. The validity of the proposed performance evaluation methods has been demonstrated by applying the proposed evaluation methods on two commercial cleaning robots available in the market. The proposed performance evaluation methods can be applied to general-purpose autonomous service robots which will be introduced in the consumer market in the near future.", "title": "" }, { "docid": "0f9d6fcd53560c0c0433d64014f2aeb2", "text": "The task of plagiarism detection entails two main steps: suspicious candidate retrieval and pairwise document similarity analysis, also called detailed analysis. In this paper we focus on the second subtask. We will report our monolingual plagiarism detection system which is used to process the Persian plagiarism corpus for the task of pairwise document similarity. To retrieve plagiarised passages, a plagiarism detection method based on the vector space model, insensitive to context reordering, is presented. We evaluate the performance in terms of precision, recall, granularity and plagdet metrics.", "title": "" }, { "docid": "fa851a3828bf6ebf371c49917bab3b4e", "text": "Recent research has documented large differences among countries in ownership concentration in publicly traded firms, in the breadth and depth of capital markets, in dividend policies, and in the access of firms to external finance. A common element to the explanations of these differences is how well investors, both shareholders and creditors, are protected by law from expropriation by the managers and controlling shareholders of firms. We describe the differences in laws and the effectiveness of their enforcement across countries, discuss the possible origins of these differences, summarize their consequences, and assess potential strategies of corporate governance reform. We argue that the legal approach is a more fruitful way to understand corporate governance and its reform than the conventional distinction between bank-centered and market-centered financial systems. © 2000 Elsevier Science S.A. All rights reserved.
JEL classification: G21; G28; G32", "title": "" }, { "docid": "9655259173f749134723f98585a254c1", "text": "With the rapid growth of streaming media applications, there has been a strong demand for Quality-of-Experience (QoE) measurement and QoE-driven video delivery technologies. While the new worldwide standard dynamic adaptive streaming over hypertext transfer protocol (DASH) provides an inter-operable solution to overcome the volatile network conditions, its complex characteristic brings new challenges to the objective video QoE measurement models. How streaming activities such as stalling and bitrate switching events affect QoE is still an open question, and is hardly taken into consideration in the traditional QoE models. More importantly, with an increasing number of objective QoE models proposed, it is important to evaluate the performance of these algorithms in a comparative setting and analyze the strengths and weaknesses of these methods. In this study, we build two subject-rated streaming video databases. The progressive streaming video database is dedicated to investigating the human responses to the combined effect of video compression, initial buffering, and stalling. The adaptive streaming video database is designed to evaluate the performance of adaptive bitrate streaming algorithms and objective QoE models. We also provide useful insights on the improvement of adaptive bitrate streaming algorithms. Furthermore, we propose a novel QoE prediction approach to account for the instantaneous quality degradation due to perceptual video presentation impairment, the playback stalling events, and the instantaneous interactions between them. Twelve QoE algorithms from four categories including signal fidelity-based, network QoS-based, application QoS-based, and hybrid QoE models are assessed in terms of correlation with human perception", "title": "" } ]
scidocsrr
01c3e01d851d2eea8a3d24dcf1cc9afa
New prototype of hybrid 3D-biometric facial recognition system
[ { "docid": "573f12acd3193045104c7d95bbc89f78", "text": "Automatic Face Recognition is one of the most emphasizing dilemmas in diverse of potential relevance like in different surveillance systems, security systems, authentication or verification of individual like criminals etc. Adjoining of dynamic expression in face causes a broad range of discrepancies in recognition systems. Facial Expression not only exposes the sensation or passion of any person but can also be used to judge his/her mental views and psychosomatic aspects. This paper is based on a complete survey of face recognition conducted under varying facial expressions. In order to analyze different techniques, motion-based, model-based and muscles-based approaches have been used in order to handle the facial expression and recognition catastrophe. The analysis has been completed by evaluating various existing algorithms while comparing their results in general. It also expands the scope for other researchers for answering the question of effectively dealing with such problems.", "title": "" } ]
[ { "docid": "ac29d60761976a263629a93167516fde", "text": "Abstruct1-V power supply high-speed low-power digital circuit technology with 0.5-pm multithreshold-voltage CMOS (MTCMOS) is proposed. This technology features both lowthreshold voltage and high-threshold voltage MOSFET’s in a single LSI. The low-threshold voltage MOSFET’s enhance speed Performance at a low supply voltage of 1 V or less, while the high-threshold voltage MOSFET’s suppress the stand-by leakage current during the sleep period. This technology has brought about logic gate characteristics of a 1.7-11s propagation delay time and 0.3-pW/MHz/gate power dissipation with a standard load. In addition, an MTCMOS standard cell library has been developed so that conventional CAD tools can be used to lay out low-voltage LSI’s. To demonstrate MTCMOS’s effectiveness, a PLL LSI based on standard cells was designed as a carrying vehicle. 18-MHz operation at 1 V was achieved using a 0.5-pm CMOS process.", "title": "" }, { "docid": "d63591706309cf602404c34de547184f", "text": "This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge.Note to Practitioners—Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.", "title": "" }, { "docid": "3ea6de664a7ac43a1602b03b46790f0a", "text": "After reviewing the design of a class of lowpass recursive digital filters having integer multiplier and linear phase characteristics, the possibilities for extending the class to include high pass, bandpass, and bandstop (‘notch’) filters are described. Experience with a PDP 11 computer has shown that these filters may be programmed simply using machine code, and that online operation at sampling rates up to about 8 kHz is possible. The practical application of such filters is illustrated by using a notch desgin to remove mains-frequency interference from an e.c.g. waveform. Après avoir passé en revue la conception d'un type de filtres digitaux récurrents passe-bas à multiplicateurs incorporés et à caractéristiques de phase linéaires, cet article décrit les possibilités d'extension de ce type aux filtres, passe-haut, passe-bande et à élimination de bande. 
Une expérience menée avec un ordinateur PDP 11 a indiqué que ces filtres peuvent être programmés de manière simple avec un code machine, et qu'il est possible d'effectuer des opérations en ligne avec des taux d'échantillonnage jusqu'à environ 8 kHz. L'application pratique de tels filtres est illustrée par un exemple dans lequel un filtre à élimination de bande est utilisé pour éliminer les interférences due à la fréquence du courant d'alimentation dans un tracé d'e.c.g. Nach einer Untersuchung der Konstruktion einer Gruppe von Rekursivdigitalfiltern mit niedrigem Durchlässigkeitsbereich und mit ganzzahligen Multipliziereinrichtungen und Linearphaseneigenschaften werden die Möglichkeiten beschrieben, die Gruppe so zu erweitern, daß sie Hochfilter, Bandpaßfilter und Bandstopfilter (“Kerbfilter”) einschließt. Erfahrungen mit einem PDP 11-Computer haben gezeigt, daß diese Filter auf einfache Weise unter Verwendung von Maschinenkode programmiert werden können und daß On-Line-Betrieb bei Entnahmegeschwindigkeiten von bis zu 8 kHz möglich ist. Die praktische Anwendung solcher Filter wird durch Verwendung einer Kerbkonstruktion zur Ausscheidung von Netzfrequenzstörungen von einer ECG-Wellenform illustriert.", "title": "" }, { "docid": "5d21df36697616719bcc3e0ee22a08bd", "text": "In spite of the significant recent progress, the incorporation of haptics into virtual environments is still in its infancy due to limitations in the hardware, the cost of development, as well as the level of reality they provide. Nonetheless, we believe that the field will one day be one of the groundbreaking media of the future. It has its current holdups but the promise of the future is worth the wait. The technology is becoming cheaper and applications are becoming more forthcoming and apparent. If we can survive this infancy, it will promise to be an amazing revolution in the way we interact with computers and the virtual world. The researchers organize the rapidly increasing multidisciplinary research of haptics into four subareas: human haptics, machine haptics, computer haptics, and multimedia haptics", "title": "" }, { "docid": "4c12d10fd9c2a12e56b56f62f99333f3", "text": "The science of large-scale brain networks offers a powerful paradigm for investigating cognitive and affective dysfunction in psychiatric and neurological disorders. This review examines recent conceptual and methodological developments which are contributing to a paradigm shift in the study of psychopathology. I summarize methods for characterizing aberrant brain networks and demonstrate how network analysis provides novel insights into dysfunctional brain architecture. Deficits in access, engagement and disengagement of large-scale neurocognitive networks are shown to play a prominent role in several disorders including schizophrenia, depression, anxiety, dementia and autism. Synthesizing recent research, I propose a triple network model of aberrant saliency mapping and cognitive dysfunction in psychopathology, emphasizing the surprising parallels that are beginning to emerge across psychiatric and neurological disorders.", "title": "" }, { "docid": "705b2a837b51ac5e354e1ec0df64a52a", "text": "BACKGROUND\nGeneralized anxiety disorder (GAD) is a psychiatric disorder characterized by a constant and unspecific anxiety that interferes with daily-life activities. Its high prevalence in general population and the severe limitations it causes, point out the necessity to find new efficient strategies to treat it. 
Together with the cognitive-behavioural treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation that it is hard to be learned. To overcome this limitation we propose the use of virtual reality (VR) to facilitate the relaxation process by visually presenting key relaxing images to the subjects. The visual presentation of a virtual calm scenario can facilitate patients' practice and mastery of relaxation, making the experience more vivid and real than the one that most subjects can create using their own imagination and memory, and triggering a broad empowerment process within the experience induced by a high sense of presence. According to these premises, the aim of the present study is to investigate the advantages of using a VR-based relaxation protocol in reducing anxiety in patients affected by GAD.\n\n\nMETHODS/DESIGN\nThe trial is based on a randomized controlled study, including three groups of 25 patients each (for a total of 75 patients): (1) the VR group, (2) the non-VR group and (3) the waiting list (WL) group. Patients in the VR group will be taught to relax using a VR relaxing environment and audio-visual mobile narratives; patients in the non-VR group will be taught to relax using the same relaxing narratives proposed to the VR group, but without the VR support, and patients in the WL group will not receive any kind of relaxation training. Psychometric and psychophysiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as qualitative dependent variables.\n\n\nCONCLUSION\nWe argue that the use of VR for relaxation represents a promising approach in the treatment of GAD since it enhances the quality of the relaxing experience through the elicitation of the sense of presence. This controlled trial will be able to evaluate the effects of the use of VR in relaxation while preserving the benefits of randomization to reduce bias.\n\n\nTRIAL REGISTRATION\nNCT00602212 (ClinicalTrials.gov).", "title": "" }, { "docid": "2549177f9367d5641a7fc4dfcfaf5c0a", "text": "Educational data mining is an emerging trend, concerned with developing methods for exploring the huge data that come from the educational system. This data is used to derive the knowledge which is useful in decision making. EDM methods are useful to measure the performance of students, assessment of students and study students’ behavior etc. In recent years, Educational data mining has proven to be more successful at many of the educational statistics problems due to enormous computing power and data mining algorithms. This paper surveys the history and applications of data mining techniques in the educational field. The objective is to introduce data mining to traditional educational system, web-based educational system, intelligent tutoring system, and e-learning. This paper describes how to apply the main data mining methods such as prediction, classification, relationship mining, clustering, and", "title": "" }, { "docid": "9b7ca6e8b7bf87ef61e70ab4c720ec40", "text": "The support vector machine (SVM) is a widely used tool in classification problems. The SVM trains a classifier by solving an optimization problem to decide which instances of the training data set are support vectors, which are the necessarily informative instances to form the SVM classifier. 
Since support vectors are intact tuples taken from the training data set, releasing the SVM classifier for public use or shipping the SVM classifier to clients will disclose the private content of support vectors. This violates the privacy-preserving requirements for some legal or commercial reasons. The problem is that the classifier learned by the SVM inherently violates the privacy. This privacy violation problem will restrict the applicability of the SVM. To the best of our knowledge, there has not been work extending the notion of privacy preservation to tackle this inherent privacy violation problem of the SVM classifier. In this paper, we exploit this privacy violation problem, and propose an approach to postprocess the SVM classifier to transform it to a privacy-preserving classifier which does not disclose the private content of support vectors. The postprocessed SVM classifier without exposing the private content of training data is called Privacy-Preserving SVM Classifier (abbreviated as PPSVC). The PPSVC is designed for the commonly used Gaussian kernel function. It precisely approximates the decision function of the Gaussian kernel SVM classifier without exposing the sensitive attribute values possessed by support vectors. By applying the PPSVC, the SVM classifier is able to be publicly released while preserving privacy. We prove that the PPSVC is robust against adversarial attacks. The experiments on real data sets show that the classification accuracy of the PPSVC is comparable to the original SVM classifier.", "title": "" }, { "docid": "e6c32d3fd1bdbfb2cc8742c9b670ce97", "text": "A framework for skill acquisition is proposed that includes two major stages in the development of a cognitive skill: a declarative stage in which facts about the skill domain are interpreted and a procedural stage in which the domain knowledge is directly embodied in procedures for performing the skill. This general framework has been instantiated in the ACT system in which facts are encoded in a propositional network and procedures are encoded as productions. Knowledge compilation is the process by which the skill transits from the declarative stage to the procedural stage. It consists of the subprocesses of composition, which collapses sequences of productions into single productions, and proceduralization, which embeds factual knowledge into productions. Once proceduralized, further learning processes operate on the skill to make the productions more selective in their range of applications. These processes include generalization, discrimination, and strengthening of productions. Comparisons are made to similar concepts from past learning theories. How these learning mechanisms apply to produce the power law speedup in processing time with practice is discussed.", "title": "" }, { "docid": "641811eac0e8a078cf54130c35fd6511", "text": "Multi-label text classification (MLTC) aims to assign multiple labels to each sample in the dataset. The labels usually have internal correlations. However, traditional methods tend to ignore the correlations between labels. In order to capture the correlations between labels, the sequence-to-sequence (Seq2Seq) model views the MLTC task as a sequence generation problem, which achieves excellent performance on this task. However, the Seq2Seq model is not suitable for the MLTC task in essence. 
The reason is that it requires humans to predefine the order of the output labels, while some of the output labels in the MLTC task are essentially an unordered set rather than an ordered sequence. This conflicts with the strict requirement of the Seq2Seq model for the label order. In this paper, we propose a novel sequence-to-set framework utilizing deep reinforcement learning, which not only captures the correlations between labels, but also reduces the dependence on the label order. Extensive experimental results show that our proposed method outperforms the competitive baselines by a large margin.", "title": "" }, { "docid": "23bf81699add38814461d5ac3e6e33db", "text": "This paper examined a steering behavior based fatigue monitoring system. The advantages of using steering behavior for detecting fatigue are that these systems measure continuously, cheaply, non-intrusively, and robustly even under extremely demanding environmental conditions. The expected fatigue induced changes in steering behavior are a pattern of slow drifting and fast corrective counter steering. Using advanced signal processing procedures for feature extraction, we computed 3 feature sets in the time, frequency and state space domain (a total number of 1251 features) to capture fatigue impaired steering patterns. Each feature set was separately fed into 5 machine learning methods (e.g. Support Vector Machine, K-Nearest Neighbor). The outputs of each single classifier were combined to an ensemble classification value. Finally we combined the ensemble values of 3 feature subsets to a meta-ensemble classification value. To validate the steering behavior analysis, driving samples are taken from a driving simulator during a sleep deprivation study (N=12). We yielded a recognition rate of 86.1% in classifying slight from strong fatigue.", "title": "" }, { "docid": "f6dd10d4b400234a28b221d0527e71c0", "text": "Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English–Romanian.", "title": "" }, { "docid": "6fad371eecbb734c1e54b8fb9ae218c4", "text": "Quantitative Susceptibility Mapping (QSM) is a novel MRI based technique that relies on estimates of the magnetic field distribution in the tissue under examination. Several sophisticated data processing steps are required to extract the magnetic field distribution from raw MRI phase measurements. The objective of this review article is to provide a general overview and to discuss several underlying assumptions and limitations of the pre-processing steps that need to be applied to MRI phase data before the final field-to-source inversion can be performed. 
Beginning with the fundamental relation between MRI signal and tissue magnetic susceptibility this review covers the reconstruction of magnetic field maps from multi-channel phase images, background field correction, and provides an overview of state of the art QSM solution strategies.", "title": "" }, { "docid": "13bd6515467934ba7855f981fd4f1efd", "text": "The flourishing synergy arising between organized crimes and the Internet has increased the insecurity of the digital world. How hackers frame their actions? What factors encourage and energize their behavior? These are very important but highly underresearched questions. We draw upon literatures on psychology, economics, international relation and warfare to propose a framework that addresses these questions. We found that countries across the world differ in terms of regulative, normative and cognitive legitimacy to different types of web attacks. Cyber wars and crimes are also functions of the stocks of hacking skills relative to the availability of economic opportunities. An attacking unit’s selection criteria for the target network include symbolic significance and criticalness, degree of digitization of values and weakness in defense mechanisms. Managerial and policy implications are discussed and directions for future research are suggested.", "title": "" }, { "docid": "f28170dcc3c4949c27ee609604c53bc2", "text": "Debates over Cannabis sativa L. and C. indica Lam. center on their taxonomic circumscription and rank. This perennial puzzle has been compounded by the viral spread of a vernacular nomenclature, “Sativa” and “Indica,” which does not correlate with C. sativa and C. indica. Ambiguities also envelop the epithets of wild-type Cannabis: the spontanea versus ruderalis debate (i.e., vernacular “Ruderalis”), as well as another pair of Cannabis epithets, afghanica and kafirstanica. To trace the rise of vernacular nomenclature, we begin with the protologues (original descriptions, synonymies, type specimens) of C. sativa and C. indica. Biogeographical evidence (obtained from the literature and herbarium specimens) suggests 18th–19th century botanists were biased in their assignment of these taxa to field specimens. This skewed the perception of Cannabis biodiversity and distribution. The development of vernacular “Sativa,” “Indica,” and “Ruderalis” was abetted by twentieth century botanists, who ignored original protologues and harbored their own cultural biases. Predominant taxonomic models by Vavilov, Small, Schultes, de Meijer, and Hillig are compared and critiqued. Small’s model adheres closest to protologue data (with C. indica treated as a subspecies). “Sativa” and “Indica” are subpopulations of C. sativa subsp. indica; “Ruderalis” represents a protean assortment of plants, including C. sativa subsp. sativa and recent hybrids.", "title": "" }, { "docid": "c0a75bf3a2d594fb87deb7b9f58a8080", "text": "For WikiText-103 we swept over LSTM hidden sizes {1024, 2048, 4096}, no. LSTM layers {1, 2}, embedding dropout {0, 0.1, 0.2, 0.3}, use of layer norm (Ba et al., 2016b) {True,False}, and whether to share the input/output embedding parameters {True,False} totalling 96 parameters. A single-layer LSTM with 2048 hidden units with tied embedding parameters and an input dropout rate of 0.3 was selected, and we used this same model configuration for the other language corpora. We trained the models on 8 P100 Nvidia GPUs by splitting the batch size into 8 sub-batches, sending them to each GPU and summing the resulting gradients. 
The total batch size used was 512 and a sequence length of 100 was chosen. Gradients were clipped to a maximum norm value of 0.1. We did not pass the state of the LSTM between sequences during training, however the state is passed during evaluation.", "title": "" }, { "docid": "bd9f584e7dbc715327b791e20cd20aa9", "text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.", "title": "" }, { "docid": "ab97caed9c596430c3d76ebda55d5e6e", "text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 /spl mu/m CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.", "title": "" }, { "docid": "9f9719336bf6497d7c71590ac61a433b", "text": "College and universities are increasingly using part-time, adjunct instructors on their faculties to facilitate greater fiscal flexibility. However, critics argue that the use of adjuncts is causing the quality of higher education to deteriorate. This paper addresses questions about the impact of adjuncts on student outcomes. Using a unique dataset of public four-year colleges in Ohio, we quantify how having adjunct instructors affects student persistence after the first year. Because students taking courses from adjuncts differ systematically from other students, we use an instrumental variable strategy to address concerns about biases. The findings suggest that, in general, students taking an \"adjunct-heavy\" course schedule in their first semester are adversely affected. They are less likely to persist into their second year. We reconcile these findings with previous research that shows that adjuncts may encourage greater student interest in terms of major choice and subsequent enrollments in some disciplines, most notably fields tied closely to specific professions. The authors are grateful for helpful suggestions from Ronald Ehrenberg and seminar participants at the NBER Labor Studies Meetings. The authors also thank the Ohio Board of Regents for their support during this research project. Rod Chu, Darrell Glenn, Robert Sheehan, and Andy Lechler provided invaluable access and help with the data. Amanda Starc, James Carlson, Erin Riley, and Suzan Akin provided excellent research assistance. All opinions and mistakes are our own. The authors worked equally on the project and are listed alphabetically.", "title": "" }, { "docid": "115fb4dcd7d5a1240691e430cd107dce", "text": "Human motion capture data, which are used to animate animation characters, have been widely used in many areas. 
To satisfy the high-precision requirement, human motion data are captured with a high frequency (120 frames/s) by a high-precision capture system. However, the high frequency and nonlinear structure make the storage, retrieval, and browsing of motion data challenging problems, which can be solved by keyframe extraction. Current keyframe extraction methods do not properly model two important characteristics of motion data, i.e., sparseness and Riemannian manifold structure. Therefore, we propose a new model called joint kernel sparse representation (SR), which is in marked contrast to all current keyframe extraction methods for motion data and can simultaneously model the sparseness and the Riemannian manifold structure. The proposed model completes the SR in a kernel-induced space with a geodesic exponential kernel, whereas the traditional SR cannot model the nonlinear structure of motion data in the Euclidean space. Meanwhile, because of several important modifications to traditional SR, our model can also exploit the relations between joints and solve two problems, i.e., the unreasonable distribution and redundancy of extracted keyframes, which current methods do not solve. Extensive experiments demonstrate the effectiveness of the proposed method.", "title": "" } ]
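One of the passages in the list above spells out a concrete WikiText-103 language-model recipe: a single-layer LSTM with 2048 hidden units, tied input/output embeddings, input dropout of 0.3, batch size 512, sequence length 100, and gradients clipped to a maximum norm of 0.1. The following is only a minimal sketch of that stated configuration, assuming PyTorch; the class, function, and variable names are invented for illustration and are not taken from the cited paper's code.

```python
import torch
import torch.nn as nn

class TiedLSTMLanguageModel(nn.Module):
    """Single-layer LSTM LM with tied input/output embeddings, as described above."""
    def __init__(self, vocab_size: int, hidden_size: int = 2048, dropout: float = 0.3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.input_dropout = nn.Dropout(dropout)          # input (embedding) dropout 0.3
        self.lstm = nn.LSTM(hidden_size, hidden_size, num_layers=1, batch_first=True)
        self.decoder = nn.Linear(hidden_size, vocab_size)
        self.decoder.weight = self.embedding.weight        # tie input/output embedding weights

    def forward(self, tokens, state=None):
        x = self.input_dropout(self.embedding(tokens))
        output, state = self.lstm(x, state)
        return self.decoder(output), state

def train_step(model, optimizer, criterion, tokens, targets):
    """One step on a (batch, seq_len=100) batch; LSTM state is not carried between sequences."""
    optimizer.zero_grad()
    logits, _ = model(tokens)
    loss = criterion(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.1)  # clip to norm 0.1
    optimizer.step()
    return loss.item()
```

The sketch fixes only the hyperparameters quoted in the passage; the data pipeline and the splitting of the 512-sample batch across eight GPUs with summed gradients are left out.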
scidocsrr
1b0fabf5c29000d15c6e1b2dd6eba2cc
Photometric stereo and weather estimation using internet images
[ { "docid": "5cfc4911a59193061ab55c2ce5013272", "text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.", "title": "" }, { "docid": "f085832faf1a2921eedd3d00e8e592db", "text": "There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like “Notre Dame” or “Trevi Fountain.” This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world’s well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.", "title": "" }, { "docid": "1b6ddffacc50ad0f7e07675cfe12c282", "text": "Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.", "title": "" } ]
[ { "docid": "362b1a5119733eba058d1faab2d23ebf", "text": "§ Mission and structure of the project. § Overview of the Stone Man version of the Guide to the SWEBOK. § Status and development process of the Guide. § Applications of the Guide in the fields of education, human resource management, professional development and licensing and certification. § Class exercise in applying the Guide to defining the competencies needed to support software life cycle process deployment. § Strategy for uptake and promotion of the Guide. § Discussion of promotion, trial usage and experimentation. Workshop Leaders:", "title": "" }, { "docid": "f7ce2995fc0369fb8198742a5f1fefa3", "text": "In this paper, we present a novel method for multimodal gesture recognition based on neural networks. Our multi-stream recurrent neural network (MRNN) is a completely data-driven model that can be trained from end to end without domain-specific hand engineering. The MRNN extends recurrent neural networks with Long Short-Term Memory cells (LSTM-RNNs) that facilitate the handling of variable-length gestures. We propose a recurrent approach for fusing multiple temporal modalities using multiple streams of LSTM-RNNs. In addition, we propose alternative fusion architectures and empirically evaluate the performance and robustness of these fusion strategies. Experimental results demonstrate that the proposed MRNN outperforms other state-of-theart methods in the Sheffield Kinect Gesture (SKIG) dataset, and has significantly high robustness to noisy inputs.", "title": "" }, { "docid": "43baeb87f1798d52399ba8c78ffa7fef", "text": "ECONOMISTS are frequently asked to measure the effects of an economic event on the value of firms. On the surface this seems like a difficult task, but a measure can be constructed easily using an event study. Using financial market data, an event study measures the impact of a specific event on the value of a firm. The usefulness of such a study comes from the fact that, given rationality in the marketplace, the effects of an event will be reflected immediately in security prices. Thus a measure of the event’s economic impact can be constructed using security prices observed over a relatively short time period. In contrast, direct productivity related measures may require many months or even years of observation. The event study has many applications. In accounting and finance research, event studies have been applied to a variety of firm specific and economy wide events. Some examples include mergers and acquisitions, earnings announcements, issues of new debt or equity, and announcements of macroeconomic variables such as the trade deficit.1 However, applications in other fields are also abundant. For example, event studies are used in the field of law and economics to measure the impact on the value of a firm of a change in the regulatory environment (see G. William Schwert 1981) and in legal liability cases event studies are used to assess damages (see Mark Mitchell and Jeffry Netter 1994). In the majority of applications, the focus is the effect of an event on the price of a particular class of securities of the firm, most often common equity. In this paper the methodology is discussed in terms of applications that use common equity. However, event studies can be applied using debt securities with little modification. Event studies have a long history. Perhaps the first published study is James Dolley (1933). 
In this work, he examines the price effects of stock splits, studying nominal price changes at the time of the split. Using a sample of 95 splits from 1921 to 1931, he finds that the price in-", "title": "" }, { "docid": "97decda9a345d39e814e19818eebe8b8", "text": "In this review article, we present some challenges and opportunities in Ambient Assisted Living (AAL) for disabled and elderly people addressing various state of the art and recent approaches particularly in artificial intelligence, biomedical engineering, and body sensor networking.", "title": "" }, { "docid": "7bea13124037f4e21b918f08c81b9408", "text": "U.S. health care system is plagued by rising cost and limited access. While the cost of care is increasing faster than the rate of inflation, people living in rural areas have very limited access to quality health care due to a shortage of physicians and facilities in these areas. Information and communication technologies in general and telemedicine in particular offer great promise to extend quality care to underserved rural communities at an affordable cost. However, adoption of telemedicine among the various stakeholders of the health care system has not been very encouraging. Based on an analysis of the extant research literature, this study identifies critical factors that impede the adoption of telemedicine, and offers suggestions to mitigate these challenges.", "title": "" }, { "docid": "a2f46b51b65c56acf6768f8e0d3feb79", "text": "In this paper we introduce Linear Relational Embedding as a means of learning a distributed representation of concepts from data consisting of binary relations between concepts. The key idea is to represent concepts as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept. A representation for concepts and relations is learned by maximizing an appropriate discriminative goodness function using gradient ascent. On a task involving family relationships, learning is fast and leads to good generalization. Learning Distributed Representations of Concepts using Linear Relational Embedding Alberto Paccanaro Geoffrey Hinton Gatsby Unit", "title": "" }, { "docid": "50dc3186ad603ef09be8cca350ff4d77", "text": "Design iteration time in SoC design flow is reduced through performance exploration at a higher level of abstraction. This paper proposes an accurate and fast performance analysis method in early stage of design process using a behavioral model written in C/C++ language. We made a cycle-accurate but fast and flexible compiled instruction set simulator (ISS) and IP models that represent hardware functionality and performance. System performance analyzer configured by the target communication architecture analyzes the performance utilizing event-traces obtained by running the ISS and IP models. This solution is automated and implemented in the tool, HIPA. We obtain diverse performance profiling results and achieve 95% accuracy using an abstracted C model. We also achieve about 20 times speed-up over corresponding co-simulation tools.", "title": "" }, { "docid": "50b6f8067784fe4b9b3adf6db17ab4d1", "text": "Available online 23 November 2012", "title": "" }, { "docid": "e3e024fa2ee468fb2a64bfc8ddf69467", "text": "We used two methods to estimate short-wave (S) cone spectral sensitivity. 
Firstly, we measured S-cone thresholds centrally and peripherally in five trichromats, and in three blue-cone monochromats, who lack functioning middle-wave (M) and long-wave (L) cones. Secondly, we analyzed standard color-matching data. Both methods yielded equivalent results, on the basis of which we propose new S-cone spectral sensitivity functions. At short and middle-wavelengths, our measurements are consistent with the color matching data of Stiles and Burch (1955, Optica Acta, 2, 168-181; 1959, Optica Acta, 6, 1-26), and other psychophysically measured functions, such as pi 3 (Stiles, 1953, Coloquio sobre problemas opticos de la vision, 1, 65-103). At longer wavelengths, S-cone sensitivity has previously been over-estimated.", "title": "" }, { "docid": "f159ee79d20f00194402553758bcd031", "text": "Recently, narrowband Internet of Things (NB-IoT), one of the most promising low power wide area (LPWA) technologies, has attracted much attention from both academia and industry. It has great potential to meet the huge demand for machine-type communications in the era of IoT. To facilitate research on and application of NB-IoT, in this paper, we design a system that includes NB devices, an IoT cloud platform, an application server, and a user app. The core component of the system is to build a development board that integrates an NB-IoT communication module and a subscriber identification module, a micro-controller unit and power management modules. We also provide a firmware design for NB device wake-up, data sensing, computing and communication, and the IoT cloud configuration for data storage and analysis. We further introduce a framework on how to apply the proposed system to specific applications. The proposed system provides an easy approach to academic research as well as commercial applications.", "title": "" }, { "docid": "a036dd162a23c5d24125d3270e22aaf7", "text": "1 Problem Description This work is focused on the relationship between the news articles (breaking news) and stock prices. The student will design and develop methods to analyze how and when the news articles influence the stock market. News articles about Norwegian oil related companies and stock prices from \" BW Offshore Limited \" (BWO), \" DNO International \" (DNO), \" Frontline \" (FRO), \" Petroleum Geo-Services \" (PGS), \" Seadrill \" (SDRL), \" Sevan Marine \" (SEVAN), \" Siem Offshore \" (SIOFF), \" Statoil \" (STL) and \" TGS-NOPEC Geophysical Company \" (TGS) will be crawled, preprocessed and the important features in the text will be extracted to effectively represent the news in a form that allows the application of computational techniques. This data will then be used to train text sense classifiers. A prototype system that employs such classifiers will be developed to support the trader in taking sell/buy decisions. Methods will be developed for automaticall sense-labeling of news that are informed by the correlation between the changes in the stock prices and the breaking news. Performance of the prototype decision support system will be compared with a chosen baseline method for trade-related decision making. Abstract This thesis investigates the prediction of possible stock price changes immediately after news article publications. This is done by automatic analysis of these news articles. 
Some background information about financial trading theory and text mining is given in addition to an overview of earlier related research in the field of automatic news article analyzes with the purpose of predicting future stock prices. In this thesis a system is designed and implemented to predict stock price trends for the time immediately after the publication of news articles. This system consists mainly of four components. The first component gathers news articles and stock prices automatically from internet. The second component prepares the news articles by sending them to some document preprocessing steps and finding relevant features before they are sent to a document representation process. The third component categorizes the news articles into predefined categories, and finally the fourth component applies appropriate trading strategies depending on the category of the news article. This system requires a labeled data set to train the categorization component. This data set is labeled automatically on the basis of the price trends directly after the news article publication. An additional label refining step using clustering is added in an …", "title": "" }, { "docid": "4387549562fe2c0833b002d73d9a8330", "text": "Complex numbers have long been favoured for digital signal processing, yet complex representations rarely appear in deep learning architectures. RNNs, widely used to process time series and sequence information, could greatly benefit from complex representations. We present a novel complex gate recurrent cell. When used together with norm-preserving state transition matrices, our complex gated RNN exhibits excellent stability and convergence properties. We demonstrate competitive performance of our complex gated RNN on the synthetic memory and adding task, as well as on the real-world task of human motion prediction.", "title": "" }, { "docid": "9cbd8a5ac00fc940baa63cf0fb4d2220", "text": "— The paper presents a technique for anomaly detection in user behavior in a smart-home environment. Presented technique can be used for a service that learns daily patterns of the user and proactively detects unusual situations. We have identified several drawbacks of previously presented models such as: just one type of anomaly-inactivity, intricate activity classification into hierarchy, detection only on a daily basis. Our novelty approach desists these weaknesses, provides additional information if the activity is unusually short/long, at unusual location. It is based on a semi-supervised clustering model that utilizes the neural network Self-Organizing Maps. The input to the system represents data primarily from presence sensors, however also other sensors with binary output may be used. The experimental study is realized on both synthetic data and areal database collected in our own smart-home installation for the period of two months.", "title": "" }, { "docid": "c751115c128fd0776baf212ae19624ff", "text": "This paper presents a natural language interface to relational database. It introduces some classical NLDBI products and their applications and proposes the architecture of a new NLDBI system including its probabilistic context free grammar, the inside and outside probabilities which can be used to construct the parse tree, an algorithm to calculate the probabilities, and the usage of dependency structures and verb subcategorization in analyzing the parse tree. 
Some experiment results are given to conclude the paper.", "title": "" }, { "docid": "7d11d25dc6cd2822d7f914b11b7fe640", "text": "The authors analyze three critical components in training word embeddings: model, corpus, and training parameters. They systematize existing neural-network-based word embedding methods and experimentally compare them using the same corpus. They then evaluate each word embedding in three ways: analyzing its semantic properties, using it as a feature for supervised tasks, and using it to initialize neural networks. They also provide several simple guidelines for training good word embeddings.", "title": "" }, { "docid": "a23949a678e49a7e1495d98aae3adef2", "text": "The continued increase in the usage of Small Scale Digital Devices (SSDDs) to browse the web has made mobile devices a rich potential for digital evidence. Issues may arise when suspects attempt to hide their browsing habits using applications like Orweb - which intends to anonymize network traffic as well as ensure that no browsing history is saved on the device. In this work, the researchers conducted experiments to examine if digital evidence could be reconstructed when the Orweb browser is used as a tool to hide web browsing activates on an Android smartphone. Examinations were performed on both a non-rooted and a rooted Samsung Galaxy S2 smartphone running Android 2.3.3. The results show that without rooting the device, no private web browsing traces through Orweb were found. However, after rooting the device, the researchers were able to locate Orweb browser history, and important corroborative digital evidence was found.", "title": "" }, { "docid": "4b6755737ad43dec49e470220a24236a", "text": "We address the issue of automatically extracting rhythm descriptors from audio signals, to be eventually used in content-based musical applications such as in the context of MPEG7. Our aim is to approach the comprehension of auditory scenes in raw polyphonic audio signals without preliminary source separation. As a first step towards the automatic extraction of rhythmic structures out of signals taken from the popular music repertoire, we propose an approach for automatically extracting time indexes of occurrences of different percussive timbres in an audio signal. Within this framework, we found that a particular issue lies in the classification of percussive sounds. In this paper, we report on the method currently used to deal with this problem.", "title": "" }, { "docid": "b1a538752056e91fd5800911f36e6eb0", "text": "BACKGROUND\nThe current, so-called \"Millennial\" generation of learners is frequently characterized as having deep understanding of, and appreciation for, technology and social connectedness. 
This generation of learners has also been molded by a unique set of cultural influences that are essential for medical educators to consider in all aspects of their teaching, including curriculum design, student assessment, and interactions between faculty and learners.\n\n\nAIM\n The following tips outline an approach to facilitating learning of our current generation of medical trainees.\n\n\nMETHOD\n The method is based on the available literature and the authors' experiences with Millennial Learners in medical training.\n\n\nRESULTS\n The 12 tips provide detailed approaches and specific strategies for understanding and engaging Millennial Learners and enhancing their learning.\n\n\nCONCLUSION\n With an increased understanding of the characteristics of the current generation of medical trainees, faculty will be better able to facilitate learning and optimize interactions with Millennial Learners.", "title": "" } ]
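The word-embedding passage in the list above evaluates embeddings partly by using them to initialize neural networks. As a minimal sketch of that initialization pattern only, assuming PyTorch and an embedding matrix already loaded from disk; the function name, shapes, and the random stand-in matrix are illustrative and not taken from the cited work.

```python
import torch
import torch.nn as nn

def init_embedding_layer(pretrained: torch.Tensor, trainable: bool = True) -> nn.Embedding:
    """Build an nn.Embedding whose weights start from a pre-trained (vocab_size x dim) matrix."""
    vocab_size, dim = pretrained.shape
    layer = nn.Embedding(vocab_size, dim)
    with torch.no_grad():
        layer.weight.copy_(pretrained)            # copy the pre-trained vectors in place
    layer.weight.requires_grad = trainable        # freeze or fine-tune during training
    return layer

# Usage: vectors would normally come from a word2vec/GloVe-style file; a random
# matrix stands in here so the sketch runs on its own.
pretrained_vectors = torch.randn(10000, 300)
embedding = init_embedding_layer(pretrained_vectors, trainable=True)
```

Whether to freeze or fine-tune the copied vectors is exactly the kind of training-parameter choice the passage's comparison is concerned with.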
scidocsrr
2398eb8423daf5bcdd1ea7e733399da7
LineScout Technology Opens the Way to Robotic Inspection and Maintenance of High-Voltage Power Lines
[ { "docid": "41e714ba7f26bfab161863b8033d8ffe", "text": "Power line inspection and maintenance is a slowly but surely emerging field for robotics. This paper describes the control scheme implemented in LineScout technology, one of the first teleoperated obstacle crossing systems that has progressed to the stage of actually performing very-high-voltage power line jobs. Following a brief overview of the hardware and software architecture, key challenges associated with the objectives of achieving reliability, robustness and ease of operation are presented. The coordinated control through visual feedback of all motors needed for obstacle crossing calls for a coherent strategy, an effective graphical user interface and rules to ensure safe, predictable operation. Other features such as automatic weight balancing are introduced to lighten the workload and let the operator concentrate on inspecting power line components. Open architecture was considered for progressive improvements. The features required to succeed in making power line robots fully autonomous are also discussed.", "title": "" } ]
[ { "docid": "4afe6e46fb1a0eb825e7485c73edd75e", "text": "Cough is a reflex action of the respiratory tract that is used to clear the upper airways. Chronic cough lasting for more than 8 weeks is common in the community. The causes include cigarette smoking, exposure to cigarette smoke, and exposure to environmental pollution, especially particulates. Diseases causing chronic cough include asthma, eosinophilic bronchitis, gastro-oesophageal reflux disease, postnasal drip syndrome or rhinosinusitis, chronic obstructive pulmonary disease, pulmonary fibrosis, and bronchiectasis. Doctors should always work towards a clear diagnosis, considering common and rare illnesses. In some patients, no cause is identified, leading to the diagnosis of idiopathic cough. Chronic cough is often associated with an increased response to tussive agents such as capsaicin. Plastic changes in intrinsic and synaptic excitability in the brainstem, spine, or airway nerves can enhance the cough reflex, and can persist in the absence of the initiating cough event. Structural and inflammatory airway mucosal changes in non-asthmatic chronic cough could represent the cause or the traumatic response to repetitive coughing. Effective control of cough requires not only controlling the disease causing the cough but also desensitisation of cough pathways.", "title": "" }, { "docid": "6ad8da8198b1f61dfe0dc337781322d9", "text": "A model of human speech quality perception has been developed to provide an objective measure for predicting subjective quality assessments. The Virtual Speech Quality Objective Listener (ViSQOL) model is a signal based full reference metric that uses a spectro-temporal measure of similarity between a reference and a test speech signal. This paper describes the algorithm and compares the results with PESQ for common problems in VoIP: clock drift, associated time warping and jitter. The results indicate that ViSQOL is less prone to underestimation of speech quality in both scenarios than the ITU standard.", "title": "" }, { "docid": "0197bfeb753c9be004a1a091a12fa1dc", "text": "Correlation filter (CF) based trackers generally include two modules, i.e., feature representation and on-line model adaptation. In existing off-line deep learning models for CF trackers, the model adaptation usually is either abandoned or has closed-form solution to make it feasible to learn deep representation in an end-to-end manner. However, such solutions fail to exploit the advances in CF models, and cannot achieve competitive accuracy in comparison with the state-of-the-art CF trackers. In this paper, we investigate the joint learning of deep representation and model adaptation, where an updater network is introduced for better tracking on future frame by taking current frame representation, tracking result, and last CF tracker as input. By modeling the representor as convolutional neural network (CNN), we truncate the alternating direction method of multipliers (ADMM) and interpret it as a deep network of updater, resulting in our model for learning representation and truncated inference (RTINet). Experiments demonstrate that our RTINet tracker achieves favorable tracking accuracy against the state-of-the-art trackers and its rapid version can run at a real-time speed of 24 fps. The code and pre-trained models will be publicly available at https://github.com/tourmaline612/RTINet.", "title": "" }, { "docid": "99d612ac042f1c2f930b7310e6308946", "text": "Live streaming services are a growing form of social media. 
Most live streaming platforms allow viewers to communicate with each other and the broadcaster via a text chat. However, interaction in a text chat does not work well with too many users. Existing techniques to make text chat work with a larger number of participants often limit who can participate or how much users can participate. In this paper, we describe a new design for a text chat system that allows more people to participate without overwhelming users with too many messages. Our design strategically limits the number of messages a user sees based on the concept of neighborhoods, and emphasizes important messages through upvoting. We present a study comparing our system to a chat system similar to those found in commercial streaming services. Results of the study indicate that the Conversational Circle system is easier to understand and interact with, while supporting community among viewers and highlighting important content for the streamer.", "title": "" }, { "docid": "c09256d7daaff6e2fc369df0857a3829", "text": "Violence is a serious problems for cities like Chicago and has been exacerbated by the use of social media by gang-involved youths for taunting rival gangs. We present a corpus of tweets from a young and powerful female gang member and her communicators, which we have annotated with discourse intention, using a deep read to understand how and what triggered conversations to escalate into aggression. We use this corpus to develop a part-of-speech tagger and phrase table for the variant of English that is used, as well as a classifier for identifying tweets that express grieving and aggression.", "title": "" }, { "docid": "82d62feaa0c88789c44bbdc745ab21dc", "text": "This paper proposes a new approach to solve the problem of real-time vision-based hand gesture recognition with the combination of statistical and syntactic analyses. The fundamental idea is to divide the recognition problem into two levels according to the hierarchical property of hand gestures. The lower level of the approach implements the posture detection with a statistical method based on Haar-like features and the AdaBoost learning algorithm. With this method, a group of hand postures can be detected in real time with high recognition accuracy. The higher level of the approach implements the hand gesture recognition using the syntactic analysis based on a stochastic context-free grammar. The postures that are detected by the lower level are converted into a sequence of terminal strings according to the grammar. Based on the probability that is associated with each production rule, given an input string, the corresponding gesture can be identified by looking for the production rule that has the highest probability of generating the input string.", "title": "" }, { "docid": "7f51bdc05c4a1bf610f77b629d8602f7", "text": "Special Issue Anthony Vance Brigham Young University anthony@vance.name Bonnie Brinton Anderson Brigham Young University bonnie_anderson@byu.edu C. Brock Kirwan Brigham Young University kirwan@byu.edu Users’ perceptions of risks have important implications for information security because individual users’ actions can compromise entire systems. Therefore, there is a critical need to understand how users perceive and respond to information security risks. Previous research on perceptions of information security risk has chiefly relied on self-reported measures. 
Although these studies are valuable, risk perceptions are often associated with feelings—such as fear or doubt—that are difficult to measure accurately using survey instruments. Additionally, it is unclear how these self-reported measures map to actual security behavior. This paper contributes to this topic by demonstrating that risk-taking behavior is effectively predicted using electroencephalography (EEG) via event-related potentials (ERPs). Using the Iowa Gambling Task, a widely used technique shown to be correlated with real-world risky behaviors, we show that the differences in neural responses to positive and negative feedback strongly predict users’ information security behavior in a separate laboratory-based computing task. In addition, we compare the predictive validity of EEG measures to that of self-reported measures of information security risk perceptions. Our experiments show that self-reported measures are ineffective in predicting security behaviors under a condition in which information security is not salient. However, we show that, when security concerns become salient, self-reported measures do predict security behavior. Interestingly, EEG measures significantly predict behavior in both salient and non-salient conditions, which indicates that EEG measures are a robust predictor of security behavior.", "title": "" }, { "docid": "a5ce24236867a513a19d98bd46bf99d2", "text": "The mandala thangka, as a religious art in Tibetan Buddhism, is an invaluable cultural and artistic heritage. However, drawing a mandala is both time and effort consuming and requires mastery skills due to its intricate details. Retaining and digitizing this heritage is an unresolved research challenge to date. In this paper, we propose a computer-aided generation approach of mandala thangka patterns to address this issue. Specifically, we construct parameterized models of three stylistic patterns used in the interior mandalas of Nyingma school in Tibetan Buddhism according to their geometric features, namely the star, crescent and lotus flower patterns. Varieties of interior mandalas are successfully generated using these proposed patterns based on the hierarchical structures observed from hand drawn mandalas. The experimental results show that our approach can efficiently generate beautifully-layered colorful interior mandalas, which significantly reduces the time and efforts in manual production and, more importantly, contributes to the digitization of this great heritage.", "title": "" }, { "docid": "57aaa47e45e8542767e327cf683288cf", "text": "Mobile edge computing usually uses caching to support multimedia contents in 5G mobile Internet to reduce the computing overhead and latency. Mobile edge caching (MEC) systems are vulnerable to various attacks such as denial of service attacks and rogue edge attacks. This article investigates the attack models in MEC systems, focusing on both the mobile offloading and the caching procedures. In this article, we propose security solutions that apply reinforcement learning (RL) techniques to provide secure offloading to the edge nodes against jamming attacks. We also present lightweight authentication and secure collaborative caching schemes to protect data privacy. 
We evaluate the performance of the RL-based security solution for mobile edge caching and discuss the challenges that need to be addressed in the future.", "title": "" }, { "docid": "9faf99899f00e1f0cad97bb27be026d4", "text": "The High-Temperature Winkler (HTW) process developed by Rheinbraun is a fluidised-bed gasification process particularly suitable for various types of lignite, other reactive and ballast-rich coal types, biomass and different types of pre-treated residual waste. Depending on the application, the HTW process can be used for efficient conversion of these feedstocks to produce fuel gas, reduction gas or synthesis gas. The co-gasification of pre-treated municipal solid waste (of differing origins) and lignite was demonstrated in a commercial scale during normal production in the HTW demonstration plant at Berrenrath, Germany. Approx. 1,000 metric tons of pre-treated municipal solid waste was gasified without any problems together with dried lignite in the three test campaigns. The gasifier operated to the full satisfaction of all partners throughout the test campaigns. The demonstration project yielded useful operating experience and process engineering data (e.g. energy and mass balances, gas composition, emission estimates) and provided engineering reliability for the design of future plants and an important reference for further applications, most recently for MSW gasification in Japan. The Krupp Uhde PreCon process applies the High-Temperature-Winkler (HTW) gasification as a core technology for processing solid wastes, e.g. municipal solid waste, sewage sludge, auto shredder residue or residues from plastic recycling processes. The modules used are based on those used in mechanical pre-treatment and coal gasification being tested successfully in commercial plants for several years.", "title": "" }, { "docid": "8dfa105c269696ac8ef38feb5337396f", "text": "The use of multisensory approaches to reading and literacy instruction has proven not only beneficial but also pleasantly stimulating for students as well. The approach is especially valuable for students that are underachieving or have special needs; in which these types of students may have more learning ability obstacles than their peers. Multisensory lessons will prove useful to any population in order to help achieve the desired goal of any unit. Moreover, educators can also gain positive experiences from using multisensory methods with their students to insure an interactive, fun and beneficial alternative to traditional teaching of reading and literacy. Using Multisensory Methods in Reading and Literacy Instruction Learning how to read is the foundation of elementary education in which all young children will either learn with ease, or with difficulty and hesitation. Reading requires the memorization of phonemes, sight words and high frequency words in order to decode texts; and through active experiences, children construct their understanding of the world (Gunning, 2009). Being active learners in the classroom can come from many methods such as hands on, musical or a kinesthetic approach to instruction. According to Smolkin and Donovan (2003), comprehension-related activities need not wait until children are fluently decoding but may be used during comprehension acquisition. This means that in this stage, students can use multisensory methods to begin decoding grade appropriate texts even before they begin to read. 
This literature review examines the use of multisensory methods on students who are beginning to read and learn from literacy instruction. Learning Through The Senses: Below Grade Level Students In most cases, beginning readers will be taught different strategies using body movements, songs and rhymes in order to memorize the alphabet or learn phonics. Using a multisensory teaching approach means helping a child to learn through more than one of the senses (Bradford, 2008). Teachers unknowingly have always used methods to teach initial readers that require the different senses, including sight, hearing, touch, taste and even smell (Greenwell & Zygouris-Coe, 2012). Therefore, rather than offer more reading strategy instruction, teachers must offer a different kind of instruction—instruction that defines reading strategies as a set of resources for exploring both written texts and the texts of students' lived realities (Park, 2012). Different approaches to reading instruction that include multisensory instructional approaches can be used with all types of students, including underachieving or overachieving students, special needs students and English language learners. A recent study conducted by Folakemi and Adebayo (2012) investigated the effects of multisensory instructional approaches in comparison to metacognitive instructional approaches on the vocabulary of underachieving Nigerian secondary school students. The multisensory approach was tested against the metacognitive instruction approach on vocabulary amongst one hundred and twenty students, sixty male and sixty female. The investigation took place in an Ilorin, Nigeria secondary school in which only underachieving students who consistently scored below 40% in English language were selected for the study (Folakemi & Adebayo, 2012). The researchers hypothesized that students who underachieve will need more attention compared to their overachieving counterparts. They noticed throughout the experiment that although the less able students are still fully capable of learning, they have difficulties and all too often give up easily and soon become disillusioned. The interest in using a multisensory approach to support underachieving students stems from noticing not only the teachers' dull attitude but also the students' attitude toward traditional instructional approaches. Most teachers have failed to see the importance of using teaching aids, which can be used for presentation, practice, revision, and testing in the ESL classroom. Students' interest is killed because they are bored with the traditional 'talk and board' teaching approach (Folakemi & Adebayo, 2012). Teaching efforts needed to be directed towards this set of students, for whom multisensory methods have the potential to provide the tools needed to learn through the different senses. In the study, the students were separated into four levels of independent and dependent variables of treatment and control (Folakemi & Adebayo, 2012). Different control groups, in which one group was taught vocabulary using the multisensory approach and another group was taught using metacognitive instruction approaches, were investigated in order to come to a conclusion. The researchers hypothesized that for the underachieving students, English language teachers would need an explicit and distinctive multisensory approach to teach them (Folakemi & Adebayo, 2012). They included textbooks, video, audiotapes, computer software and visual aids to provide support for the underachieving students.
These manipulatives were used during class instruction time when teaching English language arts and, most exclusively, vocabulary lessons. In order to test their findings, the researchers used a variety of tests to collect data for the investigation; the study was conducted in several stages. Stage one was the pretest and stage two was the administration of the test, while stage three included a posttest. All 120 subjects selected for the study were divided into the three experimental groups and one control group, and they all took part in the two tests. The test consisted of one hundred questions, twenty questions for each vocabulary dimension, while each of the experimental teachers was attached to a particular group of underachievers. The results indicated that: "MSIA (Multisensory Instructional Approach) is the most effective, followed by MCIA (Metacognitive Instructional Approach) and MSIA+MCIA. This means that the three approaches are more effective than the conventional approach. Therefore, significant difference exists between the three instructional approaches and the conventional instructional approach. This result indicates that the multisensory instructional approaches had significant effect on students spelling achievement of the underachieving students" (Folakemi & Adebayo, 2012, p. 21). The significant difference in the overall achievement in English vocabulary of the underachieving students using the four instructional approaches showed that the three experimental groups performed significantly better than the control group, with the multisensory instructional approach group performing best (Folakemi & Adebayo, 2012). These results regarding multisensory instruction indicate that it positively affects how a student learns, and it is becoming a more widely used tactic within the classroom. In 2007, Wendy Johnson Donnell also conducted an experiment in which she tested the effects of multisensory instructional methods on underachieving third grade students. According to Bowey (1995), children from lower socioeconomic groups and minority groups tend to be further behind their peers in early literacy skills on kindergarten entry, and this gap increases over time. This gap sets up these students to be behind in their schooling and potentially to become underachieving as the curriculum becomes more rigorous. Donnell's (2012) study focuses on students coming from a low-income area to test the effects of multisensory lessons within the classroom. Before the study was conducted, she studied students at several elementary schools in the Kansas City, Kansas area. Reading records and written work of the third grade students were analyzed to come to the conclusion that "an obstacle to reading success for many children in the third grade was automaticity in the application of the alphabetic principle, specifically vowels" (Donnell, 2007, p. 469). After reaching this pre-research conclusion, Donnell decided to research multisensory instruction in a whole-class setting. The study consisted of 60 whole-class multisensory word study lessons for third grade students; each of the lessons took approximately 20 minutes, for a total of 20 hours of instruction inside the classroom. The lessons varied from children's oral language, to phonological and phonemic awareness, to phonics, to specific vowel-spelling patterns. Because the district in which the research was being conducted had already adapted the Animal Literacy program, the lessons were built to incorporate Animal Literacy.
The multisensory features of the word-study lessons are both receptive and productive, with auditory, visual, and kinesthetic components (Donnell, 2007). With each lesson requiring these components, individual lesson plans were developed to target a specific purpose, such as phonics or phonemic awareness, to ensure that a level of commitment to memory was supported. During the experiment, the study required that all 450 participating third graders came from the same district, where the socioeconomic status was similar in all participating elementary schools. A uniform population was a key component in researching the multisensory lesson plans within the classrooms. Another key component of the research was providing all the contributing teachers who were going to incorporate these independent-variable multisensory lesson plans with preparation and guidance during the research, as well as teaching them how to distribute the tests. The dependent variables, the tests used within the research, included the Names Test, Elementary Spelling Inventory, Dynamic Indicators of Basic Early Literacy Skills and Oral Reading Fluency assessments. To test reading comprehension, the Scholastic Reading Inventory Interactive was used as well. After all the dependent variable tests were given, teachers collected the assessments in order to compare student results (Donnell, 2007). The results that developed from the research indicate that", "title": "" }, { "docid": "2c442933c4729e56e5f4f46b5b8071d6", "text": "Wireless body area networks consist of several devices placed on the human body, sensing vital signs and providing remote recognition of health disorders. Low power consumption is crucial in these networks. A new energy-efficient topology is provided in this paper, considering relay and sensor nodes' energy consumption and network maintenance costs. In this topology design, relay nodes, placed on the cloth, are used to help the sensor nodes forward data to the sink. The relay nodes' positions are determined such that the relay nodes' energy consumption converges to a uniform distribution. Simulation results show that the proposed method increases the lifetime of the network with a nearly uniform distribution of the relay nodes' energy consumption. Furthermore, this technique simultaneously reduces network maintenance costs and continuous replacements of the designer clothing. The proposed method also determines the way by which the network traffic is split and multipath routed to the sink.", "title": "" }, { "docid": "fa086058ad67602b9b4429f950e70c0f", "text": "The Telecare Medicine Information System (TMIS) has brought us a lot of conveniences. However, it may also reveal patients' privacies and other important information. The security of TMIS therefore deserves much attention, and identity authentication plays a very important role in protecting TMIS from being illegally used. To improve the situation, TMIS needs a more secure and more efficient authentication scheme. Recently, Yan and Li et al. have proposed a secure authentication scheme for the TMIS based on biometrics, claiming that it can withstand various attacks. In this paper, we present several security problems in their scheme as follows: (a) it cannot really achieve three-factor authentication; (b) it has design flaws at the password change phase; (c) users' biometrics may be locked out; (d) it fails to achieve users' anonymous identity. To solve these problems, a new scheme using the theory of Secure Sketch is proposed.
The thorough analysis shows that our scheme can provide stronger security than Yan-Li's protocol, despite the slightly higher computation cost at the client. What's more, the proposed scheme can achieve not only anonymity preservation but also session key agreement.", "title": "" }, { "docid": "b322d03c7f1fc90f03dd9c76047c5a32", "text": "We develop a probabilistic technique for colorizing grayscale natural images. In light of the intrinsic uncertainty of this task, the proposed probabilistic framework has numerous desirable properties. In particular, our model is able to produce multiple plausible and vivid colorizations for a given grayscale image and is one of the first colorization models to provide a proper stochastic sampling scheme. Moreover, our training procedure is supported by a rigorous theoretical framework that does not require any ad hoc heuristics and allows for efficient modeling and learning of the joint pixel color distribution. We demonstrate strong quantitative and qualitative experimental results on the CIFAR-10 dataset and the challenging ILSVRC 2012 dataset.", "title": "" }, { "docid": "bcdb0e6dcbab8fcccfea15edad00a761", "text": "This article presents the 1:4 wideband balun based on transmission lines that was awarded the first prize in the Wideband Baluns Student Design Competition. The competition was held during the 2014 IEEE Microwave Theory and Techniques Society (MTT-S) International Microwave Symposium (IMS2014). It was initiated in 2011 and is sponsored by the MTT-17 Technical Coordinating Committee. The winner must implement and measure a wideband balun of his or her own design and achieve the highest possible operational frequency from at least 1 MHz (or below) while meeting the following conditions: female subminiature version A (SMA) connectors are used to terminate all ports; a minimum impedance transformation ratio of two; a maximum voltage standing wave ratio (VSWR) of 2:1 at all ports; an insertion loss of less than 1 dB; a common-mode rejection ratio (CMRR) of more than 25 dB; and imbalance of less than 1 dB and 2.5°.", "title": "" }, { "docid": "e380fee1d044c15a5e5ba12436b8f511", "text": "Modern resolver-to-digital converters (RDCs) are typically implemented using DSP techniques to reduce hardware footprint and enhance system accuracy. However, in such implementations, both resolver sensor and ADC channel unbalances introduce significant errors, particularly in the speed output of the tracking loop. The frequency spectrum of the output error is variable depending on the resolver mechanical velocity. This paper presents the design of an autotuning output filter based on the interpolation of precomputed filters for a DSP-based RDC with a type-II tracking loop. A fourth-order peak and a second-order high-pass filter are designed and tested for an experimental RDC. The experimental results demonstrate significant reduction of the peak-to-peak error in the estimated speed.", "title": "" }, { "docid": "bbbbe3f926de28d04328f1de9bf39d1a", "text": "The detection of fraudulent financial statements (FFS) is an important and challenging issue that has served as the impetus for many academic studies over the past three decades. Although nonfinancial ratios are generally acknowledged as the key factor contributing to the FFS of a corporation, they are usually excluded from early detection models.
The objective of this study is to increase the accuracy of FFS detection by integrating the rough set theory (RST) and support vector machines (SVM) approaches, while adopting both financial and nonfinancial ratios as predictive variables. The results showed that the proposed hybrid approach (RST+SVM) has the best classification rate as well as the lowest occurrence of Types I and II errors, and that nonfinancial ratios are indeed valuable information in FFS detection.", "title": "" }, { "docid": "e9497a16e9d12ea837c7a0ec44d71860", "text": "This article surveys existing and emerging disaggregation techniques for energy-consumption data and highlights signal features that might be used to sense disaggregated data in an easily installed and cost-effective manner.", "title": "" }, { "docid": "e8d102a7b00f81cefc4b1db043a041f8", "text": "Microelectrode measurements can be used to investigate both the intracellular pools of ions and membrane transport processes of single living cells. Microelectrodes can report these processes in the surface layers of root and leaf cells of intact plants. By careful manipulation of the plant, a minimum of disruption is produced and therefore the information obtained from these measurements most probably represents the 'in vivo' situation. Microelectrodes can be used to assay for the activity of particular transport systems in the plasma membrane of cells. Compartmental concentrations of inorganic metabolite ions have been measured by several different methods and the results obtained for the cytosol are compared. Ion-selective microelectrodes have been used to measure the activities of ions in the apoplast, cytosol and vacuole of single cells. New sensors for these microelectrodes are being produced which offer lower detection limits and the opportunity to measure other previously unmeasured ions. Measurements can be used to determine the intracellular steady-state activities or report the response of cells to environmental changes.", "title": "" }, { "docid": "d79db7b7ca4e54fe3aa768669f5ba705", "text": "Customers can participate in open innovation communities posting innovation ideas, which in turn can receive comments and votes from the rest of the community, highlighting user preferences. However, the final decision about implementing innovations corresponds to the company. This paper is focused on the customers' activity in open innovation communities. The aim is to identify the main topics of customers' interests in order to compare these topics with managerial decision-making. The results obtained reveal first that both votes and comments can be used to predict user preferences; and second, that customers tend to promote those innovations by reporting more comfort and benefits. In contrast, managerial decisions are more focused on the distinctive features associated with the brand image.", "title": "" } ]
scidocsrr
d1b33ce49666fa755a6cd629a1faaf25
Simplified modeling and identification approach for model-based control of parallel mechanism robot leg
[ { "docid": "69e381983f7af393ee4bbb62bb587a4e", "text": "This paper presents the design principles for highly efficient legged robots, the implementation of the principles in the design of the MIT Cheetah, and the analysis of the high-speed trotting experimental results. The design principles were derived by analyzing three major energy-loss mechanisms in locomotion: heat losses from the actuators, friction losses in transmission, and the interaction losses caused by the interface between the system and the environment. Four design principles that minimize these losses are discussed: employment of high torque-density motors, energy regenerative electronic system, low loss transmission, and a low leg inertia. These principles were implemented in the design of the MIT Cheetah; the major design features are large gap diameter motors, regenerative electric motor drivers, single-stage low gear transmission, dual coaxial motors with composite legs, and the differential actuated spine. The experimental results of fast trotting are presented; the 33-kg robot runs at 22 km/h (6 m/s). The total power consumption from the battery pack was 973 W and resulted in a total cost of transport of 0.5, which rivals running animals' at the same scale. 76% of the total energy consumption is attributed to heat loss from the motor, and the remaining 24% is used in mechanical work, which is dissipated as interaction loss as well as friction losses at the joint and transmission.", "title": "" } ]
[ { "docid": "dd06c1c39e9b4a1ae9ee75c3251f27dc", "text": "Magnetoencephalographic measurements (MEG) were used to examine the effect on the human auditory cortex of removing specific frequencies from the acoustic environment. Subjects listened for 3 h on three consecutive days to music \"notched\" by removal of a narrow frequency band centered on 1 kHz. Immediately after listening to the notched music, the neural representation for a 1-kHz test stimulus centered on the notch was found to be significantly diminished compared to the neural representation for a 0.5-kHz control stimulus centered one octave below the region of notching. The diminished neural representation for 1 kHz reversed to baseline between the successive listening sessions. These results suggest that rapid changes can occur in the tuning of neurons in the adult human auditory cortex following manipulation of the acoustic environment. A dynamic form of neural plasticity may underlie the phenomenon observed here.", "title": "" }, { "docid": "c4256017c214eabda8e5b47c604e0e49", "text": "In this paper, a multi-band antenna for 4G wireless systems is proposed. The proposed antenna consists of a modified planar inverted-F antenna with additional branch line for wide bandwidth and a folded monopole antenna. The antenna provides wide bandwidth for covering the hepta-band LTE/GSM/UMTS operation. The measured 6-dB return loss bandwidth was 169 MHz (793 MHz-962 MHz) at the low frequency band and 1030 MHz (1700 MHz-2730 MHz) at the high frequency band. The overall dimension of the proposed antenna is 55 mm × 110 mm × 5 mm.", "title": "" }, { "docid": "386af0520255ebd048cff30961973624", "text": "We present a linear optical receiver realized on 130 nm SiGe BiCMOS. Error-free operation assuming FEC is shown at bitrates up to 64 Gb/s (32 Gbaud) with 165mW power consumption, corresponding to 2.578 pJ/bit.", "title": "" }, { "docid": "d52bfde050e6535645c324e7006a50e7", "text": "Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed for improving the efficiency of model selection, however their adaptation to the distributed compute environment is often ad-hoc. We propose Tune, a unified framework for model selection and training that provides a narrow-waist interface between training scripts and search algorithms. We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation. We demonstrate the implementation of several state-of-the-art hyperparameter search algorithms in Tune. Tune is available at http://ray.readthedocs.io/en/latest/tune.html.", "title": "" }, { "docid": "ba87ca7a07065e25593e6ae5c173669d", "text": "The intelligence community (IC) is asked to predict outcomes that may often be inherently unpredictable-and is blamed for the inevitable forecasting failures, be they false positives or false negatives. To move beyond blame games of accountability ping-pong that incentivize bureaucratic symbolism over substantive reform, it is necessary to reach bipartisan agreements on performance indicators that are transparent enough to reassure clashing elites (to whom the IC must answer) that estimates have not been politicized. 
Establishing such transideological credibility requires (a) developing accuracy metrics for decoupling probability and value judgments; (b) using the resulting metrics as criterion variables in validity tests of the IC's selection, training, and incentive systems; and (c) institutionalizing adversarial collaborations that conduct level-playing-field tests of clashing perspectives.", "title": "" }, { "docid": "51fec678a2e901fdf109d4836ef1bf34", "text": "BACKGROUND\nFoot-and-mouth disease (FMD) is an acute, highly contagious disease that infects cloven-hoofed animals. Vaccination is an effective means of preventing and controlling FMD. Compared to conventional inactivated FMDV vaccines, the format of FMDV virus-like particles (VLPs) as a non-replicating particulate vaccine candidate is a promising alternative.\n\n\nRESULTS\nIn this study, we have developed a co-expression system in E. coli, which drove the expression of FMDV capsid proteins (VP0, VP1, and VP3) in tandem by a single plasmid. The co-expressed FMDV capsid proteins (VP0, VP1, and VP3) were produced in large scale by fermentation at 10 L scale and the chromatographic purified capsid proteins were auto-assembled as VLPs in vitro. Cattle vaccinated with a single dose of the subunit vaccine, comprising in vitro assembled FMDV VLP and adjuvant, developed FMDV-specific antibody response (ELISA antibodies and neutralizing antibodies) with the persistent period of 6 months. Moreover, cattle vaccinated with the subunit vaccine showed the high protection potency with the 50 % bovine protective dose (PD50) reaching 11.75 PD50 per dose.\n\n\nCONCLUSIONS\nOur data strongly suggest that in vitro assembled recombinant FMDV VLPs produced from E. coli could function as a potent FMDV vaccine candidate against FMDV Asia1 infection. Furthermore, the robust protein expression and purification approaches described here could lead to the development of industrial level large-scale production of E. coli-based VLPs against FMDV infections with different serotypes.", "title": "" }, { "docid": "a774567d957ed0ea209b470b8eced563", "text": "The vulnerability of the nervous system to advancing age is all too often manifest in neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. In this review article we describe evidence suggesting that two dietary interventions, caloric restriction (CR) and intermittent fasting (IF), can prolong the health-span of the nervous system by impinging upon fundamental metabolic and cellular signaling pathways that regulate life-span. CR and IF affect energy and oxygen radical metabolism, and cellular stress response systems, in ways that protect neurons against genetic and environmental factors to which they would otherwise succumb during aging. There are multiple interactive pathways and molecular mechanisms by which CR and IF benefit neurons including those involving insulin-like signaling, FoxO transcription factors, sirtuins and peroxisome proliferator-activated receptors. These pathways stimulate the production of protein chaperones, neurotrophic factors and antioxidant enzymes, all of which help cells cope with stress and resist disease. A better understanding of the impact of CR and IF on the aging nervous system will likely lead to novel approaches for preventing and treating neurodegenerative disorders.", "title": "" }, { "docid": "b5dc56272d4dea04b756a8614d6762c9", "text": "Platforms have been considered as a paradigm for managing new product development and innovation. 
Since their introduction, studies on platforms have introduced multiple conceptualizations, leading to a fragmentation of research and different perspectives. By systematically reviewing the platform literature and combining bibliometric and content analyses, this paper examines the platform concept and its evolution, proposes a thematic classification, and highlights emerging trends in the literature. Based on this hybrid methodological approach (bibliometric and content analyses), the results show that platform research has primarily focused on issues that are mainly related to firms' internal aspects, such as innovation, modularity, commonality, and mass customization. Moreover, scholars have recently started to focus on new research themes, including managerial questions related to capability building, strategy, and ecosystem building based on platforms. As its main contributions, this paper improves the understanding of and clarifies the evolutionary trajectory of the platform concept, and identifies trends and emerging themes to be addressed in future studies.", "title": "" }, { "docid": "9500dfc92149c5a808cec89b140fc0c3", "text": "We present a new approach to the geometric alignment of a point cloud to a surface and to related registration problems. The standard algorithm is the familiar ICP algorithm. Here we provide an alternative concept which relies on instantaneous kinematics and on the geometry of the squared distance function of a surface. The proposed algorithm exhibits faster convergence than ICP; this is supported both by results of a local convergence analysis and by experiments.", "title": "" }, { "docid": "a1bf728c54cec3f621a54ed23a623300", "text": "Machine learning algorithms are now common in the state-ofthe-art spoken language understanding models. But to reach good performance they must be trained on a potentially large amount of data which are not available for a variety of tasks and languages of interest. In this work, we present a novel zero-shot learning method, based on word embeddings, allowing to derive a full semantic parser for spoken language understanding. No annotated in-context data are needed, the ontological description of the target domain and generic word embedding features (learned from freely available general domain data) suffice to derive the model. Two versions are studied with respect to how the model parameters and decoding step are handled, including an extension of the proposed approach in the context of conditional random fields. We show that this model, with very little supervision, can reach instantly performance comparable to those obtained by either state-of-the-art carefully handcrafted rule-based or trained statistical models for extraction of dialog acts on the Dialog State Tracking test datasets (DSTC2 and 3).", "title": "" }, { "docid": "9941cd183e2c7b79d685e0e9cef3c43e", "text": "We present a novel recursive Bayesian method in the DFT-domain to address the multichannel acoustic echo cancellation problem. We model the echo paths between the loudspeakers and the near-end microphone as a multichannel random variable with a first-order Markov property. The incorporation of the near-end observation noise, in conjunction with the multichannel Markov model, leads to a multichannel state-space model. We derive a recursive Bayesian solution to the multichannel state-space model, which turns out to be well suited for input signals that are not only auto-correlated but also cross-correlated. 
We show that the resulting multichannel state-space frequency-domain adaptive filter (MCSSFDAF) can be efficiently implemented due to the submatrix-diagonality of the state-error covariance. The filter offers optimal tracking and robust adaptation in the presence of near-end noise and echo path variability.", "title": "" }, { "docid": "433e7a8c4d4a16f562f9ae112102526e", "text": "Although both extrinsic and intrinsic factors have been identified that orchestrate the differentiation and maturation of oligodendrocytes, less is known about the intracellular signaling pathways that control the overall commitment to differentiate. Here, we provide evidence that activation of the mammalian target of rapamycin (mTOR) is essential for oligodendrocyte differentiation. Specifically, mTOR regulates oligodendrocyte differentiation at the late progenitor to immature oligodendrocyte transition as assessed by the expression of stage specific antigens and myelin proteins including MBP and PLP. Furthermore, phosphorylation of mTOR on Ser 2448 correlates with myelination in the subcortical white matter of the developing brain. We demonstrate that mTOR exerts its effects on oligodendrocyte differentiation through two distinct signaling complexes, mTORC1 and mTORC2, defined by the presence of the adaptor proteins raptor and rictor, respectively. Disrupting mTOR complex formation via siRNA mediated knockdown of raptor or rictor significantly reduced myelin protein expression in vitro. However, mTORC2 alone controlled myelin gene expression at the mRNA level, whereas mTORC1 influenced MBP expression via an alternative mechanism. In addition, investigation of mTORC1 and mTORC2 targets revealed differential phosphorylation during oligodendrocyte differentiation. In OPC-DRG cocultures, inhibiting mTOR potently abrogated oligodendrocyte differentiation and reduced numbers of myelin segments. These data support the hypothesis that mTOR regulates commitment to oligodendrocyte differentiation before myelination.", "title": "" }, { "docid": "7c13132ef5b2d67c4a7e3039db252302", "text": "Accurate estimation of the click-through rate (CTR) in sponsored ads significantly impacts the user search experience and businesses’ revenue, even 0.1% of accuracy improvement would yield greater earnings in the hundreds of millions of dollars. CTR prediction is generally formulated as a supervised classification problem. In this paper, we share our experience and learning on model ensemble design and our innovation. Specifically, we present 8 ensemble methods and evaluate them on our production data. Boosting neural networks with gradient boosting decision trees turns out to be the best. With larger training data, there is a nearly 0.9% AUC improvement in offline testing and significant click yield gains in online traffic. In addition, we share our experience and learning on improving the quality of training.", "title": "" }, { "docid": "1d3007738c259cdf08f515849c7939b8", "text": "Background: With an increase in the number of disciplines contributing to health literacy scholarship, we sought to explore the nature of interdisciplinary research in the field. Objective: This study sought to describe disciplines that contribute to health literacy research and to quantify how disciplines draw from and contribute to an interdisciplinary evidence base, as measured by citation networks. 
Methods: We conducted a literature search for health literacy articles published between 1991 and 2015 in four bibliographic databases, producing 6,229 unique bibliographic records. We employed a scientometric tool (CiteSpace [Version 4.4.R1]) to quantify patterns in published health literacy research, including a visual path from cited discipline domains to citing discipline domains. Key Results: The number of health literacy publications increased each year between 1991 and 2015. Two spikes, in 2008 and 2013, correspond to the introduction of additional subject categories, including information science and communication. Two journals have been cited more than 2,000 times—the Journal of General Internal Medicine (n = 2,432) and Patient Education and Counseling (n = 2,252). The most recently cited journal added to the top 10 list of cited journals is the Journal of Health Communication (n = 989). Three main citation paths exist in the health literacy data set. Articles from the domain “medicine, medical, clinical” heavily cite from one domain (health, nursing, medicine), whereas articles from the domain “psychology, education, health” cite from two separate domains (health, nursing, medicine and psychology, education, social). Conclusions: Recent spikes in the number of published health literacy articles have been spurred by a greater diversity of disciplines contributing to the evidence base. However, despite the diversity of disciplines, citation paths indicate the presence of a few, self-contained disciplines contributing to most of the literature, suggesting a lack of interdisciplinary research. To address complex and evolving challenges in the health literacy field, interdisciplinary team science, that is, integrating science from across multiple disciplines, should continue to grow. [Health Literacy Research and Practice. 2017;1(4):e182-e191.] Plain Language Summary: The addition of diverse disciplines conducting health literacy scholarship has spurred recent spikes in the number of publications. However, citation paths suggest that interdisciplinary research can be strengthened. Findings directly align with the increasing emphasis on team science, and support opportunities and resources that incentivize interdisciplinary health literacy research. The study of health literacy has significantly expanded over the past decade. It represents a dynamic area of inquiry that extends to multiple disciplines. Health literacy emerged as a derivative of literacy and early definitions focused on the ability to read and understand medical instructions and health care information (Parker, Baker, Williams, & Nurss, 1995; Williams et al., 1995). This early work led to a body of research demonstrating that people with low health literacy generally had poorer health outcomes, including lower levels of screening and medication adherence rates (Baker,", "title": "" }, { "docid": "cdc276a3c4305d6c7ba763332ae933cc", "text": "Synthetic aperture radar (SAR) image classification is a fundamental process for SAR image understanding and interpretation. With the advancement of imaging techniques, it permits to produce higher resolution SAR data and extend data amount. Therefore, intelligent algorithms for high-resolution SAR image classification are demanded. 
Inspired by deep learning technology, an end-to-end classification model from the original SAR image to final classification map is developed to automatically extract features and conduct classification, which is named deep recurrent encoding neural networks (DRENNs). In our proposed framework, a spatial feature learning network based on long–short-term memory (LSTM) is developed to extract contextual dependencies of SAR images, where 2-D image patches are transformed into 1-D sequences and imported into LSTM to learn the latent spatial correlations. After LSTM, nonnegative and Fisher constrained autoencoders (NFCAEs) are proposed to improve the discrimination of features and conduct final classification, where nonnegative constraint and Fisher constraint are developed in each autoencoder to restrict the training of the network. The whole DRENN not only combines the spatial feature learning power of LSTM but also utilizes the discriminative representation ability of our NFCAE to improve the classification performance. The experimental results tested on three SAR images demonstrate that the proposed DRENN is able to learn effective feature representations from SAR images and produce competitive classification accuracies to other related approaches.", "title": "" }, { "docid": "b52cadf9e20eebfd388c09c51cff2d74", "text": "Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans. We show that even the widely recognized and by far most successful defense by Madry et al. (1) overfits on the L∞ metric (it’s highly susceptible to L2 and L0 perturbations), (2) classifies unrecognizable images with high certainty, (3) performs not much better than simple input binarization and (4) features adversarial perturbations that make little sense to humans. These results suggest that MNIST is far from being solved in terms of adversarial robustness. We present a novel robust classification model that performs analysis by synthesis using learned class-conditional data distributions. We derive bounds on the robustness and go to great length to empirically evaluate our model using maximally effective adversarial attacks by (a) applying decisionbased, score-based, gradient-based and transfer-based attacks for several different Lp norms, (b) by designing a new attack that exploits the structure of our defended model and (c) by devising a novel decision-based attack that seeks to minimize the number of perturbed pixels (L0). The results suggest that our approach yields state-of-the-art robustness on MNIST against L0, L2 and L∞ perturbations and we demonstrate that most adversarial examples are strongly perturbed towards the perceptual boundary between the original and the adversarial class.", "title": "" }, { "docid": "6e0877f16e624bef547f76b80278f760", "text": "The importance of storytelling as the foundation of human experiences cannot be overestimated. The oral traditions focus upon educating and transmitting knowledge and skills and also evolved into one of the earliest methods of communicating scientific discoveries and developments. A wide ranging search of the storytelling, education and health-related literature encompassing the years 1975-2007 was performed. Evidence from disparate elements of education and healthcare were used to inform an exploration of storytelling. 
This conceptual paper explores the principles of storytelling, evaluates the use of storytelling techniques in education in general, acknowledges the role of storytelling in healthcare delivery, identifies some of the skills learned and benefits derived from storytelling, and speculates upon the use of storytelling strategies in nurse education. Such stories have, until recently been harvested from the experiences of students and of educators, however, there is a growing realization that patients and service users are a rich source of healthcare-related stories that can affect, change and benefit clinical practice. The use of technology such as the Internet discussion boards or digitally-facilitated storytelling has an evolving role in ensuring that patient-generated and experiential stories have a future within nurse education.", "title": "" }, { "docid": "64770c350dc1d260e24a43760d4e641b", "text": "A first step in the task of automatically generating questions for testing reading comprehension is to identify questionworthy sentences, i.e. sentences in a text passage that humans find it worthwhile to ask questions about. We propose a hierarchical neural sentence-level sequence tagging model for this task, which existing approaches to question generation have ignored. The approach is fully data-driven — with no sophisticated NLP pipelines or any hand-crafted rules/features — and compares favorably to a number of baselines when evaluated on the SQuAD data set. When incorporated into an existing neural question generation system, the resulting end-to-end system achieves stateof-the-art performance for paragraph-level question generation for reading comprehension.", "title": "" }, { "docid": "76eef8117ac0bc5dbb0529477d10108d", "text": "Most existing switched-capacitor (SC) DC-DC converters only offer a few voltage conversion ratios (VCRs), leading to significant efficiency fluctuations under wide input/output dynamics (e.g. up to 30% in [1]). Consequently, systematic SC DC-DC converters with fine-grained VCRs (FVCRs) become attractive to achieve high efficiency over a wide operating range. Both the Recursive SC (RSC) [2,3] and Negator-based SC (NSC) [4] topologies offer systematic FVCR generations with high conductance, but their binary-switching nature fundamentally results in considerable parasitic loss. In bulk CMOS, the restriction of using low-parasitic MIM capacitors for high efficiency ultimately limits their achievable power density to <1mW/mm2. This work reports a fully integrated fine-grained buck-boost SC DC-DC converter with 24 VCRs. It features an algorithmic voltage-feed-in (AVFI) topology to systematically generate any arbitrary buck-boost rational ratio with optimal conduction loss while achieving the lowest parasitic loss compared with [2,4]. With 10 main SC cells (MCs) and 10 auxiliary SC cells (ACs) controlled by the proposed reference-selective bootstrapping driver (RSBD) for wide-range efficient buck-boost operations, the AVFI converter in 65nm bulk CMOS achieves a peak efficiency of 84.1% at a power density of 13.2mW/mm2 over a wide range of input (0.22 to 2.4V) and output (0.85 to 1.2V).", "title": "" }, { "docid": "32b96d4d23a03b1828f71496e017193e", "text": "Camera-based lane detection algorithms are one of the key enablers for many semi-autonomous and fullyautonomous systems, ranging from lane keep assist to level-5 automated vehicles. Positioning a vehicle between lane boundaries is the core navigational aspect of a self-driving car. 
Even though this should be trivial, given the clarity of lane markings on most standard roadway systems, the process is typically mired with tedious pre-processing and computational effort. We present an approach to estimate lane positions directly using a deep neural network that operates on images from laterally-mounted down-facing cameras. To create a diverse training set, we present a method to generate semi-artificial images. Besides the ability to distinguish whether there is a lane-marker present or not, the network is able to estimate the position of a lane marker with sub-centimeter accuracy at an average of 100 frames/s on an embedded automotive platform, requiring no pre-or post-processing. This system can be used not only to estimate lane position for navigation, but also provide an efficient way to validate the robustness of driver-assist features which depend on lane information.", "title": "" } ]
scidocsrr
c4eedc71b62029bcf2f2c6bd4bfdd969
The evolutionary psychology of facial beauty.
[ { "docid": "0e74994211d0e3c1e85ba0c85aba3df5", "text": "Images of faces manipulated to make their shapes closer to the average are perceived as more attractive. The influences of symmetry and averageness are often confounded in studies based on full-face views of faces. Two experiments are reported that compared the effect of manipulating the averageness of female faces in profile and full-face views. Use of a profile view allows a face to be \"morphed\" toward an average shape without creating an image that becomes more symmetrical. Faces morphed toward the average were perceived as more attractive in both views, but the effect was significantly stronger for full-face views. Both full-face and profile views morphed away from the average shape were perceived as less attractive. It is concluded that the effect of averageness is independent of any effect of symmetry on the perceived attractiveness of female faces.", "title": "" }, { "docid": "1fc10d626c7a06112a613f223391de26", "text": "The question of what makes a face attractive, and whether our preferences come from culture or biology, has fascinated scholars for centuries. Variation in the ideals of beauty across societies and historical periods suggests that standards of beauty are set by cultural convention. Recent evidence challenges this view, however, with infants as young as 2 months of age preferring to look at faces that adults find attractive (Langlois et al., 1987), and people from different cultures showing considerable agreement about which faces are attractive (Cun-for a review). These findings raise the possibility that some standards of beauty may be set by nature rather than culture. Consistent with this view, specific preferences have been identified that appear to be part of our biological rather than Such a preference would be adaptive if stabilizing selection operates on facial traits (Symons, 1979), or if averageness is associated with resistance to pathogens , as some have suggested Evolutionary biologists have proposed that a preference for symmetry would also be adaptive because symmetry is a signal of health and genetic quality Only high-quality individuals can maintain symmetric development in the face of environmental and genetic stresses. Symmetric bodies are certainly attractive to humans and many other animals but what about symmetric faces? Biologists suggest that facial symmetry should be attractive because it may signal mate quality High levels of facial asymmetry in individuals with chro-mosomal abnormalities (e.g., Down's syndrome and Tri-somy 14; for a review, see Thornhill & Møller, 1997) are consistent with this view, as is recent evidence that facial symmetry levels correlate with emotional and psychological health (Shackelford & Larsen, 1997). In this paper, we investigate whether people can detect subtle differences in facial symmetry and whether these differences are associated with differences in perceived attractiveness. Recently, Kowner (1996) has reported that faces with normal levels of asymmetry are more attractive than perfectly symmetric versions of the same faces. 3 Similar results have been reported by Langlois et al. and an anonymous reviewer for helpful comments on an earlier version of the manuscript. We also thank Graham Byatt for assistance with stimulus construction, Linda Jeffery for assistance with the figures, and Alison Clark and Catherine Hickford for assistance with data collection and statistical analysis in Experiment 1A. 
Evolutionary, as well as cultural, pressures may contribute to our perceptions of facial attractiveness. Biologists predict that facial symmetry should be attractive, because it may signal …", "title": "" }, { "docid": "6b6943e2b263fa0d4de934e563a6cc39", "text": "Average faces are attractive, but what is average depends on experience. We examined the effect of brief exposure to consistent facial distortions on what looks normal (average) and what looks attractive. Adaptation to a consistent distortion shifted what looked most normal, and what looked most attractive, toward that distortion. These normality and attractiveness aftereffects occurred when the adapting and test faces differed in orientation by 90 degrees (+45 degrees vs. -45 degrees), suggesting adaptation of high-level neurons whose coding is not strictly retinotopic. Our results suggest that perceptual adaptation can rapidly recalibrate people's preferences to fit the faces they see. The results also suggest that average faces are attractive because of their central location in a distribution of faces (i.e., prototypicality), rather than because of any intrinsic appeal of particular physical characteristics. Recalibration of preferences may have important consequences, given the powerful effects of perceived attractiveness on person perception, mate choice, social interactions, and social outcomes for individuals.", "title": "" } ]
[ { "docid": "3b988fe1c91096f67461dc9fc7bb6fae", "text": "The paper analyzes the test setup required by the International Electrotechnical Commission (IEC) 61000-4-4 to evaluate the immunity of electronic equipment to electrical fast transients (EFTs), and proposes an electrical model of the capacitive coupling clamp, which is employed to add disturbances to nominal signals. The study points out limits on accuracy of this model, and shows how it can be fruitfully employed to predict the interference waveform affecting nominal system signals through computer simulations.", "title": "" }, { "docid": "85eb1b34bf15c6b5dcd8778146bfcfca", "text": "A novel face recognition algorithm is presented in this paper. Histogram of Oriented Gradient features are extracted both for the test image and also for the training images and given to the Support Vector Machine classifier. The detailed steps of HOG feature extraction and the classification using SVM is presented. The algorithm is compared with the Eigen feature based face recognition algorithm. The proposed algorithm and PCA are verified using 8 different datasets. Results show that in all the face datasets the proposed algorithm shows higher face recognition rate when compared with the traditional Eigen feature based face recognition algorithm. There is an improvement of 8.75% face recognition rate when compared with PCA based face recognition algorithm. The experiment is conducted on ORL database with 2 face images for testing and 8 face images for training for each person. Three performance curves namely CMC, EPC and ROC are considered. The curves show that the proposed algorithm outperforms when compared with PCA algorithm. IndexTerms: Facial features, Histogram of Oriented Gradients, Support Vector Machine, Principle Component Analysis.", "title": "" }, { "docid": "ebedc7f86c7a424091777f360f979122", "text": "Synaptic plasticity is thought to be the principal neuronal mechanism underlying learning. Models of plastic networks typically combine point neurons with spike-timing-dependent plasticity (STDP) as the learning rule. However, a point neuron does not capture the local non-linear processing of synaptic inputs allowed for by dendrites. Furthermore, experimental evidence suggests that STDP is not the only learning rule available to neurons. By implementing biophysically realistic neuron models, we study how dendrites enable multiple synaptic plasticity mechanisms to coexist in a single cell. In these models, we compare the conditions for STDP and for synaptic strengthening by local dendritic spikes. We also explore how the connectivity between two cells is affected by these plasticity rules and by different synaptic distributions. Finally, we show that how memory retention during associative learning can be prolonged in networks of neurons by including dendrites. Synaptic plasticity is the neuronal mechanism underlying learning. Here the authors construct biophysical models of pyramidal neurons that reproduce observed plasticity gradients along the dendrite and show that dendritic spike dependent LTP which is predominant in distal sections can prolong memory retention.", "title": "" }, { "docid": "1df39d26ed1d156c1c093d7ffd1bb5bf", "text": "Contemporary advances in addiction neuroscience have paralleled increasing interest in the ancient mental training practice of mindfulness meditation as a potential therapy for addiction. 
In the past decade, mindfulness-based interventions (MBIs) have been studied as a treatment for an array of addictive behaviors, including drinking, smoking, opioid misuse, and use of illicit substances like cocaine and heroin. This article reviews current research evaluating MBIs as a treatment for addiction, with a focus on findings pertaining to clinical outcomes and biobehavioral mechanisms. Studies indicate that MBIs reduce substance misuse and craving by modulating cognitive, affective, and psychophysiological processes integral to self-regulation and reward processing. This integrative review provides the basis for manifold recommendations regarding the next wave of research needed to firmly establish the efficacy of MBIs and elucidate the mechanistic pathways by which these therapies ameliorate addiction. Issues pertaining to MBI treatment optimization and sequencing, dissemination and implementation, dose-response relationships, and research rigor and reproducibility are discussed.", "title": "" }, { "docid": "fb809c5e2a15a49a449a818a1b0d59a5", "text": "Neural responses are modulated by brain state, which varies with arousal, attention, and behavior. In mice, running and whisking desynchronize the cortex and enhance sensory responses, but the quiescent periods between bouts of exploratory behaviors have not been well studied. We found that these periods of \"quiet wakefulness\" were characterized by state fluctuations on a timescale of 1-2 s. Small fluctuations in pupil diameter tracked these state transitions in multiple cortical areas. During dilation, the intracellular membrane potential was desynchronized, sensory responses were enhanced, and population activity was less correlated. In contrast, constriction was characterized by increased low-frequency oscillations and higher ensemble correlations. Specific subtypes of cortical interneurons were differentially activated during dilation and constriction, consistent with their participation in the observed state changes. Pupillometry has been used to index attention and mental effort in humans, but the intracellular dynamics and differences in population activity underlying this phenomenon were previously unknown.", "title": "" }, { "docid": "39c597ee9c9d9392e803aedeeeb28de9", "text": "BACKGROUND\nApalutamide, a competitive inhibitor of the androgen receptor, is under development for the treatment of prostate cancer. We evaluated the efficacy of apalutamide in men with nonmetastatic castration-resistant prostate cancer who were at high risk for the development of metastasis.\n\n\nMETHODS\nWe conducted a double-blind, placebo-controlled, phase 3 trial involving men with nonmetastatic castration-resistant prostate cancer and a prostate-specific antigen doubling time of 10 months or less. Patients were randomly assigned, in a 2:1 ratio, to receive apalutamide (240 mg per day) or placebo. All the patients continued to receive androgen-deprivation therapy. The primary end point was metastasis-free survival, which was defined as the time from randomization to the first detection of distant metastasis on imaging or death.\n\n\nRESULTS\nA total of 1207 men underwent randomization (806 to the apalutamide group and 401 to the placebo group). In the planned primary analysis, which was performed after 378 events had occurred, median metastasis-free survival was 40.5 months in the apalutamide group as compared with 16.2 months in the placebo group (hazard ratio for metastasis or death, 0.28; 95% confidence interval [CI], 0.23 to 0.35; P<0.001).
Time to symptomatic progression was significantly longer with apalutamide than with placebo (hazard ratio, 0.45; 95% CI, 0.32 to 0.63; P<0.001). The rate of adverse events leading to discontinuation of the trial regimen was 10.6% in the apalutamide group and 7.0% in the placebo group. The following adverse events occurred at a higher rate with apalutamide than with placebo: rash (23.8% vs. 5.5%), hypothyroidism (8.1% vs. 2.0%), and fracture (11.7% vs. 6.5%).\n\n\nCONCLUSIONS\nAmong men with nonmetastatic castration-resistant prostate cancer, metastasis-free survival and time to symptomatic progression were significantly longer with apalutamide than with placebo. (Funded by Janssen Research and Development; SPARTAN ClinicalTrials.gov number, NCT01946204 .).", "title": "" }, { "docid": "68612f23057840e01bec9673c5d31865", "text": "The current status of studies of online shopping attitudes and behavior is investigated through an analysis of 35 empirical articles found in nine primary Information Systems (IS) journals and three major IS conference proceedings. A taxonomy is developed based on our analysis. A conceptual model of online shopping is presented and discussed in light of existing empirical studies. Areas for further research are discussed.", "title": "" }, { "docid": "dd66e07814419e3c2515d882d662df93", "text": "Excess body weight (adiposity) and physical inactivity are increasingly being recognized as major nutritional risk factors for cancer, and especially for many of those cancer types that have increased incidence rates in affluent, industrialized parts of the world. In this review, an overview is presented of some key biological mechanisms that may provide important metabolic links between nutrition, physical activity and cancer, including insulin resistance and reduced glucose tolerance, increased activation of the growth hormone/IGF-I axis, alterations in sex-steroid synthesis and/or bioavailability, and low-grade chronic inflammation through the effects of adipokines and cytokines.", "title": "" }, { "docid": "46c8336f395d04d49369d406f41b0602", "text": "Several RGB-D datasets have been publicized over the past few years for facilitating research in computer vision and robotics. However, the lack of comprehensive and fine-grained annotation in these RGB-D datasets has posed challenges to their widespread usage. In this paper, we introduce SceneNN, an RGB-D scene dataset consisting of 100 scenes. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotation. We further enriched the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses. We used the dataset as a benchmark to evaluate the state-of-the-art methods on relevant research problems such as intrinsic decomposition and shape completion. Our dataset and annotation tools are available at http://www.scenenn.net.", "title": "" }, { "docid": "3d56d2c4b3b326bc676536d35b4bd77f", "text": "In this work an experimental study about the capability of the LBP, HOG descriptors and color for clothing attribute classification is presented. Two different variants of the LBP descriptor are considered, the original LBP and the uniform LBP. Two classifiers, Linear SVM and Random Forest, have been included in the comparison because they have been frequently used in clothing attributes classification. The experiments are carried out with a public available dataset, the clothing attribute dataset, that has 26 attributes in total. 
The obtained accuracies are over 75% in most cases, reaching 80% for the necktie or sleeve length attributes.", "title": "" }, { "docid": "f7a1624a4827e95b961eb164022aa2a2", "text": "Mitotic chromosome condensation, sister chromatid cohesion, and higher order folding of interphase chromatin are mediated by condensin and cohesin, eukaryotic members of the SMC (structural maintenance of chromosomes)-kleisin protein family. Other members facilitate chromosome segregation in bacteria [1]. A hallmark of these complexes is the binding of the two ends of a kleisin subunit to the apices of V-shaped Smc dimers, creating a tripartite ring capable of entrapping DNA (Figure 1A). In addition to creating rings, kleisins recruit regulatory subunits. One family of regulators, namely Kite dimers (Kleisin interacting winged-helix tandem elements), interact with Smc-kleisin rings from bacteria, archaea and the eukaryotic Smc5-6 complex, but not with either condensin or cohesin [2]. These instead possess proteins containing HEAT (Huntingtin/EF3/PP2A/Tor1) repeat domains whose origin and distribution have not yet been characterized. Using a combination of profile Hidden Markov Model (HMM)-based homology searches, network analysis and structural alignments, we identify a common origin for these regulators, for which we propose the name Hawks, i.e. HEAT proteins associated with kleisins.", "title": "" }, { "docid": "3f88c453eab8b2fbfffbf98fee34d086", "text": "Face recognition has become one of the most important and fastest growing areas during the last several years and has become the most successful application of image analysis, broadly used in security systems. It has been a challenging, interesting, and fast growing area in real time applications. The proposed method is tested using a benchmark ORL database that contains 400 images of 40 persons. Pre-processing techniques are applied on the ORL database to increase the recognition rate. The best recognition rate is 97.5% when tested using 9 training images and 1 testing image. Increasing image database brightness is efficient and will increase the recognition rate. Resizing images using 0.3 scale is also efficient and will increase the recognition rate. PCA is used for feature extraction and dimension reduction. Euclidean distance is used for the matching process.", "title": "" }, { "docid": "785a6d08ef585302d692864d09b026fe", "text": "Linear Discriminant Analysis (LDA) is a well-known method for dimensionality reduction and classification. LDA in the binary-class case has been shown to be equivalent to linear regression with the class label as the output. This implies that LDA for binary-class classifications can be formulated as a least squares problem. Previous studies have shown a certain relationship between multivariate linear regression and LDA for the multi-class case. Many of these studies show that multivariate linear regression with a specific class indicator matrix as the output can be applied as a preprocessing step for LDA. However, directly casting LDA as a least squares problem is challenging for the multi-class case. In this paper, a novel formulation for multivariate linear regression is proposed. The equivalence relationship between the proposed least squares formulation and LDA for multi-class classifications is rigorously established under a mild condition, which is shown empirically to hold in many applications involving high-dimensional data. 
Several LDA extensions based on the equivalence relationship are discussed.", "title": "" }, { "docid": "b3b050c35a1517dc52351cd917d0665a", "text": "The amount of information shared via social media is rapidly increasing amid growing concerns over online privacy. This study investigates the effect of controversiality and social endorsement of media content on sharing behavior when choosing between sharing publicly or anonymously. Anonymous sharing is found to be a popular choice (59% of shares), especially for controversial content, which is 3.2x more likely to be shared anonymously. Social endorsement was not found to affect sharing behavior, except for sports-related content. Implications for social media interface design are discussed.", "title": "" }, { "docid": "5724b84f9c00c503066bd6a178664c3c", "text": "A simple quantitative model is presented that is consistent with the available evidence about the British economy during the early phase of the Industrial Revolution. The basic model is a variant of a standard growth model, calibrated to data from Great Britain for the period 1780-1850. The model is used to study the importance of foreign trade and the role of the declining cost of power during this period. The British Industrial Revolution was an amazing episode, with economic consequences that changed the world. But our understanding of the economic events of this ¤Research Department, Federal Reserve Bank of Minneapolis, and Department of Economics, University of Chicago. I am grateful to Matthias Doepke for many stimulating conversations, as well as several useful leads on data sources. I also owe more than the usual thanks to Joel Mokyr for many helpful comments, including several that changed the direction of the paper in a fundamental way. Finally, I am grateful to the Research Division of Federal Reserve Bank of Minneapolis for support while much of this work was done. This paper is being prepared for the Carnegie-Rochester conference in November, 2000.", "title": "" }, { "docid": "567d165eb9ad5f9860f3e0602cbe3e03", "text": "This paper presents new image sensors with multi-bucket pixels that enable time-multiplexed exposure, an alternative imaging approach. This approach deals nicely with scene motion, and greatly improves high dynamic range imaging, structured light illumination, motion corrected photography, etc. To implement an in-pixel memory or a bucket, the new image sensors incorporate the virtual phase CCD concept into a standard 4-transistor CMOS imager pixel. This design allows us to create a multi-bucket pixel which is compact, scalable, and supports true correlated double sampling to cancel kTC noise. Two image sensors with dual and quad-bucket pixels have been designed and fabricated. The dual-bucket sensor consists of a 640H × 576V array of 5.0 μm pixel in 0.11 μm CMOS technology while the quad-bucket sensor comprises 640H × 512V array of 5.6 μm pixel in 0.13 μm CMOS technology. Some computational photography applications were implemented using the two sensors to demonstrate their values in eliminating artifacts that currently plague computational photography.", "title": "" }, { "docid": "4706f9e8d9892543aaeb441c45816b24", "text": "The mood of a text and the intention of the writer can be reflected in the typeface. However, in designing a typeface, it is difficult to keep the style of various characters consistent, especially for languages with lots of morphological variations such as Chinese. 
In this paper, we propose a Typeface Completion Network (TCN) which takes one character as an input, and automatically completes the entire set of characters in the same style as the input characters. Unlike existing models proposed for image-to-image translation, TCN embeds a character image into two separate vectors representing typeface and content. Combined with a reconstruction loss from the latent space, and with other various losses, TCN overcomes the inherent difficulty in designing a typeface. Also, compared to previous image-to-image translation models, TCN generates high quality character images of the same typeface with a much smaller number of model parameters. We validate our proposed model on the Chinese and English character datasets, which is paired data, and the CelebA dataset, which is unpaired data. In these datasets, TCN outperforms recently proposed state-of-the-art models for image-to-image translation. The source code of our model is available at https://github.com/yongqyu/TCN.", "title": "" }, { "docid": "b49e61ecb2afbaa8c3b469238181ec26", "text": "Stylistic variations of language, such as formality, carry speakers’ intention beyond literal meaning and should be conveyed adequately in translation. We propose to use lexical formality models to control the formality level of machine translation output. We demonstrate the effectiveness of our approach in empirical evaluations, as measured by automatic metrics and human assessments.", "title": "" }, { "docid": "ef49eeb766313743edb77f8505e491a0", "text": "In 1998, a clinical classification of pulmonary hypertension (PH) was established, categorizing PH into groups which share similar pathological and hemodynamic characteristics and therapeutic approaches. During the 5th World Symposium held in Nice, France, in 2013, the consensus was reached to maintain the general scheme of previous clinical classifications. However, modifications and updates especially for Group 1 patients (pulmonary arterial hypertension [PAH]) were proposed. The main change was to withdraw persistent pulmonary hypertension of the newborn (PPHN) from Group 1 because this entity carries more differences than similarities with other PAH subgroups. In the current classification, PPHN is now designated number 1. Pulmonary hypertension associated with chronic hemolytic anemia has been moved from Group 1 PAH to Group 5, unclear/multifactorial mechanism. In addition, it was decided to add specific items related to pediatric pulmonary hypertension in order to create a comprehensive, common classification for both adults and children. Therefore, congenital or acquired left-heart inflow/outflow obstructive lesions and congenital cardiomyopathies have been added to Group 2, and segmental pulmonary hypertension has been added to Group 5. Last, there were no changes for Groups 2, 3, and 4.", "title": "" }, { "docid": "36fa816c5e738ea6171851fb3200f68d", "text": "Vehicle speed prediction provides important information for many intelligent vehicular and transportation applications. Accurate on-road vehicle speed prediction is challenging, because an individual vehicle speed is affected by many factors, e.g., the traffic condition, vehicle type, and driver’s behavior, in either deterministic or stochastic way. This paper proposes a novel data-driven vehicle speed prediction method in the context of vehicular networks, in which the real-time traffic information is accessible and utilized for vehicle speed prediction. 
It first predicts the average traffic speeds of road segments by using neural network models based on historical traffic data. Hidden Markov models (HMMs) are then utilized to present the statistical relationship between individual vehicle speeds and the traffic speed. Prediction for individual vehicle speeds is realized by applying the forward–backward algorithm on HMMs. To evaluate the prediction performance, simulations are set up in the SUMO microscopic traffic simulator with the application of a real Luxembourg motorway network and traffic count data. The vehicle speed prediction result shows that our proposed method outperforms other ones in terms of prediction accuracy.", "title": "" } ]
scidocsrr
01347f095bba102c22475914a023366c
Deep Semantic Architecture with discriminative feature visualization for neuroimage analysis
[ { "docid": "14e5874d0916a293eed6489130925098", "text": "Deep learning methods have recently made notable advances in the tasks of classification and representation learning. These tasks are important for brain imaging and neuroscience discovery, making the methods attractive for porting to a neuroimager's toolbox. Success of these methods is, in part, explained by the flexibility of deep learning models. However, this flexibility makes the process of porting to new areas a difficult parameter optimization problem. In this work we demonstrate our results (and feasible parameter ranges) in application of deep learning methods to structural and functional brain imaging data. These methods include deep belief networks and their building block the restricted Boltzmann machine. We also describe a novel constraint-based approach to visualizing high dimensional data. We use it to analyze the effect of parameter choices on data transformations. Our results show that deep learning methods are able to learn physiologically important representations and detect latent relations in neuroimaging data.", "title": "" } ]
[ { "docid": "8899dc843831f592a89d0f6cf9688dfc", "text": "Deep neural networks have yielded immense success in speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks for recommender systems has received a relatively little introspection. Also, different recommendation scenarios have their own issues which creates the need for different approaches for recommendation. Specifically in news recommendation a major problem is that of varying user interests. In this work, we use deep neural networks with attention to tackle the problem of news recommendation. The key factor in user-item based collaborative filtering is to identify the interaction between user and item features. Matrix factorization is one of the most common approaches for identifying this interaction. It maps both the users and the items into a joint latent factor space such that user-item interactions in that space can be modeled as inner products in that space. Some recent work has used deep neural networks with the motive to learn an arbitrary function instead of the inner product that is used for capturing the user-item interaction. However, directly adapting it for the news domain does not seem to be very suitable. This is because of the dynamic nature of news readership where the interests of the users keep changing with time. Hence, it becomes challenging for recommendation systems to model both user preferences as well as account for the interests which keep changing over time. We present a deep neural model, where a non-linear mapping of users and item features are learnt first. For learning a non-linear mapping for the users we use an attention-based recurrent layer in combination with fully connected layers. For learning the mappings for the items we use only fully connected layers. We then use a ranking based objective function to learn the parameters of the network. We also use the content of the news articles as features for our model. Extensive experiments on a real-world dataset show a significant improvement of our proposed model over the state-of-the-art by 4.7% (Hit Ratio@10). Along with this, we also show the effectiveness of our model to handle the user cold-start and item cold-start problems. ? Vaibhav Kumar and Dhruv Khattar are the corresponding authors", "title": "" }, { "docid": "9e3d3783aa566b50a0e56c71703da32b", "text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. 
Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.", "title": "" }, { "docid": "e1066f3b7ff82667dbc7186f357dd406", "text": "Generative adversarial networks (GANs) are becoming increasingly popular for image processing tasks. Researchers have started using GANs for speech enhancement, but the advantage of using the GAN framework has not been established for speech enhancement. For example, a recent study reports encouraging enhancement results, but we find that the architecture of the generator used in the GAN gives better performance when it is trained alone using the $L_1$ loss. This work presents a new GAN for speech enhancement, and obtains performance improvement with the help of adversarial training. A deep neural network (DNN) is used for time-frequency mask estimation, and it is trained in two ways: regular training with the $L_1$ loss and training using the GAN framework with the help of an adversary discriminator. Experimental results suggest that the GAN framework improves speech enhancement performance. Further exploration of loss functions for speech enhancement suggests that the $L_1$ loss is consistently better than the $L_2$ loss for improving the perceptual quality of noisy speech.", "title": "" }, { "docid": "5b340560406b99bcb383816accf45060", "text": "Modern global managers are required to possess a set of competencies or multiple intelligences in order to meet pressing business challenges. Hence, expanding global managers' competencies is becoming an important issue. Many scholars and specialists have proposed various competency models containing a list of required competencies. But it is hard for someone to master a broad set of competencies at the same time. Here arises an imperative issue on how to enrich global managers' competencies by way of segmenting a set of competencies into some portions in order to facilitate competency development with a stepwise mode. To solve this issue involving the vagueness of human judgments, we have proposed an effective method combining fuzzy logic and Decision Making Trial and Evaluation Laboratory (DEMATEL) to segment required competencies for better promoting the competency development of global managers. Additionally, an empirical study is presented to illustrate the proposed method.", "title": "" }, { "docid": "e9dcc0eb5894907142dffdf2aa233c35", "text": "The explosion of the web and the abundance of linked data demand effective and efficient methods for storage, management and querying. More specifically, the ever-increasing size and number of RDF data collections raise the need for efficient query answering, and dictate the usage of distributed data management systems for effectively partitioning and querying them. To this direction, Apache Spark is one of the most active big-data approaches, with more and more systems adopting it, for efficient, distributed data management. 
The purpose of this paper is to provide an overview of the existing works dealing with efficient query answering, in the area of RDF data, using Apache Spark. We discuss the characteristics and the key dimensions of such systems, describe novel ideas in the area and the corresponding drawbacks, and provide directions for future work.", "title": "" }, { "docid": "87cfc5cad31751fd89c68dc9557eb33f", "text": "This paper presents a low-voltage (LV) (1.0 V) and low-power (LP) (40 μW) inverter-based operational transconductance amplifier (OTA) using an FGMOS (Floating-Gate MOS) transistor and its application in Gm-C filters. The OTA was designed in a 0.18 μm CMOS process. The simulation results of the proposed OTA demonstrate an open loop gain of 30.2 dB and a unity gain frequency of 942 MHz. In this OTA, a relative tuning range of 50 is achieved. To demonstrate the use of the proposed OTA in practical circuits, the second-order filter was designed. The designed filter has a good tuning range from 100 kHz to 5.6 MHz which is suitable for the wireless specifications of Bluetooth (650 kHz), CDMA2000 (700 kHz) and Wideband CDMA (2.2 MHz). The active area occupied by the designed filter on the silicon is and the maximum power consumption of this filter is 160 μW.", "title": "" }, { "docid": "879cc991ec7353678cc22d6771684c3e", "text": "We demonstrate an x-axis Lorentz force sensor (LFS) for electronic compass applications. The sensor is based on a 30 μm thick torsional resonator fabricated in a process similar to that used in commercial MEMS gyroscopes. The sensor achieved a resolution of 210 nT/√Hz with a DC supply voltage of 2 V and driving power consumption of 1 mW. Bias instability was measured as 60 nT using the minimum Allan deviation. This mechanically-balanced torsional resonator also confers the advantage of low acceleration sensitivity; the measured response to acceleration is below the sensor's noise level.", "title": "" }, { "docid": "f9af6cca7d9ac18ace9bc6169b4393cc", "text": "Metric learning has become a widely used tool in machine learning. To reduce expensive costs brought in by increasing dimensionality, low-rank metric learning arises as it can be more economical in storage and computation. However, existing low-rank metric learning algorithms usually adopt nonconvex objectives, and are hence sensitive to the choice of a heuristic low-rank basis. In this paper, we propose a novel low-rank metric learning algorithm to yield bilinear similarity functions. This algorithm scales linearly with input dimensionality in both space and time, therefore applicable to high-dimensional data domains. A convex objective free of heuristics is formulated by leveraging trace norm regularization to promote low-rankness. Crucially, we prove that all globally optimal metric solutions must retain a certain low-rank structure, which enables our algorithm to decompose the high-dimensional learning task into two steps: an SVD-based projection and a metric learning problem with reduced dimensionality. The latter step can be tackled efficiently through employing a linearized Alternating Direction Method of Multipliers. The efficacy of the proposed algorithm is demonstrated through experiments performed on four benchmark datasets with tens of thousands of dimensions.", "title": "" }, { "docid": "ae9bdb80a60dd6820c1c9d9557a73ffc", "text": "We propose a novel method for predicting image labels by fusing image content descriptors with the social media context of each image. 
An image uploaded to a social media site such as Flickr often has meaningful, associated information, such as comments and other images the user has uploaded, that is complementary to pixel content and helpful in predicting labels. Prediction challenges such as ImageNet [6]and MSCOCO [19] use only pixels, while other methods make predictions purely from social media context [21]. Our method is based on a novel fully connected Conditional Random Field (CRF) framework, where each node is an image, and consists of two deep Convolutional Neural Networks (CNN) and one Recurrent Neural Network (RNN) that model both textual and visual node/image information. The edge weights of the CRF graph represent textual similarity and link-based metadata such as user sets and image groups. We model the CRF as an RNN for both learning and inference, and incorporate the weighted ranking loss and cross entropy loss into the CRF parameter optimization to handle the training data imbalance issue. Our proposed approach is evaluated on the MIR-9K dataset and experimentally outperforms current state-of-the-art approaches.", "title": "" }, { "docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c", "text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.", "title": "" }, { "docid": "0efa756a15219d8383ca296860f7433a", "text": "Chronic inflammation plays a multifaceted role in carcinogenesis. Mounting evidence from preclinical and clinical studies suggests that persistent inflammation functions as a driving force in the journey to cancer. The possible mechanisms by which inflammation can contribute to carcinogenesis include induction of genomic instability, alterations in epigenetic events and subsequent inappropriate gene expression, enhanced proliferation of initiated cells, resistance to apoptosis, aggressive tumor neovascularization, invasion through tumor-associated basement membrane and metastasis, etc. Inflammation-induced reactive oxygen and nitrogen species cause damage to important cellular components (e.g., DNA, proteins and lipids), which can directly or indirectly contribute to malignant cell transformation. Overexpression, elevated secretion, or abnormal activation of proinflammatory mediators, such as cytokines, chemokines, cyclooxygenase-2, prostaglandins, inducible nitric oxide synthase, and nitric oxide, and a distinct network of intracellular signaling molecules including upstream kinases and transcription factors facilitate tumor promotion and progression. While inflammation promotes development of cancer, components of the tumor microenvironment, such as tumor cells, stromal cells in surrounding tissue and infiltrated inflammatory/immune cells generate an intratumoral inflammatory state by aberrant expression or activation of some proinflammatory molecules. 
Many of proinflammatory mediators, especially cytokines, chemokines and prostaglandins, turn on the angiogenic switches mainly controlled by vascular endothelial growth factor, thereby inducing inflammatory angiogenesis and tumor cell-stroma communication. This will end up with tumor angiogenesis, metastasis and invasion. Moreover, cellular microRNAs are emerging as a potential link between inflammation and cancer. The present article highlights the role of various proinflammatory mediators in carcinogenesis and their promise as potential targets for chemoprevention of inflammation-associated carcinogenesis.", "title": "" }, { "docid": "c5bcc3434495d10627d05ed032661f94", "text": "An important part of textual information around the world contains some kind of geographic features. User queries with geographic references are becoming very common and human expectations from a search engine are even higher. Although several works have been focused on this area, the interpretation of the geographic information in order to better satisfy the user needs continues being a challenge. This work proposes different techniques which are involved in the process of identifying and analyzing the geographic information in textual documents and queries in natural languages. A geographic ontology GeoNW has been built by combining GeoNames, WordNet and Wikipedia resources. Based on the information stored in GeoNW, geographic terms are identified and an algorithm for solving the toponym disambiguation problem is proposed. Once the geographic information is processed, we obtain a geographic ranking list of documents which is combined with a standard textual ranking list of documents for producing the final results. GeoCLEF test collection is used for evaluating the accuracy of the result.", "title": "" }, { "docid": "e2b98c529a0175758b2edafe284d0dc7", "text": "This paper is concerned with the problem of fuzzy-filter design for discrete-time nonlinear systems in the Takagi-Sugeno (T-S) form. Different from existing fuzzy filters, the proposed ones are designed in finite-frequency domain. First, a so-called finite-frequency l2 gain is defined that extends the standard l2 gain. Then, a sufficient condition for the filtering-error system with a finite-frequency l2 gain is derived. Based on the obtained condition, three fuzzy filters are designed to deal with noises in the low-, middle-, and high-frequency domain, respectively. The proposed fuzzy-filtering method can get a better noise-attenuation performance when frequency ranges of noises are known beforehand. An example about a tunnel-diode circuit is given to illustrate its effectiveness.", "title": "" }, { "docid": "47e11b1d734b1dcacc182e55d378f2a2", "text": "Experience replay plays an important role in the success of deep reinforcement learning (RL) by helping stabilize the neural networks. It has become a new norm in deep RL algorithms. In this paper, however, we showcase that varying the size of the experience replay buffer can hurt the performance even in very simple tasks. The size of the replay buffer is actually a hyper-parameter which needs careful tuning. Moreover, our study of experience replay leads to the formulation of the Combined DQN algorithm, which can significantly outperform primitive DQN in some tasks.", "title": "" }, { "docid": "88cb8c2f7f4fd5cdc95cc8e48faa3cb7", "text": "Prediction or prognostication is at the core of modern evidence-based medicine. 
Prediction of overall mortality and cardiovascular disease can be improved by a systematic evaluation of measurements from large-scale epidemiological studies or by using nested sampling designs to discover new markers from omics technologies. In study I, we investigated if prediction measures such as calibration, discrimination and reclassification could be calculated within traditional sampling designs and which of these designs were the most efficient. We found that it is possible to calculate prediction measures by using a proper weighting system and that a stratified case-cohort design is a reasonable choice both in terms of efficiency and simplicity. In study II, we investigated the clinical utility of several genetic scores for incident coronary heart disease. We found that genetic information could be of clinical value in improving the allocation of patients to correct risk strata and that the assessment of a genetic risk score among intermediate risk subjects could help to prevent about one coronary heart disease event for every 318 people screened. In study III, we explored the association between circulating metabolites and incident coronary heart disease. We found four new metabolites associated with coronary heart disease independently of established cardiovascular risk factors and with evidence of clinical utility. By using genetic information we determined a potential causal effect on coronary heart disease of one of these novel metabolites. In study IV, we compared a large number of demographic, health and lifestyle measurements for association with all-cause and cause-specific mortality. By ranking measurements in terms of their predictive abilities we could provide new insights about their relative importance, as well as reveal some unexpected associations. Moreover, we developed and validated a prediction score for five-year mortality with good discrimination ability and calibrated it for the entire UK population. In conclusion, we applied a translational approach spanning from the discovery of novel biomarkers to their evaluation in terms of clinical utility. We combined this effort with methodological improvements aimed to expand prediction measures in settings that were not previously explored. We identified promising novel metabolomics markers for cardiovascular disease and supported the potential clinical utility of a genetic score in primary prevention. Our results might fuel future studies aimed to implement these findings in clinical practice.", "title": "" }, { "docid": "774f4189181c6cdf666ecb5402969a5a", "text": "INTRODUCTION\nOsteopathic Manipulative Treatment (OMT) is effective in improving function, movement and restoring pain conditions. Despite clinical results, the mechanisms of how OMT achieves its effects remain unclear. The fascial system is described as a tensional network that envelops the human body. 
Direct or indirect manipulations of the fascial system are a distinctive part of OMT.\n\n\nOBJECTIVE\nThis review describes the biological effects of direct and indirect manipulation of the fascial system.\n\n\nMATERIAL AND METHODS\nLiterature search was performed in February 2016 in the electronic databases: Cochrane, Medline, Scopus, Ostmed, Pedro and authors' publications relative to Fascia Research Congress Website.\n\n\nRESULTS\nManipulation of the fascial system seems to interfere with some cellular processes providing various pro-inflammatory and anti-inflammatory cells and molecules.\n\n\nDISCUSSION\nDespite growing research in the osteopathic field, biological effects of direct or indirect manipulation of the fascial system are not conclusive.\n\n\nCONCLUSION\nTo elevate manual medicine as a primary intervention in clinical settings, it's necessary to clarify how OMT modalities work in order to underpin their clinical efficacies.", "title": "" }, { "docid": "87f05972a93b2b432d0dad6d55e97502", "text": "The daunting volumes of community-contributed media contents on the Internet have become one of the primary sources for online advertising. However, conventional advertising treats image and video advertising as general text advertising by displaying relevant ads based on the contents of the Web page, without considering the inherent characteristics of visual contents. This article presents a contextual advertising system driven by images, which automatically associates relevant ads with an image rather than the entire text in a Web page and seamlessly inserts the ads in the nonintrusive areas within each individual image. The proposed system, called ImageSense, supports scalable advertising of, from root to node, Web sites, pages, and images. In ImageSense, the ads are selected based on not only textual relevance but also visual similarity, so that the ads yield contextual relevance to both the text in the Web page and the image content. The ad insertion positions are detected based on image salience, as well as face and text detection, to minimize intrusiveness to the user. We evaluate ImageSense on a large-scale real-world images and Web pages, and demonstrate the effectiveness of ImageSense for online image advertising.", "title": "" }, { "docid": "699c6a7b4f938d6a45d65878f08335e4", "text": "Fuzzing is a popular dynamic program analysis technique used to find vulnerabilities in complex software. Fuzzing involves presenting a target program with crafted malicious input designed to cause crashes, buffer overflows, memory errors, and exceptions. Crafting malicious inputs in an efficient manner is a difficult open problem and often the best approach to generating such inputs is through applying uniform random mutations to pre-existing valid inputs (seed files). We present a learning technique that uses neural networks to learn patterns in the input files from past fuzzing explorations to guide future fuzzing explorations. In particular, the neural models learn a function to predict good (and bad) locations in input files to perform fuzzing mutations based on the past mutations and corresponding code coverage information. We implement several neural models including LSTMs and sequence-to-sequence models that can encode variable length input files. 
We incorporate our models in the state-of-the-art AFL (American Fuzzy Lop) fuzzer and show significant improvements in terms of code coverage, unique code paths, and crashes for various input formats including ELF, PNG, PDF, and XML.", "title": "" }, { "docid": "b1c00b7801a51d11a8384e5977d7e041", "text": "In this article, we report the results of 2 studies that were conducted to investigate whether adult attachment theory explains employee behavior at work. In the first study, we examined the structure of a measure of adult attachment and its relations with measures of trait affectivity and the Big Five. In the second study, we examined the relations between dimensions of attachment and emotion regulation behaviors, turnover intentions, and supervisory reports of counterproductive work behavior and organizational citizenship behavior. Results showed that anxiety and avoidance represent 2 higher order dimensions of attachment that predicted these criteria (except for counterproductive work behavior) after controlling for individual difference variables and organizational commitment. The implications of these results for the study of attachment at work are discussed.", "title": "" }, { "docid": "160d488f12fa1db16756df36c649a76a", "text": "Cutaneous metastases are a rare event, representing 0.7% to 2.0% of all cutaneous malignant neoplasms. They may be the first sign of a previously undiagnosed visceral malignancy or the initial presentation of a recurrent neoplasm. The frequency of cutaneous metastases according to the type of underlying malignancies varies with sex. In men, the most common internal malignancies leading to cutaneous metastases are lung cancer, colon cancer, melanoma, squamous cell carcinoma of the oral cavity, and renal cell carcinoma. In women, breast cancer, colon cancer, melanoma, lung cancer, and ovarian cancer are the most common malignancies leading to cutaneous metastases.", "title": "" } ]
scidocsrr
a4fed8b3c8cd87d441f99f105565201d
An investigation of imitation learning algorithms for structured prediction
[ { "docid": "61ae61d0950610ee2ad5e07f64f9b983", "text": "We present Searn, an algorithm for integrating search and learning to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision. Searn is a meta-algorithm that transforms these complex problems into simple classification problems to which any binary classifier may be applied. Unlike current algorithms for structured learning that require decomposition of both the loss function and the feature functions over the predicted structure, Searn is able to learn prediction functions for any loss function and any class of features. Moreover, Searn comes with a strong, natural theoretical guarantee: good performance on the derived classification problems implies good performance on the structured prediction problem.", "title": "" }, { "docid": "26a599c22c173f061b5d9579f90fd888", "text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto", "title": "" } ]
[ { "docid": "25c815f5fc0cf87bdef5e069cbee23a8", "text": "This paper presents a 9-bit subrange analog-to-digital converter (ADC) consisting of a 3.5-bit flash coarse ADC, a 6-bit successive-approximation-register (SAR) fine ADC, and a differential segmented capacitive digital-to-analog converter (DAC). The flash ADC controls the thermometer coarse capacitors of the DAC and the SAR ADC controls the binary fine ones. Both theoretical analysis and behavioral simulations show that the differential non-linearity (DNL) of a SAR ADC with a segmented DAC is better than that of a binary ADC. The merged switching of the coarse capacitors significantly enhances overall operation speed. At 150 MS/s, the ADC consumes 1.53 mW from a 1.2-V supply. The effective number of bits (ENOB) is 8.69 bits and the effective resolution bandwidth (ERBW) is 100 MHz. With a 1.3-V supply voltage, the sampling rate is 200 MS/s with 2.2-mW power consumption. The ENOB is 8.66 bits and the ERBW is 100 MHz. The FOMs at 1.3 V and 200 MS/s, 1.2 V and 150 MS/s and 1 V and 100 MS/s are 27.2, 24.7, and 17.7 fJ/conversion-step, respectively.", "title": "" }, { "docid": "24957794ed251c2e970d787df6d87064", "text": "Glyph as a powerful multivariate visualization technique is used to visualize data through its visual channels. To visualize 3D volumetric dataset, glyphs are usually placed on 2D surface, such as the slicing plane or the feature surface, to avoid occluding each other. However, the 3D spatial structure of some features may be missing. On the other hand, placing large number of glyphs over the entire 3D space results in occlusion and visual clutter that make the visualization ineffective. To avoid the occlusion, we propose a view-dependent interactive 3D lens that removes the occluding glyphs by pulling the glyphs aside through the animation. We provide two space deformation models and two lens shape models to displace the glyphs based on their spatial distributions. After the displacement, the glyphs around the user-interested region are still visible as the context information, and their spatial structures are preserved. Besides, we attenuate the brightness of the glyphs inside the lens based on their depths to provide more depth cue. Furthermore, we developed an interactive glyph visualization system to explore different glyph-based visualization applications. In the system, we provide a few lens utilities that allows users to pick a glyph or a feature and look at it from different view directions. We compare different display/interaction techniques to visualize/manipulate our lens and glyphs.", "title": "" }, { "docid": "2e9d0bf42b8bb6eb8752e89eb46f2fc5", "text": "What is the growth pattern of social networks, like Facebook and WeChat? Does it truly exhibit exponential early growth, as predicted by textbook models like the Bass model, SI, or the Branching Process? How about the count of links, over time, for which there are few published models?\n We examine the growth of several real networks, including one of the world's largest online social network, ``WeChat'', with 300 million nodes and 4.75 billion links by 2013; and we observe power law growth for both nodes and links, a fact that completely breaks the sigmoid models (like SI, and Bass). In its place, we propose NETTIDE, along with differential equations for the growth of the count of nodes, as well as links. 
Our model accurately fits the growth patterns of real graphs; it is general, encompassing as special cases all the known, traditional models (including Bass, SI, log-logistic growth); while still remaining parsimonious, requiring only a handful of parameters. Moreover, our NETTIDE for link growth is the first one of its kind, accurately fitting real data, and naturally leading to the densification phenomenon. We validate our model with four real, time-evolving social networks, where NETTIDE gives good fitting accuracy, and, more importantly, applied on the WeChat data, our NETTIDE forecasted more than 730 days into the future, with 3% error.", "title": "" }, { "docid": "4301af5b0c7910480af37f01847fb1fe", "text": "Cross-modal retrieval is a very hot research topic that is imperative to many applications involving multi-modal data. Discovering an appropriate representation for multi-modal data and learning a ranking function are essential to boost the cross-media retrieval. Motivated by the assumption that a compositional cross-modal semantic representation (pairs of images and text) is more attractive for cross-modal ranking, this paper exploits the existing image-text databases to optimize a ranking function for cross-modal retrieval, called deep compositional cross-modal learning to rank (C2MLR). In this paper, C2MLR considers learning a multi-modal embedding from the perspective of optimizing a pairwise ranking problem while enhancing both local alignment and global alignment. In particular, the local alignment (i.e., the alignment of visual objects and textual words) and the global alignment (i.e., the image-level and sentence-level alignment) are collaboratively utilized to learn the multi-modal embedding common space in a max-margin learning to rank manner. The experiments demonstrate the superiority of our proposed C2MLR due to its nature of multi-modal compositional embedding.", "title": "" }, { "docid": "01cb25375745cd8fdc6d2a546910acb4", "text": "Digital technology innovations have led to significant changes in everyday life, made possible by the widespread use of computers and continuous developments in information technology (IT). Based on the utilization of systems applying 3D(three-dimensional) technology, as well as virtual and augmented reality techniques, IT has become the basis for a new fashion industry model, featuring consumer-centered service and production methods. Because of rising wages and production costs, the fashion industry’s international market power has been significantly weakened in recent years. To overcome this situation, new markets must be established by building a new knowledge and technology-intensive fashion industry. Development of virtual clothing simulation software, which has played an important role in the fashion industry’s IT-based digitalization, has led to continuous technological improvements for systems that can virtually adapt existing 2D(two-dimensional) design work to 3D design work. Such adaptions have greatly influenced the fashion industry by increasing profits. Both here and abroad, studies have been conducted to support the development of consumercentered, high value-added clothing and fashion products by employing digital technology. This study proposes a system that uses a depth camera to capture the figure of a user standing in front of a large display screen. The display can show fashion concepts and various outfits to the user, coordinated to his or her body. Thus, a “magic mirror” effect is produced. 
Magic mirror-based fashion apparel simulation can support total fashion coordination for accessories and outfits automatically, and does not require computer or fashion expertise. This system can provide convenience for users by assuming the role of a professional fashion coordinator giving an appearance presentation. It can also be widely used to support a customized method for clothes shopping.", "title": "" }, { "docid": "c8767f1fbcd84b1973b0007110a77d2c", "text": "OBJECTIVES\nThe purpose of present article was to review the classifications suggested for assessment of the jawbone anatomy, to evaluate the diagnostic possibilities of mandibular canal identification and risk of inferior alveolar nerve injury, aesthetic considerations in aesthetic zone, as well as to suggest new classification system of the jawbone anatomy in endosseous dental implant treatment.\n\n\nMATERIAL AND METHODS\nLiterature was selected through a search of PubMed, Embase and Cochrane electronic databases. The keywords used for search were mandible; mandibular canal; alveolar nerve, inferior; anatomy, cross-sectional; dental implants; classification. The search was restricted to English language articles, published from 1972 to March 2013. Additionally, a manual search in the major anatomy and oral surgery books were performed. The publications there selected by including clinical and human anatomy studies.\n\n\nRESULTS\nIn total 109 literature sources were obtained and reviewed. The classifications suggested for assessment of the jawbone anatomy, diagnostic possibilities of mandibular canal identification and risk of inferior alveolar nerve injury, aesthetic considerations in aesthetic zone were discussed. New classification system of the jawbone anatomy in endosseous dental implant treatment based on anatomical and radiologic findings and literature review results was suggested.\n\n\nCONCLUSIONS\nThe classification system proposed here based on anatomical and radiological jawbone quantity and quality evaluation is a helpful tool for planning of treatment strategy and collaboration among specialists. Further clinical studies should be conducted for new classification validation and reliability evaluation.", "title": "" }, { "docid": "ea5ff4f4060818d0f83cbc8314af2b9e", "text": "A winglet is a device attached at the wingtip, used to improve aircraft efficiency by lowering the induced drag caused by wingtip vortices. It is a vertical or angled extension at the tips of each wing. Winglets work by increasing the effective aspect ratio of a wing without adding greatly to the structural stress and hence necessary weight of the wing structure. This paper describes a CFD 3-dimensional winglets analysis that was performed on a rectangular wing of NACA65 3 218 cross sectional airfoil. The wing is of 660 mm span and 121 mm chord and was analyzed for two shape configurations, semicircle and elliptical. The objectives of the analysis were to compare the aerodynamic characteristics of the two winglet configurations and to investigate the performance of the two winglets shape simulated at selected cant angle of 0, 45 and 60 degrees.", "title": "" }, { "docid": "4cc3f3a5e166befe328b6e18bc836e89", "text": "Virtual human characters are found in a broad range of applications, from movies, games and networked virtual environments to teleconferencing and tutoring applications. Such applications are available on a variety of platforms, from desktop and web to mobile devices. 
High-quality animation is an essential prerequisite for realistic and believable virtual characters. Though researchers and application developers have ample animation techniques for virtual characters at their disposal, implementation of these techniques into an existing application tends to be a daunting and time-consuming task. In this paper we present visage|SDK, a versatile framework for real-time character animation based on the MPEG-4 FBA standard that offers a wide spectrum of features that includes animation playback, lip synchronization and facial motion tracking, while facilitating rapid production of art assets and easy integration with existing graphics engines.", "title": "" }, { "docid": "681f36fde6ec060baa76a6722a62ccbc", "text": "This study determined if any of six endodontic solutions would have a softening effect on resorcinol-formalin paste in extracted teeth, and if there were any differences in the solvent action between these solutions. Forty-nine single-rooted extracted teeth were decoronated 2 mm coronal to the CEJ, and the roots sectioned apically to a standard length of 15 mm. Canals were prepared to a 12 mm WL and a uniform size with a #7 Parapost drill. Teeth were then mounted in a cylinder ring with acrylic. The resorcinol-formalin mixture was placed into the canals and was allowed to set for 60 days in a humidor. The solutions tested were 0.9% sodium chloride, 5.25% sodium hypochlorite, chloroform, Endosolv R (Endosolv R), 3% hydrogen peroxide, and 70% isopropyl alcohol. Seven samples per solution were tested and seven samples using water served as controls. One drop of the solution was placed over the set mixture in the canal, and the depth of penetration of a 1.5-mm probe was measured at 2, 5, 10, and 20 min using a dial micrometer gauge. A repeated-measures ANOVA showed a difference in penetration between the solutions at 10 min (p = 0.04) and at 20 min (p = 0.0004). At 20 min, Endosolv R had significantly greater penetration than 5.25% sodium hypochlorite (p = 0.0033) and chloroform (p = 0.0018); however, it was not significantly better than the control (p = 0.0812). Although Endosolv R had statistically superior probe penetration at 20 min, the softening effect could not be detected clinically at this time.", "title": "" }, { "docid": "33edd1c2ad88c3693a96f7d3340b061c", "text": "The strength of diapycnal mixing by small-scale motions in a stratified fluid is investigated through changes to the mean buoyancy profile. We study the mixing in laboratory experiments in which an initially linearly stratified fluid is stirred with a rake of vertical bars. The flow evolution depends on the Richardson number Ri, defined as the ratio of buoyancy forces to inertial forces. At low Ri, the buoyancy flux is a function of the local buoyancy gradient only, and may be modelled as gradient diffusion with a Ri-dependent eddy diffusivity. At high Ri, vertical vorticity shed in the wakes of the bars interacts with the stratification and produces well-mixed layers separated by interfaces. This process leads to layers with a thickness proportional to the ratio of grid velocity to buoyancy frequency for a wide range of Reynolds numbers Re and grid solidities. In this regime, the buoyancy flux is not a function of the local gradient alone, but also depends on the local structure of the buoyancy profile. Consequently, the layers are not formed by the Phillips/Posmentier mechanism, and we show that they result from vortical mixing previously thought to occur only at low Re. 
The initial mixing efficiency shows a maximum at a critical Ri which separates the two classes of behaviour. The mixing efficiency falls as the fluid mixes and as the layered structure intensifies and, therefore, the mixing efficiency depends not only on the overall Ri, but also on the dynamics of the structure in the buoyancy field. We discuss some implications of these results to the atmosphere and oceans. © 1999 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "078d3fde34bbcdbb3806d13c3e6cb2dd", "text": "This paper reviews the concept of adaptation of human communities to global changes, especially climate change, in the context of adaptive capacity and vulnerability. It focuses on scholarship that contributes to practical implementation of adaptations at the community scale. In numerous social science fields, adaptations are considered as responses to risks associated with the interaction of environmental hazards and human vulnerability or adaptive capacity. In the climate change field, adaptation analyses have been undertaken for several distinct purposes. Impact assessments assume adaptations to estimate damages to longer term climate scenarios with and without adjustments. Evaluations of specified adaptation options aim to identify preferred measures. Vulnerability indices seek to provide relative vulnerability scores for countries, regions or communities. The main purpose of participatory vulnerability assessments is to identify adaptation strategies that are feasible and practical in communities. The distinctive features of adaptation analyses with this purpose are outlined, and common elements of this approach are described. Practical adaptation initiatives tend to focus on risks that are already problematic, climate is considered together with other environmental and social stresses, and adaptations are mostly integrated or mainstreamed into other resource management, disaster preparedness and sustainable development programs. © 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5d557ecb67df253662e37d6ec030d055", "text": "Low-rank matrix approximation methods provide one of the simplest and most effective approaches to collaborative filtering. Such models are usually fitted to data by finding a MAP estimate of the model parameters, a procedure that can be performed efficiently even on very large datasets. However, unless the regularization parameters are tuned carefully, this approach is prone to overfitting because it finds a single point estimate of the parameters. In this paper we present a fully Bayesian treatment of the Probabilistic Matrix Factorization (PMF) model in which model capacity is controlled automatically by integrating over all model parameters and hyperparameters. We show that Bayesian PMF models can be efficiently trained using Markov chain Monte Carlo methods by applying them to the Netflix dataset, which consists of over 100 million movie ratings. The resulting models achieve significantly higher prediction accuracy than PMF models trained using MAP estimation.", "title": "" }, { "docid": "5f33fb32ac9a278f7184ac384dc367ab", "text": "The new technologies characterizing the Internet of Things (IoT) allow realizing real smart environments able to provide advanced services to the users. Recently, these smart environments are also being exploited to renovate the users' interest on the cultural heritage, by guaranteeing real interactive cultural experiences. 
In this paper, we design and validate an indoor location-aware architecture able to enhance the user experience in a museum. In particular, the proposed system relies on a wearable device that combines image recognition and localization capabilities to automatically provide users with cultural content related to the observed artworks. The localization information is obtained from a Bluetooth Low Energy (BLE) infrastructure installed in the museum. Moreover, the system interacts with the Cloud to store multimedia content produced by the user and to share environment-generated events on his or her social networks. Finally, several location-aware services running in the system control the environment status according to users' movements. These services interact with physical devices through a multiprotocol middleware. The system has been designed to be easily extensible to other IoT technologies, and its effectiveness has been evaluated in the MUST museum, Lecce, Italy.", "title": "" }, { "docid": "44402fdc3c9f2c6efaf77a00035f38ad", "text": "A multi-objective optimization strategy to find optimal designs of composite multi-rim flywheel rotors is presented. Flywheel energy storage systems have been expanding into applications such as rail and automotive transportation, where the construction volume is limited. Common flywheel rotor optimization approaches for these applications are single-objective, aiming to increase the stored energy or stored energy density. The proposed multi-objective optimization offers more information for decision-makers by optimizing three objectives separately: stored energy, cost and productivity. A novel approach to model the manufacturing of multi-rim composite rotors facilitates the consideration of manufacturing cost and time within the optimization. An analytical stress calculation for multi-rim rotors is used, which also takes interference fits and residual stresses into account. Constrained by a failure prediction based on the Maximum Strength, Maximum Strain and Tsai-Wu criteria, the discrete and nonlinear optimization was solved. A hybrid optimization strategy is presented that combines a genetic algorithm with a local improvement executed by a sequential quadratic program. The problem was solved for two rotor geometries used in light rail transit applications, showing design results similar to those found in industry.", "title": "" }, { "docid": "d98f60a2a0453954543da840076e388a", "text": "The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discover new variations of the back-propagation equation. We use a domain-specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize the generalization performance after a few epochs of training. We find several update equations that train faster than standard back-propagation over short training horizons, and perform similarly to standard back-propagation at convergence.", "title": "" }, { "docid": "505a150ad558f60a57d7f708a05288f3", "text": "Probiotic supplements in the food industry have attracted a lot of attention and shown remarkable growth. Metabolic engineering (ME) approaches enable understanding of their mechanisms of action and increase the possibility of designing probiotic strains with desired functions.
Probiotic microorganisms are generally referred to as industrially important lactic acid bacteria (LAB), which are involved in fermenting dairy products, food and beverages and produce lactic acid as the final product. A number of illustrations of metabolic engineering approaches in industrial probiotic bacteria are described in this review, including transcriptomic studies of Lactobacillus reuteri and improvement of exopolysaccharide (EPS) biosynthesis yield in Lactobacillus casei LC2W. This review summarizes various metabolic engineering approaches for exploring metabolic pathways. These approaches enable evaluation of the cellular metabolic state and effective editing of the microbial genome, or the introduction of novel enzymes to redirect carbon fluxes. In addition, various systems biology tools, such as in silico design, commonly used for improving strain performance are also discussed. Finally, we discuss the integration of metabolic engineering and genome profiling, which offers a new way to explore metabolic interactions, fluxomics and probiogenomics using probiotic bacteria such as Bifidobacterium spp. and Lactobacillus spp.", "title": "" }, { "docid": "5d48b6fcc1d8f1050b5b5dc60354fedb", "text": "The latency of current neural dialogue state tracking models prohibits them from being deployed efficiently in production systems, despite their highly accurate performance. This paper proposes a new scalable and accurate neural dialogue state tracking model, based on the recently proposed Global-Local Self-Attention encoder (GLAD) model by Zhong et al. (2018), which uses global modules to share parameters between estimators for different types (called slots) of dialogue states, and local modules to learn slot-specific features. By using only one recurrent network with global conditioning, compared to (1 + # slots) recurrent networks with global and local conditioning used in the GLAD model, our proposed model reduces training and inference latency by 35% on average, while preserving belief state tracking performance at 97.38% on turn request and 88.51% on joint goal accuracy. Evaluation on a multi-domain dataset (Multi-WoZ) also demonstrates that our model outperforms GLAD on turn inform and joint goal accuracy.", "title": "" }, { "docid": "707b75a5fa5e796c18bcaf17cd43075d", "text": "This paper presents a new feedback control strategy for balancing individual DC capacitor voltages in a three-phase cascade multilevel inverter-based static synchronous compensator. The design of the control strategy is based on a detailed small-signal model. The key part of the proposed controller is a compensator that cancels the varying terms in the model. The controller can balance individual DC capacitor voltages when H-bridges run with different switching patterns and have parameter variations. It has two advantages: 1) the controller works well in all operation modes (the capacitive mode, the inductive mode, and the standby mode), and 2) the impact of the individual DC voltage controller on the voltage quality is small. Simulation and experimental results verify the performance of the controller.", "title": "" }, { "docid": "e6d5781d32e76d9c5f7c4ea985568986", "text": "We present a baseline convolutional neural network (CNN) structure and an image preprocessing methodology to improve CNN-based facial expression recognition.
To identify the most efficient network structure, we investigated four network structures that are known to perform well in facial expression recognition. Moreover, we also investigated the effect of input image preprocessing methods. Five types of data input (raw, histogram equalization, isotropic smoothing, diffusion-based normalization, difference of Gaussians) were tested, and the accuracy was compared. We trained 20 different CNN models (4 networks × 5 data input types) and verified the performance of each network with test images from five different databases. The experimental results showed that a three-layer structure consisting of a simple convolutional layer and a max pooling layer, with histogram-equalized image input, was the most efficient. We describe the detailed training procedure and analyze the test accuracy results based on extensive observation.", "title": "" }, { "docid": "cac0de9be06166653af16275a9b54878", "text": "Community-based question answering (CQA) services have arisen as a popular knowledge-sharing pattern for netizens. With abundant interactions among users, individuals are capable of obtaining satisfactory information. However, users often cannot obtain answers within minutes; they have to check the progress over time until satisfying answers are submitted. We address this problem as a user personalized satisfaction prediction task. Existing methods usually rely on manual feature selection, which is undesirable as it requires careful design and is labor intensive. In this paper, we address this issue by developing a new multiple-instance deep learning framework. Specifically, in our setting, each question follows a weakly supervised (multiple-instance) learning assumption, where its obtained answers are regarded as an instance set and the question is defined as resolved if it receives at least one satisfactory answer. We thus design an efficient framework that exploits the multiple-instance learning property with deep learning to model question-answer pair relevance and rank the asker's satisfaction probability. Extensive experiments on large-scale datasets from Stack Exchange demonstrate the feasibility of our proposed framework in predicting askers' personalized satisfaction. Our framework can be extended to numerous applications, such as UI satisfaction prediction, multi-armed bandit problems, expert finding, and so on.", "title": "" } ]
scidocsrr
d20444f2aeb0bcbc25835726b89a2fb1
Better cross company defect prediction
[ { "docid": "dc66c80a5031c203c41c7b2908c941a3", "text": "There has been a great deal of interest in defect prediction: using prediction models trained on historical data to help focus quality-control resources in ongoing development. Since most new projects don't have historical data, there is interest in cross-project prediction: using data from one project to predict defects in another. Sadly, results in this area have largely been disheartening. Most experiments in cross-project defect prediction report poor performance, using the standard measures of precision, recall and F-score. We argue that these IR-based measures, while broadly applicable, are not as well suited for the quality-control settings in which defect prediction models are used. Specifically, these measures are taken at specific threshold settings (typically thresholds of the predicted probability of defectiveness returned by a logistic regression model). However, in practice, software quality control processes choose from a range of time-and-cost vs quality tradeoffs: how many files shall we test? how many shall we inspect? Thus, we argue that measures based on a variety of tradeoffs, viz., 5%, 10% or 20% of files tested/inspected would be more suitable. We study cross-project defect prediction from this perspective. We find that cross-project prediction performance is no worse than within-project performance, and substantially better than random prediction!", "title": "" }, { "docid": "697580dda38c9847e9ad7c6a14ad6cd0", "text": "Background: This paper describes an analysis that was conducted on newly collected repository with 92 versions of 38 proprietary, open-source and academic projects. A preliminary study perfomed before showed the need for a further in-depth analysis in order to identify project clusters.\n Aims: The goal of this research is to perform clustering on software projects in order to identify groups of software projects with similar characteristic from the defect prediction point of view. One defect prediction model should work well for all projects that belong to such group. The existence of those groups was investigated with statistical tests and by comparing the mean value of prediction efficiency.\n Method: Hierarchical and k-means clustering, as well as Kohonen's neural network was used to find groups of similar projects. The obtained clusters were investigated with the discriminant analysis. For each of the identified group a statistical analysis has been conducted in order to distinguish whether this group really exists. Two defect prediction models were created for each of the identified groups. The first one was based on the projects that belong to a given group, and the second one - on all the projects. Then, both models were applied to all versions of projects from the investigated group. If the predictions from the model based on projects that belong to the identified group are significantly better than the all-projects model (the mean values were compared and statistical tests were used), we conclude that the group really exists.\n Results: Six different clusters were identified and the existence of two of them was statistically proven: 1) cluster proprietary B -- T=19, p=0.035, r=0.40; 2) cluster proprietary/open - t(17)=3.18, p=0.05, r=0.59. The obtained effect sizes (r) represent large effects according to Cohen's benchmark, which is a substantial finding.\n Conclusions: The two identified clusters were described and compared with results obtained by other researchers. 
The results of this work make a further step towards defining formal methods for reusing defect prediction models by identifying groups of projects within which the same defect prediction model may be used. Furthermore, a method of clustering was suggested and applied.", "title": "" } ]
[ { "docid": "56d9b47d1860b5a80c62da9f75b6769d", "text": "Optical see-through head-mounted displays (OSTHMDs) have many advantages in augmented reality application, but their utility in practical applications has been limited by the complexity of calibration. Because the human subject is an inseparable part of the eye-display system, previous methods for OSTHMD calibration have required extensive manual data collection using either instrumentation or manual point correspondences and are highly dependent on operator skill. This paper describes display-relative calibration (DRC) for OSTHMDs, a new two phase calibration method that minimizes the human element in the calibration process and ensures reliable calibration. Phase I of the calibration captures the parameters of the display system relative to a normalized reference frame and is performed in a jig with no human factors issues. The second phase optimizes the display for a specific user and the placement of the display on the head. Several phase II alternatives provide flexibility in a variety of applications including applications involving untrained users.", "title": "" }, { "docid": "0488511dc0641993572945e98a561cc7", "text": "Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neuron network through a set of training data. We have seen wide adoption of DL in many safety-critical scenarios. However, a plethora of studies have shown that the state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by the accuracy of test data. Considering the limitation of accessible high quality test data, good accuracy performance on test data can hardly provide confidence to the testing adequacy and generality of DL systems. Unlike traditional software systems that have clear and controllable logic and functionality, the lack of interpretability in a DL system makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed. The in-depth evaluation of our proposed testing criteria is demonstrated on two well-known datasets, five DL systems, and with four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.", "title": "" }, { "docid": "c8977fe68b265b735ad4261f5fe1ec25", "text": "We present ACQUINE - Aesthetic Quality Inference Engine, a publicly accessible system which allows users to upload their photographs and have them rated automatically for aesthetic quality. The system integrates a support vector machine based classifier which extracts visual features on the fly and performs real-time classification and prediction. As the first publicly available tool for automatically determining the aesthetic value of an image, this work is a significant first step in recognizing human emotional reaction to visual stimulus. In this paper, we discuss fundamentals behind this system, and some of the challenges faced while creating it. We report statistics generated from over 140,000 images uploaded by Web users. 
The system is demonstrated at http://acquine.alipr.com.", "title": "" }, { "docid": "36357f48cbc3ed4679c679dcb77bdd81", "text": "In this paper, we review research and applications in the area of mediated or remote social touch. Whereas current communication media rely predominately on vision and hearing, mediated social touch allows people to touch each other over a distance by means of haptic feedback technology. Overall, the reviewed applications have interesting potential, such as the communication of simple ideas (e.g., through Hapticons), establishing a feeling of connectedness between distant lovers, or the recovery from stress. However, the beneficial effects of mediated social touch are usually only assumed and have not yet been submitted to empirical scrutiny. Based on social psychological literature on touch, communication, and the effects of media, we assess the current research and design efforts and propose future directions for the field of mediated social touch.", "title": "" }, { "docid": "fb8201417666d992d508538583c5713f", "text": "We analyze the I/O behavior of iBench, a new collection of productivity and multimedia application workloads. Our analysis reveals a number of differences between iBench and typical file-system workload studies, including the complex organization of modern files, the lack of pure sequential access, the influence of underlying frameworks on I/O patterns, the widespread use of file synchronization and atomic operations, and the prevalence of threads. Our results have strong ramifications for the design of next generation local and cloud-based storage systems.", "title": "" }, { "docid": "1dee93ec9e8de1cf365534581fb19623", "text": "The term “Business Model”started to gain momentum in the early rise of the new economy and it is currently used both in business practice and scientific research. Under a general point of view BMs are considered as a contact point among technology, organization and strategy used to describe how an organization gets value from technology and uses it as a source of competitive advantage. Recent contributions suggest to use ontologies to define a shareable conceptualization of BM. The aim of this study is to investigate the role of BM Ontologies as a conceptual tool for the cooperation of subjects interested in achieving a common goal and operating in complex and innovative environments. This is the case for example of those contexts characterized by the deployment of e-services from multiple service providers in cross border environments. Through an extensive literature review on BM we selected the most suitable conceptual tool and studied its application to the LD-CAST project during a participatory action research activity in order to analyse the BM design process of a new organisation based on the cooperation of service providers (the Chambers of Commerce from Italy, Romania, Poland and Bulgaria) with different needs, legal constraints and cultural background.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. 
JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "a9069e2560b78e97bf8e76889041a201", "text": "We consider the setting of an agent with a fixed body interacting with an unknown and uncertain external world. We show that models trained to predict proprioceptive information about the agent’s body come to represent objects in the external world. In spite of being trained with only internally available signals, these dynamic body models come to represent external objects through the necessity of predicting their effects on the agent’s own body. That is, the model learns holistic persistent representations of objects in the world, even though the only training signals are body signals. Our dynamics model is able to successfully predict distributions over 132 sensor readings over 100 steps into the future and we demonstrate that even when the body is no longer in contact with an object, the latent variables of the dynamics model continue to represent its shape. We show that active data collection by maximizing the entropy of predictions about the body— touch sensors, proprioception and vestibular information—leads to learning of dynamic models that show superior performance when used for control. We also collect data from a real robotic hand and show that the same models can be used to answer questions about properties of objects in the real world. Videos with qualitative results of our models are available at https://goo.gl/mZuqAV.", "title": "" }, { "docid": "d04e975e48bd385a69fdf58c93103fd3", "text": "In this paper we will present a low-phase-noise wide-tuning-range oscillator suitable for scaled CMOS processes. It switches between the two resonant modes of a high-order LC resonator that consists of two identical LC tanks coupled by capacitor and transformer. The mode switching method does not add lossy switches to the resonator and thus doubles frequency tuning range without degrading phase noise performance. Moreover, the coupled resonator leads to 3 dB lower phase noise than a single LC tank, which provides a way of achieving low phase noise in scaled CMOS process. Finally, the novel way of using inductive and capacitive coupling jointly decouples frequency separation and tank impedances of the two resonant modes, and makes it possible to achieve balanced performance. The proposed structure is verified by a prototype in a low power 65 nm CMOS process, which covers all cellular bands with a continuous tuning range of 2.5-5.6 GHz and meets all stringent phase noise specifications of cellular standards. It uses a 0.6 V power supply and achieves excellent phase noise figure-of-merit (FoM) of 192.5 dB at 3.7 GHz and >; 188 dB across the entire tuning range. This demonstrates the possibility of achieving low phase noise and wide tuning range at the same time in scaled CMOS processes.", "title": "" }, { "docid": "5fd840b020b69c9588faf575f8079e83", "text": "We demonstrate that modern image recognition methods based on artificial neural networks can recover hidden information from images protected by various forms of obfuscation. 
The obfuscation techniques considered in this paper are mosaicing (also known as pixelation), blurring (as used by YouTube), and P3, a recently proposed system for privacy-preserving photo sharing that encrypts the significant JPEG coefficients to make images unrecognizable by humans. We empirically show how to train artificial neural networks to successfully identify faces and recognize objects and handwritten digits even if the images are protected using any of the above obfuscation techniques.", "title": "" }, { "docid": "4a1a1b3012f2ce941cc532a55b49f09b", "text": "Gamification informally refers to making a system more game-like. More specifically, gamification denotes applying game mechanics to a non-game system. We theorize that gamification success depends on the game mechanics employed and their effects on user motivation and immersion. The proposed theory may be tested using an experiment or questionnaire study.", "title": "" }, { "docid": "0c43c0dbeaff9afa0e73bddb31c7dac0", "text": "A compact dual-band dielectric resonator antenna (DRA) using a parasitic c-slot fed by a microstrip line is proposed. In this configuration, the DR performs the functions of an effective radiator and the feeding structure of the parasitic c-slot in the ground plane. By optimizing the proposed structure parameters, the structure resonates at two different frequencies. One is from the DRA with the broadside patterns and the other from the c-slot with the dipole-like patterns. In order to determine the performance of varying design parameters on bandwidth and resonance frequency, the parametric study is carried out using simulation software High-Frequency Structure Simulator and experimental results. The measured and simulated results show excellent agreement.", "title": "" }, { "docid": "46bc17ab45e11b5c9c07200a60db399f", "text": "Locality-sensitive hashing (LSH) is a basic primitive in several large-scale data processing applications, including nearest-neighbor search, de-duplication, clustering, etc. In this paper we propose a new and simple method to speed up the widely-used Euclidean realization of LSH. At the heart of our method is a fast way to estimate the Euclidean distance between two d-dimensional vectors; this is achieved by the use of randomized Hadamard transforms in a non-linear setting. This decreases the running time of a (k, L)-parameterized LSH from O(dkL) to O(dlog d + kL). Our experiments show that using the new LSH in nearest-neighbor applications can improve their running times by significant amounts. To the best of our knowledge, this is the first running time improvement to LSH that is both provable and practical.", "title": "" }, { "docid": "efcf84406a2218deeb4ca33cb8574172", "text": "Cross-site scripting attacks represent one of the major security threats in today’s Web applications. Current approaches to mitigate cross-site scripting vulnerabilities rely on either server-based or client-based defense mechanisms. Although effective for many attacks, server-side protection mechanisms may leave the client vulnerable if the server is not well patched. On the other hand, client-based mechanisms may incur a significant overhead on the client system. In this work, we present a hybrid client-server solution that combines the benefits of both architectures. Our Proxy-based solution leverages the strengths of both anomaly detection and control flow analysis to provide accurate detection. 
We demonstrate the feasibility and accuracy of our approach through extended testing using real-world cross-site scripting exploits.", "title": "" }, { "docid": "2f88356c3a1ab60e3dd084f7d9630c70", "text": "Recently, some E-commerce sites launch a new interaction box called Tips on their mobile apps. Users can express their experience and feelings or provide suggestions using short texts typically several words or one sentence. In essence, writing some tips and giving a numerical rating are two facets of a user's product assessment action, expressing the user experience and feelings. Jointly modeling these two facets is helpful for designing a better recommendation system. While some existing models integrate text information such as item specifications or user reviews into user and item latent factors for improving the rating prediction, no existing works consider tips for improving recommendation quality. We propose a deep learning based framework named NRT which can simultaneously predict precise ratings and generate abstractive tips with good linguistic quality simulating user experience and feelings. For abstractive tips generation, gated recurrent neural networks are employed to \"translate'' user and item latent representations into a concise sentence. Extensive experiments on benchmark datasets from different domains show that NRT achieves significant improvements over the state-of-the-art methods. Moreover, the generated tips can vividly predict the user experience and feelings.", "title": "" }, { "docid": "6341eaeb32d0e25660de6be6d3943e81", "text": "Theorists have speculated that primary psychopathy (or Factor 1 affective-interpersonal features) is prominently heritable whereas secondary psychopathy (or Factor 2 social deviance) is more environmentally determined. We tested this differential heritability hypothesis using a large adolescent twin sample. Trait-based proxies of primary and secondary psychopathic tendencies were assessed using Multidimensional Personality Questionnaire (MPQ) estimates of Fearless Dominance and Impulsive Antisociality, respectively. The environmental contexts of family, school, peers, and stressful life events were assessed using multiple raters and methods. Consistent with prior research, MPQ Impulsive Antisociality was robustly associated with each environmental risk factor, and these associations were significantly greater than those for MPQ Fearless Dominance. However, MPQ Fearless Dominance and Impulsive Antisociality exhibited similar heritability, and genetic effects mediated the associations between MPQ Impulsive Antisociality and the environmental measures. Results were largely consistent across male and female twins. We conclude that gene-environment correlations rather than main effects of genes and environments account for the differential environmental correlates of primary and secondary psychopathy.", "title": "" }, { "docid": "4bce473bb65dfc545d5895c7edb6cea6", "text": "mathematical framework of the population equations. It will turn out that the results are – of course – consistent with those derived from the population equation. We study a homogeneous network of N identical neurons which are mutually coupled with strength wij = J0/N where J0 > 0 is a positive constant. In other words, the (excitatory) interaction is scaled with one over N so that the total input to a neuron i is of order one even if the number of neurons is large (N →∞). 
Since we are interested in synchrony we suppose that all neurons have fired simultaneously at t̂ = 0. When will the neurons fire again? Since all neurons are identical we expect that the next firing time will also be synchronous. Let us calculate the period T between one synchronous pulse and the next. We start from the firing condition of SRM0 neurons θ = ui(t) = η(t− t̂i) + ∑", "title": "" }, { "docid": "17caec370a97af736d948123f9e7be73", "text": "Multiple-purpose forensics has been attracting increasing attention worldwide. However, most of the existing methods based on hand-crafted features often require domain knowledge and expensive human labour and their performances can be affected by factors such as image size and JPEG compression. Furthermore, many anti-forensic techniques have been applied in practice, making image authentication more difficult. Therefore, it is of great importance to develop methods that can automatically learn general and robust features for image operation detectors with the capability of countering anti-forensics. In this paper, we propose a new convolutional neural network (CNN) approach for multi-purpose detection of image manipulations under anti-forensic attacks. The dense connectivity pattern, which has better parameter efficiency than the traditional pattern, is explored to strengthen the propagation of general features related to image manipulation detection. When compared with three state-of-the-art methods, experiments demonstrate that the proposed CNN architecture can achieve a better performance (i.e., with a 11% improvement in terms of detection accuracy under anti-forensic attacks). The proposed method can also achieve better robustness against JPEG compression with maximum improvement of 13% on accuracy under low-quality JPEG compression.", "title": "" }, { "docid": "36e238fa3c85b41a062d08fd9844c9be", "text": "Building generalization is a difficult operation due to the complexity of the spatial distribution of buildings and for reasons of spatial recognition. In this study, building generalization is decomposed into two steps, i.e. building grouping and generalization execution. The neighbourhood model in urban morphology provides global constraints for guiding the global partitioning of building sets on the whole map by means of roads and rivers, by which enclaves, blocks, superblocks or neighbourhoods are formed; whereas the local constraints from Gestalt principles provide criteria for the further grouping of enclaves, blocks, superblocks and/or neighbourhoods. In the grouping process, graph theory, Delaunay triangulation and the Voronoi diagram are employed as supporting techniques. After grouping, some useful information, such as the sum of the building’s area, the mean separation and the standard deviation of the separation of buildings, is attached to each group. By means of the attached information, an appropriate operation is selected to generalize the corresponding groups. Indeed, the methodology described brings together a number of welldeveloped theories/techniques, including graph theory, Delaunay triangulation, the Voronoi diagram, urban morphology and Gestalt theory, in such a way that multiscale products can be derived.", "title": "" }, { "docid": "f4fb4638bb8bc6ae551dc729b6bcea2e", "text": "mark of facial attractiveness.1,2 Skeletal asymmetries generally require surgical intervention to improve facial esthetics and correct any associated malocclusions. 
The classic approach in volves a presurgical phase of orthodontics, during which dental compensations are eliminated, and a postsurgical phase to refine the occlusion. The presurgical phase can be lengthy, involving tooth decompensations that often exaggerate the existing dentofacial deformities.3 Skeletal anchorage now makes it possible to eliminate the presurgical orthodontic phase and to correct minor surgical inaccuracies and relapse tendencies after surgery. In addition to a significant reduction in treatment time, this approach offers immediate gratification in the correction of facial deformities,2 which can translate into better patient compliance with elastic wear and appointments. Another reported advantage is the elimination of soft-tissue imbalances that might interfere with ortho dontic tooth movements. This article describes a “surgery first” approach in a patient with complex dentofacial asymmetry and Class III malocclusion.", "title": "" } ]
scidocsrr
68ab5cce56a5d1352e1e211e33aec611
Memory Warps for Learning Long-Term Online Video Representations
[ { "docid": "76ad212ccd103c93d45c1ffa0e208b45", "text": "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors.", "title": "" }, { "docid": "7240d65e0bc849a569d840a461157b2c", "text": "Deep convolutional neutral networks have achieved great success on image recognition tasks. Yet, it is non-trivial to transfer the state-of-the-art image recognition networks to videos as per-frame evaluation is too slow and unaffordable. We present deep feature flow, a fast and accurate framework for video recognition. It runs the expensive convolutional sub-network only on sparse key frames and propagates their deep feature maps to other frames via a flow field. It achieves significant speedup as flow computation is relatively fast. The end-to-end training of the whole architecture significantly boosts the recognition accuracy. Deep feature flow is flexible and general. It is validated on two recent large scale video datasets. It makes a large step towards practical video recognition. Code would be released.", "title": "" }, { "docid": "a5aff68d94b1fcd5fef109f8685b8b4a", "text": "We propose a novel method for temporally pooling frames in a video for the task of human action recognition. The method is motivated by the observation that there are only a small number of frames which, together, contain sufficient information to discriminate an action class present in a video, from the rest. The proposed method learns to pool such discriminative and informative frames, while discarding a majority of the non-informative frames in a single temporal scan of the video. Our algorithm does so by continuously predicting the discriminative importance of each video frame and subsequently pooling them in a deep learning framework. We show the effectiveness of our proposed pooling method on standard benchmarks where it consistently improves on baseline pooling methods, with both RGB and optical flow based Convolutional networks. Further, in combination with complementary video representations, we show results that are competitive with respect to the state-of-the-art results on two challenging and publicly available benchmark datasets.", "title": "" }, { "docid": "5a2dcebfadb2e52d1f506b5e8e6547d8", "text": "The ability to predict and therefore to anticipate the future is an important attribute of intelligence. It is also of utmost importance in real-time systems, e.g. 
in robotics or autonomous driving, which depend on visual scene understanding for decision making. While prediction of the raw RGB pixel values in future video frames has been studied in previous work, here we introduce the novel task of predicting semantic segmentations of future frames. Given a sequence of video frames, our goal is to predict segmentation maps of not yet observed video frames that lie up to a second or further in the future. We develop an autoregressive convolutional neural network that learns to iteratively generate multiple frames. Our results on the Cityscapes dataset show that directly predicting future segmentations is substantially better than predicting and then segmenting future RGB frames. Prediction results up to half a second in the future are visually convincing and are much more accurate than those of a baseline based on warping semantic segmentations using optical flow.", "title": "" } ]
[ { "docid": "115d3bc01e9b7fe41bdd9fc987c8676c", "text": "A novel switching median filter incorporating with a powerful impulse noise detection method, called the boundary discriminative noise detection (BDND), is proposed in this paper for effectively denoising extremely corrupted images. To determine whether the current pixel is corrupted, the proposed BDND algorithm first classifies the pixels of a localized window, centering on the current pixel, into three groups-lower intensity impulse noise, uncorrupted pixels, and higher intensity impulse noise. The center pixel will then be considered as \"uncorrupted,\" provided that it belongs to the \"uncorrupted\" pixel group, or \"corrupted.\" For that, two boundaries that discriminate these three groups require to be accurately determined for yielding a very high noise detection accuracy-in our case, achieving zero miss-detection rate while maintaining a fairly low false-alarm rate, even up to 70% noise corruption. Four noise models are considered for performance evaluation. Extensive simulation results conducted on both monochrome and color images under a wide range (from 10% to 90%) of noise corruption clearly show that our proposed switching median filter substantially outperforms all existing median-based filters, in terms of suppressing impulse noise while preserving image details, and yet, the proposed BDND is algorithmically simple, suitable for real-time implementation and application.", "title": "" }, { "docid": "cbc0e3dff1d86d88c416b1119fd3da82", "text": "One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, and with little to no a-priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, ar X iv :1 71 2. 02 05 2v 1 [ cs .R O ] 6 D ec 2 01 7 and showcase how all the distinct components can be integrated to enable smooth robot operation. We provide critical insight on hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments.", "title": "" }, { "docid": "e9779af1233484b2ce9cc23d03c9beec", "text": "A number of pixel-based image fusion algorithms (using averaging, contrast pyramids, the discrete wavelet transform and the dualtree complex wavelet transform (DT-CWT) to perform fusion) are reviewed and compared with a novel region-based image fusion method which facilitates increased flexibility with the definition of a variety of fusion rules. The DT-CWT method could dissolve an image into simpler data so we could analyze the characteristic which contained within the image and then fused it with other image that had already been decomposed and DT-CWT could reconstruct the image into its original form without losing its original data. The pixel-based and region-based rules are compared to know each of their capability and performance. Region-based methods have a number of advantages over pixel-based methods. 
These include: the ability to use more intelligent semantic fusion rules; and for regions with certain properties to be attenuated or accentuated.", "title": "" }, { "docid": "c8b382852f445c6f05c905371330dd07", "text": "Novelty and surprise play significant roles in animal behavior and in attempts to understand the neural mechanisms underlying it. They also play important roles in technology, where detecting observations that are novel or surprising is central to many applications, such as medical diagnosis, text processing, surveillance, and security. Theories of motivation, particularly of intrinsic motivation, place novelty and surprise among the primary factors that arouse interest, motivate exploratory or avoidance behavior, and drive learning. In many of these studies, novelty and surprise are not distinguished from one another: the words are used more-or-less interchangeably. However, while undeniably closely related, novelty and surprise are very different. The purpose of this article is first to highlight the differences between novelty and surprise and to discuss how they are related by presenting an extensive review of mathematical and computational proposals related to them, and then to explore the implications of this for understanding behavioral and neuroscience data. We argue that opportunities for improved understanding of behavior and its neural basis are likely being missed by failing to distinguish between novelty and surprise.", "title": "" }, { "docid": "d15add461f0ca58de13b3dc975f7fef7", "text": "A frequency compensation technique improving characteristic of power supply rejection ratio (PSRR) for two-stage operational amplifiers is presented. This technique is applicable to most known two-stage amplifier configurations. The detailed small-signal analysis of an exemplary amplifier with the proposed compensation and a comparison to its basic version reveal several benefits of the technique which can be effectively exploited in continuous-time filter designs. This comparison shows the possibility of PSRR bandwidth broadening of more than a decade, significant reduction of chip area, the unity-gain bandwidth and power consumption improvement. These benefits are gained at the cost of a non-monotonic phase characteristic of the open-loop differential voltage gain and limitation of a close-loop voltage gain. A prototype-integrated circuit, fabricated based on 0.35 mm complementary metal-oxide semiconductor technology, was used for the technique verification. Two pairs of amplifiers with the classical Miller compensation and a cascoded input stage were measured and compared to their improved counterparts. The measurement data fully confirm the theoretically predicted advantages of the proposed compensation technique.", "title": "" }, { "docid": "2cff047c4b2577c99aa66df211b0beda", "text": "Image denoising is an important pre-processing step in medical image analysis. Different algorithms have been proposed in past three decades with varying denoising performances. More recently, having outperformed all conventional methods, deep learning based models have shown a great promise. These methods are however limited for requirement of large training sample size and high computational costs. In this paper we show that using small sample size, denoising autoencoders constructed using convolutional layers can be used for efficient denoising of medical images. Heterogeneous images can be combined to boost sample size for increased denoising performance. 
The simplest of networks can reconstruct images with corruption levels so high that noise and signal are not differentiable to the human eye.", "title": "" }, { "docid": "173c0124ac81cfe8fa10fbdc20a1a094", "text": "This paper presents a new approach to comparing fuzzy numbers using the α-distance. Initially, a metric distance on interval numbers based on the convex hull of the endpoints is proposed, and it is extended to fuzzy numbers. All the properties of the α-distance are proved in detail. Finally, the ranking of fuzzy numbers by the α-distance is discussed. In addition, the proposed method is compared with some known ones, and the validity of the new method is illustrated by applying it to several groups of fuzzy numbers.", "title": "" }, { "docid": "e8ff86bd701792e6eb5f2fa8fcc2e028", "text": "Memory layout transformations via data reorganization are very common operations, which occur as a part of the computation or as a performance optimization in data-intensive applications. These operations require inefficient memory access patterns and round-trip data movement through the memory hierarchy, failing to utilize the performance and energy-efficiency potentials of the memory subsystem. This paper proposes a high-bandwidth and energy-efficient hardware-accelerated memory layout transform (HAMLeT) system integrated within a 3D-stacked DRAM. HAMLeT uses low-overhead hardware that exploits the existing infrastructure in the logic layer of 3D-stacked DRAMs, and does not require any changes to the DRAM layers, yet it can fully exploit the locality and parallelism within the stack by implementing efficient layout transform algorithms. We analyze matrix layout transform operations (such as matrix transpose, matrix blocking and 3D matrix rotation) and demonstrate that HAMLeT can achieve close to peak system utilization, offering up to an order of magnitude performance improvement compared to CPU and GPU memory subsystems which do not employ HAMLeT.", "title": "" }, { "docid": "6b0294315128234ccdbec4532e6c4f7a", "text": "Carrying out similarity and analogy comparisons can be modeled as the alignment and mapping of structured representations. In this article we focus on three aspects of comparison that are central in structure-mapping theory. All three are controversial. First, comparison involves structured representations. Second, the comparison process is driven by a preference for connected relational structure. Third, the mapping between domains is rooted in semantic similarity between the relations that characterize the domains. For each of these points, we review supporting evidence and discuss some challenges raised by other researchers. We end with a discussion of the role of structure mapping in other cognitive processes.", "title": "" }, { "docid": "888efce805d5271f0b6571748793c4c6", "text": "Pedagogical changes and new models of delivering educational content should be considered in the effort to address the recommendations of the 2007 Institute of Medicine report and Benner's recommendations on the radical transformation of nursing. Transition to the nurse anesthesia practice doctorate addresses the importance of these recommendations, but educational models and specific strategies on how to implement changes in educational models and systems are still emerging. The flipped classroom (FC) is generating a considerable amount of buzz in academic circles.
The FC is a pedagogical model that employs asynchronous video lectures, reading assignments, practice problems, and other digital, technology-based resources outside the classroom, and interactive, group-based, problem-solving activities in the classroom. This FC represents a unique combination of constructivist ideology and behaviorist principles, which can be used to address the gap between didactic education and clinical practice performance. This article reviews recent evidence supporting use of the FC in health profession education and suggests ways to implement the FC in nurse anesthesia educational programs.", "title": "" }, { "docid": "06b99205e1dc53e5120a22dc4f927aa0", "text": "The last 2 decades witnessed a surge in empirical studies on the variables associated with achievement in higher education. A number of meta-analyses synthesized these findings. In our systematic literature review, we included 38 meta-analyses investigating 105 correlates of achievement, based on 3,330 effect sizes from almost 2 million students. We provide a list of the 105 variables, ordered by the effect size, and summary statistics for central research topics. The results highlight the close relation between social interaction in courses and achievement. Achievement is also strongly associated with the stimulation of meaningful learning by presenting information in a clear way, relating it to the students, and using conceptually demanding learning tasks. Instruction and communication technology has comparably weak effect sizes, which did not increase over time. Strong moderator effects are found for almost all instructional methods, indicating that how a method is implemented in detail strongly affects achievement. Teachers with high-achieving students invest time and effort in designing the microstructure of their courses, establish clear learning goals, and employ feedback practices. This emphasizes the importance of teacher training in higher education. Students with high achievement are characterized by high self-efficacy, high prior achievement and intelligence, conscientiousness, and the goal-directed use of learning strategies. Barring the paucity of controlled experiments and the lack of meta-analyses on recent educational innovations, the variables associated with achievement in higher education are generally well investigated and well understood. By using these findings, teachers, university administrators, and policymakers can increase the effectivity of higher education. (PsycINFO Database Record", "title": "" }, { "docid": "5a08b007fbe1a424f9788ea68ec47d80", "text": "We introduce a novel ensemble model based on random projections. The contribution of using random projections is two-fold. First, the randomness provides the diversity which is required for the construction of an ensemble model. Second, random projections embed the original set into a space of lower dimension while preserving the dataset’s geometrical structure to a given distortion. This reduces the computational complexity of the model construction as well as the complexity of the classification. Furthermore, dimensionality reduction removes noisy features from the data and also represents the information which is inherent in the raw data by using a small number of features. The noise removal increases the accuracy of the classifier. 
The proposed scheme was tested using WEKA based procedures that were applied to 16 benchmark dataset from the UCI repository.", "title": "" }, { "docid": "752cf1c7cefa870c01053d87ff4f445c", "text": "Cannabidiol (CBD) represents a new promising drug due to a wide spectrum of pharmacological actions. In order to relate CBD clinical efficacy to its pharmacological mechanisms of action, we performed a bibliographic search on PUBMED about all clinical studies investigating the use of CBD as a treatment of psychiatric symptoms. Findings to date suggest that (a) CBD may exert antipsychotic effects in schizophrenia mainly through facilitation of endocannabinoid signalling and cannabinoid receptor type 1 antagonism; (b) CBD administration may exhibit acute anxiolytic effects in patients with generalised social anxiety disorder through modification of cerebral blood flow in specific brain sites and serotonin 1A receptor agonism; (c) CBD may reduce withdrawal symptoms and cannabis/tobacco dependence through modulation of endocannabinoid, serotoninergic and glutamatergic systems; (d) the preclinical pro-cognitive effects of CBD still lack significant results in psychiatric disorders. In conclusion, current evidences suggest that CBD has the ability to reduce psychotic, anxiety and withdrawal symptoms by means of several hypothesised pharmacological properties. However, further studies should include larger randomised controlled samples and investigate the impact of CBD on biological measures in order to correlate CBD's clinical effects to potential modifications of neurotransmitters signalling and structural and functional cerebral changes.", "title": "" }, { "docid": "f6df414f8f61dbdab32be2f05d921cb8", "text": "The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas, we perform this task at ease given very few examples for learning. It has been proposed that the quick grasp of concept may come from the shared knowledge between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts as each part holds immense amounts of information on how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an objects. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on hand-written digits and show that this model generalizes to multiple datasets.", "title": "" }, { "docid": "dfba47fd3b84d6346052b559568a0c21", "text": "Understanding gaming motivations is important given the growing trend of incorporating game-based mechanisms in non-gaming applications. In this paper, we describe the development and validation of an online gaming motivations scale based on a 3-factor model. Data from 2,071 US participants and 645 Hong Kong and Taiwan participants is used to provide a cross-cultural validation of the developed scale. Analysis of actual in-game behavioral metrics is also provided to demonstrate predictive validity of the scale.", "title": "" }, { "docid": "b4c25df52a0a5f6ab23743d3ca9a3af2", "text": "Measuring similarity between texts is an important task for several applications. 
Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information. In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of incorporating domain knowledge to text matching.", "title": "" }, { "docid": "cf8bf65059568ca717289d8f23b25b38", "text": "AIM\nThis paper aims to systematically review studies investigating the strength of association between FMS composite scores and subsequent risk of injury, taking into account both methodological quality and clinical and methodological diversity.\n\n\nDESIGN\nSystematic review with meta-analysis.\n\n\nDATA SOURCES\nA systematic search of electronic databases was conducted for the period between their inception and 3 March 2016 using PubMed, Medline, Google Scholar, Scopus, Academic Search Complete, AMED (Allied and Complementary Medicine Database), CINAHL (Cumulative Index to Nursing and Allied Health Literature), Health Source and SPORTDiscus.\n\n\nELIGIBILITY CRITERIA FOR SELECTING STUDIES\nInclusion criteria: (1) English language, (2) observational prospective cohort design, (3) original and peer-reviewed data, (4) composite FMS score, used to define exposure and non-exposure groups and (5) musculoskeletal injury, reported as the outcome.\n\n\nEXCLUSION CRITERIA\n(1) data reported in conference abstracts or non-peer-reviewed literature, including theses, and (2) studies employing cross-sectional or retrospective study designs.\n\n\nRESULTS\n24 studies were appraised using the Quality of Cohort Studies assessment tool. In male military personnel, there was 'strong' evidence that the strength of association between FMS composite score (cut-point ≤14/21) and subsequent injury was 'small' (pooled risk ratio=1.47, 95% CI 1.22 to 1.77, p<0.0001, I2=57%). There was 'moderate' evidence to recommend against the use of FMS composite score as an injury prediction test in football (soccer). For other populations (including American football, college athletes, basketball, ice hockey, running, police and firefighters), the evidence was 'limited' or 'conflicting'.\n\n\nCONCLUSION\nThe strength of association between FMS composite scores and subsequent injury does not support its use as an injury prediction tool.\n\n\nTRIAL REGISTRATION NUMBER\nPROSPERO registration number CRD42015025575.", "title": "" }, { "docid": "bf294a4c3af59162b2f401e2cdcb060b", "text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. 
By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.", "title": "" }, { "docid": "5e2eee141595ae58ca69ee694dc51c8a", "text": "Evidence-based dietary information represented as unstructured text is a crucial information that needs to be accessed in order to help dietitians follow the new knowledge arrives daily with newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They are focused on, for example extracting gene mentions, proteins mentions, relationships between genes and proteins, chemical concepts and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge this is the first attempt at extracting dietary concepts. DrNER is a rule-based NER that consists of two phases. 
The first phase involves the detection and determination of the entity mentions, and the second involves the selection and extraction of the entities. We evaluate the method by using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. Evaluation of the method showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations.", "title": "" } ]
scidocsrr
bde7b86f912c0b9f51107f1cdafd9552
Unsupervised Random Walk Sentence Embeddings: A Strong but Simple Baseline
[ { "docid": "f5f56d680fbecb94a08d9b8e5925228f", "text": "Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods. Many use nonlinear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. This paper proposes a new generative model, a dynamic version of the log-linear topic model of Mnih and Hinton (2007). The methodological novelty is to use the prior to compute closed form expressions for word statistics. This provides a theoretical justification for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. It also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. (2013a) and many subsequent papers. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed in space.", "title": "" }, { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" } ]
[ { "docid": "beb1c8ba8809d1ac409584bea1495654", "text": "Multimodal information processing has received considerable attention in recent years. The focus of existing research in this area has been predominantly on the use of fusion technology. In this paper, we suggest that cross-modal association can provide a new set of powerful solutions in this area. We investigate different cross-modal association methods using the linear correlation model. We also introduce a novel method for cross-modal association called Cross-modal Factor Analysis (CFA). Our earlier work on Latent Semantic Indexing (LSI) is extended for applications that use off-line supervised training. As a promising research direction and practical application of cross-modal association, cross-modal information retrieval where queries from one modality are used to search for content in another modality using low-level features is then discussed in detail. Different association methods are tested and compared using the proposed cross-modal retrieval system. All these methods achieve significant dimensionality reduction. Among them CFA gives the best retrieval performance. Finally, this paper addresses the use of cross-modal association to detect talking heads. The CFA method achieves 91.1% detection accuracy, while LSI and Canonical Correlation Analysis (CCA) achieve 66.1% and 73.9% accuracy, respectively. As shown by experiments, cross-modal association provides many useful benefits, such as robust noise resistance and effective feature selection. Compared to CCA and LSI, the proposed CFA shows several advantages in analysis performance and feature usage. Its capability in feature selection and noise resistance also makes CFA a promising tool for many multimedia analysis applications.", "title": "" }, { "docid": "c6b1ad47687dbd86b28a098160f406bb", "text": "The development of a 10-item self-report scale (EPDS) to screen for Postnatal Depression in the community is described. After extensive pilot interviews a validation study was carried out on 84 mothers using the Research Diagnostic Criteria for depressive illness obtained from Goldberg's Standardised Psychiatric Interview. The EPDS was found to have satisfactory sensitivity and specificity, and was also sensitive to change in the severity of depression over time. The scale can be completed in about 5 minutes and has a simple method of scoring. The use of the EPDS in the secondary prevention of Postnatal Depression is discussed.", "title": "" }, { "docid": "246bbb92bc968d20866b8c92a10f8ac7", "text": "This survey paper provides an overview of content-based music information retrieval systems, both for audio and for symbolic music notation. Matching algorithms and indexing methods are briefly presented. The need for a TREC-like comparison of matching algorithms such as MIREX at ISMIR becomes clear from the high number of quite different methods which so far only have been used on different data collections. We placed the systems on a map showing the tasks and users for which they are suitable, and we find that existing content-based retrieval systems fail to cover a gap between the very general and the very specific retrieval tasks.", "title": "" }, { "docid": "8518dc45e3b0accfc551111489842359", "text": "PURPOSE\nRobot-assisted surgery has been rapidly adopted in the U.S. for prostate cancer. 
Its adoption has been driven by market forces and patient preference, and debate continues regarding whether it offers improved outcomes to justify the higher cost relative to open surgery. We examined the comparative effectiveness of robot-assisted vs open radical prostatectomy in cancer control and survival in a nationally representative population.\n\n\nMATERIALS AND METHODS\nThis population based observational cohort study of patients with prostate cancer undergoing robot-assisted radical prostatectomy and open radical prostatectomy during 2003 to 2012 used data captured in the SEER (Surveillance, Epidemiology, and End Results)-Medicare linked database. Propensity score matching and time to event analysis were used to compare all cause mortality, prostate cancer specific mortality and use of additional treatment after surgery.\n\n\nRESULTS\nA total of 6,430 robot-assisted radical prostatectomies and 9,161 open radical prostatectomies performed during 2003 to 2012 were identified. The use of robot-assisted radical prostatectomy increased from 13.6% in 2003 to 2004 to 72.6% in 2011 to 2012. After a median followup of 6.5 years (IQR 5.2-7.9) robot-assisted radical prostatectomy was associated with an equivalent risk of all cause mortality (HR 0.85, 0.72-1.01) and similar cancer specific mortality (HR 0.85, 0.50-1.43) vs open radical prostatectomy. Robot-assisted radical prostatectomy was also associated with less use of additional treatment (HR 0.78, 0.70-0.86).\n\n\nCONCLUSIONS\nRobot-assisted radical prostatectomy has comparable intermediate cancer control as evidenced by less use of additional postoperative cancer therapies and equivalent cancer specific and overall survival. Longer term followup is needed to assess for differences in prostate cancer specific survival, which was similar during intermediate followup. Our findings have significant quality and cost implications, and provide reassurance regarding the adoption of more expensive technology in the absence of randomized controlled trials.", "title": "" }, { "docid": "a63f9b27e27393bb432198f18c3d89e1", "text": "Accounting information systems have been widely used by many organizations to automate and integrate their business operations. The main objectives of many businesses in adopting these systems are to improve their business efficiency and increase competitiveness. The qualitative characteristics of any Accounting Information System can be maintained if there is a sound internal control system. Internal control is exercised to ensure the achievement of operational goals and performance. 
Therefore, the purpose of this study is to examine the efficiency of accounting information systems with respect to performance measures, using secondary data. It was found that accounting information systems are of great importance to both businesses and organizations: they help facilitate management decision making, internal controls and the quality of financial reports, they facilitate the company’s transactions, and they also play an important role in the economic system. The study recommends that businesses, firms and organizations should adopt the use of AIS, because adequate accounting information is essential for every effective decision-making process, and adequate information is possible only if accounting information systems are run efficiently; moreover, efficient Accounting Information Systems ensure that all levels of management get sufficient, adequate, relevant and true information for planning and controlling the activities of the business organization.", "title": "" }, { "docid": "7963adab39b58ab0334b8eef4149c59c", "text": "The aim of the present study was to gain a better understanding of the content characteristics that make online consumer reviews a useful source of consumer information. To this end, we content analyzed reviews of experience and search products posted on Amazon.com (N = 400). The insights derived from this content analysis were linked with the proportion of ‘useful’ votes that reviews received from fellow consumers. The results show that content characteristics are paramount to understanding the perceived usefulness of reviews. Specifically, argumentation (density and diversity) served as a significant predictor of perceived usefulness, as did review valence although this latter effect was contingent on the type of product (search or experience) being evaluated in reviews. The presence of expertise claims appeared to be weakly related to the perceived usefulness of reviews. The broader theoretical, methodological and practical implications of these findings are discussed.", "title": "" }, { "docid": "df0381c129339b1131897708fc00a96c", "text": "We present a novel congestion control algorithm suitable for use with cumulative, layered data streams in the MBone. Our algorithm behaves similarly to TCP congestion control algorithms, and shares bandwidth fairly with other instances of the protocol and with TCP flows. It is entirely receiver driven and requires no per-receiver status at the sender, in order to scale to large numbers of receivers. It relies on standard functionalities of multicast routers, and is suitable for continuous stream and reliable bulk data transfer. In the paper we illustrate the algorithm, characterize its response to losses both analytically and by simulations, and analyse its behaviour using simulations and experiments in real networks. We also show how error recovery can be dealt with independently from congestion control by using FEC techniques, so as to provide reliable bulk data transfer.", "title": "" }, { "docid": "65a7e691f8bb6831c269cf5770271325", "text": "Seven types of evidence are reviewed that indicate that high subjective well-being (such as life satisfaction, absence of negative emotions, optimism, and positive emotions) causes better health and longevity. For example, prospective longitudinal studies of normal populations provide evidence that various types of subjective well-being such as positive affect predict health and longevity, controlling for health and socioeconomic status at baseline. 
Combined with experimental human and animal research, as well as naturalistic studies of changes of subjective well-being and physiological processes over time, the case that subjective well-being influences health and longevity in healthy populations is compelling. However, the claim that subjective well-being lengthens the lives of those with certain diseases such as cancer remains controversial. Positive feelings predict longevity and health beyond negative feelings. However, intensely aroused or manic positive affect may be detrimental to health. Issues such as causality, effect size, types of subjective well-being, and statistical controls are discussed.", "title": "" }, { "docid": "c64d9727c98e8c5cdbb3445918eb32c7", "text": "This paper describes an industrial project aimed at migrating legacy COBOL programs running on an IBM-AS400 to Java for running in an open environment. The unique aspect of this migration is the reengineering of the COBOL code prior to migration. The programs were in their previous form hardwired to the AS400 screens as well as to the AS400 file system. The goal of the reengineering project was to free the code from these proprietary dependencies and to reduce them to the pure business logic. Disentangling legacy code from its physical environment is a major prerequisite to converting that code to another environment. The goal is the virtualization of program interfaces. That was accomplished here in a multistep automated process which led to small, environment-independent COBOL modules which could be readily converted over into Java packages. The pilot project has been completed for a sample subset of the production planning and control system. The conversion to Java is pending the test of the reengineered COBOL modules.", "title": "" }, { "docid": "6a1f1345a390ff886c95a57519535c40", "text": "BACKGROUND\nThe goal of this pilot study was to evaluate the effects of the cognitive-restructuring technique 'lucid dreaming treatment' (LDT) on chronic nightmares. Becoming lucid (realizing that one is dreaming) during a nightmare allows one to alter the nightmare storyline during the nightmare itself.\n\n\nMETHODS\nAfter having filled out a sleep and a posttraumatic stress disorder questionnaire, 23 nightmare sufferers were randomly divided into 3 groups; 8 participants received one 2-hour individual LDT session, 8 participants received one 2-hour group LDT session, and 7 participants were placed on the waiting list. LDT consisted of exposure, mastery, and lucidity exercises. Participants filled out the same questionnaires 12 weeks after the intervention (follow-up).\n\n\nRESULTS\nAt follow-up the nightmare frequency of both treatment groups had decreased. There were no significant changes in sleep quality and posttraumatic stress disorder symptom severity. Lucidity was not necessary for a reduction in nightmare frequency.\n\n\nCONCLUSIONS\nLDT seems effective in reducing nightmare frequency, although the primary therapeutic component (i.e. exposure, mastery, or lucidity) remains unclear.", "title": "" }, { "docid": "092239f41a6e216411174e5ed9dceee2", "text": "In this paper, we propose a simple but effective specular highlight removal method using a single input image. Our method is based on a key observation: the maximum fraction of the diffuse color component (the so-called maximum diffuse chromaticity in the literature) in local patches in color images changes smoothly. 
Using this property, we can estimate the maximum diffuse chromaticity values of the specular pixels by directly applying low-pass filter to the maximum fraction of the color components of the original image, such that the maximum diffuse chromaticity values can be propagated from the diffuse pixels to the specular pixels. The diffuse color at each pixel can then be computed as a nonlinear function of the estimated maximum diffuse chromaticity. Our method can be directly extended for multi-color surfaces if edge-preserving filters (e.g., bilateral filter) are used such that the smoothing can be guided by the maximum diffuse chromaticity. But maximum diffuse chromaticity is to be estimated. We thus present an approximation and demonstrate its effectiveness. Recent development in fast bilateral filtering techniques enables our method to run over 200× faster than the state-of-the-art on a standard CPU and differentiates our method from previous work.", "title": "" }, { "docid": "a49c8e6f222b661447d1de32e29d0f16", "text": "The discovery of ammonia oxidation by mesophilic and thermophilic Crenarchaeota and the widespread distribution of these organisms in marine and terrestrial environments indicated an important role for them in the global nitrogen cycle. However, very little is known about their physiology or their contribution to nitrification. Here we report oligotrophic ammonia oxidation kinetics and cellular characteristics of the mesophilic crenarchaeon ‘Candidatus Nitrosopumilus maritimus’ strain SCM1. Unlike characterized ammonia-oxidizing bacteria, SCM1 is adapted to life under extreme nutrient limitation, sustaining high specific oxidation rates at ammonium concentrations found in open oceans. Its half-saturation constant (Km = 133 nM total ammonium) and substrate threshold (≤10 nM) closely resemble kinetics of in situ nitrification in marine systems and directly link ammonia-oxidizing Archaea to oligotrophic nitrification. The remarkably high specific affinity for reduced nitrogen (68,700 l per g cells per h) of SCM1 suggests that Nitrosopumilus-like ammonia-oxidizing Archaea could successfully compete with heterotrophic bacterioplankton and phytoplankton. Together these findings support the hypothesis that nitrification is more prevalent in the marine nitrogen cycle than accounted for in current biogeochemical models.", "title": "" }, { "docid": "b32286014bb7105e62fba85a9aab9019", "text": "PURPOSE\nSystemic thrombolysis for the treatment of acute pulmonary embolism (PE) carries an estimated 20% risk of major hemorrhage, including a 3%-5% risk of hemorrhagic stroke. The authors used evidence-based methods to evaluate the safety and effectiveness of modern catheter-directed therapy (CDT) as an alternative treatment for massive PE.\n\n\nMATERIALS AND METHODS\nThe systematic review was initiated by electronic literature searches (MEDLINE, EMBASE) for studies published from January 1990 through September 2008. Inclusion criteria were applied to select patients with acute massive PE treated with modern CDT. Modern techniques were defined as the use of low-profile devices (< or =10 F), mechanical fragmentation and/or aspiration of emboli including rheolytic thrombectomy, and intraclot thrombolytic injection if a local drug was infused. Relevant non-English language articles were translated into English. Paired reviewers assessed study quality and abstracted data. 
Meta-analysis was performed by using random effects models to calculate pooled estimates for complications and clinical success rates across studies. Clinical success was defined as stabilization of hemodynamics, resolution of hypoxia, and survival to hospital discharge.\n\n\nRESULTS\nFive hundred ninety-four patients from 35 studies (six prospective, 29 retrospective) met the criteria for inclusion. The pooled clinical success rate from CDT was 86.5% (95% confidence interval [CI]: 82.1%, 90.2%). Pooled risks of minor and major procedural complications were 7.9% (95% CI: 5.0%, 11.3%) and 2.4% (95% CI: 1.9%, 4.3%), respectively. Data on the use of systemic thrombolysis before CDT were available in 571 patients; 546 of those patients (95%) were treated with CDT as the first adjunct to heparin without previous intravenous thrombolysis.\n\n\nCONCLUSIONS\nModern CDT is a relatively safe and effective treatment for acute massive PE. At experienced centers, CDT should be considered as a first-line treatment for patients with massive PE.", "title": "" }, { "docid": "f1131f6f25601c32fefc09c38c7ad84b", "text": "We create a new online reduction of multiclass classification to binary classification for which training and prediction time scale logarithmically with the number of classes. We show that several simple techniques give rise to an algorithm which is superior to previous logarithmic time classification approaches while competing with one-against-all in space. The core construction is based on using a tree to select a small subset of labels with high recall, which are then scored using a one-against-some structure with high precision.", "title": "" }, { "docid": "1c90adf8ec68ff52e777b2041f8bf4c4", "text": "In many situations we have some measurement of confidence on “positiveness” for a binary label. The “positiveness” is a continuous value whose range is a bounded interval. It quantifies the affiliation of each training data to the positive class. We propose a novel learning algorithm called expectation loss SVM (eSVM) that is devoted to the problems where only the “positiveness” instead of a binary label of each training sample is available. Our e-SVM algorithm can also be readily extended to learn segment classifiers under weak supervision where the exact positiveness value of each training example is unobserved. In experiments, we show that the e-SVM algorithm can effectively address the segment proposal classification task under both strong supervision (e.g. the pixel-level annotations are available) and the weak supervision (e.g. only bounding-box annotations are available), and outperforms the alternative approaches. Besides, we further validate this method on two major tasks of computer vision: semantic segmentation and object detection. Our method achieves the state-of-the-art object detection performance on PASCAL VOC 2007 dataset.", "title": "" }, { "docid": "c955e63d5c5a30e18c008dcc51d1194b", "text": "We report, for the first time, the identification of fatty acid particles in formulations containing the surfactant polysorbate 20. These fatty acid particles were observed in multiple mAb formulations during their expected shelf life under recommended storage conditions. The fatty acid particles were granular or sand-like in morphology and were several microns in size. They could be identified by distinct IR bands, with additional confirmation from energy-dispersive X-ray spectroscopy analysis. The particles were readily distinguishable from protein particles by these methods. 
In addition, particles containing a mixture of protein and fatty acids were also identified, suggesting that the particulation pathways for the two particle types may not be distinct. The techniques and observations described will be useful for the correct identification of proteinaceous versus nonproteinaceous particles in pharmaceutical products.", "title": "" }, { "docid": "02469f669769f5c9e2a9dc49cee20862", "text": "In this work we study the use of 3D hand poses to recognize first-person dynamic hand actions interacting with 3D objects. Towards this goal, we collected RGB-D video sequences comprised of more than 100K frames of 45 daily hand action categories, involving 26 different objects in several hand configurations. To obtain hand pose annotations, we used our own mo-cap system that automatically infers the 3D location of each of the 21 joints of a hand model via 6 magnetic sensors and inverse kinematics. Additionally, we recorded the 6D object poses and provide 3D object models for a subset of hand-object interaction sequences. To the best of our knowledge, this is the first benchmark that enables the study of first-person hand actions with the use of 3D hand poses. We present an extensive experimental evaluation of RGB-D and pose-based action recognition by 18 baselines/state-of-the-art approaches. The impact of using appearance features, poses, and their combinations are measured, and the different training/testing protocols are evaluated. Finally, we assess how ready the 3D hand pose estimation field is when hands are severely occluded by objects in egocentric views and its influence on action recognition. From the results, we see clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to communities of 3D hand pose estimation, 6D object pose, and robotics as well as action recognition.", "title": "" }, { "docid": "96471eda3162fa5bdac40220646e7697", "text": "A key step in mass spectrometry (MS)-based proteomics is the identification of peptides in sequence databases by their fragmentation spectra. Here we describe Andromeda, a novel peptide search engine using a probabilistic scoring model. On proteome data, Andromeda performs as well as Mascot, a widely used commercial search engine, as judged by sensitivity and specificity analysis based on target decoy searches. Furthermore, it can handle data with arbitrarily high fragment mass accuracy, is able to assign and score complex patterns of post-translational modifications, such as highly phosphorylated peptides, and accommodates extremely large databases. The algorithms of Andromeda are provided. Andromeda can function independently or as an integrated search engine of the widely used MaxQuant computational proteomics platform and both are freely available at www.maxquant.org. The combination enables analysis of large data sets in a simple analysis workflow on a desktop computer. For searching individual spectra Andromeda is also accessible via a web server. We demonstrate the flexibility of the system by implementing the capability to identify cofragmented peptides, significantly improving the total number of identified peptides.", "title": "" }, { "docid": "595e68cfcf7b2606f42f2ad5afb9713a", "text": "Mammalian hibernators undergo a remarkable phenotypic switch that involves profound changes in physiology, morphology, and behavior in response to periods of unfavorable environmental conditions. 
The ability to hibernate is found throughout the class Mammalia and appears to involve differential expression of genes common to all mammals, rather than the induction of novel gene products unique to the hibernating state. The hibernation season is characterized by extended bouts of torpor, during which minimal body temperature (Tb) can fall as low as -2.9 degrees C and metabolism can be reduced to 1% of euthermic rates. Many global biochemical and physiological processes exploit low temperatures to lower reaction rates but retain the ability to resume full activity upon rewarming. Other critical functions must continue at physiologically relevant levels during torpor and be precisely regulated even at Tb values near 0 degrees C. Research using new tools of molecular and cellular biology is beginning to reveal how hibernators survive repeated cycles of torpor and arousal during the hibernation season. Comprehensive approaches that exploit advances in genomic and proteomic technologies are needed to further define the differentially expressed genes that distinguish the summer euthermic from winter hibernating states. Detailed understanding of hibernation from the molecular to organismal levels should enable the translation of this information to the development of a variety of hypothermic and hypometabolic strategies to improve outcomes for human and animal health.", "title": "" }, { "docid": "8869e69647a16278d7a2ac26316ec5d0", "text": "Despite significant progress, most existing visual dictionary learning methods rely on image descriptors alone or together with class labels. However, Web images are often associated with text data which may carry substantial information regarding image semantics, and may be exploited for visual dictionary learning. This paper explores this idea by leveraging relational information between image descriptors and textual words via co-clustering, in addition to information of image descriptors. Existing co-clustering methods are not optimal for this problem because they ignore the structure of image descriptors in the continuous space, which is crucial for capturing visual characteristics of images. We propose a novel Bayesian co-clustering model to jointly estimate the underlying distributions of the continuous image descriptors as well as the relationship between such distributions and the textual words through a unified Bayesian inference. Extensive experiments on image categorization and retrieval have validated the substantial value of the proposed joint modeling in improving visual dictionary learning, where our model shows superior performance over several recent methods.", "title": "" } ]
scidocsrr
6482a8af53ac20d4bd6148d63200ed3c
Design a novel electronic medical record system for regional clinics and health centers in China
[ { "docid": "8ae8cb422f0f79031b8e19e49b857356", "text": "CSCW as a field has been concerned since its early days with healthcare, studying how healthcare work is collaboratively and practically achieved and designing systems to support that work. Reviewing literature from the CSCW Journal and related conferences where CSCW work is published, we reflect on the contributions that have emerged from this work. The analysis illustrates a rich range of concepts and findings towards understanding the work of healthcare but the work on the larger policy level is lacking. We argue that this presents a number of challenges for CSCW research moving forward: in having a greater impact on larger-scale health IT projects; broadening the scope of settings and perspectives that are studied; and reflecting on the relevance of the traditional methods in this field - namely workplace studies - to meet these challenges.", "title": "" } ]
[ { "docid": "94784bc9f04dbe5b83c2a9f02e005825", "text": "The optical code division multiple access (OCDMA), the most advanced multiple access technology in optical communication has become significant and gaining popularity because of its asynchronous access capability, faster speed, efficiency, security and unlimited bandwidth. Many codes are developed in spectral amplitude coding optical code division multiple access (SAC-OCDMA) with zero or minimum cross-correlation properties to reduce the multiple access interference (MAI) and Phase Induced Intensity Noise (PIIN). This paper compares two novel SAC-OCDMA codes in terms of their performances such as bit error rate (BER), number of active users that is accommodated with minimum cross-correlation property, high data rate that is achievable and the minimum power that the OCDMA system supports to achieve a minimum BER value. One of the proposed novel codes referred in this work as modified random diagonal code (MRDC) possesses cross-correlation between zero to one and the second novel code referred in this work as modified new zero cross-correlation code (MNZCC) possesses cross-correlation zero to further minimize the multiple access interference, which are found to be more scalable compared to the other existing SAC-OCDMA codes. In this work, the proposed MRDC and MNZCC codes are implemented in an optical system using the optisystem version-12 software for the SAC-OCDMA scheme. Simulation results depict that the OCDMA system based on the proposed novel MNZCC code exhibits better performance compared to the MRDC code and former existing SAC-OCDMA codes. The proposed MNZCC code accommodates maximum number of simultaneous users with higher data rate transmission, lower BER and longer traveling distance without any signal quality degradation as compared to the former existing SAC-OCDMA codes.", "title": "" }, { "docid": "3f88da8f70976c11bf5bab5f1d438d58", "text": "The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches which consider combinations of features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network, focusing on capturing visual information in detected faces, a deep belief net focusing on the representation of the audio stream, a K-Means based “bag-of-mouths” model, which extracts visual features around the mouth region and a relational autoencoder, which addresses spatio-temporal aspects of videos. We explore multiple methods for the combination of cues from these modalities into one common classifier. This achieves a considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test set accuracy of 47.67 % on the 2014 dataset.", "title": "" }, { "docid": "57fd4b59ffb27c35faa6a5ee80001756", "text": "This paper describes a novel method for motion generation and reactive collision avoidance. 
The algorithm performs arbitrary desired velocity profiles in absence of external disturbances and reacts if virtual or physical contact is made in a unified fashion with a clear physically interpretable behavior. The method uses physical analogies for defining attractor dynamics in order to generate smooth paths even in presence of virtual and physical objects. The proposed algorithm can, due to its low complexity, run in the inner most control loop of the robot, which is absolutely crucial for safe Human Robot Interaction. The method is thought as the locally reactive real-time motion generator connecting control, collision detection and reaction, and global path planning.", "title": "" }, { "docid": "0923e899e5d7091a6da240db21eefad2", "text": "A new method was developed to acquire images automatically at a series of specimen tilts, as required for tomographic reconstruction. The method uses changes in specimen position at previous tilt angles to predict the position at the current tilt angle. Actual measurement of the position or focus is skipped if the statistical error of the prediction is low enough. This method allows a tilt series to be acquired rapidly when conditions are good but falls back toward the traditional approach of taking focusing and tracking images when necessary. The method has been implemented in a program, SerialEM, that provides an efficient environment for data acquisition. This program includes control of an energy filter as well as a low-dose imaging mode, in which tracking and focusing occur away from the area of interest. The program can automatically acquire a montage of overlapping frames, allowing tomography of areas larger than the field of the CCD camera. It also includes tools for navigating between specimen positions and finding regions of interest.", "title": "" }, { "docid": "ccfb2821c51a2fad5b34c3037497cb66", "text": "The next decade will see a deep transformation of industrial applications by big data analytics, machine learning and the internet of things. Industrial applications have a number of unique features, setting them apart from other domains. Central for many industrial applications in the internet of things is time series data generated by often hundreds or thousands of sensors at a high rate, e.g. by a turbine or a smart grid. In a first wave of applications this data is centrally collected and analyzed in Map-Reduce or streaming systems for condition monitoring, root cause analysis, or predictive maintenance. The next step is to shift from centralized analysis to distributed in-field or in situ analytics, e.g in smart cities or smart grids. The final step will be a distributed, partially autonomous decision making and learning in massively distributed environments. In this talk I will give an overview on Siemens’ journey through this transformation, highlight early successes, products and prototypes and point out future challenges on the way towards machine intelligence. I will also discuss architectural challenges for such systems from a Big Data point of view. Bio.Michael May is Head of the Technology Field Business Analytics & Monitoring at Siemens Corporate Technology, Munich, and responsible for eleven research groups in Europe, US, and Asia. Michael is driving research at Siemens in data analytics, machine learning and big data architectures. In the last two years he was responsible for creating the Sinalytics platform for Big Data applications across Siemens’ business. 
Before joining Siemens in 2013, Michael was Head of the Knowledge Discovery Department at the Fraunhofer Institute for Intelligent Analysis and Information Systems in Bonn, Germany. In cooperation with industry he developed Big Data Analytics applications in sectors ranging from telecommunication, automotive, and retail to finance and advertising. Between 2002 and 2009 Michael coordinated two Europe-wide Data Mining Research Networks (KDNet, KDubiq). He was local chair of ICML 2005, ILP 2005 and program chair of the ECML PKDD Industrial Track 2015. Michael did his PhD on machine discovery of causal relationships at the Graduate Programme for Cognitive Science at the University of Hamburg. Machine Learning Challenges at Amazon", "title": "" }, { "docid": "d07a10da23e0fc18b473f8a30adaebfb", "text": "DATA FLOW IS A POPULAR COMPUTATIONAL MODEL for visual programming languages. Data flow provides a view of computation which shows the data flowing from one filter function to another, being transformed as it goes. In addition, the data flow model easily accommodates the insertion of viewing monitors at various points to show the data to the user. Consequently, many recent visual programming languages are based on the data flow model. This paper describes many of the data flow visual programming languages. The languages are grouped according to their application domain. For each language, pertinent aspects of its appearance, and the particular design alternatives it uses, are discussed. Next, some strengths of data flow visual programming languages are mentioned. Finally, unsolved problems in the design of such languages are discussed.", "title": "" }, { "docid": "89263084f29469d1c363da55c600a971", "text": "Today, when there are more than 1 billion Android users all over the world, Android's popularity has no equal. These days mobile phones have become so intrusive in our daily lives that, when needed, they can give a huge amount of information to forensic examiners. As of the writing of this paper, there are many papers citing the need for mobile device forensics and ways of getting vital artifacts from mobile devices for different purposes. With the vast range of popular and less popular forensic tools and techniques available today, this paper aims to bring them together in a comparative study, so that it can serve as a starting point for Android users, future forensic examiners and investigators. During our survey we found a scarcity of papers on tools for Android forensics. In this paper we have analyzed different tools and techniques used in Android forensics and tabulated the results and findings at the end.", "title": "" }, { "docid": "762855af09c1f80ec85d6de63223bc53", "text": "In this paper, we propose a framework for isolating text regions from natural scene images. The main algorithm has two functions: it generates text region candidates, and it verifies the labels of the candidates (text or non-text). The text region candidates are generated through a modified K-means clustering algorithm, which references texture features, edge information and color information. The candidate labels are then verified in a global sense by a Markov Random Field model where a collinearity weight is added, since most texts are aligned. 
The proposed method achieves reasonable accuracy for text extraction from moderately difficult examples from the ICDAR 2003 database.", "title": "" }, { "docid": "8e3f8fca93ca3106b83cf85d20c061ca", "text": "KeeLoq is a 528-round lightweight block cipher which has a 64-bit secret key and a 32-bit block length. The cube attack, proposed by Dinur and Shamir, is a new type of attack method. In this paper, we investigate the security of KeeLoq against an iterative side-channel cube attack, which is an enhanced attack scheme. Based on the structure of typical block ciphers, we give the model of the iterative side-channel cube attack. Using the traditional single-bit leakage model, we assume that the attacker can obtain exactly one bit of leakage information after round 23. The new attack model costs a data complexity of 2^11.00 chosen plaintexts to recover the 23-bit key of KeeLoq. Our attack will reduce the key search space to 2^41 by considering an error-free bit from internal states.", "title": "" }, { "docid": "852c85ecbed639ea0bfe439f69fff337", "text": "In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables. These two quantities are complementary and are jointly applied to information behavior analysis in most cases. The uncertainty property in information asserts a fundamental trade-off between Fisher information and Shannon information, which sheds light on the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the Fisher-Shannon plane, and demonstrate that the representation learning and the log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide a better comprehension of VAEs in tasks such as high-resolution reconstruction, and representation learning from the perspective of Fisher information and Shannon information. We further propose a variant of VAEs, termed the Fisher auto-encoder (FAE), to meet the practical need of balancing Fisher information and Shannon information. Our experimental results have demonstrated its promise in improving the reconstruction accuracy and avoiding the non-informative latent code observed in previous works.", "title": "" }, { "docid": "838701b64b27fe1d65bd23a124ebcef7", "text": "OBJECTIVES\nThe Internet can accelerate information exchange. Social networks are among the most accessed sites, especially Facebook. This kind of network might create dependency, with several negative consequences in people's lives. The aim of this study was to assess the potential association between Facebook dependence and poor sleep quality.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nA cross-sectional study was performed enrolling undergraduate students of the Universidad Peruana de Ciencias Aplicadas, Lima, Peru. The Internet Addiction Questionnaire, adapted to the Facebook case, and the Pittsburgh Sleep Quality Index, were used. A global score of 6 or greater was defined as the cutoff to determine poor sleep quality. Generalized linear models were used to determine prevalence ratios (PR) and 95% confidence intervals (95%CI). A total of 418 students were analyzed; of them, 322 (77.0%) were women, with a mean age of 20.1 (SD: 2.5) years. 
Facebook dependence was found in 8.6% (95% CI: 5.9%-11.3%), whereas poor sleep quality was present in 55.0% (95% CI: 50.2%-59.8%). A significant association between Facebook dependence and poor sleep quality mainly explained by daytime dysfunction was found (PR = 1.31; IC95%: 1.04-1.67) after adjusting for age, sex and years in the faculty.\n\n\nCONCLUSIONS\nThere is a relationship between Facebook dependence and poor quality of sleep. More than half of students reported poor sleep quality. Strategies to moderate the use of this social network and to improve sleep quality in this population are needed.", "title": "" }, { "docid": "deed8b565b77f92d91170c001b512e96", "text": "We introduce a novel humanoid robotic platform designed to jointly address three central goals of humanoid robotics: 1) study the role of morphology in biped locomotion; 2) study full-body compliant physical human-robot interaction; 3) be robust while easy and fast to duplicate to facilitate experimentation. The taken approach relies on functional modeling of certain aspects of human morphology, optimizing materials and geometry, as well as on the use of 3D printing techniques. In this article, we focus on the presentation of the design of specific morphological parts related to biped locomotion: the hip, the thigh, the limb mesh and the knee. We present initial experiments showing properties of the robot when walking with the physical guidance of a human.", "title": "" }, { "docid": "122fe53f1e745480837a23b68e62540a", "text": "The images degraded by fog suffer from poor contrast. In order to remove fog effect, a Contrast Limited Adaptive Histogram Equalization (CLAHE)-based method is presented in this paper. This method establishes a maximum value to clip the histogram and redistributes the clipped pixels equally to each gray-level. It can limit the noise while enhancing the image contrast. In our method, firstly, the original image is converted from RGB to HSI. Secondly, the intensity component of the HSI image is processed by CLAHE. Finally, the HSI image is converted back to RGB image. To evaluate the effectiveness of the proposed method, we experiment with a color image degraded by fog and apply the edge detection to the image. The results show that our method is effective in comparison with traditional methods. KeywordsCLAHE, fog, degraded, remove, color image, HSI, edge detection.", "title": "" }, { "docid": "f060713abe9ada73c1c4521c5ca48ea9", "text": "In this paper, we revisit the classical Bayesian face recognition method by Baback Moghaddam et al. and propose a new joint formulation. The classical Bayesian method models the appearance difference between two faces. We observe that this “difference” formulation may reduce the separability between classes. Instead, we model two faces jointly with an appropriate prior on the face representation. Our joint formulation leads to an EM-like model learning at the training time and an efficient, closed-formed computation at the test time. On extensive experimental evaluations, our method is superior to the classical Bayesian face and many other supervised approaches. Our method achieved 92.4% test accuracy on the challenging Labeled Face in Wild (LFW) dataset. Comparing with current best commercial system, we reduced the error rate by 10%.", "title": "" }, { "docid": "391949a4c924c9f8e1986e4747e571c4", "text": "In this paper, we present Auto-Tuned Models, or ATM, a distributed, collaborative, scalable system for automated machine learning. 
Users of ATM can simply upload a dataset, choose a subset of modeling methods, and choose to use ATM's hybrid Bayesian and multi-armed bandit optimization system. The distributed system works in a load-balanced fashion to quickly deliver results in the form of ready-to-predict models, confusion matrices, cross-validation results, and training timings. By automating hyperparameter tuning and model selection, ATM returns the emphasis of the machine learning workflow to its most irreducible part: feature engineering. We demonstrate the usefulness of ATM on 420 datasets from OpenML and train over 3 million classifiers. Our initial results show ATM can beat human-generated solutions for 30% of the datasets, and can do so in 1/100th of the time.", "title": "" }, { "docid": "861f76c061b9eb52ed5033bdeb9a3ce5", "text": "2007S. Robson Walton Chair in Accounting, University of Arkansas 2007-2014; 2015-2016 Accounting Department Chair, University of Arkansas 2014Distinguished Professor, University of Arkansas 2005-2014 Professor, University of Arkansas 2005-2008 Ralph L. McQueen Chair in Accounting, University of Arkansas 2002-2005 Associate Professor, University of Kansas 1997-2002 Assistant Professor, University of Kansas", "title": "" }, { "docid": "76984b82e44f5790aa72f03f3804c588", "text": "LANGUAGE ASSISTANT (NLA), a web-based natural language dialog system to help users find relevant products on electronic-commerce sites. The system brings together technologies in natural language processing and human-computer interaction to create a faster and more intuitive way of interacting with web sites. By combining statistical parsing techniques with traditional AI rule-based technology, we have created a dialog system that accommodates both customer needs and business requirements. The system is currently embedded in an application for recommending laptops and was deployed as a pilot on IBM’s web site.", "title": "" }, { "docid": "ae9bc4e21d6e2524f09e5f5fbb9e4251", "text": "Arvaniti, Ladd and Mennen (1998) reported a phenomenon of ‘segmental anchoring’: the beginning and end of a linguistically significant pitch movement are anchored to specific locations in segmental structure, which means that the slope and duration of the pitch movement vary according to the segmental material with which it is associated. This finding has since been replicated and extended in several languages. One possible analysis is that autosegmental tones corresponding to the beginning and end of the pitch movement show secondary association with points in structure; however, problems with this analysis have led some authors to cast doubt on the ‘hypothesis’ of segmental anchoring. I argue here that segmental anchoring is not a hypothesis expressed in terms of autosegmental phonology, but rather an empirical phonetic finding. The difficulty of describing segmental anchoring as secondary association does not disprove the ‘hypothesis’, but shows the error of using a symbolic phonological device (secondary association) to represent gradient differences of phonetic detail that should be expressed quantitatively. 
I propose that treating pitch movements as gestures (in the sense of Articulatory Phonology) goes some way to resolving some of the theoretical questions raised by segmental anchoring, but suggest that pitch gestures have a variety of ‘domains’ which are in need of empirical study before we can successfully integrate segmental anchoring into our understanding of speech production.", "title": "" }, { "docid": "8eb62d4fdc1be402cd9216352cb7cfc3", "text": "In an attempt to better understand generalization in deep learning, we study several possible explanations. We show that implicit regularization induced by the optimization method is playing a key role in generalization and success of deep learning models. Motivated by this view, we study how different complexity measures can ensure generalization and explain how optimization algorithms can implicitly regularize complexity measures. We empirically investigate the ability of these measures to explain different observed phenomena in deep learning. We further study the invariances in neural networks, suggest complexity measures and optimization algorithms that have similar invariances to those in neural networks and evaluate them on a number of learning tasks. Thesis Advisor: Nathan Srebro Title: Professor", "title": "" } ]
scidocsrr
32262952ce4d4b250f0be1985e087814
Runtime Prediction for Scale-Out Data Analytics
[ { "docid": "66f684ba92fe735fecfbfb53571bad5f", "text": "Some empirical learning tasks are concerned with predicting values rather than the more familiar categories. This paper describes a new system, m5, that constructs tree-based piecewise linear models. Four case studies are presented in which m5 is compared to other methods.", "title": "" }, { "docid": "a50ec2ab9d5d313253c6656049d608b3", "text": "A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process de ned on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight) and directed. Let G be such a graph. The MCL algorithm simulates ow in G by rst identifying G in a canonical way with a Markov graph G1. Flow is then alternatingly expanded and contracted, leading to a row of Markov Graphs G(i). Flow expansion corresponds with taking the k power of a stochastic matrix, where k 2 IN . Flow contraction corresponds with a parametrized operator r, r 0, which maps the set of (column) stochastic matrices onto itself. The image rM is obtained by raising each entry in M to the r th power and rescaling each column to have sum 1 again. The heuristic underlying this approach is the expectation that ow between dense regions which are sparsely connected will evaporate. The invariant limits of the process are easily derived and in practice the process converges very fast to such a limit, the structure of which has a generic interpretation as an overlapping clustering of the graph G. Overlap is limited to cases where the input graph has a symmetric structure inducing it. The contraction and expansion parameters of the MCL process in uence the granularity of the output. The algorithm is space and time e cient and lends itself to drastic scaling. This report describes the MCL algorithm and process, convergence towards equilibrium states, interpretation of the states as clusterings, and implementation and scalability. The algorithm is introduced by rst considering several related proposals towards graph clustering, of both combinatorial and probabilistic nature. 2000 Mathematics Subject Classi cation: 05B20, 15A48, 15A51, 62H30, 68R10, 68T10, 90C35.", "title": "" }, { "docid": "6c2a0afc5a93fe4d73661a3f50fab126", "text": "As massive data acquisition and storage becomes increasingly a↵ordable, a wide variety of enterprises are employing statisticians to engage in sophisticated data analysis. In this paper we highlight the emerging practice of Magnetic, Agile, Deep (MAD) data analysis as a radical departure from traditional Enterprise Data Warehouses and Business Intelligence. We present our design philosophy, techniques and experience providing MAD analytics for one of the world’s largest advertising networks at Fox Audience Network, using the Greenplum parallel database system. We describe database design methodologies that support the agile working style of analysts in these settings. We present dataparallel algorithms for sophisticated statistical techniques, with a focus on density methods. Finally, we reflect on database system features that enable agile design and flexible algorithm development using both SQL and MapReduce interfaces over a variety of storage mechanisms.", "title": "" } ]
[ { "docid": "df7a68ebb9bc03d8a73a54ab3474373f", "text": "We report on the implementation of a color-capable sub-pixel resolving optofluidic microscope based on the pixel super-resolution algorithm and sequential RGB illumination, for low-cost on-chip color imaging of biological samples with sub-cellular resolution.", "title": "" }, { "docid": "2f9ebb8992542b8d342642b6ea361b54", "text": "Falsifying Financial Statements involves the manipulation of financial accounts by overstating assets, sales and profit, or understating liabilities, expenses, or losses. This paper explores the effectiveness of an innovative classification methodology in detecting firms that issue falsified financial statements (FFS) and the identification of the factors associated to FFS. The methodology is based on the concepts of multicriteria decision aid (MCDA) and the application of the UTADIS classification method (UTilités Additives DIScriminantes). A sample of 76 Greek firms (38 with FFS and 38 non-FFS) described over ten financial ratios is used for detecting factors associated with FFS. A Jackknife procedure approach is employed for model validation and comparison with multivariate statistical techniques, namely discriminant and logit analysis. The results indicate that the proposed MCDA methodology outperforms traditional statistical techniques which are widely used for FFS detection purposes. Furthermore, the results indicate that the investigation of financial information can be helpful towards the identification of FFS and highlight the importance of financial ratios such as the total debt to total assets ratio, the inventories to sales ratio, the net profit to sales ratio and the sales to total assets ratio.", "title": "" }, { "docid": "e96b49a1ee9dd65bb920507d65810501", "text": "The objective of this paper is to compare the time specification performance between conventional controller PID and modern controller SMC for an inverted pendulum system. The goal is to determine which control strategy delivers better performance with respect to pendulum’s angle and cart’s position. The inverted pendulum represents a challenging control problem, which continually moves toward an uncontrolled state. Two controllers are presented such as Sliding Mode Control (SMC) and ProportionalIntegral-Derivatives (PID) controllers for controlling the highly nonlinear system of inverted pendulum model. Simulation study has been done in Matlab Mfile and simulink environment shows that both controllers are capable to control multi output inverted pendulum system successfully. The result shows that Sliding Mode Control (SMC) produced better response compared to PID control strategies and the responses are presented in time domain with the details analysis. Keywords—SMC, PID, Inverted Pendulum System.", "title": "" }, { "docid": "9f362249c508abe7f0146158d9370395", "text": "A shadow appears on an area when the light from a source cannot reach the area due to obstruction by an object. The shadows are sometimes helpful for providing useful information about objects. However, they cause problems in computer vision applications, such as segmentation, object detection and object counting. Thus shadow detection and removal is a pre-processing task in many computer vision applications. This paper proposes a simple method to detect and remove shadows from a single RGB image. A shadow detection method is selected on the basis of the mean value of RGB image in A and B planes of LAB equivalent of the image. 
The shadow removal is done by multiplying the shadow region by a constant. Shadow edge correction is done to reduce the errors due to diffusion in the shadow boundary.", "title": "" }, { "docid": "719b4c5352d94d5ae52172b3c8a2512d", "text": "Acts of violence account for an estimated 1.43 million deaths worldwide annually. While violence can occur in many contexts, individual acts of aggression account for the majority of instances. In some individuals, repetitive acts of aggression are grounded in an underlying neurobiological susceptibility that is just beginning to be understood. The failure of \"top-down\" control systems in the prefrontal cortex to modulate aggressive acts that are triggered by anger provoking stimuli appears to play an important role. An imbalance between prefrontal regulatory influences and hyper-responsivity of the amygdala and other limbic regions involved in affective evaluation are implicated. Insufficient serotonergic facilitation of \"top-down\" control, excessive catecholaminergic stimulation, and subcortical imbalances of glutamatergic/gabaminergic systems as well as pathology in neuropeptide systems involved in the regulation of affiliative behavior may contribute to abnormalities in this circuitry. Thus, pharmacological interventions such as mood stabilizers, which dampen limbic irritability, or selective serotonin reuptake inhibitors (SSRIs), which may enhance \"top-down\" control, as well as psychosocial interventions to develop alternative coping skills and reinforce reflective delays may be therapeutic.", "title": "" }, { "docid": "b57006686160241bf118c2c638971764", "text": "Reproducibility is the hallmark of good science. Maintaining a high degree of transparency in scientific reporting is essential not just for gaining trust and credibility within the scientific community but also for facilitating the development of new ideas. Sharing data and computer code associated with publications is becoming increasingly common, motivated partly in response to data deposition requirements from journals and mandates from funders. Despite this increase in transparency, it is still difficult to reproduce or build upon the findings of most scientific publications without access to a more complete workflow. Version control systems (VCS), which have long been used to maintain code repositories in the software industry, are now finding new applications in science. One such open source VCS, Git, provides a lightweight yet robust framework that is ideal for managing the full suite of research outputs such as datasets, statistical code, figures, lab notes, and manuscripts. For individual researchers, Git provides a powerful way to track and compare versions, retrace errors, explore new approaches in a structured manner, while maintaining a full audit trail. For larger collaborative efforts, Git and Git hosting services make it possible for everyone to work asynchronously and merge their contributions at any time, all the while maintaining a complete authorship trail. In this paper I provide an overview of Git along with use-cases that highlight how this tool can be leveraged to make science more reproducible and transparent, foster new collaborations, and support novel uses.", "title": "" }, { "docid": "55aa10937266b6f24157b87a9ecc6e34", "text": "For thousands of years, honey has been used for medicinal applications. The beneficial effects of honey, particularly its anti-microbial activity represent it as a useful option for management of various wounds. 
Honey contains major amounts of carbohydrates, lipids, amino acids, proteins, vitamin and minerals that have important roles in wound healing with minimum trauma during redressing. Because bees have different nutritional behavior and collect the nourishments from different and various plants, the produced honeys have different compositions. Thus different types of honey have different medicinal value leading to different effects on wound healing. This review clarifies the mechanisms and therapeutic properties of honey on wound healing. The mechanisms of action of honey in wound healing are majorly due to its hydrogen peroxide, high osmolality, acidity, non-peroxide factors, nitric oxide and phenols. Laboratory studies and clinical trials have shown that honey promotes autolytic debridement, stimulates growth of wound tissues and stimulates anti-inflammatory activities thus accelerates the wound healing processes. Compared with topical agents such as hydrofiber silver or silver sulfadiazine, honey is more effective in elimination of microbial contamination, reduction of wound area, promotion of re-epithelialization. In addition, honey improves the outcome of the wound healing by reducing the incidence and excessive scar formation. Therefore, application of honey can be an effective and economical approach in managing large and complicated wounds.", "title": "" }, { "docid": "673bf6ecf9ae6fb61f7b01ff284c0a5f", "text": "We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering.", "title": "" }, { "docid": "d72f47ad136ebb9c74abe484980b212f", "text": "This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Qlearning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.", "title": "" }, { "docid": "3fb8519ca0de4871b105df5c5d8e489f", "text": "Intra-Body Communication (IBC), which modulates ionic currents over the human body as the communication medium, offers a low power and reliable signal transmission method for information exchange across the body. This paper first briefly reviews the quasi-static electromagnetic (EM) field modeling for a galvanic-type IBC human limb operating below 1 MHz and obtains the corresponding transfer function with correction factor using minimum mean square error (MMSE) technique. 
Then, the IBC channel characteristics are studied through the comparison between theoretical calculations via this transfer function and experimental measurements in both frequency domain and time domain. High pass characteristics are obtained in the channel gain analysis versus different transmission distances. In addition, harmonic distortions are analyzed in both baseband and passband transmissions for square input waves. The experimental results are consistent with the calculation results from the transfer function with correction factor. Furthermore, we also explore both theoretical and simulation results for the bit-error-rate (BER) performance of several common modulation schemes in the IBC system with a carrier frequency of 500 kHz. It is found that the theoretical results are in good agreement with the simulation results.", "title": "" }, { "docid": "5717a94b8dd53e42bc96c4e1444d5903", "text": "A spoken dialogue system (SDS) is a specialised form of computer system that operates as an interface between users and the application, using spoken natural language as the primary means of communication. The motivation for spoken interaction with such systems is that it allows for a natural and efficient means of communication. It is for this reason that the use of an SDS has been considered as a means for furthering development of DST Group’s Consensus project by providing an engaging spoken interface to high-level information fusion software. This document provides a general overview of the key issues surrounding the development of such interfaces.", "title": "" }, { "docid": "0870519536e7229f861323bd4a44c4d2", "text": "It has become increasingly common for websites and computer media to provide computer generated visual images, called avatars, to represent users and bots during online interactions. In this study, participants (N=255) evaluated a series of avatars in a static context in terms of their androgyny, anthropomorphism, credibility, homophily, attraction, and the likelihood they would choose them during an interaction. The responses to the images were consistent with what would be predicted by uncertainty reduction theory. The results show that the masculinity or femininity (lack of androgyny) of an avatar, as well as anthropomorphism, significantly influence perceptions of avatars. Further, more anthropomorphic avatars were perceived to be more attractive and credible, and people were more likely to choose to be represented by them. Participants reported masculine avatars as less attractive than feminine avatars, and most people reported a preference for human avatars that matched their gender. Practical and theoretical implications of these results for users, designers, and researchers of avatars are discussed.", "title": "" }, { "docid": "b30af7c9565effd44f433abc62e1ff14", "text": "Feedback on designs is critical for helping users iterate toward effective solutions. This paper presents Voyant, a novel system giving users access to a non-expert crowd to receive perception-oriented feedback on their designs from a selected audience. Based on a formative study, the system generates the elements seen in a design, the order in which elements are noticed, impressions formed when the design is first viewed, and interpretation of the design relative to guidelines in the domain and the user's stated goals. An evaluation of the system was conducted with users and their designs. 
Users reported the feedback about impressions and interpretation of their goals was most helpful, though the other feedback types were also valued. Users found the coordinated views in Voyant useful for analyzing relations between the crowd's perception of a design and the visual elements within it. The cost of generating the feedback was considered a reasonable tradeoff for not having to organize critiques or interrupt peers.", "title": "" }, { "docid": "96f4f77f114fec7eca22d0721c5efcbe", "text": "Aggregation structures with explicit information, such as image attributes and scene semantics, are effective and popular for intelligent systems for assessing aesthetics of visual data. However, useful information may not be available due to the high cost of manual annotation and expert design. In this paper, we present a novel multi-patch (MP) aggregation method for image aesthetic assessment. Different from state-of-the-art methods, which augment an MP aggregation network with various visual attributes, we train the model in an end-to-end manner with aesthetic labels only (i.e., aesthetically positive or negative). We achieve the goal by resorting to an attention-based mechanism that adaptively adjusts the weight of each patch during the training process to improve learning efficiency. In addition, we propose a set of objectives with three typical attention mechanisms (i.e., average, minimum, and adaptive) and evaluate their effectiveness on the Aesthetic Visual Analysis (AVA) benchmark. Numerical results show that our approach outperforms existing methods by a large margin. We further verify the effectiveness of the proposed attention-based objectives via ablation studies and shed light on the design of aesthetic assessment systems.", "title": "" }, { "docid": "a88b5c0c627643e0d7b17649ac391859", "text": "Abduction is a useful decision problem that is related to diagnostics. Given some observation in form of a set of axioms, that is not entailed by a knowledge base, we are looking for explanations, sets of axioms, that can be added to the knowledge base in order to entail the observation. ABox abduction limits both observations and explanations to ABox assertions. In this work we focus on direct tableau-based approach to answer ABox abduction. We develop an ABox abduction algorithm for the ALCHO DL, that is based on Reiter’s minimal hitting set algorithm. We focus on the class of explanations allowing atomic and negated atomic concept assertions, role assertions, and negated role assertions. The algorithm is sound and complete for this class. The algorithm was also implemented, on top of the Pellet reasoner.", "title": "" }, { "docid": "f783860e569d9f179466977db544bd01", "text": "In medical research, continuous variables are often converted into categorical variables by grouping values into two or more categories. We consider in detail issues pertaining to creating just two groups, a common approach in clinical research. We argue that the simplicity achieved is gained at a cost; dichotomization may create rather than avoid problems, notably a considerable loss of power and residual confounding. In addition, the use of a data-derived 'optimal' cutpoint leads to serious bias. We illustrate the impact of dichotomization of continuous predictor variables using as a detailed case study a randomized trial in primary biliary cirrhosis. 
Dichotomization of continuous data is unnecessary for statistical analysis and in particular should not be applied to explanatory variables in regression models.", "title": "" }, { "docid": "f14757e2e1d893b5cc0c7498f531d0e0", "text": "A new irradiation facility has been developed in the RA-3 reactor in order to perform trials for the treatment of liver metastases using boron neutron capture therapy (BNCT). RA-3 is a production research reactor that works continuously five days a week. It had a thermal column with a small cross section access tunnel that was not accessible during operation. The objective of the work was to perform the necessary modifications to obtain a facility for irradiating a portion of the human liver. This irradiation facility must be operated without disrupting the normal reactor schedule and requires a highly thermalized neutron spectrum, a thermal flux of around 10(10) n cm(-2)s(-1) that is as isotropic and uniform as possible, as well as on-line instrumentation. The main modifications consist of enlarging the access tunnel inside the thermal column to the suitable dimensions, reducing the gamma dose rate at the irradiation position, and constructing properly shielded entrance gates enabled by logical control to safely irradiate and withdraw samples with the reactor at full power. Activation foils and a neutron shielded graphite ionization chamber were used for a preliminary in-air characterization of the irradiation site. The constructed facility is very practical and easy to use. Operational authorization was obtained from radioprotection personnel after confirming radiation levels did not significantly increase after the modification. A highly thermalized and homogenous irradiation field was obtained. Measurements in the empty cavity showed a thermal flux near 10(10) n cm(-2)s(-1), a cadmium ratio of 4100 for gold foils and a gamma dose rate of approximately 5 Gy h(-1).", "title": "" }, { "docid": "799904b20f1174f01c0d2dd87c57e097", "text": "ix", "title": "" }, { "docid": "90c8deec8869977ac5e3feb9a6037569", "text": "Want to get experience? Want to get any ideas to create new things in your life? Read memory a contribution to experimental psychology now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.", "title": "" }, { "docid": "723bfb5acef53d78a05660e5d9710228", "text": "Cheap micro-controllers, such as the Arduino or other controllers based on the Atmel AVR CPUs are being deployed in a wide variety of projects, ranging from sensors networks to robotic submarines. In this paper, we investigate the feasibility of using the Arduino as a true random number generator (TRNG). The Arduino Reference Manual recommends using it to seed a pseudo random number generator (PRNG) due to its ability to read random atmospheric noise from its analog pins. This is an enticing application since true bits of entropy are hard to come by. Unfortunately, we show by statistical methods that the atmospheric noise of an Arduino is largely predictable in a variety of settings, and is thus a weak source of entropy. We explore various methods to extract true randomness from the micro-controller and conclude that it should not be used to produce randomness from its analog pins.", "title": "" } ]
scidocsrr
85919d20abd30448a6b7840f8fadcbba
Active Learning of Pareto Fronts
[ { "docid": "3228d57f3d74f56444ce7fb9ed18e042", "text": "Gaussian process (GP) models are widely used to perform Bayesian nonlinear regression and classification — tasks that are central to many machine learning problems. A GP is nonparametric, meaning that the complexity of the model grows as more data points are received. Another attractive feature is the behaviour of the error bars. They naturally grow in regions away from training data where we have high uncertainty about the interpolating function. In their standard form GPs have several limitations, which can be divided into two broad categories: computational difficulties for large data sets, and restrictive modelling assumptions for complex data sets. This thesis addresses various aspects of both of these problems. The training cost for a GP hasO(N3) complexity, whereN is the number of training data points. This is due to an inversion of the N × N covariance matrix. In this thesis we develop several new techniques to reduce this complexity to O(NM2), whereM is a user chosen number much smaller thanN . The sparse approximation we use is based on a set of M ‘pseudo-inputs’ which are optimised together with hyperparameters at training time. We develop a further approximation based on clustering inputs that can be seen as a mixture of local and global approximations. Standard GPs assume a uniform noise variance. We use our sparse approximation described above as a way of relaxing this assumption. By making a modification of the sparse covariance function, we can model input dependent noise. To handle high dimensional data sets we use supervised linear dimensionality reduction. As another extension of the standard GP, we relax the Gaussianity assumption of the process by learning a nonlinear transformation of the output space. All these techniques further increase the applicability of GPs to real complex data sets. We present empirical comparisons of our algorithms with various competing techniques, and suggest problem dependent strategies to follow in practice.", "title": "" } ]
[ { "docid": "6bab9326dd38f25794525dc852ece818", "text": "The transformation from high level task speci cation to low level motion control is a fundamental issue in sensorimotor control in animals and robots. This thesis develops a control scheme called virtual model control which addresses this issue. Virtual model control is a motion control language which uses simulations of imagined mechanical components to create forces, which are applied through joint torques, thereby creating the illusion that the components are connected to the robot. Due to the intuitive nature of this technique, designing a virtual model controller requires the same skills as designing the mechanism itself. A high level control system can be cascaded with the low level virtual model controller to modulate the parameters of the virtual mechanisms. Discrete commands from the high level controller would then result in uid motion. An extension of Gardner's Partitioned Actuator Set Control method is developed. This method allows for the speci cation of constraints on the generalized forces which each serial path of a parallel mechanism can apply. Virtual model control has been applied to a bipedal walking robot. A simple algorithm utilizing a simple set of virtual components has successfully compelled the robot to walk eight consecutive steps. Thesis Supervisor: Gill A. Pratt Title: Assistant Professor of Electrical Engineering and Computer Science", "title": "" }, { "docid": "2ce9d2923b6b8be5027e23fb905e8b4d", "text": "A number of recent advances have been achieved in the study of midbrain dopaminergic neurons. Understanding these advances and how they relate to one another requires a deep understanding of the computational models that serve as an explanatory framework and guide ongoing experimental inquiry. This intertwining of theory and experiment now suggests very clearly that the phasic activity of the midbrain dopamine neurons provides a global mechanism for synaptic modification. These synaptic modifications, in turn, provide the mechanistic underpinning for a specific class of reinforcement learning mechanisms that now seem to underlie much of human and animal behavior. This review describes both the critical empirical findings that are at the root of this conclusion and the fantastic theoretical advances from which this conclusion is drawn.", "title": "" }, { "docid": "26140dbe32672dc138c46e7fd6f39b1a", "text": "The state of the art in probabilistic demand forecasting [40] minimizes Quantile Loss to predict the future demand quantiles for different horizons. However, since quantiles aren’t additive, in order to predict the total demand for any wider future interval all required intervals are usually appended to the target vector during model training. The separate optimization of these overlapping intervals can lead to inconsistent forecasts, i.e. forecasts which imply an invalid joint distribution between different horizons. As a result, inter-temporal decision making algorithms that depend on the joint or step-wise conditional distribution of future demand cannot utilize these forecasts. In this work, we address the problem by using sample paths to predict future demand quantiles in a consistent manner and propose several novel methodologies to solve this problem. 
Our work covers the use of covariance shrinkage methods, autoregressive models, generative adversarial networks and also touches on the use of variational autoencoders and Bayesian Dropout.", "title": "" }, { "docid": "f92f0a3d46eaf14e478a41f87b8ad369", "text": "The agricultural productivity of India is gradually declining due to destruction of crops by various natural calamities and the crop rotation process being affected by irregular climate patterns. Also, the interest and efforts put by farmers lessen as they grow old which forces them to sell their agricultural lands, which automatically affects the production of agricultural crops and dairy products. This paper mainly focuses on the ways by which we can protect the crops during an unavoidable natural disaster and implement technology induced smart agro-environment, which can help the farmer manage large fields with less effort. Three common issues faced during agricultural practice are shearing furrows in case of excess rain or flood, manual watering of plants and security against animal grazing. This paper provides a solution for these problems by helping farmer monitor and control various activities through his mobile via GSM and DTMF technology in which data is transmitted from various sensors placed in the agricultural field to the controller and the status of the agricultural parameters are notified to the farmer using which he can take decisions accordingly. The main advantage of this system is that it is semi-automated i.e. the decision is made by the farmer instead of fully automated decision that results in precision agriculture. It also overcomes the existing traditional practices that require high money investment, energy, labour and time.", "title": "" }, { "docid": "67da4c8ba04d3911118147b829ba9c50", "text": "A methodology for the development of a fuzzy expert system (FES) with application to earthquake prediction is presented. The idea is to reproduce the performance of a human expert in earthquake prediction. To do this, at the first step, rules provided by the human expert are used to generate a fuzzy rule base. These rules are then fed into an inference engine to produce a fuzzy inference system (FIS) and to infer the results. In this paper, we have used a Sugeno type fuzzy inference system to build the FES. At the next step, the adaptive network-based fuzzy inference system (ANFIS) is used to refine the FES parameters and improve its performance. The proposed framework is then employed to attain the performance of a human expert used to predict earthquakes in the Zagros area based on the idea of coupled earthquakes. While the prediction results are promising in parts of the testing set, the general performance indicates that prediction methodology based on coupled earthquakes needs more investigation and more complicated reasoning procedure to yield satisfactory predictions.", "title": "" }, { "docid": "d579ed125d3a051069b69f634fffe488", "text": "Culture can be thought of as a set of everyday practices and a core theme-individualism, collectivism, or honor-as well as the capacity to understand each of these themes. In one's own culture, it is easy to fail to see that a cultural lens exists and instead to think that there is no lens at all, only reality. Hence, studying culture requires stepping out of it. 
There are two main methods to do so: The first involves using between-group comparisons to highlight differences and the second involves using experimental methods to test the consequences of disruption to implicit cultural frames. These methods highlight three ways that culture organizes experience: (a) It shields reflexive processing by making everyday life feel predictable, (b) it scaffolds which cognitive procedure (connect, separate, or order) will be the default in ambiguous situations, and (c) it facilitates situation-specific accessibility of alternate cognitive procedures. Modern societal social-demographic trends reduce predictability and increase collectivism and honor-based go-to cognitive procedures.", "title": "" }, { "docid": "1971cb1d7876256ecf0342d0a51fe7e7", "text": "Senescent cells accumulate with aging and at sites of pathology in multiple chronic diseases. Senolytics are drugs that selectively promote apoptosis of senescent cells by temporarily disabling the pro-survival pathways that enable senescent cells to resist the pro-apoptotic, pro-inflammatory factors that they themselves secrete. Reducing senescent cell burden by genetic approaches or by administering senolytics delays or alleviates multiple age- and disease-related adverse phenotypes in preclinical models. Reported senolytics include dasatinib, quercetin, navitoclax (ABT263), and piperlongumine. Here we report that fisetin, a naturally-occurring flavone with low toxicity, and A1331852 and A1155463, selective BCL-XL inhibitors that may have less hematological toxicity than the less specific BCL-2 family inhibitor navitoclax, are senolytic. Fisetin selectively induces apoptosis in senescent but not proliferating human umbilical vein endothelial cells (HUVECs). It is not senolytic in senescent IMR90 cells, a human lung fibroblast strain, or primary human preadipocytes. A1331852 and A1155463 are senolytic in HUVECs and IMR90 cells, but not preadipocytes. These agents may be better candidates for eventual translation into clinical interventions than some existing senolytics, such as navitoclax, which is associated with hematological toxicity.", "title": "" }, { "docid": "941dc605dab6cf9bfe89bedb2b4f00a3", "text": "Word boundary detection in continuous speech is very common and important problem in speech synthesis and recognition. Several researches are open on this field. Since there is no sign of start of the word, end of the word and number of words in the spoken utterance of any natural language, one must study the intonation pattern of a particular language. In this paper an algorithm is proposed to detect word boundaries in continuous speech of Hindi language. A careful study of the intonation pattern of Hindi language has been done. Based on this study it is observed that, there are several suprasegmental parameters of speech signal such as pitch, F0 fundamental frequency, duration, intensity, and pause, which can play important role in finding some clues to detect the start and the end of the word from the spoken utterance of Hindi Language. The proposed algorithm is based mainly on two prosodic parameters, pitch and intensity.", "title": "" }, { "docid": "c10ac9c3117627b2abb87e268f5de6b1", "text": "Now days, the number of crime over children is increasing day by day. the implementation of School Security System(SSS) via RFID to avoid crime, illegal activates by students and reduce worries among parents. 
The project is the combination of latest Technology using RFID, GPS/GSM, image processing, WSN and web based development using Php,VB.net language apache web server and SQL. By using RFID technology it is easy track the student thus enhances the security and safety in selected zone. The information about student such as in time and out time from Bus and campus will be recorded to web based system and the GPS/GSM system automatically sends information (SMS / Phone Call) toothier parents. That the student arrived to Bus/Campus safely.", "title": "" }, { "docid": "b07ea7995bb865b226f5834a54c70aa4", "text": "The explosive growth in the usage of IEEE 802.11 network has resulted in dense deployments in diverse environments. Most recently, the IEEE working group has triggered the IEEE 802.11ax project, which aims to amend the current IEEE 802.11 standard to improve efficiency of dense WLANs. In this paper, we evaluate the Dynamic Sensitivity Control (DSC) Algorithm proposed for IEEE 802.11ax. This algorithm dynamically adjusts the Carrier Sense Threshold (CST) based on the average received signal strength. We show that the aggregate throughput of a dense network utilizing DSC is considerably improved (i.e. up to 20%) when compared with the IEEE 802.11 legacy network.", "title": "" }, { "docid": "f20c0ace77f7b325d2ae4862d300d440", "text": "http://dx.doi.org/10.1016/j.knosys.2014.02.003 0950-7051/ 2014 Elsevier B.V. All rights reserved. ⇑ Corresponding author. Address: Zhejiang University, Hangzhou 310027, China. Tel.: +86 571 87951453. E-mail addresses: xlzheng@zju.edu.cn (X. Zheng), nblin@zju.edu.cn (Z. Lin), alexwang@zju.edu.cn (X. Wang), klin@ece.uci.edu (K.-J. Lin), mnsong@bupt.edu.cn (M. Song). 1 http://www.yelp.com/. Xiaolin Zheng a,b,⇑, Zhen Lin , Xiaowei Wang , Kwei-Jay Lin , Meina Song e", "title": "" }, { "docid": "37ceb75634c9801e3f83c36a15dc879b", "text": "Semantic visualization integrates topic modeling and visualization, such that every document is associated with a topic distribution as well as visualization coordinates on a low-dimensional Euclidean space. We address the problem of semantic visualization for short texts. Such documents are increasingly common, including tweets, search snippets, news headlines, or status updates. Due to their short lengths, it is difficult to model semantics as the word co-occurrences in such a corpus are very sparse. Our approach is to incorporate auxiliary information, such as word embeddings from a larger corpus, to supplement the lack of co-occurrences. This requires the development of a novel semantic visualization model that seamlessly integrates visualization coordinates, topic distributions, and word vectors. We propose a model called GaussianSV, which outperforms pipelined baselines that derive topic models and visualization coordinates as disjoint steps, as well as semantic visualization baselines that do not consider word embeddings.", "title": "" }, { "docid": "09c209f1e36dc97458a8edc4a08e5351", "text": "We proposed neural network architecture based on Convolution Neural Network(CNN) for temporal relation classification in sentence. First, we transformed word into vector by using word embedding. In Feature Extraction, we extracted two type of features. Lexical level feature considered meaning of marked entity and Sentence level feature considered context of the sentence. Window processing was used to reflect local context and Convolution and Max-pooling operation were used for global context. 
We concatenated both feature vectors and used softmax operation to compute confidence score. Because experiment results didn't outperform the state-of-the-art methods, we suggested some future works to do.", "title": "" }, { "docid": "e23cebac640a47643b3a3249eae62f89", "text": "Objective: To assess the factors that contribute to impaired quinine clearance in acute falciparum malaria. Patients: Sixteen adult Thai patients with severe or moderately severe falciparum malaria were studied, and 12 were re-studied during convalescence. Methods: The clearance of quinine, dihydroquinine (an impurity comprising up to 10% of commercial quinine formulations), antipyrine (a measure of hepatic mixed-function oxidase activity), indocyanine green (ICG) (a measure of liver blood flow), and iothalamate (a measure of glomerular filtration rate) were measured simultaneously, and the relationship of these values to the␣biotransformation of quinine to the active metabolite 3-hydroxyquinine was assessed. Results: During acute malaria infection, the systemic clearance of quinine, antipyrine and ICG and the biotransformation of quinine to 3-hydroxyquinine were all reduced significantly when compared with values during convalescence. Iothalamate clearance was not affected significantly and did not correlate with the clearance of any of the other compounds. The clearance of total and free quinine correlated significantly with antipyrine clearance (r s = 0.70, P = 0.005 and r s = 0.67, P = 0.013, respectively), but not with ICG clearance (r s = 0.39 and 0.43 respectively, P > 0.15). In a multiple regression model, antipyrine clearance and plasma protein binding accounted for 71% of the variance in total quinine clearance in acute malaria. The pharmacokinetic properties of dihydroquinine were generally similar to those of quinine, although dihydroquinine clearance was less affected by acute malaria. The mean ratio of quinine to 3-hydroxyquinine area under the plasma concentration-time curve (AUC) values in acute malaria was 12.03 compared with 6.92 during convalescence P=0.01. The mean plasma protein binding of 3-hydroxyquinine was 46%, which was significantly lower than that of quinine (90.5%) or dihydroquinine (90.5%). Conclusion: The reduction in quinine clearance in acute malaria results predominantly from a disease-induced dysfunction in hepatic mixed-function oxidase activity (principally CYP 3A) which impairs the conversion of quinine to its major metabolite, 3-hydroxyquinine. The metabolite contributes approximately 5% of the antimalarial activity of the parent compound in malaria, but up to 10% during convalescence.", "title": "" }, { "docid": "48126a601f93eea84b157040c83f8861", "text": "Citation counts and intra-conference citations are one useful measure of the impact of prior research in a field. We have developed CiteVis, a visualization system for portraying citation data about the IEEE InfoVis Conference and its papers. Rather than use a node-link network visualization, we employ an attribute-based layout along with interaction to foster exploration and knowledge discovery.", "title": "" }, { "docid": "af7803b0061e75659f718d56ba9715b3", "text": "An emerging body of multidisciplinary literature has documented the beneficial influence of physical activity engendered through aerobic exercise on selective aspects of brain function. Human and non-human animal studies have shown that aerobic exercise can improve a number of aspects of cognition and performance. 
Lack of physical activity, particularly among children in the developed world, is one of the major causes of obesity. Exercise might not only help to improve their physical health, but might also improve their academic performance. This article examines the positive effects of aerobic physical activity on cognition and brain function, at the molecular, cellular, systems and behavioural levels. A growing number of studies support the idea that physical exercise is a lifestyle factor that might lead to increased physical and mental health throughout life.", "title": "" }, { "docid": "d40aa76e76c44da4c6237f654dcdab45", "text": "The flipped classroom pedagogy has achieved significant mention in academic circles in recent years. \"Flipping\" involves the reinvention of a traditional course so that students engage with learning materials via recorded lectures and interactive exercises prior to attending class and then use class time for more interactive activities. Proper implementation of a flipped classroom is difficult to gauge, but combines successful techniques for distance education with constructivist learning theory in the classroom. While flipped classrooms are not a novel concept, technological advances and increased comfort with distance learning have made the tools to produce and consume course materials more pervasive. Flipped classroom experiments have had both positive and less-positive results and are generally measured by a significant improvement in learning outcomes. This study, however, analyzes the opinions of students in a flipped sophomore-level information technology course by using a combination of surveys and reflective statements. The author demonstrates that at the outset students are new - and somewhat receptive - to the concept of the flipped classroom. By the conclusion of the course satisfaction with the pedagogy is significant. Finally, student feedback is provided in an effort to inform instructors in the development of their own flipped classrooms.", "title": "" }, { "docid": "e13d6cd043ea958e9731c99a83b6de18", "text": "In this article, an overview and an in-depth analysis of the most discussed 5G waveform candidates are presented. In addition to general requirements, the nature of each waveform is revealed including the motivation, the underlying methodology, and the associated advantages and disadvantages. Furthermore, these waveform candidates are categorized and compared both qualitatively and quantitatively. By doing all these, the study in this work offers not only design guidelines but also operational suggestions for the 5G waveform.", "title": "" }, { "docid": "6ab8b5bd7ce3582df99d5601225c1779", "text": "Nowadays, the number of users, speed of internet and processing power of devices are increasing at a tremendous rate. For maintaining the balance between users and company networks with product or service, our system must evolve and modify to handle the future load of data. Currently, we are using file systems, Database servers and some semi-structured file systems. But all these systems are mostly independent, differ from each other in many except and never on the single roof for easy, effective use. So, to minimize the problems for developing apps, website, game development easier, Google came with the solution as their product Firebase. 
Firebase is implementing a real-time database, crash reporting, authentication, cloud functions, cloud storage, hosting, test-lab, performance monitoring and analytics on a single system platform for speed, security as well as efficiency. Systems like these are also developed by some big companies like Facebook, IBM, Linkedin, etc for their personal use. So we can say that Firebase will have the power to handle the future requirement.", "title": "" }, { "docid": "6476066913e37c88e94cc83c15b05f43", "text": "The Aduio-visual Speech Recognition (AVSR) which employs both the video and audio information to do Automatic Speech Recognition (ASR) is one of the application of multimodal leaning making ASR system more robust and accuracy. The traditional models usually treated AVSR as inference or projection but strict prior limits its ability. As the revival of deep learning, Deep Neural Networks (DNN) becomes an important toolkit in many traditional classification tasks including ASR, image classification, natural language processing. Some DNN models were used in AVSR like Multimodal Deep Autoencoders (MDAEs), Multimodal Deep Belief Network (MDBN) and Multimodal Deep Boltzmann Machine (MDBM) that actually work better than traditional methods. However, such DNN models have several shortcomings: (1) They don’t balance the modal fusion and temporal fusion, or even haven’t temporal fusion; (2)The architecture of these models isn’t end-to-end, the training and testing getting cumbersome. We propose a DNN model, Auxiliary Multimodal LSTM (am-LSTM), to overcome such weakness. The am-LSTM could be trained and tested in one time, alternatively easy to train and preventing overfitting automatically. The extensibility and flexibility are also take into consideration. The experiments shows that am-LSTM is much better than traditional methods and other DNN models in three datasets: AVLetters, AVLetters2, AVDigits.", "title": "" } ]
scidocsrr
3d1eb27f60fcf8f1d45261a55471eb48
Network Intrusion Detection Using Hybrid Simplified Swarm Optimization and Random Forest Algorithm on Nsl-Kdd Dataset
[ { "docid": "320c7c49dd4341cca532fa02965ef953", "text": "During the last decade, anomaly detection has attracted the attention of many researchers to overcome the weakness of signature-based IDSs in detecting novel attacks, and KDDCUP'99 is the mostly widely used data set for the evaluation of these systems. Having conducted a statistical analysis on this data set, we found two important issues which highly affects the performance of evaluated systems, and results in a very poor evaluation of anomaly detection approaches. To solve these issues, we have proposed a new data set, NSL-KDD, which consists of selected records of the complete KDD data set and does not suffer from any of mentioned shortcomings.", "title": "" }, { "docid": "11a2882124e64bd6b2def197d9dc811a", "text": "1 Abstract— Clustering is the most acceptable technique to analyze the raw data. Clustering can help detect intrusions when our training data is unlabeled, as well as for detecting new and unknown types of intrusions. In this paper we are trying to analyze the NSL-KDD dataset using Simple K-Means clustering algorithm. We tried to cluster the dataset into normal and four of the major attack categories i.e. DoS, Probe, R2L, U2R. Experiments are performed in WEKA environment. Results are verified and validated using test dataset. Our main objective is to provide the complete analysis of NSL-KDD intrusion detection dataset.", "title": "" }, { "docid": "7b05751aa3257263e7f1a8a6f1e2ff7e", "text": "Intrusion Detection System (IDS) that turns to be a vital component to secure the network. The lack of regular updation, less capability to detect unknown attacks, high non adaptable false alarm rate, more consumption of network resources etc., makes IDS to compromise. This paper aims to classify the NSL-KDD dataset with respect to their metric data by using the best six data mining classification algorithms like J48, ID3, CART, Bayes Net, Naïve Bayes and SVM to find which algorithm will be able to offer more testing accuracy. NSL-KDD dataset has solved some of the inherent limitations of the available KDD’99 dataset. KeywordsIDS, KDD, Classification Algorithms, PCA etc.", "title": "" }, { "docid": "305efd1823009fe79c9f8ff52ddb5724", "text": "We explore the problem of classifying images by the object categories they contain in the case of a large number of object categories. To this end we combine three ingredients: (i) shape and appearance representations that support spatial pyramid matching over a region of interest. This generalizes the representation of Lazebnik et al., (2006) from an image to a region of interest (ROI), and from appearance (visual words) alone to appearance and local shape (edge distributions); (ii) automatic selection of the regions of interest in training. This provides a method of inhibiting background clutter and adding invariance to the object instance 's position; and (iii) the use of random forests (and random ferns) as a multi-way classifier. The advantage of such classifiers (over multi-way SVM for example) is the ease of training and testing. Results are reported for classification of the Caltech-101 and Caltech-256 data sets. We compare the performance of the random forest/ferns classifier with a benchmark multi-way SVM classifier. 
It is shown that selecting the ROI adds about 5% to the performance and, together with the other improvements, the result is about a 10% improvement over the state of the art for Caltech-256.", "title": "" }, { "docid": "035b2296835a9c4a7805ba446760071e", "text": "Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of intrusions, defined as attempts to compromise the confidentiality, integrity, availability, or to bypass the security mechanisms of a computer or network. This paper proposes the development of an Intrusion Detection Program (IDP) which could detect known attack patterns. An IDP does not eliminate the use of any preventive mechanism but it works as the last defensive mechanism in securing the system. Three variants of genetic programming techniques namely Linear Genetic Programming (LGP), Multi-Expression Programming (MEP) and Gene Expression Programming (GEP) were evaluated to design IDP. Several indices are used for comparisons and a detailed analysis of MEP technique is provided. Empirical results reveal that genetic programming technique could play a major role in developing IDP, which are light weight and accurate when compared to some of the conventional intrusion detection systems based on machine learning paradigms.", "title": "" } ]
[ { "docid": "2518564949f7488a7f01dff74e3b6e2d", "text": "Although it is commonly believed that women are kinder and more cooperative than men, there is conflicting evidence for this assertion. Current theories of sex differences in social behavior suggest that it may be useful to examine in what situations men and women are likely to differ in cooperation. Here, we derive predictions from both sociocultural and evolutionary perspectives on context-specific sex differences in cooperation, and we conduct a unique meta-analytic study of 272 effect sizes-sampled across 50 years of research-on social dilemmas to examine several potential moderators. The overall average effect size is not statistically different from zero (d = -0.05), suggesting that men and women do not differ in their overall amounts of cooperation. However, the association between sex and cooperation is moderated by several key features of the social context: Male-male interactions are more cooperative than female-female interactions (d = 0.16), yet women cooperate more than men in mixed-sex interactions (d = -0.22). In repeated interactions, men are more cooperative than women. Women were more cooperative than men in larger groups and in more recent studies, but these differences disappeared after statistically controlling for several study characteristics. We discuss these results in the context of both sociocultural and evolutionary theories of sex differences, stress the need for an integrated biosocial approach, and outline directions for future research.", "title": "" }, { "docid": "6d411b994567b18ea8ab9c2b9622e7f5", "text": "Nearly half a century ago, psychiatrist John Bowlby proposed that the instinctual behavioral system that underpins an infant’s attachment to his or her mother is accompanied by ‘‘internal working models’’ of the social world—models based on the infant’s own experience with his or her caregiver (Bowlby, 1958, 1969/1982). These mental models were thought to mediate, in part, the ability of an infant to use the caregiver as a buffer against the stresses of life, as well as the later development of important self-regulatory and social skills. Hundreds of studies now testify to the impact of caregivers’ behavior on infants’ behavior and development: Infants who most easily seek and accept support from their parents are considered secure in their attachments and are more likely to have received sensitive and responsive caregiving than insecure infants; over time, they display a variety of socioemotional advantages over insecure infants (Cassidy & Shaver, 1999). Research has also shown that, at least in older children and adults, individual differences in the security of attachment are indeed related to the individual’s representations of social relations (Bretherton & Munholland, 1999). Yet no study has ever directly assessed internal working models of attachment in infancy. In the present study, we sought to do so.", "title": "" }, { "docid": "4fa1054bd78a624f68a0f62840542457", "text": "The ReWalkTM powered exoskeleton assists thoracic level motor complete spinal cord injury patients who are paralyzed to walk again with an independent, functional, upright, reciprocating gait. We completed an evaluation of twelve such individuals with promising results. All subjects met basic criteria to be able to use the ReWalkTM - including items such as sufficient bone mineral density, leg passive range of motion, strength, body size and weight limits. 
All subjects received approximately the same number of training sessions. However there was a wide distribution in walking ability. Walking velocities ranged from under 0.1m/s to approximately 0.5m/s. This variability was not completely explained by injury level The remaining sources of that variability are not clear at present. This paper reports our preliminary analysis into how the walking kinematics differed across the subjects - as a first step to understand the possible contribution to the velocity range and determine if the subjects who did not walk as well could be taught to improve by mimicking the better walkers.", "title": "" }, { "docid": "cfea41d4bc6580c91ee27201360f8e17", "text": "It is common sense that cloud-native applications (CNA) are intentionally designed for the cloud. Although this understanding can be broadly used it does not guide and explain what a cloud-native application exactly is. The term ”cloud-native” was used quite frequently in birthday times of cloud computing (2006) which seems somehow obvious nowadays. But the term disappeared almost completely. Suddenly and in the last years the term is used again more and more frequently and shows increasing momentum. This paper summarizes the outcomes of a systematic mapping study analyzing research papers covering ”cloud-native” topics, research questions and engineering methodologies. We summarize research focuses and trends dealing with cloud-native application engineering approaches. Furthermore, we provide a definition for the term ”cloud-native application” which takes all findings, insights of analyzed publications and already existing and well-defined terminology into account.", "title": "" }, { "docid": "73b150681d7de50ada8e046a3027085f", "text": "We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children’s Book Test, where it obtains competitive performance, reading the story in a single pass.", "title": "" }, { "docid": "290796519b7757ce7ec0bf4d37290eed", "text": "A freely available English thesaurus of related words is presented that has been automatically compiled by analyzing the distributional similarities of words in the British National Corpus. The quality of the results has been evaluated by comparison with human judgments as obtained from non-native and native speakers of English who were asked to provide rankings of word similarities. 
According to this measure, the results generated by our system are better than the judgments of the non-native speakers and come close to the native speakers’ performance. An advantage of our approach is that it does not require syntactic parsing and therefore can be more easily adapted to other languages. As an example, a similar thesaurus for German has already been completed.", "title": "" }, { "docid": "10a33d5a75419519ce1177f6711b749c", "text": "Perianal fistulizing Crohn's disease has a major negative effect on patient quality of life and is a predictor of poor long-term outcomes. Factors involved in the pathogenesis of perianal fistulizing Crohn's disease include an increased production of transforming growth factor β, TNF and IL-13 in the inflammatory infiltrate that induce epithelial-to-mesenchymal transition and upregulation of matrix metalloproteinases, leading to tissue remodelling and fistula formation. Care of patients with perianal Crohn's disease requires a multidisciplinary approach. A complete assessment of fistula characteristics is the basis for optimal management and must include the clinical evaluation of fistula openings, endoscopic assessment of the presence of proctitis, and MRI to determine the anatomy of fistula tracts and presence of abscesses. Local injection of mesenchymal stem cells can induce remission in patients not responding to medical therapies, or to avoid the exposure to systemic immunosuppression in patients naive to biologics in the absence of active luminal disease. Surgery is still required in a high proportion of patients and should not be delayed when criteria for drug failure is met. In this Review, we provide an up-to-date overview on the pathogenesis and diagnosis of fistulizing Crohn's disease, as well as therapeutic strategies.", "title": "" }, { "docid": "872f556cb441d9c8976e2bf03ebd62ee", "text": "Monitoring is an issue of primary concern in current and next generation networked systems. For ex, the objective of sensor networks is to monitor their surroundings for a variety of different applications like atmospheric conditions, wildlife behavior, and troop movements among others. Similarly, monitoring in data networks is critical not only for accounting and management, but also for detecting anomalies and attacks. Such monitoring applications are inherently continuous and distributed, and must be designed to minimize the communication overhead that they introduce. In this context we introduce and study a fundamental class of problems called \"thresholded counts\" where we must return the aggregate frequency count of an event that is continuously monitored by distributed nodes with a user-specified accuracy whenever the actual count exceeds a given threshold value.In this paper we propose to address the problem of thresholded counts by setting local thresholds at each monitoring node and initiating communication only when the locally observed data exceeds these local thresholds. We explore algorithms in two categories: static and adaptive thresholds. In the static case, we consider thresholds based on a linear combination of two alternate strategies, and show that there exists an optimal blend of the two strategies that results in minimum communication overhead. We further show that this optimal blend can be found using a steepest descent search. In the adaptive case, we propose algorithms that adjust the local thresholds based on the observed distributions of updated information. 
We use extensive simulations not only to verify the accuracy of our algorithms and validate our theoretical results, but also to evaluate the performance of our algorithms. We find that both approaches yield significant savings over the naive approach of centralized processing.", "title": "" }, { "docid": "da4699d1e358bebc822b059b568916a8", "text": "An InterCloud is an interconnected global “cloud of clouds” that enables each cloud to tap into resources of other clouds. This is the earliest work to devise an agent-based InterCloud economic model for analyzing consumer-to-cloud and cloud-to-cloud interactions. While economic encounters between consumers and cloud providers are modeled as a many-to-many negotiation, economic encounters among clouds are modeled as a coalition game. To bolster many-to-many consumer-to-cloud negotiations, this work devises a novel interaction protocol and a novel negotiation strategy that is characterized by both 1) adaptive concession rate (ACR) and 2) minimally sufficient concession (MSC). Mathematical proofs show that agents adopting the ACR-MSC strategy negotiate optimally because they make minimum amounts of concession. By automatically controlling concession rates, empirical results show that the ACR-MSC strategy is efficient because it achieves significantly higher utilities than the fixed-concession-rate time-dependent strategy. To facilitate the formation of InterCloud coalitions, this work devises a novel four-stage cloud-to-cloud interaction protocol and a set of novel strategies for InterCloud agents. Mathematical proofs show that these InterCloud coalition formation strategies 1) converge to a subgame perfect equilibrium and 2) result in every cloud agent in an InterCloud coalition receiving a payoff that is equal to its Shapley value.", "title": "" }, { "docid": "838bd8a38f9d67d768a34183c72da07d", "text": "Jacobsen syndrome (JS), a rare disorder with multiple dysmorphic features, is caused by the terminal deletion of chromosome 11q. Typical features include mild to moderate psychomotor retardation, trigonocephaly, facial dysmorphism, cardiac defects, and thrombocytopenia, though none of these features are invariably present. The estimated occurrence of JS is about 1/100,000 births. The female/male ratio is 2:1. The patient admitted to our clinic at 3.5 years of age with a cardiac murmur and facial anomalies. Facial anomalies included trigonocephaly with bulging forehead, hypertelorism, telecanthus, downward slanting palpebral fissures, and a carp-shaped mouth. The patient also had strabismus. An echocardiogram demonstrated perimembranous aneurysmatic ventricular septal defect and a secundum atrial defect. The patient was <3rd percentile for height and weight and showed some developmental delay. Magnetic resonance imaging (MRI) showed hyperintensive gliotic signal changes in periventricular cerebral white matter, and leukodystrophy was suspected. Chromosomal analysis of the patient showed terminal deletion of chromosome 11. The karyotype was designated 46, XX, del(11) (q24.1). A review of published reports shows that the severity of the observed clinical abnormalities in patients with JS is not clearly correlated with the extent of the deletion. Most of the patients with JS had short stature, and some of them had documented growth hormone deficiency, or central or primary hypothyroidism. 
In patients with the classical phenotype, the diagnosis is suspected on the basis of clinical findings: intellectual disability, facial dysmorphic features and thrombocytopenia. The diagnosis must be confirmed by cytogenetic analysis. For patients who survive the neonatal period and infancy, the life expectancy remains unknown. In this report, we describe a patient with the clinical features of JS without thrombocytopenia. To our knowledge, this is the first case reported from Turkey.", "title": "" }, { "docid": "d7635b011cef61fe6487a823c0d09301", "text": "The present letter describes the design of an energy harvesting circuit on a one-sided directional flexible planar antenna. The circuit is composed of a flexible antenna with an impedance matching circuit, a resonant circuit, and a booster circuit for converting and boosting radio frequency power into a dc voltage. The proposed one-sided directional flexible antenna has a bottom floating metal layer that enables one-sided radiation and easy connection of the booster circuit to the metal layer. The simulated output dc voltage is 2.89 V for an input of 100 mV and a 50 Ω power source at 900 MHz, and power efficiency is 58.7% for 1.0 × 107 Ω load resistance.", "title": "" }, { "docid": "57e71550633cdb4a37d3fa270f0ad3a7", "text": "Classifiers based on sparse representations have recently been shown to provide excellent results in many visual recognition and classification tasks. However, the high cost of computing sparse representations at test time is a major obstacle that limits the applicability of these methods in large-scale problems, or in scenarios where computational power is restricted. We consider in this paper a simple yet efficient alternative to sparse coding for feature extraction. We study a classification scheme that applies the soft-thresholding nonlinear mapping in a dictionary, followed by a linear classifier. A novel supervised dictionary learning algorithm tailored for this low complexity classification architecture is proposed. The dictionary learning problem, which jointly learns the dictionary and linear classifier, is cast as a difference of convex (DC) program and solved efficiently with an iterative DC solver. We conduct experiments on several datasets, and show that our learning algorithm that leverages the structure of the classification problem outperforms generic learning procedures. Our simple classifier based on soft-thresholding also competes with the recent sparse coding classifiers, when the dictionary is learned appropriately. The adopted classification scheme further requires less computational time at the testing stage, compared to other classifiers. The proposed scheme shows the potential of the adequately trained soft-thresholding mapping for classification and paves the way towards the development of very efficient classification methods for vision problems.", "title": "" }, { "docid": "88b0d223ccff042d20148abf79599102", "text": "Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a trade-off between transfer and interference. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. 
This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller. 1 SOLVING THE CONTINUAL LEARNING PROBLEM A long-held goal of AI is to build agents capable of operating autonomously for long periods. Such agents must incrementally learn and adapt to a changing environment while maintaining memories of what they have learned before, a setting known as lifelong learning (Thrun, 1994; 1996). In this paper we explore a variant called continual learning (Ring, 1994; Lopez-Paz & Ranzato, 2017). Continual learning assumes that the learner is exposed to a sequence of tasks, where each task is a sequence of experiences from the same distribution. We would like to develop a solution in this setting by discovering notions of tasks without supervision while learning incrementally after every experience. This is challenging because in standard offline single task and multi-task learning (Caruana, 1997) it is implicitly assumed that the data is drawn from an i.i.d. stationary distribution. Neural networks tend to struggle whenever this is not the case (Goodrich, 2015). Over the years, solutions to the continual learning problem have been largely driven by prominent conceptualizations of the issues faced by neural networks. One popular view is catastrophic forgetting (interference) (McCloskey & Cohen, 1989), in which the primary concern is the lack of stability in neural networks, and the main solution is to limit the extent of weight sharing across experiences by focusing on preserving past knowledge (Kirkpatrick et al., 2017; Zenke et al., 2017; Lee et al., 2017). Another popular and more complex conceptualization is the stability-plasticity dilemma (Carpenter & Grossberg, 1987). In this view, the primary concern is the balance between network stability (to preserve past knowledge) and plasticity (to rapidly learn the current experience). Recently proposed techniques focus on balancing limited weight sharing with some mechanism to ensure fast learning (Li & Hoiem, 2016; Riemer et al., 2016; Lopez-Paz & Ranzato, 2017; Rosenbaum et al., 2018; Lee et al., 2018; Serrà et al., 2018). In this paper, we extend this view by", "title": "" }, { "docid": "0307912d034d4cbfef7cafb79ea9f9b3", "text": "This survey focuses on recognition performed by matching models of the three-dimensional shape of the face, either alone or in combination with matching corresponding two-dimensional intensity images. Research trends to date are summarized, and challenges confronting the development of more accurate three-dimensional face recognition are identified. These challenges include the need for better sensors, improved recognition algorithms, and more rigorous experimental methodology. 
2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "b3f2c1736174eda75f7eedb3cee2a729", "text": "Stochastic local search (SLS) algorithms are well known for their ability to efficiently find models of random instances of the Boolean satisfiability (SAT) problem. One of the most famous SLS algorithms for SAT is WalkSAT, which is an initial algorithm that has wide influence and performs very well on random 3-SAT instances. However, the performance of WalkSAT on random k-SAT instances with k > 3 lags far behind. Indeed, there are limited works on improving SLS algorithms for such instances. This work takes a good step toward this direction. We propose a novel concept namely multilevel make. Based on this concept, we design a scoring function called linear make, which is utilized to break ties in WalkSAT, leading to a new algorithm called WalkSATlm. Our experimental results show that WalkSATlm improves WalkSAT by orders of magnitude on random k-SAT instances with k > 3 near the phase transition. Additionally, we propose an efficient implementation for WalkSATlm, which leads to a speedup of 100%. We also give some insights on different forms of linear make functions, and show the limitation of the linear make function on random 3-SAT through theoretical analysis.", "title": "" }, { "docid": "b4a784bb8eb714afc86f1eee4f0a20ed", "text": "Warthin tumor (papillary cystadenoma lymphomatosum) is a benign salivary gland tumor involving almost exclusively the parotid gland. The lip is a very unusual location for this type of tumor, which develops only rarely in minor salivary glands. The case of 42-year-old woman with Warthin tumor arising in minor salivary glands of the upper lip is reported.", "title": "" }, { "docid": "86f25f09b801d28ce32f1257a39ddd44", "text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.", "title": "" }, { "docid": "7e647cac9417bf70acd8c0b4ee0faa9b", "text": "Global Navigation Satellite Systems (GNSS) are applicable to deliver train locations in real time. This train localization function should comply with railway functional safety standards; thus, the GNSS performance needs to be evaluated in consistent with railway EN 50126 standard [Reliability, Availability, Maintainability, and Safety (RAMS)]. 
This paper demonstrates the performance of the GNSS receiver for train localization. First, the GNSS performance and railway RAMS properties are compared by definitions. Second, the GNSS receiver measurements are categorized into three states (i.e., up, degraded, and faulty states). The relations between the states are illustrated in a stochastic Petri net model. Finally, the performance properties are evaluated using real data collected on the railway track in High Tatra Mountains in Slovakia. The property evaluation is based on the definitions represented by the modeled states.", "title": "" }, { "docid": "1347e22f1b3afe4ce6cd40f25770a465", "text": "Contextual bandit algorithms provide principled online learning solutions to find optimal trade-offs between exploration and exploitation with companion side-information. They have been extensively used in many important practical scenarios, such as display advertising and content recommendation. A common practice estimates the unknown bandit parameters pertaining to each user independently. This unfortunately ignores dependency among users and thus leads to suboptimal solutions, especially for the applications that have strong social components.\n In this paper, we develop a collaborative contextual bandit algorithm, in which the adjacency graph among users is leveraged to share context and payoffs among neighboring users while online updating. We rigorously prove an improved upper regret bound of the proposed collaborative bandit algorithm comparing to conventional independent bandit algorithms. Extensive experiments on both synthetic and three large-scale real-world datasets verified the improvement of our proposed algorithm against several state-of-the-art contextual bandit algorithms.", "title": "" }, { "docid": "854bd77e534e0bb53953edb708c867b1", "text": "About 60-GHz millimeter wave (mmWave) unlicensed frequency band is considered as a key enabler for future multi-Gbps WLANs. IEEE 802.11ad (WiGig) standard has been ratified for 60-GHz wireless local area networks (WLANs) by only considering the use case of peer to peer (P2P) communication coordinated by a single WiGig access point (AP). However, due to 60-GHz fragile channel, multiple number of WiGig APs should be installed to fully cover a typical target environment. Nevertheless, the exhaustive search beamforming training and the maximum received power-based autonomous users association prevent WiGig APs from establishing optimal WiGig concurrent links using random access. In this paper, we formulate the problem of WiGig concurrent transmissions in random access scenarios as an optimization problem, and then we propose a greedy scheme based on (2.4/5 GHz) Wi-Fi/(60 GHz) WiGig coordination to find out a suboptimal solution for it. In the proposed WLAN, the wide coverage Wi-Fi band is used to provide the control signalling required for launching the high date rate WiGig concurrent links. Besides, statistical learning using Wi-Fi fingerprinting is utilized to estimate the suboptimal candidate AP along with its suboptimal beam direction for establishing the WiGig concurrent link without causing interference to the existing WiGig data links while maximizing the total system throughput. Numerical analysis confirms the high impact of the proposed Wi-Fi/WiGig coordinated WLAN.", "title": "" } ]
scidocsrr
9a55767aba9c03100f383feb17188a74
Isolated Swiss-Forward Three-Phase Rectifier With Resonant Reset
[ { "docid": "ee6461f83cee5fdf409a130d2cfb1839", "text": "This paper introduces a novel three-phase buck-type unity power factor rectifier appropriate for high power Electric Vehicle battery charging mains interfaces. The characteristics of the converter, named the Swiss Rectifier, including the principle of operation, modulation strategy, suitable control structure, and dimensioning equations are described in detail. Additionally, the proposed rectifier is compared to a conventional 6-switch buck-type ac-dc power conversion. According to the results, the Swiss Rectifier is the topology of choice for a buck-type PFC. Finally, the feasibility of the Swiss Rectifier concept for buck-type rectifier applications is demonstrated by means of a hardware prototype.", "title": "" } ]
[ { "docid": "fe8f31db9c3e8cbe9d69e146c40abb49", "text": "BACKGROUND\nRegular physical activity (PA) can be beneficial to pregnant women, however, many women do not adhere to current PA guidelines during the antenatal period. Patient and public involvement is essential when designing antenatal PA interventions in order to uncover the reasons for non-adherence and non-engagement with the behaviour, as well as determining what type of intervention would be acceptable. The aim of this research was to explore women's experiences of PA during a recent pregnancy, understand the barriers and determinants of antenatal PA and explore the acceptability of antenatal walking groups for further development.\n\n\nMETHODS\nSeven focus groups were undertaken with women who had given birth within the past five years. Focus groups were transcribed and analysed using a grounded theory approach. Relevant and related behaviour change techniques (BCTs), which could be applied to future interventions, were identified using the BCT taxonomy.\n\n\nRESULTS\nWomen's opinions and experiences of PA during pregnancy were categorised into biological/physical (including tiredness and morning sickness), psychological (fear of harm to baby and self-confidence) and social/environmental issues (including access to facilities). Although antenatal walking groups did not appear popular, women identified some factors which could encourage attendance (e.g. childcare provision) and some which could discourage attendance (e.g. walking being boring). It was clear that the personality of the walk leader would be extremely important in encouraging women to join a walking group and keep attending. Behaviour change technique categories identified as potential intervention components included social support and comparison of outcomes (e.g. considering pros and cons of behaviour).\n\n\nCONCLUSIONS\nWomen's experiences and views provided a range of considerations for future intervention development, including provision of childcare, involvement of a fun and engaging leader and a range of activities rather than just walking. These experiences and views relate closely to the Health Action Process Model which, along with BCTs, could be used to develop future interventions. The findings of this study emphasise the importance of involving the target population in intervention development and present the theoretical foundation for building an antenatal PA intervention to encourage women to be physically active throughout their pregnancies.", "title": "" }, { "docid": "f6ba46b72139f61cfb098656d71553ed", "text": "This paper introduces the Voice Conversion Octave Toolbox made available to the public as open source. The first version of the toolbox features tools for VTLN-based voice conversion supporting a variety of warping functions. The authors describe the implemented functionality and how to configure the included tools.", "title": "" }, { "docid": "d92f9a08b608f895f004e69c7893f2f0", "text": "Although research has determined that reactive oxygen species (ROS) function as signaling molecules in plant development, the molecular mechanism by which ROS regulate plant growth is not well known. An aba overly sensitive mutant, abo8-1, which is defective in a pentatricopeptide repeat (PPR) protein responsible for the splicing of NAD4 intron 3 in mitochondrial complex I, accumulates more ROS in root tips than the wild type, and the ROS accumulation is further enhanced by ABA treatment. 
The ABO8 mutation reduces root meristem activity, which can be enhanced by ABA treatment and reversibly recovered by addition of certain concentrations of the reducing agent GSH. As indicated by low ProDR5:GUS expression, auxin accumulation/signaling was reduced in abo8-1. We also found that ABA inhibits the expression of PLETHORA1 (PLT1) and PLT2, and that root growth is more sensitive to ABA in the plt1 and plt2 mutants than in the wild type. The expression of PLT1 and PLT2 is significantly reduced in the abo8-1 mutant. Overexpression of PLT2 in an inducible system can largely rescue root apical meristem (RAM)-defective phenotype of abo8-1 with and without ABA treatment. These results suggest that ABA-promoted ROS in the mitochondria of root tips are important retrograde signals that regulate root meristem activity by controlling auxin accumulation/signaling and PLT expression in Arabidopsis.", "title": "" }, { "docid": "bc272e837f1071fabcc7056134bae784", "text": "Parental vaccine hesitancy is a growing problem affecting the health of children and the larger population. This article describes the evolution of the vaccine hesitancy movement and the individual, vaccine-specific and societal factors contributing to this phenomenon. In addition, potential strategies to mitigate the rising tide of parent vaccine reluctance and refusal are discussed.", "title": "" }, { "docid": "f55c9ef1e60afd326bebbb619452fd97", "text": "With the flourish of the Web, online review is becoming a more and more useful and important information resource for people. As a result, automatic review mining and summarization has become a hot research topic recently. Different from traditional text summarization, review mining and summarization aims at extracting the features on which the reviewers express their opinions and determining whether the opinions are positive or negative. In this paper, we focus on a specific domain - movie review. A multi-knowledge based approach is proposed, which integrates WordNet, statistical analysis and movie knowledge. The experimental results show the effectiveness of the proposed approach in movie review mining and summarization.", "title": "" }, { "docid": "42b6c55e48f58e3e894de84519cb6feb", "text": "What social value do Likes on Facebook hold? This research examines people’s attitudes and behaviors related to receiving one-click feedback in social media. Likes and other kinds of lightweight affirmation serve as social cues of acceptance and maintain interpersonal relationships, but may mean different things to different people. Through surveys and de-identified, aggregated behavioral Facebook data, we find that in general, people care more about who Likes their posts than how many Likes they receive, desiring feedback most from close friends, romantic partners, and family members other than their parents. While most people do not feel strongly that receiving “enough” Likes is important, roughly two-thirds of posters regularly receive more than “enough.” We also note a “Like paradox,” a phenomenon in which people’s friends receive more Likes because their friends have more friends to provide those Likes. Individuals with lower levels of self-esteem and higher levels of self-monitoring are more likely to think that Likes are important and to feel bad if they do not receive “enough” Likes. 
The results inform product design and our understanding of how lightweight interactions shape our experiences online.", "title": "" }, { "docid": "48fffb441a5e7f304554e6bdef6b659e", "text": "The massive accumulation of genome-sequences in public databases promoted the proliferation of genome-level phylogenetic analyses in many areas of biological research. However, due to diverse evolutionary and genetic processes, many loci have undesirable properties for phylogenetic reconstruction. These, if undetected, can result in erroneous or biased estimates, particularly when estimating species trees from concatenated datasets. To deal with these problems, we developed GET_PHYLOMARKERS, a pipeline designed to identify high-quality markers to estimate robust genome phylogenies from the orthologous clusters, or the pan-genome matrix (PGM), computed by GET_HOMOLOGUES. In the first context, a set of sequential filters are applied to exclude recombinant alignments and those producing anomalous or poorly resolved trees. Multiple sequence alignments and maximum likelihood (ML) phylogenies are computed in parallel on multi-core computers. A ML species tree is estimated from the concatenated set of top-ranking alignments at the DNA or protein levels, using either FastTree or IQ-TREE (IQT). The latter is used by default due to its superior performance revealed in an extensive benchmark analysis. In addition, parsimony and ML phylogenies can be estimated from the PGM. We demonstrate the practical utility of the software by analyzing 170 Stenotrophomonas genome sequences available in RefSeq and 10 new complete genomes of Mexican environmental S. maltophilia complex (Smc) isolates reported herein. A combination of core-genome and PGM analyses was used to revise the molecular systematics of the genus. An unsupervised learning approach that uses a goodness of clustering statistic identified 20 groups within the Smc at a core-genome average nucleotide identity (cgANIb) of 95.9% that are perfectly consistent with strongly supported clades on the core- and pan-genome trees. In addition, we identified 16 misclassified RefSeq genome sequences, 14 of them labeled as S. maltophilia, demonstrating the broad utility of the software for phylogenomics and geno-taxonomic studies. The code, a detailed manual and tutorials are freely available for Linux/UNIX servers under the GNU GPLv3 license at https://github.com/vinuesa/get_phylomarkers. A docker image bundling GET_PHYLOMARKERS with GET_HOMOLOGUES is available at https://hub.docker.com/r/csicunam/get_homologues/, which can be easily run on any platform.", "title": "" }, { "docid": "67136c5bd9277e0637393e9a131d7b53", "text": "BACKGROUND\nSynchronous written conversations (or \"chats\") are becoming increasingly popular as Web-based mental health interventions. Therefore, it is of utmost importance to evaluate and summarize the quality of these interventions.\n\n\nOBJECTIVE\nThe aim of this study was to review the current evidence for the feasibility and effectiveness of online one-on-one mental health interventions that use text-based synchronous chat.\n\n\nMETHODS\nA systematic search was conducted of the databases relevant to this area of research (Medical Literature Analysis and Retrieval System Online [MEDLINE], PsycINFO, Central, Scopus, EMBASE, Web of Science, IEEE, and ACM). There were no specific selection criteria relating to the participant group. 
Studies were included if they reported interventions with individual text-based synchronous conversations (ie, chat or text messaging) and a psychological outcome measure.\n\n\nRESULTS\nA total of 24 articles were included in this review. Interventions included a wide range of mental health targets (eg, anxiety, distress, depression, eating disorders, and addiction) and intervention design. Overall, compared with the waitlist (WL) condition, studies showed significant and sustained improvements in mental health outcomes following synchronous text-based intervention, and post treatment improvement equivalent but not superior to treatment as usual (TAU) (eg, face-to-face and telephone counseling).\n\n\nCONCLUSIONS\nFeasibility studies indicate substantial innovation in this area of mental health intervention with studies utilizing trained volunteers and chatbot technologies to deliver interventions. While studies of efficacy show positive post-intervention gains, further research is needed to determine whether time requirements for this mode of intervention are feasible in clinical practice.", "title": "" }, { "docid": "8f0b7554ff0d9f6bf0d1cf8579dc2893", "text": "Recent advances in Convolutional Neural Networks (CNNs) have obtained promising results in difficult deep learning tasks. However, the success of a CNN depends on finding an architecture to fit a given problem. A hand-crafted architecture is a challenging, time-consuming process that requires expert knowledge and effort, due to a large number of architectural design choices. In this article, we present an efficient framework that automatically designs a high-performing CNN architecture for a given problem. In this framework, we introduce a new optimization objective function that combines the error rate and the information learnt by a set of feature maps using deconvolutional networks (deconvnet). The new objective function allows the hyperparameters of the CNN architecture to be optimized in a way that enhances the performance by guiding the CNN through better visualization of learnt features via deconvnet. The actual optimization of the objective function is carried out via the Nelder-Mead Method (NMM). Further, our new objective function results in much faster convergence towards a better architecture. The proposed framework has the ability to explore a CNN architecture’s numerous design choices in an efficient way and also allows effective, distributed execution and synchronization via web services. Empirically, we demonstrate that the CNN architecture designed with our approach outperforms several existing approaches in terms of its error rate. Our results are also competitive with state-of-the-art results on the MNIST dataset and perform reasonably against the state-of-the-art results on CIFAR-10 and CIFAR-100 datasets. Our approach has a significant role in increasing the depth, reducing the size of strides, and constraining some convolutional layers not followed by pooling layers in order to find a CNN architecture that produces a high recognition performance.", "title": "" }, { "docid": "ccf7390abc2924e4d2136a2b82639115", "text": "The proposition of increased innovation in network applications and reduced cost for network operators has won over the networking world to the vision of software-defined networking (SDN). With the excitement of holistic visibility across the network and the ability to program network devices, developers have rushed to present a range of new SDN-compliant hardware, software, and services. 
However, amidst this frenzy of activity, one key element has only recently entered the debate: Network Security. In this paper, security in SDN is surveyed presenting both the research community and industry advances in this area. The challenges to securing the network from the persistent attacker are discussed, and the holistic approach to the security architecture that is required for SDN is described. Future research directions that will be key to providing network security in SDN are identified.", "title": "" }, { "docid": "e34815efa68cb1b7a269e436c838253d", "text": "A new mobile robot prototype for inspection of overhead transmission lines is proposed. The mobile platform is composed of 3 arms. And there is a motorized rubber wheel on the end of each arm. On the two end arms, a gripper is designed to clamp firmly onto the conductors from below to secure the robot. Each arm has a motor to achieve 2 degrees of freedom which is realized by moving along a curve. It could roll over some obstacles (compression splices, vibration dampers, etc). And the robot could clear other types of obstacles (spacers, suspension clamps, etc).", "title": "" }, { "docid": "e45c921effd9b5026f34ff738b63c48c", "text": "We consider the problem of weakly supervised learning for object localization. Given a collection of images with image-level annotations indicating the presence/absence of an object, our goal is to localize the object in each image. We propose a neural network architecture called the attention network for this problem. Given a set of candidate regions in an image, the attention network first computes an attention score on each candidate region in the image. Then these candidate regions are combined together with their attention scores to form a whole-image feature vector. This feature vector is used for classifying the image. The object localization is implicitly achieved via the attention scores on candidate regions. We demonstrate that our approach achieves superior performance on several benchmark datasets.", "title": "" }, { "docid": "db2553268fc3ccaddc3ec7077514655c", "text": "Aspect extraction is a task to abstract the common properties of objects from corpora discussing them, such as reviews of products. Recent work on aspect extraction is leveraging the hierarchical relationship between products and their categories. However, such effort focuses on the aspects of child categories but ignores those from parent categories. Hence, we propose an LDA-based generative topic model inducing the two-layer categorical information (CAT-LDA), to balance the aspects of both a parent category and its child categories. Our hypothesis is that child categories inherit aspects from parent categories, controlled by the hierarchy between them. Experimental results on 5 categories of Amazon.com products show that both common aspects of parent category and the individual aspects of subcategories can be extracted to align well with the common sense. We further evaluate the manually extracted aspects of 16 products, resulting in an average hit rate of 79.10%.", "title": "" }, { "docid": "6e07085f81dc4f6892e0f2aba7a8dcdd", "text": "With the rapid growth in the number of spiraling network users and the increase in the use of communication technologies, the multi-server environment is the most common environment for widely deployed applications. Reddy et al. 
recently showed that Lu et al.'s biometric-based authentication scheme for multi-server environment was insecure, and presented a new authentication and key-agreement scheme for the multi-server environment. Reddy et al. continued to assert that their scheme was more secure and practical. After a careful analysis, however, their scheme still has vulnerabilities to well-known attacks. In this paper, the vulnerabilities of Reddy et al.'s scheme such as the privileged insider and user impersonation attacks are demonstrated. A proposal is then presented of a new biometric-based user authentication scheme for a key agreement and multi-server environment. Lastly, the authors demonstrate that the proposed scheme is more secure using widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool, and that it serves to satisfy all of the required security properties.", "title": "" }, { "docid": "b5b7bef8ec2d38bb2821dc380a3a49bf", "text": "Maternal uniparental disomy (UPD) 7 is found in approximately 5% of patients with Silver-Russell syndrome. By a descriptive and comparative clinical analysis of all published cases (more than 60 to date) their phenotype is updated and compared with the clinical findings in patients with Silver-Russell syndrome (SRS) of either unexplained etiology or epimutations of the imprinting center region 1 (ICR1) on 11p15. The higher frequency of relative macrocephaly and high forehead/frontal bossing makes the face of patients with epimutations of the ICR1 on 11p15 more distinctive than the face of cases with SRS of unexplained etiology or maternal UPD 7. Because of the distinct micrognathia in the latter, their triangular facial gestalt is more pronounced than in the other groups. However, solely by clinical findings patients with maternal UPD 7 cannot be discriminated unambiguously from patients with epimutations of the ICR1 on 11p15 or SRS of unexplained etiology. Therefore, both loss of methylation of the ICR1 on 11p15 and maternal UPD 7 should be investigated for if SRS is suspected.", "title": "" }, { "docid": "82779e315cf982b56ed14396603ae251", "text": "The selection of drain current, inversion coefficient, and channel length for each MOS device in an analog circuit results in significant tradeoffs in performance. The selection of inversion coefficient, which is a numerical measure of MOS inversion, enables design freely in weak, moderate, and strong inversion and facilitates optimum design. Here, channel width required for layout is easily found and implicitly considered in performance expressions. This paper gives hand expressions motivated by the EKV MOS model and measured data for MOS device performance, inclusive of velocity saturation and other small-geometry effects. A simple spreadsheet tool is then used to predict MOS device performance and map this into complete circuit performance. Tradeoffs and optimization of performance are illustrated by the design of three 0.18-µm CMOS operational transconductance amplifiers optimized for DC, balanced, and AC performance. Measured performance shows significant tradeoffs in voltage gain, output resistance, transconductance bandwidth, input-referred flicker noise and offset voltage, and layout area.", "title": "" }, { "docid": "b49a8894277278256b6c1430bb4e4a91", "text": "In the past years, several support vector machines (SVM) novelty detection approaches have been applied in the network intrusion detection field. 
The main advantage of these approaches is that they can characterize normal traffic even when trained with datasets containing not only normal traffic but also a number of attacks. Unfortunately, these algorithms seem to be accurate only when the normal traffic vastly outnumbers the number of attacks present in the dataset, a situation which cannot always be assumed to hold. This work presents an approach for autonomous labeling of normal traffic as a way of dealing with situations where class distribution does not present the imbalance required for SVM algorithms. In this case, the autonomous labeling process is made by SNORT, a misuse-based intrusion detection system. Experiments conducted on the 1998 DARPA dataset show that the use of the proposed autonomous labeling approach not only outperforms existing SVM alternatives but also, under some attack distributions, obtains improvements over SNORT itself.", "title": "" }, { "docid": "4d5e8e1c8942256088f1c5ef0e122c9f", "text": "Cybercrime and cybercriminal activities continue to impact communities as the steady growth of electronic information systems enables more online business. The collective views of sixty-six computer users and organizations that have an exposure to cybercrime were analyzed using concept analysis and mapping techniques in order to identify the major issues and areas of concern, and provide useful advice. The findings of the study show that a range of computing stakeholders have genuine concerns about the frequency of information security breaches and malware incursions (including the emergence of dangerous security and detection avoiding malware), the need for e-security awareness and education, the roles played by law and law enforcement, and the installation of current security software and systems. While not necessarily criminal in nature, some stakeholders also expressed deep concerns over the use of computers for cyberbullying, particularly where younger and school aged users are involved. The government's future directions and recommendations for the technical and administrative management of cybercriminal activity were generally observed to be consistent with stakeholder concerns, with some users also taking practical steps to reduce cybercrime risks.", "title": "" }, { "docid": "b23e141ca479abecab2b00f13141b9b3", "text": "The prediction of movement time in human-computer interfaces as undertaken using Fitts' law is reviewed. Techniques for model building are summarized and three refinements to improve the theoretical and empirical accuracy of the law are presented. Refinements include (1) the Shannon formulation for the index of task difficulty, (2) new interpretations of "target width" for two- and three-dimensional tasks, and (3) a technique for normalizing error rates across experimental factors. Finally, a detailed application example is developed showing the potential of Fitts' law to predict and compare the performance of user interfaces before designs are finalized.", "title": "" }, { "docid": "c034cb6e72bc023a60b54d0f8316045a", "text": "This thesis presents the design, implementation, and validation of a system that enables a micro air vehicle to autonomously explore and map unstructured and unknown indoor environments. Such a vehicle would be of considerable use in many real-world applications such as search and rescue, civil engineering inspection, and a host of military tasks where it is dangerous or difficult to send people. 
While mapping and exploration capabilities are common for ground vehicles today, air vehicles seeking to achieve these capabilities face unique challenges. While there has been recent progress toward sensing, control, and navigation suites for GPS-denied flight, there have been few demonstrations of stable, goal-directed flight in real environments. The main focus of this research is the development of real-time state estimation techniques that allow our quadrotor helicopter to fly autonomously in indoor, GPS-denied environments. Accomplishing this feat required the development of a large integrated system that brought together many components into a cohesive package. As such, the primary contribution is the development of the complete working system. I show experimental results that illustrate the MAV's ability to navigate accurately in unknown environments, and demonstrate that our algorithms enable the MAV to operate autonomously in a variety of indoor environments. Thesis Supervisor: Nicholas Roy Title: Associate Professor of Aeronautics and Astronautics", "title": "" } ]
scidocsrr
a16f0041754899e1f6101f7b8a5d82a6
Agile Software Development Methodologies and Practices
[ { "docid": "2e9b2eccefe56b9cbf8d5793cc3f1cbb", "text": "This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertisebased and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.", "title": "" } ]
[ { "docid": "19f4100f2e1d5655edca03a269adf79a", "text": "OBJECTIVES\nTo assess the influence of conventional glass ionomer cement (GIC) vs resin-modified GIC (RMGIC) as a base material for novel, super-closed sandwich restorations (SCSR) and its effect on shrinkage-induced crack propensity and in vitro accelerated fatigue resistance.\n\n\nMETHODS\nA standardized MOD slot-type tooth preparation was applied to 30 extracted maxillary molars (5 mm depth/5 mm buccolingual width). A modified sandwich restoration was used, in which the enamel/dentin bonding agent was applied first (Optibond FL, Kerr), followed by a Ketac Molar (3M ESPE) (group KM, n = 15) or Fuji II LC (GC) (group FJ, n = 15) base, leaving 2 mm for composite resin material (Miris 2, Coltène-Whaledent). Shrinkage-induced enamel cracks were tracked with photography and transillumination. Samples were loaded until fracture or to a maximum of 185,000 cycles under isometric chewing (5 Hz), starting with a load of 200 N (5,000 X), followed by stages of 400, 600, 800, 1,000, 1,200, and 1,400 N at a maximum of 30,000 X each. Groups were compared using the life table survival analysis (α = .008, Bonferroni method).\n\n\nRESULTS\nGroup FJ showed the highest survival rate (40% intact specimens) but did not differ from group KM (20%) or traditional direct restorations (13%, previous data). SCSR generated fewer shrinkage-induced cracks. Most failures were re-restorable (above the cementoenamel junction [CEJ]).\n\n\nCONCLUSIONS\nInclusion of GIC/RMGIC bases under large direct SCSRs does not affect their fatigue strength but tends to decrease the shrinkage-induced crack propensity.\n\n\nCLINICAL SIGNIFICANCE\nThe use of GIC/RMGIC bases and the SCSR is an easy way to minimize polymerization shrinkage stress in large MOD defects without weakening the restoration.", "title": "" }, { "docid": "4cb25adf48328e1e9d871940a97fdff2", "text": "This article is concerned with parameter identification problems and computer modeling of the thrust generation subsystem of small quadrotor-type unmanned aerial vehicles (UAVs). An approach for generating a computer model of the dynamic process of the thrust generation subsystem, which consists of a fixed-pitch propeller, an EC motor and a power amplifier, is considered. Because obtaining the aerodynamic characteristics of the propeller analytically is quite time-consuming, and because the subsystem consists not only of the propeller but also of the motor and a power converter with a microcontroller control system whose operating algorithm is not always available from the manufacturer, a trusted computer model of the thrust generation subsystem cannot be obtained by a purely analytical approach. Identification of the system under investigation is therefore performed from a "black box" perspective, with a known qualitative description of the dynamic processes taking place within it. For parameter identification of the subsystem, a special laboratory rig, described in this paper, was designed.", "title": "" }, { "docid": "88804c0fb16e507007983108811950dc", "text": "We propose a neural probabilistic structured-prediction method for transition-based natural language processing, which integrates beam search and contrastive learning. The method uses a global optimization model, which can leverage arbitrary features over nonlocal context. Beam search is used for efficient heuristic decoding, and contrastive learning is performed for adjusting the model according to search errors. 
When evaluated on both chunking and dependency parsing tasks, the proposed method achieves significant accuracy improvements over the locally normalized greedy baseline on the two tasks, respectively.", "title": "" }, { "docid": "0513ce3971cb0e438598ea6766be19ff", "text": "This paper proposes two interference mitigation strategies that adjust the maximum transmit power of femtocell users to suppress the cross-tier interference at a macrocell base station (BS). The open-loop and the closed-loop control suppress the cross-tier interference less than a fixed threshold and an adaptive threshold based on the noise and interference (NI) level at the macrocell BS, respectively. Simulation results show that both schemes effectively compensate the uplink throughput degradation of the macrocell BS due to the cross-tier interference and that the closed-loop control provides better femtocell throughput than the open-loop control at a minimal cost of macrocell throughput.", "title": "" }, { "docid": "5e5e2d038ae29b4c79c79abe3d20ae40", "text": "Article history: Received 28 February 2013 Accepted 26 July 2013 Available online 11 October 2013 Fault diagnosis of Discrete Event Systems has become an active research area in recent years. The research activity in this area is driven by the needs of many different application domains such as manufacturing, process control, control systems, transportation, communication networks, software engineering, and others. The aim of this paper is to review the state-of the art of methods and techniques for fault diagnosis of Discrete Event Systems based on models that include faulty behaviour. Theoretical and practical issues related to model description tools, diagnosis processing structure, sensor selection, fault representation and inference are discussed. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d3f43eef5e36eb7b078b010482bdb115", "text": "This study is aimed at constructing a correlative model between Internet addiction and mobile phone addiction; the aim is to analyse the correlation (if any) between the two traits and to discuss the influence confirming that the gender has difference on this fascinating topic; taking gender into account opens a new world of scientific study to us. The study collected 448 college students on an island as study subjects, with 61.2% males and 38.8% females. Moreover, this study issued Mobile Phone Addiction Scale and Internet Addiction Scale to conduct surveys on the participants and adopts the structural equation model (SEM) to process the collected data. According to the study result, (1) mobile phone addiction and Internet addiction are positively related; (2) female college students score higher than male ones in the aspect of mobile addiction. Lastly, this study proposes relevant suggestions to serve as a reference for schools, college students, and future studies based on the study results.", "title": "" }, { "docid": "a66b5b6dea68e5460b227af4caa14ef3", "text": "This paper will discuss and compare event representations across a variety of types of event annotation: Rich Entities, Relations, and Events (Rich ERE), Light Entities, Relations, and Events (Light ERE), Event Nugget (EN), Event Argument Extraction (EAE), Richer Event Descriptions (RED), and Event-Event Relations (EER). Comparisons of event representations are presented, along with a comparison of data annotated according to each event representation. 
An event annotation experiment is also discussed, including annotation for all of these representations on the same set of sample data, with the purpose of being able to compare actual annotation across all of these approaches as directly as possible. We walk through a brief example to illustrate the various annotation approaches, and to show the intersections among the various annotated data sets.", "title": "" }, { "docid": "37d3bf208ee4e513a809fa94f93a2654", "text": "Unplanned use of fertilizers leads to inferior quality of crops. Excess of one nutrient can make it difficult for the plant to absorb the other nutrients. To deal with this problem, the quality of soil is tested using a PH sensor that indicates the percentage of macronutrients present in the soil. Conventional methods used to test soil quality, involve the use of Ion Selective Field Effect Transistors (ISFET), Ion Selective Electrode (ISE) and Optical Sensors as the sensing units which were found to be very expensive. The prototype design will allow sprinkling of fertilizers to take place in zones which are deficient in these macronutrients (Nitrogen, Phosphorous and Potassium), proving it to be a cost efficient and farmer-friendly automated fertilization unit. Cost of the proposed unit is found to be one-seventh of that of the present methods, making it affordable for farmers and also saves the manual labor. Initial analysis and intensive case studies conducted in farmland situated near Ambedkar Nagar, Sarjapur also revealed the use of above mechanism to be more prominent and verified through practical implementation and experimentation as it takes lesser time to analyze the nutrient content than the other methods which require soil testing. Sprinklers cover discrete zones in the field that automate fertilization and reduce the effort of farmers in the rural areas. This novel technique also has a fast response time as it enables real time, in-situ soil nutrient analysis, thereby maintaining proper soil pH level required for a particular crop, reducing potentially negative environmental impacts.", "title": "" }, { "docid": "20cbfe9c1d20bfd67bbcbf39641aa69a", "text": "The CIPS-SIGHAN CLP 2010 Chinese Word Segmentation Bakeoff was held in the summer of 2010 to evaluate the current state of the art in word segmentation. It focused on the crossdomain performance of Chinese word segmentation algorithms. Eighteen groups submitted 128 results over two tracks (open training and closed training), four domains (literature, computer science, medicine and finance) and two subtasks (simplified Chinese and traditional Chinese). We found that compared with the previous Chinese word segmentation bakeoffs, the performance of cross-domain Chinese word segmentation is not much lower, and the out-of-vocabulary recall is improved.", "title": "" }, { "docid": "080032ded41edee2a26320e3b2afb123", "text": "The aim of this study was to evaluate the effects of calisthenic exercises on psychological status in patients with ankylosing spondylitis (AS) and multiple sclerosis (MS). This study comprised 40 patients diagnosed with AS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based) and 40 patients diagnosed with MS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based). The exercise programme was completed by 73 participants (hospital-based = 34, home-based = 39). Mean age was 33.75 ± 5.77 years. 
After the 8-week exercise programme in the AS group, the home-based exercise group showed significant improvements in erythrocyte sedimentation rates (ESR). The hospital-based exercise group showed significant improvements in terms of the Bath AS Metrology Index (BASMI) and Hospital Anxiety and Depression Scale-Anxiety (HADS-A) scores. After the 8-week exercise programme in the MS group, the home-based and hospital-based exercise groups showed significant improvements in terms of the 10-m walking test, Berg Balance Scale (BBS), HADS-A, and MS international Quality of Life (MusiQoL) scores. There was a significant improvement in the hospital-based and a significant deterioration in the home-based MS patients according to HADS-Depression (HADS-D) score. The positive effects of exercises on neurologic and rheumatic chronic inflammatory processes associated with disability should not be underestimated. Ziel der vorliegenden Studie war die Untersuchung der Wirkungen von gymnastischen Übungen auf die psychische Verfassung von Patienten mit Spondylitis ankylosans (AS) und multipler Sklerose (MS). Die Studie umfasste 40 Patienten mit der Diagnose AS, die randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant), und 40 Patienten mit der Diagnose MS, die ebenfalls randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant). Vollständig absolviert wurde das Übungsprogramm von 73 Patienten (stationär: 34, ambulant: 39). Das Durchschnittsalter betrug 33,75 ± 5,77 Jahre. Nach dem 8-wöchigen Übungsprogramm in der AS-Gruppe zeigten sich bei der ambulanten Übungsgruppe signifikante Verbesserungen bei der Blutsenkungsgeschwindigkeit (BSG). Die stationäre Übungsgruppe wies signifikante Verbesserungen in Bezug auf den BASMI-Score (Bath AS Metrology Index) und den HADS-A-Score (Hospital Anxiety and Depression Scale-Anxiety) auf. Nach dem 8-wöchigen Übungsprogramm in der MS-Gruppe zeigten sich sowohl in der ambulanten als auch in der stationären Übungsgruppe signifikante Verbesserungen hinsichtlich des 10-m-Gehtests, des BBS-Ergebnisses (Berg Balance Scale), des HADS-A- sowie des MusiQoL-Scores (MS international Quality of Life). Beim HADS-D-Score (HADS-Depression) bestand eine signifikante Verbesserung bei den stationären und eine signifikante Verschlechterung bei den ambulanten MS-Patienten. Die positiven Wirkungen von gymnastischen Übungen auf neurologische und rheumatische chronisch entzündliche Prozesse mit Behinderung sollten nicht unterschätzt werden.", "title": "" }, { "docid": "af11d259a031d22f7ee595ee2a250136", "text": "Cellular networks today are designed for and operate in dedicated licensed spectrum. At the same time there are other spectrum usage authorization models for wireless communication, such as unlicensed spectrum or, as widely discussed currently but not yet implemented in practice, various forms of licensed shared spectrum. Hence, cellular technology as of today can only operate in a subset of the spectrum that is in principle available. Hence, a future wireless system may benefit from the ability to access also spectrum opportunities other than dedicated licensed spectrum. It is therefore important to identify which additional ways of authorizing spectrum usage are deemed to become relevant in the future and to analyze the resulting technical requirements. The implications of sharing spectrum between different technologies are analyzed in this paper, both from efficiency and technology neutrality perspective. 
Different known sharing techniques are outlined and their applicability to the relevant range of future spectrum regulatory regimes is discussed. Based on an assumed range of relevant (according to the views of the authors) future spectrum sharing scenarios, a toolbox of certain spectrum sharing techniques is proposed as the basis for the design of spectrum sharing related functionality in future mobile broadband systems.", "title": "" }, { "docid": "10d41334c88039e9d85ce6eb93cb9abf", "text": "nonlinear functional analysis and its applications iii variational methods and optimization PDF remote sensing second edition models and methods for image processing PDF remote sensing third edition models and methods for image processing PDF guide to signals and patterns in image processing foundations methods and applications PDF introduction to image processing and analysis PDF principles of digital image processing advanced methods undergraduate topics in computer science PDF image processing analysis and machine vision PDF image acquisition and processing with labview image processing series PDF wavelet transform techniques for image resolution PDF sparse image and signal processing wavelets and related geometric multiscale analysis PDF nonstandard methods in stochastic analysis and mathematical physics dover books on mathematics PDF solution manual wavelet tour of signal processing PDF remote sensing image fusion signal and image processing of earth observations PDF image understanding using sparse representations synthesis lectures on image video and multimedia processing PDF", "title": "" }, { "docid": "d763947e969ade3c54c18f0b792a0f7b", "text": "Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal. Information theory provides alternative quantization strategies, but they come at the cost of much greater estimation complexity.", "title": "" }, { "docid": "bc6cbf7da118c01d74914d58a71157ac", "text": "Currently, there are increasing interests in text-to-speech (TTS) synthesis to use sequence-to-sequence models with attention. These models are end-to-end meaning that they learn both co-articulation and duration properties directly from text and speech. Since these models are entirely data-driven, they need large amounts of data to generate synthetic speech with good quality. However, in challenging speaking styles, such as Lombard speech, it is difficult to record sufficiently large speech corpora. Therefore, in this study we propose a transfer learning method to adapt a sequence-to-sequence based TTS system of normal speaking style to Lombard style. Moreover, we experiment with a WaveNet vocoder in synthesis of Lombard speech. We conducted subjective evaluations to assess the performance of the adapted TTS systems. 
The subjective evaluation results indicated that an adaptation system with the WaveNet vocoder clearly outperformed the conventional deep neural network based TTS system in synthesis of Lombard speech.", "title": "" }, { "docid": "3a2729b235884bddc05dbdcb6a1c8fc9", "text": "The people of Tumaco-La Tolita culture inhabited the borders of present-day Colombia and Ecuador. Already extinct by the time of the Spaniards arrival, they left a huge collection of pottery artifacts depicting everyday life; among these, disease representations were frequently crafted. In this article, we present the results of the personal examination of the largest collections of Tumaco-La Tolita pottery in Colombia and Ecuador; cases of Down syndrome, achondroplasia, mucopolysaccharidosis I H, mucopolysaccharidosis IV, a tumor of the face and a benign tumor in an old woman were found. We believe these to be among the earliest artistic representations of disease.", "title": "" }, { "docid": "950a6a611f1ceceeec49534c939b4e0f", "text": "Often signals and system parameters are most conveniently represented as complex-valued vectors. This occurs, for example, in array processing [1], as well as in communication systems [7] when processing narrowband signals using the equivalent complex baseband representation [2]. Furthermore, in many important applications one attempts to optimize a scalar real-valued measure of performance over the complex parameters defining the signal or system of interest. This is the case, for example, in LMS adaptive filtering where complex filter coefficients are adapted on line. To effect this adaption one attempts to optimize the performance measure by adjustments of the coefficients along its gradient direction [16, 23].", "title": "" }, { "docid": "a3ac978e59bdedc18c45d460dd8fc154", "text": "Searching for information in distributed ledgers is currently not an easy task, as information relating to an entity may be scattered throughout the ledger with no index. As distributed ledger technologies become more established, they will increasingly be used to represent real world transactions involving many parties and the search requirements will grow. An index providing the ability to search using domain specific terms across multiple ledgers will greatly enhance to power, usability and scope of these systems. We have implemented a semantic index to the Ethereum blockchain platform, to expose distributed ledger data as Linked Data. As well as indexing blockand transactionlevel data according to the BLONDiE ontology, we have mapped smart contracts to the Minimal Service Model ontology, to take the first steps towards connecting smart contracts with Semantic Web Services.", "title": "" }, { "docid": "0feae39f7e557a65699f686d14f4cf0f", "text": "This paper describes the design of a multi-gigabit fiber-optic receiver with integrated large-area photo detectors for plastic optical fiber applications. An integrated 250 μm diameter non-SML NW/P-sub photo detector is adopted to allow efficient light coupling. The theory of applying a fully-differential pre-amplifier with a single-ended photo current is also examined and a super-Gm transimpedance amplifier has been proposed to drive a C PD of 14 pF to multi-gigahertz frequency. Both differential and common-mode operations of the proposed super-Gm transimpedance amplifier have been analyzed and a differential noise analysis is performed. 
A digitally-controlled linear equalizer is proposed to produce a slow-rising-slope frequency response to compensate for the photo detector up to 3 GHz. The proposed POF receiver consists of an illuminated signal photo detector, a shielded dummy photo detector, a super-Gm transimpedance amplifier, a variable-gain amplifier, a linear equalizer, a post amplifier, and an output driver. A test chip is fabricated in TSMC's 65 nm low-power CMOS process, and it consumes 50 mW of DC power (excluding the output driver) from a single 1.2 V supply. A bit-error rate of less than 10-12 has been measured at a data rate of 3.125 Gbps with a 670 nm VCSEL-based electro-optical transmitter.", "title": "" }, { "docid": "5b6d68984b4f9a6e0f94e0a68768dc8c", "text": "In this paper, we focus on a major internet problem which is a huge amount of uncategorized text. We review existing techniques used for feature selection and categorization. After reviewing the existing literature, it was found that there exist some gaps in existing algorithms, one of which is a requirement of the labeled dataset for the training of the classifier. Keywords— Bayesian; KNN; PCA; SVM; TF-IDF", "title": "" }, { "docid": "6459493643eb7ff011fa0d8873382911", "text": "This paper is about the effectiveness of qualitative easing; a government policy that is designed to mitigate risk through central bank purchases of privately held risky assets and their replacement by government debt, with a return that is guaranteed by the taxpayer. Policies of this kind have recently been carried out by national central banks, backed by implicit guarantees from national treasuries. I construct a general equilibrium model where agents have rational expectations and there is a complete set of financial securities, but where agents are unable to participate in financial markets that open before they are born. I show that a change in the asset composition of the central bank’s balance sheet will change equilibrium asset prices. Further, I prove that a policy in which the central bank stabilizes fluctuations in the stock market is Pareto improving and is costless to implement.", "title": "" } ]
scidocsrr
d2e63cfca2fea6b2e02ea3e37e6d077a
BLACKLISTED SPEAKER IDENTIFICATION USING TRIPLET NEURAL NETWORKS
[ { "docid": "c9ecb6ac5417b5fea04e5371e4250361", "text": "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by Wang et al. (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.", "title": "" } ]
[ { "docid": "3d5fb6eff6d0d63c17ef69c8130d7a77", "text": "A new measure of event-related brain dynamics, the event-related spectral perturbation (ERSP), is introduced to study event-related dynamics of the EEG spectrum induced by, but not phase-locked to, the onset of the auditory stimuli. The ERSP reveals aspects of event-related brain dynamics not contained in the ERP average of the same response epochs. Twenty-eight subjects participated in daily auditory evoked response experiments during a 4 day study of the effects of 24 h free-field exposure to intermittent trains of 89 dB low frequency tones. During evoked response testing, the same tones were presented through headphones in random order at 5 sec intervals. No significant changes in behavioral thresholds occurred during or after free-field exposure. ERSPs induced by target pips presented in some inter-tone intervals were larger than, but shared common features with, ERSPs induced by the tones, most prominently a ridge of augmented EEG amplitude from 11 to 18 Hz, peaking 1-1.5 sec after stimulus onset. Following 3-11 h of free-field exposure, this feature was significantly smaller in tone-induced ERSPs; target-induced ERSPs were not similarly affected. These results, therefore, document systematic effects of exposure to intermittent tones on EEG brain dynamics even in the absence of changes in auditory thresholds.", "title": "" }, { "docid": "bea412d20a95c853fe06e7640acb9158", "text": "We propose a novel approach to synthesizing images that are effective for training object detectors. Starting from a small set of real images, our algorithm estimates the rendering parameters required to synthesize similar images given a coarse 3D model of the target object. These parameters can then be reused to generate an unlimited number of training images of the object of interest in arbitrary 3D poses, which can then be used to increase classification performances. A key insight of our approach is that the synthetically generated images should be similar to real images, not in terms of image quality, but rather in terms of features used during the detector training. We show in the context of drone, plane, and car detection that using such synthetically generated images yields significantly better performances than simply perturbing real images or even synthesizing images in such way that they look very realistic, as is often done when only limited amounts of training data are available. 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "779d5380c72827043111d00510e32bfd", "text": "OBJECTIVE\nThe purpose of this review is 2-fold. The first is to provide a review for physiatrists already providing care for women with musculoskeletal pelvic floor pain and a resource for physiatrists who are interested in expanding their practice to include this patient population. The second is to describe how musculoskeletal dysfunctions involving the pelvic floor can be approached by the physiatrist using the same principles used to evaluate and treat others dysfunctions in the musculoskeletal system. This discussion clarifies that evaluation and treatment of pelvic floor pain of musculoskeletal origin is within the scope of practice for physiatrists. The authors review the anatomy of the pelvic floor, including the bony pelvis and joints, muscle and fascia, and the peripheral and autonomic nervous systems. Pertinent history and physical examination findings are described. 
The review concludes with a discussion of differential diagnosis and treatment of musculoskeletal pelvic floor pain in women. Improved recognition of pelvic floor dysfunction by healthcare providers will reduce impairment and disability for women with pelvic floor pain. A physiatrist is in the unique position to treat the musculoskeletal causes of this condition because it requires an expert grasp of anatomy, function, and the linked relationship between the spine and pelvis. Further research regarding musculoskeletal causes and treatment of pelvic floor pain will help validate these concepts and improve awareness and care for women limited by this condition.", "title": "" }, { "docid": "337b03633afacc96b443880ad996f013", "text": "Mobile security becomes a hot topic recently, especially in mobile payment and privacy data fields. Traditional solution can't keep a good balance between convenience and security. Against this background, a dual OS security solution named Trusted Execution Environment (TEE) is proposed and implemented by many institutions and companies. However, it raised TEE fragmentation and control problem. Addressing this issue, a mobile security infrastructure named Trusted Execution Environment Integration (TEEI) is presented to integrate multiple different TEEs. By using Trusted Virtual Machine (TVM) tech-nology, TEEI allows multiple TEEs running on one secure world on one mobile device at the same time and isolates them safely. Furthermore, a Virtual Network protocol is proposed to enable communication and cooperation among TEEs which includes TEE on TVM and TEE on SE. At last, a SOA-like Internal Trusted Service (ITS) framework is given to facilitate the development and maintenance of TEEs.", "title": "" }, { "docid": "452f71b953ddffad88cec65a4c7fbece", "text": "The password based authorization scheme for all available security systems can effortlessly be hacked by the hacker or a malicious user. One might not be able to guarantee that the person who is using the password is authentic or not. Only biometric systems are one which make offered automated authentication. There are very exceptional chances of losing the biometric identity, only if the accident of an individual may persists. Footprint based biometric system has been evaluated so far. In this paper a number of approaches of footprint recognition have been deliberated. General Terms Biometric pattern recognition, Image processing.", "title": "" }, { "docid": "8183fe0c103e2ddcab5b35549ed8629f", "text": "The performance of Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) (i.e. Douglas-Rachford splitting on the dual problem) are sensitive to conditioning of the problem data. For a restricted class of problems that enjoy a linear rate of convergence, we show in this paper how to precondition the optimization data to optimize a bound on that rate. We also generalize the preconditioning methods to problems that do not satisfy all assumptions needed to guarantee a linear convergence. The efficiency of the proposed preconditioning is confirmed in a numerical example, where improvements of more than one order of magnitude are observed compared to when no preconditioning is used.", "title": "" }, { "docid": "f4aa06f7782a22eeb5f30d0ad27eaff9", "text": "Friction effects are particularly critical for industrial robots, since they can induce large positioning errors, stick-slip motions, and limit cycles. 
This paper offers a reasoned overview of the main friction compensation techniques that have been developed in the last years, regrouping them according to the adopted kind of control strategy. Some experimental results are reported, to show how the control performances can be affected not only by the chosen method, but also by the characteristics of the available robotic architecture and of the executed task.", "title": "" }, { "docid": "6f9ae554513bba3c685f86909e31645f", "text": "Triboelectric energy harvesting has been applied to various fields, from large-scale power generation to small electronics. Triboelectric energy is generated when certain materials come into frictional contact, e.g., static electricity from rubbing a shoe on a carpet. In particular, textile-based triboelectric energy-harvesting technologies are one of the most promising approaches because they are not only flexible, light, and comfortable but also wearable. Most previous textile-based triboelectric generators (TEGs) generate energy by vertically pressing and rubbing something. However, we propose a corrugated textile-based triboelectric generator (CT-TEG) that can generate energy by stretching. Moreover, the CT-TEG is sewn into a corrugated structure that contains an effective air gap without additional spacers. The resulting CT-TEG can generate considerable energy from various deformations, not only by pressing and rubbing but also by stretching. The maximum output performances of the CT-TEG can reach up to 28.13 V and 2.71 μA with stretching and releasing motions. Additionally, we demonstrate the generation of sufficient energy from various activities of a human body to power about 54 LEDs. These results demonstrate the potential application of CT-TEGs for self-powered systems.", "title": "" }, { "docid": "719c945e9f45371f8422648e0e81178f", "text": "As technology in the cloud increases, there has been a lot of improvements in the maturity and firmness of cloud storage technologies. Many end-users and IT managers are getting very excited about the potential benefits of cloud storage, such as being able to store and retrieve data in the cloud and capitalizing on the promise of higher-performance, more scalable and cut-price storage. In this thesis, we present a typical Cloud Storage system architecture, a referral Cloud Storage model and Multi-Tenancy Cloud Storage model, value the past and the state-ofthe-art of Cloud Storage, and examine the Edge and problems that must be addressed to implement Cloud Storage. Use cases in diverse Cloud Storage offerings were also abridged. KEYWORDS—Cloud Storage, Cloud Computing, referral model, Multi-Tenancy, survey", "title": "" }, { "docid": "5956e9399cfe817aa1ddec5553883bef", "text": "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. 
Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.", "title": "" }, { "docid": "b72d0d187fe12d1f006c8e17834af60e", "text": "Pseudoangiomatous stromal hyperplasia (PASH) is a rare benign mesenchymal proliferative lesion of the breast. In this study, we aimed to show a case of PASH with mammographic and sonographic features, which fulfill the criteria for benign lesions and to define its recently discovered elastography findings. A 49-year-old premenopausal female presented with breast pain in our outpatient surgery clinic. In ultrasound images, a hypoechoic solid mass located at the 3 o'clock position in the periareolar region of the right breast was observed. Due to it was not detected on earlier mammographies, the patient underwent a tru-cut biopsy, although the mass fulfilled the criteria for benign lesions on mammography, ultrasound, and elastography. Elastography is a new technique differentiating between benign and malignant lesions. It is also useful to determine whether a biopsy is necessary or follow-up is sufficient.", "title": "" }, { "docid": "c851bad8a1f7c8526d144453b3f2aa4f", "text": "Taxonomies of person characteristics are well developed, whereas taxonomies of psychologically important situation characteristics are underdeveloped. A working model of situation perception implies the existence of taxonomizable dimensions of psychologically meaningful, important, and consequential situation characteristics tied to situation cues, goal affordances, and behavior. Such dimensions are developed and demonstrated in a multi-method set of 6 studies. First, the \"Situational Eight DIAMONDS\" dimensions Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, and Sociality (Study 1) are established from the Riverside Situational Q-Sort (Sherman, Nave, & Funder, 2010, 2012, 2013; Wagerman & Funder, 2009). Second, their rater agreement (Study 2) and associations with situation cues and goal/trait affordances (Studies 3 and 4) are examined. Finally, the usefulness of these dimensions is demonstrated by examining their predictive power of behavior (Study 5), particularly vis-à-vis measures of personality and situations (Study 6). Together, we provide extensive and compelling evidence that the DIAMONDS taxonomy is useful for organizing major dimensions of situation characteristics. We discuss the DIAMONDS taxonomy in the context of previous taxonomic approaches and sketch future research directions.", "title": "" }, { "docid": "aefa4559fa6f8e0c046cd7e02d3e1b92", "text": "The concept of smart city is considered as the new engine for economic and social growths since it is supported by the rapid development of information and communication technologies. However, each technology not only brings its advantages, but also the challenges that cities have to face in order to implement it. So, this paper addresses two research questions : « What are the most important technologies that drive the development of smart cities ?» and « what are the challenges that cities will face when adopting these technologies ? 
» Relying on a literature review of studies published between 1990 and 2017, the ensuing results show that Artificial Intelligence and Internet of Things represent the most used technologies for smart cities. So, the focus of this paper will be on these two technologies by showing their advantages and their challenges.", "title": "" }, { "docid": "123a21d9913767e1a8d1d043f6feab01", "text": "Permanent magnet synchronous machines generate parasitic torque pulsations owing to distortion of the stator flux linkage distribution, variable magnetic reluctance at the stator slots, and secondary phenomena. The consequences are speed oscillations which, although small in magnitude, deteriorate the performance of the drive in demanding applications. The parasitic effects are analysed and modelled using the complex state-variable approach. A fast current control system is employed to produce highfrequency electromagnetic torque components for compensation. A self-commissioning scheme is described which identifies the machine parameters, particularly the torque ripple functions which depend on the angular position of the rotor. Variations of permanent magnet flux density with temperature are compensated by on-line adaptation. The algorithms for adaptation and control are implemented in a standard microcontroller system without additional hardware. The effectiveness of the adaptive torque ripple compensation is demonstrated by experiments.", "title": "" }, { "docid": "ccc4994ba255084af5456925ba6c164e", "text": "This letter proposes a novel, small, printed monopole antenna for ultrawideband (UWB) applications with dual band-notch function. By cutting an inverted fork-shaped slit in the ground plane, additional resonance is excited, and hence much wider impedance bandwidth can be produced. To generate dual band-notch characteristics, we use a coupled inverted U-ring strip in the radiating patch. The measured results reveal that the presented dual band-notch monopole antenna offers a wide bandwidth with two notched bands, covering all the 5.2/5.8-GHz WLAN, 3.5/5.5-GHz WiMAX, and 4-GHz C-bands. The proposed antenna has a small size of 12<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times\\,$</tex> </formula>18 mm<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{2}$</tex> </formula> or about <formula formulatype=\"inline\"><tex Notation=\"TeX\">$0.15 \\lambda \\times 0.25 \\lambda$</tex></formula> at 4.2 GHz (first resonance frequency), which has a size reduction of 28% with respect to the previous similar antenna. Simulated and measured results are presented to validate the usefulness of the proposed antenna structure UWB applications.", "title": "" }, { "docid": "e75ec4137b0c559a1c375d97993448b0", "text": "In recent years, consumer-class UAVs have come into public view and cyber security starts to attract the attention of researchers and hackers. The tasks of positioning, navigation and return-to-home (RTH) of UAV heavily depend on GPS. However, the signal structure of civil GPS used by UAVs is completely open and unencrypted, and the signal received by ground devices is very weak. As a result, GPS signals are vulnerable to jamming and spoofing. The development of software define radio (SDR) has made GPS-spoofing easy and costless. GPS-spoofing may cause UAVs to be out of control or even hijacked. In this paper, we propose a novel method to detect GPS-spoofing based on monocular camera and IMU sensor of UAV. 
Our method was demonstrated on the UAV of DJI Phantom 4.", "title": "" }, { "docid": "bd20bbe7deb2383b6253ec3f576dcf56", "text": "Despite recent advances, the remaining bottlenecks in deep generative models are necessity of extensive training and difficulties with generalization from small number of training examples. We develop a new generative model called Generative Matching Network which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks. By conditioning on the additional input dataset, our model can instantly learn new concepts that were not available in the training data but conform to a similar generative process. The proposed framework does not explicitly restrict diversity of the conditioning data and also does not require an extensive inference procedure for training or adaptation. Our experiments on the Omniglot dataset demonstrate that Generative Matching Networks significantly improve predictive performance on the fly as more additional data is available and outperform existing state of the art conditional generative models.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "8f957dab2aa6b186b61bc309f3f2b5c3", "text": "Learning deeper convolutional neural networks has become a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be attained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, which encourages the propagation of effective information through the network in training stage. By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture.", "title": "" }, { "docid": "455c080ab112cd4f71a29ab84af019f5", "text": "We propose a novel image inpainting approach in which the exemplar and the sparse representation are combined together skillfully. In the process of image inpainting, often there will be such a situation: although the sum of squared differences (SSD) of exemplar patch is the smallest among all the candidate patches, there may be a noticeable visual discontinuity in the recovered image when using the exemplar patch to replace the target patch. In this case, we cleverly use the sparse representation of image over a redundant dictionary to recover the target patch, instead of using the exemplar patch to replace it, so that we can promptly prevent the occurrence and accumulation of errors, and obtain satisfied results. Experiments on a number of real and synthetic images demonstrate the effectiveness of proposed algorithm, and the recovered images can better meet the requirements of human vision.", "title": "" } ]
scidocsrr
534d8debd1364fafb2acd2fe01e62619
Cost-Efficient Strategies for Restraining Rumor Spreading in Mobile Social Networks
[ { "docid": "d056e5ea017eb3e5609dcc978e589158", "text": "In this paper we study and evaluate rumor-like methods for combating the spread of rumors on a social network. We model rumor spread as a diffusion process on a network and suggest the use of an \"anti-rumor\" process similar to the rumor process. We study two natural models by which these anti-rumors may arise. The main metrics we study are the belief time, i.e., the duration for which a person believes the rumor to be true and point of decline, i.e., point after which anti-rumor process dominates the rumor process. We evaluate our methods by simulating rumor spread and anti-rumor spread on a data set derived from the social networking site Twitter and on a synthetic network generated according to the Watts and Strogatz model. We find that the lifetime of a rumor increases if the delay in detecting it increases, and the relationship is at least linear. Further our findings show that coupling the detection and anti-rumor strategy by embedding agents in the network, we call them beacons, is an effective means of fighting the spread of rumor, even if these beacons do not share information.", "title": "" } ]
[ { "docid": "37e644b7b2d47e6830e30ae191bc453c", "text": "Technological forecasting is now poised to respond to the emerging needs of private and public sector organizations in the highly competitive global environment. The history of the subject and its variant forms, including impact assessment, national foresight studies, roadmapping, and competitive technological intelligence, shows how it has responded to changing institutional motivations. Renewed focus on innovation, attention to science-based opportunities, and broad social and political factors will bring renewed attention to technological forecasting in industry, government, and academia. Promising new tools are anticipated, borrowing variously from fields such as political science, computer science, scientometrics, innovation management, and complexity science.  2001 Elsevier Science Inc. Introduction Technological forecasting—its purpose, methods, terminology, and uses—will be shaped in the future, as in the past, by the needs of corporations and government agencies.1 These have a continual pressing need to anticipate and cope with the direction and rate of technological change. The future of technological forecasting will also depend on the views of the public and their elected representatives about technological progress, economic competition, and the government’s role in technological development. In the context of this article, “technological forecasting” (TF) includes several new forms—for example, national foresight studies, roadmapping, and competitive technological intelligence—that have evolved to meet the changing demands of user institutions. It also encompasses technology assessment (TA) or social impact analysis, which emphasizes the downstream effects of technology’s invention, innovation, and evolution. VARY COATES is associated with the Institute for Technology Assessment, Washington, DC. MAHMUD FAROQUE is with George Mason University, Fairfax, VA. RICHARD KLAVANS is with CRP, Philadelphia, PA. KOTY LAPID is with Softblock, Beer Sheba, Israel. HAROLD LINSTONE is with Portland State University. CARL PISTORIUS is with the University of Pretoria, South Africa. ALAN PORTER is with the Georgia Institute of Technology, Atlanta, GA. We also thank Joseph Coates and Joseph Martino for helpful critiques. 1 The term “technological forecasting” is used in this article to apply to all purposeful and systematic attempts to anticipate and understand the potential direction, rate, characteristics, and effects of technological change, especially invention, innovation, adoption, and use. No distinction is intended between “technological forecasting” “technology forecasting,” or “technology foresight,” except as specifically described in the text. Technological Forecasting and Social Change 67, 1–17 (2001)  2001 Elsevier Science Inc. All rights reserved. 0040-1625/01/$–see front matter 655 Avenue of the Americas, New York, NY 10010 PII S0040-1625(00)00122-0", "title": "" }, { "docid": "fdbb5f67eb2f9b651c0d2e1cf8077923", "text": "The periodical maintenance of railway systems is very important in terms of maintaining safe and comfortable transportation. In particular, the monitoring and diagnosis of faults in the pantograph catenary system are required to provide a transmission from the catenary line to the electric energy locomotive. Surface wear that is caused by the interaction between the pantograph and catenary and nonuniform distribution on the surface of a pantograph of the contact points can cause serious accidents. 
In this paper, a novel approach is proposed for image processing-based monitoring and fault diagnosis in terms of the interaction and contact points between the pantograph and catenary in a moving train. For this purpose, the proposed method consists of two stages. In the first stage, the pantograph catenary interaction has been modeled; the simulation results were given a failure analysis with a variety of scenarios. In the second stage, the contact points between the pantograph and catenary were detected and implemented in real time with image processing algorithms using actual video images. The pantograph surface for a fault analysis was divided into three regions: safe, dangerous, and fault. The fault analysis of the system was presented using the number of contact points in each region. The experimental results demonstrate the effectiveness, applicability, and performance of the proposed approach.", "title": "" }, { "docid": "7c5ce3005c4529e0c34220c538412a26", "text": "Six studies investigate whether and how distant future time perspective facilitates abstract thinking and impedes concrete thinking by altering the level at which mental representations are construed. In Experiments 1-3, participants who envisioned their lives and imagined themselves engaging in a task 1 year later as opposed to the next day subsequently performed better on a series of insight tasks. In Experiments 4 and 5 a distal perspective was found to improve creative generation of abstract solutions. Moreover, Experiment 5 demonstrated a similar effect with temporal distance manipulated indirectly, by making participants imagine their lives in general a year from now versus tomorrow prior to performance. In Experiment 6, distant time perspective undermined rather than enhanced analytical problem solving.", "title": "" }, { "docid": "062d366387e6161ba6faadc32c53e820", "text": "Image processing has been proved to be effective tool for analysis in various fields and applications. Agriculture sector where the parameters like canopy, yield, quality of product were the important measures from the farmers&apos; point of view. Many times expert advice may not be affordable, majority times the availability of expert and their services may consume time. Image processing along with availability of communication network can change the situation of getting the expert advice well within time and at affordable cost since image processing was the effective tool for analysis of parameters. This paper intends to focus on the survey of application of image processing in agriculture field such as imaging techniques, weed detection and fruit grading. The analysis of the parameters has proved to be accurate and less time consuming as compared to traditional methods. Application of image processing can improve decision making for vegetation measurement, irrigation, fruit sorting, etc.", "title": "" }, { "docid": "7dfbb5e01383b5f50dbeb87d55ceb719", "text": "In recent years, a number of network forensics techniques have been proposed to investigate the increasing number of cybercrimes. Network forensics techniques assist in tracking internal and external network attacks by focusing on inherent network vulnerabilities and communication mechanisms. However, investigation of cybercrime becomes more challenging when cyber criminals erase the traces in order to avoid detection. Therefore, network forensics techniques employ mechanisms to facilitate investigation by recording every single packet and event that is disseminated into the network. 
As a result, it allows identification of the origin of the attack through reconstruction of the recorded data. In the current literature, network forensics techniques are studied on the basis of forensic tools, process models and framework implementations. However, a comprehensive study of cybercrime investigation using network forensics frameworks along with a critical review of present network forensics techniques is lacking. In other words, our study is motivated by the diversity of digital evidence and the difficulty of addressing numerous attacks in the network using network forensics techniques. Therefore, this paper reviews the fundamental mechanism of network forensics techniques to determine how network attacks are identified in the network. Through an extensive review of related literature, a thematic taxonomy is proposed for the classification of current network forensics techniques based on its implementation as well as target data sets involved in the conducting of forensic investigations. The critical aspects and significant features of the current network forensics techniques are investigated using qualitative analysis technique. We derive significant parameters from the literature for discussing the similarities and differences in existing network forensics techniques. The parameters include framework nature, mechanism, target dataset, target instance, forensic processing, time of investigation, execution definition, and objective function. Finally, open research challenges are discussed in network forensics to assist researchers in selecting the appropriate domains for further research and obtain ideas for exploring optimal techniques for investigating cyber-crimes. & 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fcf8649ff7c2972e6ef73f837a3d3f4d", "text": "The kitchen environment is one of the scenarios in the home where users can benefit from Ambient Assisted Living (AAL) applications. Moreover, it is the place where old people suffer from most domestic injuries. This paper presents a novel design, implementation and assessment of a Smart Kitchen which provides Ambient Assisted Living services; a smart environment that increases elderly and disabled people's autonomy in their kitchen-related activities through context and user awareness, appropriate user interaction and artificial intelligence. It is based on a modular architecture which integrates a wide variety of home technology (household appliances, sensors, user interfaces, etc.) and associated communication standards and media (power line, radio frequency, infrared and cabled). Its software architecture is based on the Open Services Gateway initiative (OSGi), which allows building a complex system composed of small modules, each one providing the specific functionalities required, and can be easily scaled to meet our needs. The system has been evaluated by a large number of real users (63) and carers (31) in two living labs in Spain and UK. Results show a large potential of system functionalities combined with good usability and physical, sensory and cognitive accessibility.", "title": "" }, { "docid": "2dfad4f4b0d69085341dfb64d6b37d54", "text": "Modern applications and progress in deep learning research have created renewed interest for generative models of text and of images. However, even today it is unclear what objective functions one should use to train and evaluate these models. In this paper we present two contributions. 
Firstly, we present a critique of scheduled sampling, a state-of-the-art training method that contributed to the winning entry to the MSCOCO image captioning benchmark in 2015. Here we show that despite this impressive empirical performance, the objective function underlying scheduled sampling is improper and leads to an inconsistent learning algorithm. Secondly, we revisit the problems that scheduled sampling was meant to address, and present an alternative interpretation. We argue that maximum likelihood is an inappropriate training objective when the end-goal is to generate natural-looking samples. We go on to derive an ideal objective function to use in this situation instead. We introduce a generalisation of adversarial training, and show how such method can interpolate between maximum likelihood training and our ideal training objective. To our knowledge this is the first theoretical analysis that explains why adversarial training tends to produce samples with higher perceived quality.", "title": "" }, { "docid": "3654827519075eac6bfe5ee442c6d4b2", "text": "We examined the relations among phonological awareness, music perception skills, and early reading skills in a population of 100 4- and 5-year-old children. Music skills were found to correlate significantly with both phonological awareness and reading development. Regression analyses indicated that music perception skills contributed unique variance in predicting reading ability, even when variance due to phonological awareness and other cognitive abilities (math, digit span, and vocabulary) had been accounted for. Thus, music perception appears to tap auditory mechanisms related to reading that only partially overlap with those related to phonological awareness, suggesting that both linguistic and nonlinguistic general auditory mechanisms are involved in reading.", "title": "" }, { "docid": "7843fb4bbf2e94a30c18b359076899ab", "text": "In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms has been proposed which can be used with general Fourier subsampling patterns. However, the design of these subsampling patterns has typically been considered in isolation from the reconstruction rule and the anatomy under consideration. In this paper, we propose a learning-based framework for optimizing MRI subsampling patterns for a specific reconstruction rule and anatomy, considering both the noiseless and noisy settings. Our learning algorithm has access to a representative set of training signals, and searches for a sampling pattern that performs well on average for the signals in this set. We present a novel parameter-free greedy mask selection method and show it to be effective for a variety of reconstruction rules and performance metrics. Moreover, we also support our numerical findings by providing a rigorous justification of our framework via statistical learning theory.", "title": "" }, { "docid": "7e127a6f25e932a67f333679b0d99567", "text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. 
The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.", "title": "" }, { "docid": "627aee14031293785224efdb7bac69f0", "text": "Data on characteristics of metal-oxide surge arresters indicates that for fast front surges, those with rise times less than 8μs, the peak of the voltage wave occurs before the peak of the current wave and the residual voltage across the arrester increases as the time to crest of the arrester discharge current decreases. Several models have been proposed to simulate this frequency-dependent characteristic. These models differ in the calculation and adjustment of their parameters. In the present paper, a simulation of metal oxide surge arrester (MOSA) dynamic behavior during fast electromagnetic transients on power systems is done. Some models proposed in the literature are used. The simulations are performed with the Alternative Transients Program (ATP) version of Electromagnetic Transient Program (EMTP) to evaluate some metal oxide surge arrester models and verify their accuracy.", "title": "" }, { "docid": "94b84ed0bb69b6c4fc7a268176146eea", "text": "We consider the problem of representing image matrices with a set of basis functions. One common solution for that problem is to first transform the 2D image matrices into 1D image vectors and then to represent those 1D image vectors with eigenvectors, as done in classical principal component analysis. In this paper, we adopt a natural representation for the 2D image matrices using eigenimages, which are 2D matrices with the same size of original images and can be directly computed from original 2D image matrices. We discuss how to compute those eigenimages effectively. Experimental result on ORL image database shows the advantages of eigenimages method in representing the 2D images.", "title": "" }, { "docid": "2e5ce96ba3c503704a9152ae667c24ec", "text": "We use methods of classical and quantum mechanics for mathematical modeling of price dynamics at the financial market. The Hamiltonian formalism on the price/price-change phase space is used to describe the classical-like evolution of prices. This classical dynamics of prices is determined by ”hard” conditions (natural resources, industrial production, services and so on). These conditions as well as ”hard” relations between traders at the financial market are mathematically described by the classical financial potential. At the real financial market ”hard” conditions are not the only source of price changes. The information exchange and market psychology play important (and sometimes determining) role in price dynamics. We propose to describe this ”soft” financial factors by using the pilot wave (Bohmian) model of quantum mechanics. The theory of financial mental (or psychological) waves is used to take into account market psychology. 
The real trajectories of prices are determined (by the financial analogue of the second Newton law) by two financial potentials: classical-like (”hard” market conditions) and quantum-like (”soft” market conditions).", "title": "" }, { "docid": "fa42192f3ffd08332e35b98019e622ff", "text": "Human immunodeficiency virus 1 (HIV-1) and other retroviruses synthesize a DNA copy of their genome after entry into the host cell. Integration of this DNA into the host cell's genome is an essential step in the viral replication cycle. The viral DNA is synthesized in the cytoplasm and is associated with viral and cellular proteins in a large nucleoprotein complex. Before integration into the host genome can occur, this complex must be transported to the nucleus and must cross the nuclear envelope. This Review summarizes our current knowledge of how this journey is accomplished.", "title": "" }, { "docid": "9b17dd1fc2c7082fa8daecd850fab91c", "text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was design as an embedded control. It has a data base of the angles of orientation horizontal axle, therefore it has no sensor inlet signal and it function as an open loop control system. Combined of above mention characteristics in one the tracker system is a new technique of the active type. It is also a rotational robot of 1 degree of freedom.", "title": "" }, { "docid": "a757624e5fd2d4a364f484d55a430702", "text": "The main challenge in P2P computing is to design and implement a robust and scalable distributed system composed of inexpensive, individually unreliable computers in unrelated administrative domains. The participants in a typical P2P system might include computers at homes, schools, and businesses, and can grow to several million concurrent participants.", "title": "" }, { "docid": "6149a6aaa9c39a1e02ab8fbe64fcb62b", "text": "The thoracic diaphragm is a dome-shaped septum, composed of muscle surrounding a central tendon, which separates the thoracic and abdominal cavities. The function of the diaphragm is to expand the chest cavity during inspiration and to promote occlusion of the gastroesophageal junction. This article provides an overview of the normal anatomy of the diaphragm.", "title": "" }, { "docid": "6524efda795834105bae7d65caf15c53", "text": "PURPOSE\nThis paper examines respondents' relationship with work following a stroke and explores their experiences including the perceived barriers to and facilitators of a return to employment.\n\n\nMETHOD\nOur qualitative study explored the experiences and recovery of 43 individuals under 60 years who had survived a stroke. Participants, who had experienced a first stroke less than three months before and who could engage in in-depth interviews, were recruited through three stroke services in South East England. Each participant was invited to take part in four interviews over an 18-month period and to complete a diary for one week each month during this period.\n\n\nRESULTS\nAt the time of their stroke a minority of our sample (12, 28% of the original sample) were not actively involved in the labour market and did not return to the work during the period that they were involved in the study. 
Of the 31 participants working at the time of the stroke, 13 had not returned to work during the period that they were involved in the study, six returned to work after three months and nine returned in under three months and in some cases virtually immediately after their stroke. The participants in our study all valued work and felt that working, especially in paid employment, was more desirable than not working. The participants who were not working at the time of their stroke or who had not returned to work during the period of the study also endorsed these views. However they felt that there were a variety of barriers and practical problems that prevented them working and in some cases had adjusted to a life without paid employment. Participants' relationship with work was influenced by barriers and facilitators. The positive valuations of work were modified by the specific context of stroke, for some participants work was a cause of stress and therefore potentially risky, for others it was a way of demonstrating recovery from stroke. The value and meaning varied between participants and this variation was related to past experience and biography. Participants who wanted to work indicated that their ability to work was influenced by the nature and extent of their residual disabilities. A small group of participants had such severe residual disabilities that managing everyday life was a challenge and that working was not a realistic prospect unless their situation changed radically. The remaining participants all reported residual disabilities. The extent to which these disabilities formed a barrier to work depended on an additional range of factors that acted as either barriers or facilitator to return to work. A flexible working environment and supportive social networks were cited as facilitators of return to paid employment.\n\n\nCONCLUSION\nParticipants in our study viewed return to work as an important indicator of recovery following a stroke. Individuals who had not returned to work felt that paid employment was desirable but they could not overcome the barriers. Individuals who returned to work recognized the barriers but had found ways of managing them.", "title": "" }, { "docid": "1168c9e6ce258851b15b7e689f60e218", "text": "Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naïve adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representation to produce competitive semantic segmentation in real-time with low memory requirement. ContextNet combines a deep network branch at low resolution that captures global context information efficiently with a shallow branch that focuses on highresolution segmentation details. We analyse our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024× 2048) resolution (23.2 fps with pipelined computations for streamed data).", "title": "" }, { "docid": "71efff25f494a8b7a83099e7bdd9d9a8", "text": "Background: Problems with intubation of the ampulla Vateri during diagnostic and therapeutic endoscopic maneuvers are a well-known feature. 
The ampulla Vateri was analyzed three-dimensionally to determine whether these difficulties have a structural background. Methods: Thirty-five human greater duodenal papillae were examined by light and scanning electron microscopy as well as immunohistochemically. Results: Histologically, highly vascularized finger-like mucosal folds project far into the lumen of the ampulla Vateri. The excretory ducts of seromucous glands containing many lysozyme-secreting Paneth cells open close to the base of the mucosal folds. Scanning electron microscopy revealed large mucosal folds inside the ampulla that continued into the pancreatic and bile duct, comparable to valves arranged in a row. Conclusions: Mucosal folds form pocket-like valves in the lumen of the ampulla Vateri. They allow a unidirectional flow of secretions into the duodenum and prevent reflux from the duodenum into the ampulla Vateri. Subepithelial mucous gland secretions functionally clean the valvular crypts and protect the epithelium. The arrangement of pocket-like mucosal folds may explain endoscopic difficulties experienced when attempting to penetrate the papilla of Vater during endoscopic retrograde cholangiopancreaticographic procedures.", "title": "" } ]
scidocsrr
3cff79c9c9419de7a4a231917714c1e5
Design of Secure and Lightweight Authentication Protocol for Wearable Devices Environment
[ { "docid": "a85d07ae3f19a0752f724b39df5eca2b", "text": "Despite two decades of intensive research, it remains a challenge to design a practical anonymous two-factor authentication scheme, for the designers are confronted with an impressive list of security requirements (e.g., resistance to smart card loss attack) and desirable attributes (e.g., local password update). Numerous solutions have been proposed, yet most of them are shortly found either unable to satisfy some critical security requirements or short of a few important features. To overcome this unsatisfactory situation, researchers often work around it in hopes of a new proposal (but no one has succeeded so far), while paying little attention to the fundamental question: whether or not there are inherent limitations that prevent us from designing an “ideal” scheme that satisfies all the desirable goals? In this work, we aim to provide a definite answer to this question. We first revisit two foremost proposals, i.e. Tsai et al.'s scheme and Li's scheme, revealing some subtleties and challenges in designing such schemes. Then, we systematically explore the inherent conflicts and unavoidable trade-offs among the design criteria. Our results indicate that, under the current widely accepted adversarial model, certain goals are beyond attainment. This also suggests a negative answer to the open problem left by Huang et al. in 2014. To the best of knowledge, the present study makes the first step towards understanding the underlying evaluation metric for anonymous two-factor authentication, which we believe will facilitate better design of anonymous two-factor protocols that offer acceptable trade-offs among usability, security and privacy.", "title": "" } ]
[ { "docid": "f478bbf48161da50017d3ec9f8e677b4", "text": "Between November 1998 and December 1999, trained medical record abstractors visited the Micronesian jurisdictions of Chuuk, Kosrae, Pohnpei, and Yap (the four states of the Federated States of Micronesia), as well as the Republic of Palau (Belau), the Republic of Kiribati, the Republic of the Marshall Islands (RMI), and the Republic of Nauru to review all available medical records in order to describe the epidemiology of cancer in Micronesia. Annualized age-adjusted, site-specific cancer period prevalence rates for individual jurisdictions were calculated. Site-specific cancer occurrence in Micronesia follows a pattern characteristic of developing nations. At the same time, cancers associated with developed countries are also impacting these populations. Recommended are jurisdiction-specific plans that outline the steps and resources needed to establish or improve local cancer registries; expand cancer awareness and screening activities; and improve diagnostic and treatment capacity.", "title": "" }, { "docid": "62a51c43d4972d41d3b6cdfa23f07bb9", "text": "To meet the development of Internet of Things (IoT), IETF has proposed IPv6 standards working under stringent low-power and low-cost constraints. However, the behavior and performance of the proposed standards have not been fully understood, especially the RPL routing protocol lying at the heart the protocol stack. In this work, we make an in-depth study on a popular implementation of the RPL (routing protocol for low power and lossy network) to provide insights and guidelines for the adoption of these standards. Specifically, we use the Contiki operating system and COOJA simulator to evaluate the behavior of the ContikiRPL implementation. We analyze the performance for different networking settings. Different from previous studies, our work is the first effort spanning across the whole life cycle of wireless sensor networks, including both the network construction process and the functioning stage. The metrics evaluated include signaling overhead, latency, energy consumption and so on, which are vital to the overall performance of a wireless sensor network. Furthermore, based on our observations, we provide a few suggestions for RPL implemented WSN. This study can also serve as a basis for future enhancement on the proposed standards.", "title": "" }, { "docid": "6d97cbe726eca4b883cf7c8c2d939f8b", "text": "In this paper, a new ensemble forecasting model for short-term load forecasting (STLF) is proposed based on extreme learning machine (ELM). Four important improvements are used to support the ELM for increased forecasting performance. First, a novel wavelet-based ensemble scheme is carried out to generate the individual ELM-based forecasters. Second, a hybrid learning algorithm blending ELM and the Levenberg-Marquardt method is proposed to improve the learning accuracy of neural networks. Third, a feature selection method based on the conditional mutual information is developed to select a compact set of input variables for the forecasting model. Fourth, to realize an accurate ensemble forecast, partial least squares regression is utilized as a combining approach to aggregate the individual forecasts. 
Numerical testing shows that proposed method can obtain better forecasting results in comparison with other standard and state-of-the-art methods.", "title": "" }, { "docid": "cbb6bac245862ed0265f6d32e182df92", "text": "With the explosion of online communication and publication, texts become obtainable via forums, chat messages, blogs, book reviews and movie reviews. Usually, these texts are much short and noisy without sufficient statistical signals and enough information for a good semantic analysis. Traditional natural language processing methods such as Bow-of-Word (BOW) based probabilistic latent semantic models fail to achieve high performance due to the short text environment. Recent researches have focused on the correlations between words, i.e., term dependencies, which could be helpful for mining latent semantics hidden in short texts and help people to understand them. Long short-term memory (LSTM) network can capture term dependencies and is able to remember the information for long periods of time. LSTM has been widely used and has obtained promising results in variants of problems of understanding latent semantics of texts. At the same time, by analyzing the texts, we find that a number of keywords contribute greatly to the semantics of the texts. In this paper, we establish a keyword vocabulary and propose an LSTM-based model that is sensitive to the words in the vocabulary; hence, the keywords leverage the semantics of the full document. The proposed model is evaluated in a short-text sentiment analysis task on two datasets: IMDB and SemEval-2016, respectively. Experimental results demonstrate that our model outperforms the baseline LSTM by 1%~2% in terms of accuracy and is effective with significant performance enhancement over several non-recurrent neural network latent semantic models (especially in dealing with short texts). We also incorporate the idea into a variant of LSTM named the gated recurrent unit (GRU) model and achieve good performance, which proves that our method is general enough to improve different deep learning models.", "title": "" }, { "docid": "bf4776d6d01d63d3eb6dbeba693bf3de", "text": "As the development of microprocessors, power electronic converters and electric motor drives, electric power steering (EPS) system which uses an electric motor came to use a few year ago. Electric power steering systems have many advantages over traditional hydraulic power steering systems in engine efficiency, space efficiency, and environmental compatibility. This paper deals with design and optimization of an interior permanent magnet (IPM) motor for power steering application. Simulated Annealing method is used for optimization. After optimization and finding motor parameters, An IPM motor and drive with mechanical parts of EPS system is simulated and performance evaluation of system is done.", "title": "" }, { "docid": "71b0dbd905c2a9f4111dfc097bfa6c67", "text": "In this paper, the authors undertake a study of cyber warfare reviewing theories, law, policies, actual incidents and the dilemma of anonymity. Starting with the United Kingdom perspective on cyber warfare, the authors then consider United States' views including the perspective of its military on the law of war and its general inapplicability to cyber conflict. 
Consideration is then given to the work of the United Nations' group of cyber security specialists and diplomats who as of July 2010 have agreed upon a set of recommendations to the United Nations Secretary General for negotiations on an international computer security treaty. An examination of the use of a nation's cybercrime law to prosecute violations that occur over the Internet indicates the inherent limits caused by the jurisdictional limits of domestic law to address cross-border cybercrime scenarios. Actual incidents from Estonia (2007), Georgia (2008), Republic of Korea (2009), Japan (2010), ongoing attacks on the United States as well as other incidents and reports on ongoing attacks are considered as well. Despite the increasing sophistication of such cyber attacks, it is evident that these attacks were met with a limited use of law and policy to combat them that can be only be characterised as a response posture defined by restraint. Recommendations are then examined for overcoming the attribution problem. The paper then considers when do cyber attacks rise to the level of an act of war by reference to the work of scholars such as Schmitt and Wingfield. Further evaluation of the special impact that non-state actors may have and some theories on how to deal with the problem of asymmetric players are considered. Discussion and possible solutions are offered. A conclusion is offered drawing some guidance from the writings of the Chinese philosopher Sun Tzu. Finally, an appendix providing a technical overview of the problem of attribution and the dilemma of anonymity in cyberspace is provided. 1. The United Kingdom Perspective \"If I went and bombed a power station in France, that would be an act of war. If I went on to the net and took out a power station, is that an act of war? One", "title": "" }, { "docid": "a5d100fd83620d9cc868a33ab6367be2", "text": "Identifying the lineage path of neural cells is critical for understanding the development of brain. Accurate neural cell detection is a crucial step to obtain reliable delineation of cell lineage. To solve this task, in this paper we present an efficient neural cell detection method based on SSD (single shot multibox detector) neural network model. Our method adapts the original SSD architecture and removes the unnecessary blocks, leading to a light-weight model. Moreover, we formulate the cell detection as a binary regression problem, which makes our model much simpler. Experimental results demonstrate that, with only a small training set, our method is able to accurately capture the neural cells under severe shape deformation in a fast way.", "title": "" }, { "docid": "2a8f464e709dcae4e34f73654aefe31f", "text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of radio network, radio network programability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. 
Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.", "title": "" }, { "docid": "6513c4ca4197e9ff7028e527a621df0a", "text": "The development of complex distributed systems demands for the creation of suitable architectural styles (or paradigms) and related run-time infrastructures. An emerging style that is receiving increasing attention is based on the notion of event. In an event-based architecture, distributed software components interact by generating and consuming events. An event is the occurrence of some state change in a component of a software system, made visible to the external world. The occurrence of an event in a component is asynchronously notified to any other component that has declared some interest in it. This paradigm (usually called “publish/subscribe” from the names of the two basic operations that regulate the communication) holds the promise of supporting a flexible and effective interaction among highly reconfigurable, distributed software components. In the past two years, we have developed an object-oriented infrastructure called JEDI (Java Event-based Distributed Infrastructure). JEDI supports the development and operation of event-based systems and has been used to implement a significant example of distributed system, namely, the OPSS workflow management system (WFMS). The paper illustrates JEDI main features and how we have used them to implement OPSS. Moreover, the paper provides an initial evaluation of our experiences in using the event-based architectural style and a classification of some of the event-based infrastructures presented in the literature.", "title": "" }, { "docid": "4243f0bafe669ab862aaad2b184c6a0e", "text": "Generating adversarial examples is an intriguing problem and an important way of understanding the working mechanism of deep neural networks. Most existing approaches generated perturbations in the image space, i.e., each pixel can be modified independently. However, in this paper we pay special attention to the subset of adversarial examples that are physically authentic – those corresponding to actual changes in 3D physical properties (like surface normals, illumination condition, etc.). These adversaries arguably pose a more serious concern, as they demonstrate the possibility of causing neural network failure by small perturbations of real-world 3D objects and scenes. In the contexts of object classification and visual question answering, we augment state-of-the-art deep neural networks that receive 2D input images with a rendering module (either differentiable or not) in front, so that a 3D scene (in the physical space) is rendered into a 2D image (in the image space), and then mapped to a prediction (in the output space). The adversarial perturbations can now go beyond the image space, and have clear meanings in the 3D physical world. 
Through extensive experiments, we found that a vast majority of image-space adversaries cannot be explained by adjusting parameters in the physical space, i.e., they are usually physically inauthentic. But it is still possible to successfully attack beyond the image space on the physical space (such that authenticity is enforced), though this is more difficult than image-space attacks, reflected in lower success rates and heavier perturbations required.", "title": "" }, { "docid": "6737955fd1876a40fc0e662a4cac0711", "text": "Cloud computing is a novel perspective for large scale distributed computing and parallel processing. It provides computing as a utility service on a pay per use basis. The performance and efficiency of cloud computing services always depends upon the performance of the user tasks submitted to the cloud system. Scheduling of the user tasks plays significant role in improving performance of the cloud services. Task scheduling is one of the main types of scheduling performed. This paper presents a detailed study of various task scheduling methods existing for the cloud environment. A brief analysis of various scheduling parameters considered in these methods is also discussed in this paper.", "title": "" }, { "docid": "289942ca889ccea58d5b01dab5c82719", "text": "Concepts of basal ganglia organization have changed markedly over the past decade, due to significant advances in our understanding of the anatomy, physiology and pharmacology of these structures. Independent evidence from each of these fields has reinforced a growing perception that the functional architecture of the basal ganglia is essentially parallel in nature, regardless of the perspective from which these structures are viewed. This represents a significant departure from earlier concepts of basal ganglia organization, which generally emphasized the serial aspects of their connectivity. Current evidence suggests that the basal ganglia are organized into several structurally and functionally distinct 'circuits' that link cortex, basal ganglia and thalamus, with each circuit focused on a different portion of the frontal lobe. In this review, Garrett Alexander and Michael Crutcher, using the basal ganglia 'motor' circuit as the principal example, discuss recent evidence indicating that a parallel functional architecture may also be characteristic of the organization within each individual circuit.", "title": "" }, { "docid": "45009303764570cbfa3532a9d98f5393", "text": "The Wasserstein distance and its variations, e.g., the sliced-Wasserstein (SW) distance, have recently drawn attention from the machine learning community. The SW distance, specifically, was shown to have similar properties to the Wasserstein distance, while being much simpler to compute, and is therefore used in various applications including generative modeling and general supervised/unsupervised learning. In this paper, we first clarify the mathematical connection between the SW distance and the Radon transform. We then utilize the generalized Radon transform to define a new family of distances for probability measures, which we call generalized slicedWasserstein (GSW) distances. We also show that, similar to the SW distance, the GSW distance can be extended to a maximum GSW (max-GSW) distance. We then provide the conditions under which GSW and max-GSW distances are indeed distances. 
Finally, we compare the numerical performance of the proposed distances on several generative modeling tasks, including SW flows and SW auto-encoders.", "title": "" }, { "docid": "0e7da1ef24306eea2e8f1193301458fe", "text": "We consider the problem of object figure-ground segmentation when the object categories are not available during training (i.e. zero-shot). During training, we learn standard segmentation models for a handful of object categories (called “source objects”) using existing semantic segmentation datasets. During testing, we are given images of objects (called “target objects”) that are unseen during training. Our goal is to segment the target objects from the background. Our method learns to transfer the knowledge from the source objects to the target objects. Our experimental results demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "e830098f9c045d376177e6d2644d4a06", "text": "OBJECTIVE\nTo determine whether acetyl-L-carnitine (ALC), a metabolite necessary for energy metabolism and essential fatty acid anabolism, might help attention-deficit/hyperactivity disorder (ADHD). Trials in Down's syndrome, migraine, and Alzheimer's disease showed benefit for attention. A preliminary trial in ADHD using L-carnitine reported significant benefit.\n\n\nMETHOD\nA multi-site 16-week pilot study randomized 112 children (83 boys, 29 girls) age 5-12 with systematically diagnosed ADHD to placebo or ALC in weight-based doses from 500 to 1500 mg b.i.d. The 2001 revisions of the Conners' parent and teacher scales (including DSM-IV ADHD symptoms) were administered at baseline, 8, 12, and 16 weeks. Analyses were ANOVA of change from baseline to 16 weeks with treatment, center, and treatment-by-center interaction as independent variables.\n\n\nRESULTS\nThe primary intent-to-treat analysis, of 9 DSM-IV teacher-rated inattentive symptoms, was not significant. However, secondary analyses were interesting. There was significant (p = 0.02) moderation by subtype: superiority of ALC over placebo in the inattentive type, with an opposite tendency in combined type. There was also a geographic effect (p = 0.047). Side effects were negligible; electrocardiograms, lab work, and physical exam unremarkable.\n\n\nCONCLUSION\nALC appears safe, but with no effect on the overall ADHD population (especially combined type). It deserves further exploration for possible benefit specifically in the inattentive type.", "title": "" }, { "docid": "cae9e77074db114690a6ed1330d9b14c", "text": "BACKGROUND\nOn December 8th, 2015, World Health Organization published a priority list of eight pathogens expected to cause severe outbreaks in the near future. To better understand global research trends and characteristics of publications on these emerging pathogens, we carried out this bibliometric study hoping to contribute to global awareness and preparedness toward this topic.\n\n\nMETHOD\nScopus database was searched for the following pathogens/infectious diseases: Ebola, Marburg, Lassa, Rift valley, Crimean-Congo, Nipah, Middle Eastern Respiratory Syndrome (MERS), and Severe Respiratory Acute Syndrome (SARS). Retrieved articles were analyzed to obtain standard bibliometric indicators.\n\n\nRESULTS\nA total of 8619 journal articles were retrieved. Authors from 154 different countries contributed to publishing these articles. Two peaks of publications, an early one for SARS and a late one for Ebola, were observed. 
Retrieved articles received a total of 221,606 citations with a mean ± standard deviation of 25.7 ± 65.4 citations per article and an h-index of 173. International collaboration was as high as 86.9%. The Centers for Disease Control and Prevention had the highest share (344; 5.0%) followed by the University of Hong Kong with 305 (4.5%). The top leading journal was Journal of Virology with 572 (6.6%) articles while Feldmann, Heinz R. was the most productive researcher with 197 (2.3%) articles. China ranked first on SARS, Turkey ranked first on Crimean-Congo fever, while the United States of America ranked first on the remaining six diseases. Of retrieved articles, 472 (5.5%) were on vaccine - related research with Ebola vaccine being most studied.\n\n\nCONCLUSION\nNumber of publications on studied pathogens showed sudden dramatic rise in the past two decades representing severe global outbreaks. Contribution of a large number of different countries and the relatively high h-index are indicative of how international collaboration can create common health agenda among distant different countries.", "title": "" }, { "docid": "180a840a22191da6e9a99af3d41ab288", "text": "The hippocampal CA3 region is classically viewed as a homogeneous autoassociative network critical for associative memory and pattern completion. However, recent evidence has demonstrated a striking heterogeneity along the transverse, or proximodistal, axis of CA3 in spatial encoding and memory. Here we report the presence of striking proximodistal gradients in intrinsic membrane properties and synaptic connectivity for dorsal CA3. A decreasing gradient of mossy fiber synaptic strength along the proximodistal axis is mirrored by an increasing gradient of direct synaptic excitation from entorhinal cortex. Furthermore, we uncovered a nonuniform pattern of reactivation of fear memory traces, with the most robust reactivation during memory retrieval occurring in mid-CA3 (CA3b), the region showing the strongest net recurrent excitation. Our results suggest that heterogeneity in both intrinsic properties and synaptic connectivity may contribute to the distinct spatial encoding and behavioral role of CA3 subregions along the proximodistal axis.", "title": "" }, { "docid": "6a82dfa1d79016388c38ccba77c56ae5", "text": "Scripts define knowledge about how everyday scenarios (such as going to a restaurant) are expected to unfold. One of the challenges to learning scripts is the hierarchical nature of the knowledge. For example, a suspect arrested might plead innocent or guilty, and a very different track of events is then expected to happen. To capture this type of information, we propose an autoencoder model with a latent space defined by a hierarchy of categorical variables. We utilize a recently proposed vector quantization based approach, which allows continuous embeddings to be associated with each latent variable value. This permits the decoder to softly decide what portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting. 
Our model effectively encodes and generates scripts, outperforming a recent language modeling-based method on several standard tasks, and allowing the autoencoder model to achieve substantially lower perplexity scores compared to the previous language modelingbased method.", "title": "" }, { "docid": "bb799a3aac27f4ac764649e1f58ee9fb", "text": "White grubs (larvae of Coleoptera: Scarabaeidae) are abundant in below-ground systems and can cause considerable damage to a wide variety of crops by feeding on roots. White grub populations may be controlled by natural enemies, but the predator guild of the European species is barely known. Trophic interactions within soil food webs are difficult to study with conventional methods. Therefore, a polymerase chain reaction (PCR)-based approach was developed to investigate, for the first time, a soil insect predator-prey system. Can, however, highly sensitive detection methods identify carrion prey in predators, as has been shown for fresh prey? Fresh Melolontha melolontha (L.) larvae and 1- to 9-day-old carcasses were presented to Poecilus versicolor Sturm larvae. Mitochondrial cytochrome oxidase subunit I fragments of the prey, 175, 327 and 387 bp long, were detectable in 50% of the predators 32 h after feeding. Detectability decreased to 18% when a 585 bp sequence was amplified. Meal size and digestion capacity of individual predators had no influence on prey detection. Although prey consumption was negatively correlated with cadaver age, carrion prey could be detected by PCR as efficiently as fresh prey irrespective of carrion age. This is the first proof that PCR-based techniques are highly efficient and sensitive, both in fresh and carrion prey detection. Thus, if active predation has to be distinguished from scavenging, then additional approaches are needed to interpret the picture of prey choice derived by highly sensitive detection methods.", "title": "" }, { "docid": "97adb3a003347f579706cd01a762bdc9", "text": "The Universal Serial Bus (USB) is an extremely popular interface standard for computer peripheral connections and is widely used in consumer Mass Storage Devices (MSDs). While current consumer USB MSDs provide relatively high transmission speed and are convenient to carry, the use of USB MSDs has been prohibited in many commercial and everyday environments primarily due to security concerns. Security protocols have been previously proposed and a recent approach for the USB MSDs is to utilize multi-factor authentication. This paper proposes significant enhancements to the three-factor control protocol that now makes it secure under many types of attacks including the password guessing attack, the denial-of-service attack, and the replay attack. The proposed solution is presented with a rigorous security analysis and practical computational cost analysis to demonstrate the usefulness of this new security protocol for consumer USB MSDs.", "title": "" } ]
scidocsrr
37daee87cefd6eabae129bc0df7338dd
Blockchain distributed ledger technologies for biomedical and health care applications
[ { "docid": "9e65315d4e241dc8d4ea777247f7c733", "text": "A long-standing focus on compliance has traditionally constrained development of fundamental design changes for Electronic Health Records (EHRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EHRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing—crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this paper is to expose, in preparation for field tests, a working prototype through which we analyze and discuss our approach and the potential for blockchain in health IT and research.", "title": "" }, { "docid": "8780b620d228498447c4f1a939fa5486", "text": "A new mechanism is proposed for securing a blockchain applied to contracts management such as digital rights management. This mechanism includes a new consensus method using a credibility score and creates a hybrid blockchain by alternately using this new method and proof-of-stake. This makes it possible to prevent an attacker from monopolizing resources and to keep securing blockchains.", "title": "" } ]
[ { "docid": "91c0bd1c3faabc260277c407b7c6af59", "text": "In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene rather than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution in this paper is introducing a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top 3 Convolutional Neural Networks (CNN) models in the feature extraction competitions and test their performance for autonomous driving. The experimental results showed that GoogLeNet performs the best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, and we refer to our algorithm as GoogLenet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle as compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles.", "title": "" }, { "docid": "45a098c09a3803271f218fafd4d951cd", "text": "Recent years have seen a tremendous increase in the demand for wireless bandwidth. To support this demand by innovative and resourceful use of technology, future communication systems will have to shift towards higher carrier frequencies. Due to the tight regulatory situation, frequencies in the atmospheric attenuation window around 300 GHz appear very attractive to facilitate an indoor, short range, ultra high speed THz communication system. In this paper, we investigate the influence of diffuse scattering at such high frequencies on the characteristics of the communication channel and its implications on the non-line-of-sight propagation path. The Kirchhoff approach is verified by an experimental study of diffuse scattering from randomly rough surfaces commonly encountered in indoor environments using a fiber-coupled terahertz time-domain spectroscopy system to perform angle- and frequency-dependent measurements. Furthermore, we integrate the Kirchhoff approach into a self-developed ray tracing algorithm to model the signal coverage of a typical office scenario.", "title": "" }, { "docid": "a9595ea31ebfe07ac9d3f7fccf0d1c05", "text": "The growing movement of biologically inspired design is driven in part by the need for sustainable development and in part by the recognition that nature could be a source of innovation. Biologically inspired design by definition entails cross-domain analogies from biological systems to problems in engineering and other design domains. However, the practice of biologically inspired design at present typically is ad hoc, with little systemization of either biological knowledge for the purposes of engineering design or the processes of transferring knowledge of biological designs to engineering problems. In this paper we present an intricate episode of biologically inspired engineering design that unfolded over an extended period of time. We then analyze our observations in terms of why, what, how, and when questions of analogy. 
This analysis contributes toward a content theory of creative analogies in the context of biologically inspired design.", "title": "" }, { "docid": "96363ec5134359b5bf7c8b67f67971db", "text": "Self-adaptive video games are important for rehabilitation at home. Recent works have explored different techniques with satisfactory results, but these make poor use of game design concepts like Challenge and Conservative Handling of Failure. A Dynamic Difficulty Adjustment with Help (DDA-Help) approach is presented as a new point of view for self-adaptive video games for rehabilitation. Procedural Content Generation (PCG) and automatic helpers are used to work differently on Conservative Handling of Failure and Challenge. An experiment with amblyopic children showed the proposal's effectiveness, increasing visual acuity by 2-3 levels following the Snellen Vision Test and improving the performance curve during the game time.", "title": "" }, { "docid": "6b19d08c9aa6ecfec27452a298353e1f", "text": "This paper presents recent developments in automatic vision-based technology. Use of this technology is increasing in the agriculture and fruit industry. An automatic fruit quality inspection system for sorting and grading of tomato fruit and defective tomato detection is discussed here. The main aim of this system is to replace the manual inspection system. This helps speed up the process, improve accuracy and efficiency, and reduce time. This system collects images from a camera which is placed on the conveyor belt. Then image processing is done to get the required features of fruits such as texture, color and size. Defective fruit is detected based on blob detection, color detection is done based on thresholding, and size detection is based on the binary image of the tomato. Sorting is done based on color and grading is done based on size.", "title": "" }, { "docid": "1d11060907f0a2c856fdda9152b107e5", "text": "NOTICE This report was prepared by Columbia University in the course of performing work contracted for and sponsored by the New York State Energy Research and Development Authority (hereafter \" NYSERDA \"). The opinions expressed in this report do not necessarily reflect those of NYSERDA or the State of New York, and reference to any specific product, service, process, or method does not constitute an implied or expressed recommendation or endorsement of it. Further, NYSERDA, the State of New York, and the contractor make no warranties or representations, expressed or implied, as to the fitness for particular purpose or merchantability of any product, apparatus, or service, or the usefulness, completeness, or accuracy of any processes, methods, or other information contained, described, disclosed, or referred to in this report. NYSERDA, the State of New York, and the contractor make no representation that the use of any product, apparatus, process, method, or other information will not infringe privately owned rights and will assume no liability for any loss, injury, or damage resulting from, or occurring in connection with, the use of information contained, described, disclosed, or referred to in this report. ABSTRACT A research project was conducted to develop a concrete material that contains recycled waste glass and reprocessed carpet fibers and would be suitable for precast concrete wall panels. Post-consumer glass and used carpets constitute major solid waste components. Therefore their beneficial use will reduce the pressure on scarce landfills and the associated costs to taxpayers. 
By identifying and utilizing the special properties of these recycled materials, it is also possible to produce concrete elements with improved esthetic and thermal insulation properties. Using recycled waste glass as substitute for natural aggregate in commodity products such as precast basement wall panels brings only modest economic benefits at best, because sand, gravel, and crushed stone are fairly inexpensive. However, if the esthetic properties of the glass are properly exploited, such as in building façade elements with architectural finishes, the resulting concrete panels can compete very effectively with other building materials such as natural stone. As for recycled carpet fibers, the intent of this project was to exploit their thermal properties in order to increase the thermal insulation of concrete wall panels. In this regard, only partial success was achieved, because commercially reprocessed carpet fibers improve the thermal properties of concrete only marginally, as compared with other methods, such as the use of …", "title": "" }, { "docid": "ba29af46fd410829c450eed631aa9280", "text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.", "title": "" }, { "docid": "2c39f8c440a89f72db8814e633cb5c04", "text": "There is increasing evidence that gardening provides substantial human health benefits. However, no formal statistical assessment has been conducted to test this assertion. Here, we present the results of a meta-analysis of research examining the effects of gardening, including horticultural therapy, on health. We performed a literature search to collect studies that compared health outcomes in control (before participating in gardening or non-gardeners) and treatment groups (after participating in gardening or gardeners) in January 2016. The mean difference in health outcomes between the two groups was calculated for each study, and then the weighted effect size determined both across all and sets of subgroup studies. Twenty-two case studies (published after 2001) were included in the meta-analysis, which comprised 76 comparisons between control and treatment groups. Most studies came from the United States, followed by Europe, Asia, and the Middle East. Studies reported a wide range of health outcomes, such as reductions in depression, anxiety, and body mass index, as well as increases in life satisfaction, quality of life, and sense of community. 
Meta-analytic estimates showed a significant positive effect of gardening on the health outcomes both for all and sets of subgroup studies, whilst effect sizes differed among eight subgroups. Although Egger's test indicated the presence of publication bias, significant positive effects of gardening remained after adjusting for this using trim and fill analysis. This study has provided robust evidence for the positive effects of gardening on health. A regular dose of gardening can improve public health.", "title": "" }, { "docid": "b2f1ec4d8ac0a8447831df4287271c35", "text": "We present a new, robust and computationally efficient Hierarchical Bayesian model for effective topic correlation modeling. We model the prior distribution of topics by a Generalized Dirichlet distribution (GD) rather than a Dirichlet distribution as in Latent Dirichlet Allocation (LDA). We define this model as GD-LDA. This framework captures correlations between topics, as in the Correlated Topic Model (CTM) and Pachinko Allocation Model (PAM), and is faster to infer than CTM and PAM. GD-LDA is effective to avoid over-fitting as the number of topics is increased. As a tree model, it accommodates the most important set of topics in the upper part of the tree based on their probability mass. Thus, GD-LDA provides the ability to choose significant topics effectively. To discover topic relationships, we perform hyper-parameter estimation based on Monte Carlo EM Estimation. We provide results using Empirical Likelihood(EL) in 4 public datasets from TREC and NIPS. Then, we present the performance of GD-LDA in ad hoc information retrieval (IR) based on MAP, P@10, and Discounted Gain. We discuss an empirical comparison of the fitting time. We demonstrate significant improvement over CTM, LDA, and PAM for EL estimation. For all the IR measures, GD-LDA shows higher performance than LDA, the dominant topic model in IR. All these improvements with a small increase in fitting time than LDA, as opposed to CTM and PAM.", "title": "" }, { "docid": "5c05ad44ac2bf3fb26cea62d563435f8", "text": "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.", "title": "" }, { "docid": "c4387f3c791acc54d0a0655221947c8b", "text": "An emerging Internet application, IPTV, has the potential to flood Internet access and backbone ISPs with massive amounts of new traffic. 
Although many architectures are possible for IPTV video distribution, several mesh-pull P2P architectures have been successfully deployed on the Internet. In order to gain insights into mesh-pull P2P IPTV systems and the traffic loads they place on ISPs, we have undertaken an in-depth measurement study of one of the most popular IPTV systems, namely, PPLive. We have developed a dedicated PPLive crawler, which enables us to study the global characteristics of the mesh-pull PPLive system. We have also collected extensive packet traces for various different measurement scenarios, including both campus access networks and residential access networks. The measurement results obtained through these platforms bring important insights into P2P IPTV systems. Specifically, our results show the following. 1) P2P IPTV users have similar viewing behaviors to regular TV users. 2) During its session, a peer exchanges video data dynamically with a large number of peers. 3) A small set of super peers act as video proxies and contribute significantly to video data uploading. 4) Users in the measured P2P IPTV system still suffer from long start-up delays and playback lags, ranging from several seconds to a couple of minutes. Insights obtained in this study will be valuable for the development and deployment of future P2P IPTV systems.", "title": "" }, { "docid": "31c0dc8f0a839da9260bb9876f635702", "text": "The application of a recently developed broadband beamformer to distinguish audio signals received from different directions is experimentally tested. The beamformer combines spatial and temporal subsampling using a nested array and multirate techniques, which leads to the same region of support in the frequency domain for all subbands. This allows using the same beamformer for all subbands. The experimental set-up is presented and the recorded signals are analyzed. Results indicate that the proposed approach can be used to distinguish plane waves propagating with different directions of arrival.", "title": "" }, { "docid": "7f6b4a74f88d5ae1a4d21948aac2e260", "text": "The PEP-R (psychoeducational profile revised) is an instrument that has been used in many countries to assess abilities and formulate treatment programs for children with autism and related developmental disorders. To provide further information on the PEP-R's psychometric properties, a large sample (N = 137) of children presenting Autistic Disorder symptoms under the age of 12 years, including low-functioning individuals, was examined. Results yielded data of interest especially in terms of: Cronbach's alpha, interrater reliability, and validation with the Vineland Adaptive Behavior Scales. These findings help complete the instrument's statistical description and augment its usefulness, not only in designing treatment programs for these individuals, but also as an instrument for verifying the efficacy of intervention.", "title": "" }, { "docid": "a81e4507632505b64f4839a1a23fa440", "text": "Pro Unity Game Development with C#, Alan Thorn. In Pro Unity Game Development with C#, Alan Thorn, author of Learn Unity for 2D Game Development and experienced game developer, takes you through the complete C# workflow for developing a cross-platform first person shooter in Unity. C# is the most popular programming language for experienced Unity developers, helping them get the most out of what Unity offers. 
If you’re already using C# with Unity and you want to take the next step in becoming an experienced, professional-level game developer, this is the book you need. Whether you are a student, an indie developer, or a seasoned game dev professional, you’ll find helpful C# examples of how to build intelligent enemies, create event systems and GUIs, develop save-game states, and lots more. You’ll understand and apply powerful programming concepts such as singleton classes, component based design, resolution independence, delegates, and event driven programming.", "title": "" }, { "docid": "45f1964932b06f23b7b0556bfb4d2d24", "text": "We present a real-time deep learning framework for video-based facial performance capture---the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5--10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject. Since this 3D facial performance capture is fully automated, our system can drastically reduce the amount of labor involved in the development of modern narrative-driven video games or films involving realistic digital doubles of actors and potentially hours of animated dialogue per character. We compare our results with several state-of-the-art monocular real-time facial capture techniques and demonstrate compelling animation inference in challenging areas such as eyes and lips.", "title": "" }, { "docid": "66cde02bdf134923ca7ef3ec5c4f0fb8", "text": "Mobile wellness applications are widely used for assisting self-monitoring practice to monitor users' daily food intake and physical activities. Although these mostly free, downloadable mobile applications are easy to use and cover many aspects of wellness routines, there is no proof of prolonged use. Previous research reported that users stop using the application and turn back to their old food consumption habits. The purpose of this study is to examine the factors that influence the continuance intention to adopt a mobile phone wellness application. A review of the Information System Continuance Model in areas such as mobile health, mobile phone wellness applications, social networks and Web 2.0 was done to examine the existing factors. 
From the critical review, two external factors, namely Social Norm and Perceived Interactivity, are believed to have the ability to explain the social perspective of behavior and also the effect of perceived interactivity on prolonged usage of wellness mobile applications. These findings contribute to the development of the Mobile Phone Wellness Application Continuance Use theoretical model.", "title": "" }, { "docid": "3cdca28361b7c2b9525b476e9073fc10", "text": "The proliferation of MP3 players and the exploding amount of digital music content call for novel ways of music organization and retrieval to meet the ever-increasing demand for easy and effective information access. As almost every music piece is created to convey emotion, music organization and retrieval by emotion is a reasonable way of accessing music information. A good deal of effort has been made in the music information retrieval community to train a machine to automatically recognize the emotion of a music signal. A central issue of machine recognition of music emotion is the conceptualization of emotion and the associated emotion taxonomy. Different viewpoints on this issue have led to the proposal of different ways of emotion annotation, model training, and result visualization. This article provides a comprehensive review of the methods that have been proposed for music emotion recognition. Moreover, as music emotion recognition is still in its infancy, there are many open issues. We review the solutions that have been proposed to address these issues and conclude with suggestions for further research.", "title": "" }, { "docid": "89e88b92adc44176f0112a66ec92515a", "text": "Computer programming is being introduced in schools worldwide as part of a movement that promotes Computational Thinking (CT) skills among young learners. In general, learners use visual, block-based programming languages to acquire these skills, with Scratch being one of the most popular ones. Similar to professional developers, learners also copy and paste their code, resulting in duplication. In this paper we present the findings of correlating the assessment of the CT skills of learners with the presence of software clones in over 230,000 projects obtained from the Scratch platform. Specifically, we investigate i) if software cloning is an extended practice in Scratch projects, ii) if the presence of code cloning is independent of the programming mastery of learners, iii) if code cloning can be found more frequently in Scratch projects that require specific skills (as parallelism or logical thinking), and iv) if learners who have the skills to avoid software cloning really do so. The results show that i) software cloning can be commonly found in Scratch projects, that ii) it becomes more frequent as learners work on projects that require advanced skills, that iii) no CT dimension is to be found more related to the absence of software clones than others, and iv) that learners -even if they potentially know how to avoid cloning- still copy and paste frequently. The insights from this paper could be used by educators and learners to determine when it is pedagogically more effective to address software cloning, by educational programming platform developers to adapt their systems, and by learning assessment tools to provide better evaluations.", "title": "" }, { "docid": "e8215231e8eb26241d5ac8ac5be4b782", "text": "This research is on the use of a decision tree approach for predicting students' academic performance. 
Education is the platform on which a society improves the quality of its citizens. To improve on the quality of education, there is a need to be able to predict the academic performance of students. The IBM Statistical Package for the Social Sciences (SPSS) is used to apply the Chi-Square Automatic Interaction Detection (CHAID) in producing the decision tree structure. Factors such as the financial status of the students, motivation to learn, and gender were discovered to affect the performance of the students. 66.8% of the students were predicted to have passed while 33.2% were predicted to fail. It is observed that a much larger percentage of the students were likely to pass and there is also a higher likelihood of male students passing than female students.", "title": "" } ]
scidocsrr
5e7c2be0d66e726a1d4bd7d249df0187
Psychopathic Personality: Bridging the Gap Between Scientific Evidence and Public Policy.
[ { "docid": "32b5458ced294a01654f3747273db08d", "text": "Prior studies of childhood aggression have demonstrated that, as a group, boys are more aggressive than girls. We hypothesized that this finding reflects a lack of research on forms of aggression that are relevant to young females rather than an actual gender difference in levels of overall aggressiveness. In the present study, a form of aggression hypothesized to be typical of girls, relational aggression, was assessed with a peer nomination instrument for a sample of 491 third-through sixth-grade children. Overt aggression (i.e., physical and verbal aggression as assessed in past research) and social-psychological adjustment were also assessed. Results provide evidence for the validity and distinctiveness of relational aggression. Further, they indicated that, as predicted, girls were significantly more relationally aggressive than were boys. Results also indicated that relationally aggressive children may be at risk for serious adjustment difficulties (e.g., they were significantly more rejected and reported significantly higher levels of loneliness, depression, and isolation relative to their nonrelationally aggressive peers).", "title": "" } ]
[ { "docid": "d364aaa161cc92e28697988012c35c2a", "text": "Many people believe that information that is stored in long-term memory is permanent, citing examples of \"retrieval techniques\" that are alleged to uncover previously forgotten information. Such techniques include hypnosis, psychoanalytic procedures, methods for eliciting spontaneous and other conscious recoveries, and—perhaps most important—the electrical stimulation of the brain reported by Wilder Penfield and his associates. In this article we first evaluate • the evidence and conclude that, contrary to apparent popular belief, the evidence in no way confirms the view that all memories are permanent and thus potentially recoverable. We then describe some failures that resulted from attempts to elicit retrieval of previously stored information and conjecture what circumstances might cause information stored in memory to be irrevocably destroyed. Few would deny the existence of a phenomenon called \"forgetting,\" which is evident in the common observation that information becomes less available as the interval increases between the time of the information's initial acquisition and the time of its attempted retrieval. Despite the prevalence of the phenomenon, the factors that underlie forgetting have proved to be rather elusive, and the literature abounds with hypothesized mechanisms to account for the observed data. In this article we shall focus our attention on what is perhaps the fundamental issue concerning forgetting; Does forgetting consist of an actual loss of stored information, or does it result from a loss of access to information, which, once stored, remains forever? It should be noted at the outset that this question may be impossible to resolve in an absolute sense. Consider the following thought experiment. A person (call him Geoffrey) observes some event, say a traffic accident. During the period of observation, a movie camera strapped to Geoffrey's head records the event as Geoffrey experiences it. Some time later, Geoffrey attempts to recall and Vol. 35, No. S, 409-420 describe the event with the aid of some retrieval technique (e.g., hypnosis or brain stimulation), which is alleged to allow recovery of any information stored in his brain. While Geoffrey describes the event, a second person (Elizabeth) watches the movie that has been made of the event. Suppose, now, that Elizabeth is unable to decide whether Geoffrey is describing his memory or the movie—in other words, memory and movie are indistinguishable. Such a finding would constitute rather impressive support for the position held by many people that the mind registers an accurate representation of reality and that this information is stored permanently. But suppose, on the other hand, that Geoffrey's report—even with the aid of the miraculous retrieval technique—is incomplete, sketchy, and inaccurate, and furthermore, suppose that the accuracy of his report deteriorates over time. Such a finding, though consistent with the view that forgetting consists of information loss, would still be inconclusive, because it could be argued that the retrieval technique—no matter what it was— was simply not good enough to disgorge the information, which remained buried somewhere in the recesses of Geoffrey's brain. Thus, the question of information loss versus This article was written while E. Loftus was a fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford, California, and G. 
Loftus was a visiting scholar in the Department of Psychology at Stanford University. James Fries generously picked apart an earlier version of this article. Paul Baltes translated the writings of Johann Nicolas Tetens (1777). The following financial sources are gratefully acknowledged: (a) National Science Foundation (NSF) Grant BNS 76-2337 to G. Loftus; (b) NSF Grant ENS 7726856 to E. Loftus; and (c) NSF Grant BNS 76-22943 and an Andrew Mellon Foundation grant to the Center for Advanced Study in the Behavioral Sciences. Requests for reprints should be sent to Elizabeth Loftus, Department of Psychology, University of Washington, Seattle, Washington 98195. retrieval failure may be unanswerable in principle. Nonetheless it often becomes necessary to choose sides. In the scientific arena, for example, a theorist constructing a model of memory may—depending on the details of the model—be forced to adopt one position or the other. In fact, several leading theorists have suggested that although loss from short-term memory does occur, once material is registered in long-term memory, the information is never lost from the system, although it may normally be inaccessible (Shiffrin & Atkinson, 1969; Tulving, 1974). The idea is not new, however. Two hundred years earlier, the German philosopher Johann Nicolas Tetens (1777) wrote: \"Each idea does not only leave a trace or a consequent of that trace somewhere in the body, but each of them can be stimulated—even if it is not possible to demonstrate this in a given situation\" (p. 751). He was explicit about his belief that certain ideas may seem to be forgotten, but that actually they are only enveloped by other ideas and, in truth, are \"always with us\" (p. 733). Apart from theoretical interest, the position one takes on the permanence of memory traces has important practical consequences. It therefore makes sense to air the issue from time to time, which is what we shall do here. The purpose of this paper is threefold. We shall first report some data bearing on people's beliefs about the question of information loss versus retrieval failure. To anticipate our findings, our survey revealed that a substantial number of the individuals queried take the position that stored information is permanent—or in other words, that all forgetting results from retrieval failure. In support of their answers, people typically cited data from some variant of the thought experiment described above, that is, they described currently available retrieval techniques that are alleged to uncover previously forgotten information. Such techniques include hypnosis, psychoanalytic procedures (e.g., free association), and—most important—the electrical stimulation of the brain reported by Wilder Penfield and his associates (Penfield, 1969; Penfield & Perot, 1963; Penfield & Roberts, 1959). The results of our survey lead to the second purpose of this paper, which is to evaluate this evidence. Finally, we shall describe some interesting failures that have resulted from attempts to elicit retrieval of previously stored information. These failures lend support to the contrary view that some memories are apparently modifiable, and that consequently they are probably unrecoverable. Beliefs About Memory In an informal survey, 169 individuals from various parts of the U.S. were asked to give their views about how memory works.
Of these, 75 had formal graduate training in psychology, while the remaining 94 did not. The nonpsychologists had varied occupations. For example, lawyers, secretaries, taxicab drivers, physicians, philosophers, fire investigators, and even an 11-year-old child participated. They were given this question: Which of these statements best reflects your view on how human memory works? 1. Everything we learn is permanently stored in the mind, although sometimes particular details are not accessible. With hypnosis, or other special techniques, these inaccessible details could eventually be recovered. 2. Some details that we learn may be permanently lost from memory. Such details would never be able to be recovered by hypnosis, or any other special technique, because these details are simply no longer there. Please elaborate briefly or give any reasons you may have for your view. We found that 84% of the psychologists chose Position 1, that is, they indicated a belief that all information in long-term memory is there, even though much of it cannot be retrieved; 14% chose Position 2, and 2% gave some other answer. A somewhat smaller percentage, 69%, of the nonpsychologists indicated a belief in Position 1; 23% chose Position 2, while 8% did not make a clear choice. What reasons did people give for their belief? The most common reason for choosing Position 1 was based on personal experience and involved the occasional recovery of an idea that the person had not thought about for quite some time. For example, one person wrote: \"I've experienced and heard too many descriptions of spontaneous recoveries of ostensibly quite trivial memories, which seem to have been triggered by just the right set of a person's experiences.\" A second reason for a belief in Position 1, commonly given by persons trained in psychology, was knowledge of the work of Wilder Penfield. One psychologist wrote: \"Even though Statement 1 is untestable, I think that evidence, weak though it is, such as Penfield's work, strongly suggests it may be correct.\" Occasionally respondents offered a comment about hypnosis, and more rarely about psychoanalysis and repression, sodium pentothal, or even reincarnation, to support their belief in the permanence of memory. Admittedly, the survey was informally conducted, the respondents were not selected randomly, and the question itself may have pressured people to take sides when their true belief may have been a position in between. Nevertheless, the results suggest a widespread belief in the permanence of memories and give us some idea of the reasons people offer in support of this belief.", "title": "" }, { "docid": "702df543119d648be859233bfa2b5d03", "text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hopfield neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension specifies the type of task performed by the algorithm: preprocessing, data reduction/feature extraction, segmentation, object recognition, image understanding and optimisation.
The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses specific constraints to a neural-based approach. These specific conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and specifically to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. © 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "ca807d3bed994a8e7492898e6bfe6dd2", "text": "This paper proposes state-of-charge (SOC) and remaining charge estimation algorithm of each cell in series-connected lithium-ion batteries. SOC and remaining charge information are indicators for diagnosing cell-to-cell variation; thus, the proposed algorithm can be applied to SOC- or charge-based balancing in cell balancing controller. Compared to voltage-based balancing, SOC and remaining charge information improve the performance of balancing circuit but increase computational complexity which is a stumbling block in implementation. In this work, a simple current sensor-less SOC estimation algorithm with estimated current equalizer is used to achieve aforementioned object. To check the characteristics and validate the feasibility of the proposed method, a constant current discharging/charging profile is applied to a series-connected battery pack (twelve 2.6Ah Li-ion batteries). The experimental results show its applicability to SOC- and remaining charge-based balancing controller with high estimation accuracy.", "title": "" }, { "docid": "1bf43801d05551f376464d08893b211c", "text": "A Large number of digital text information is generated every day. Effectively searching, managing and exploring the text data has become a main task. In this paper, we first represent an introduction to text mining and a probabilistic topic model Latent Dirichlet allocation. Then two experiments are proposed Wikipedia articles and users’ tweets topic modelling. The former one builds up a document topic model, aiming to a topic perspective solution on searching, exploring and recommending articles. The latter one sets up a user topic model, providing a full research and analysis over Twitter users’ interest. The experiment process including data collecting, data pre-processing and model training is fully documented and commented. Further more, the conclusion and application of this paper could be a useful computation tool for social and business research.", "title": "" }, { "docid": "e85e8b54351247d5f20bf1756a133a08", "text": "In high speed ADC, comparator influences the overall performance of ADC directly. This paper describes a very high speed and high resolution preamplifier comparator. The comparator use a self biased differential amp to increase the output current sinking and sourcing capability. The threshold and width of the new comparator can be reduced to the millivolt (mV) range, the resolution and the dynamic characteristics are good. Based on UMC 0.18um CMOS process model, simulated results show the comparator can work under a 25dB gain, 55MHz speed and 210.
10μW power .", "title": "" }, { "docid": "7e38ba11e394acd7d5f62d6a11253075", "text": "The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory-motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole-body, and a postural task, namely maintaining overall stability.", "title": "" }, { "docid": "b5cc41f689a1792b544ac66a82152993", "text": "0020-7225/$ see front matter 2009 Elsevier Ltd doi:10.1016/j.ijengsci.2009.08.001 * Corresponding author. Tel.: +66 2 9869009x220 E-mail address: thanan@siit.tu.ac.th (T. Leephakp Nowadays, Pneumatic Artificial Muscle (PAM) has become one of the most widely-used fluid-power actuators which yields remarkable muscle-like properties such as high force to weight ratio, soft and flexible structure, minimal compressed-air consumption and low cost. To obtain optimum design and usage, it is necessary to understand mechanical behaviors of the PAM. In this study, the proposed models are experimentally derived to describe mechanical behaviors of the PAMs. The experimental results show a non-linear relationship between contraction as well as air pressure within the PAMs and a pulling force of the PAMs. Three different sizes of PAMs available in industry are studied for empirical modeling and simulation. The case studies are presented to verify close agreement on the simulated results to the experimental results when the PAMs perform under various loads. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "174fb8b7cb0f45bed49a50ce5ad19c88", "text": "De-noising and extraction of the weak signature are crucial to fault prognostics in which case features are often very weak and masked by noise. The wavelet transform has been widely used in signal de-noising due to its extraordinary time-frequency representation capability. In this paper, the performance of wavelet decomposition-based de-noising and wavelet filter-based de-noising methods are compared based on signals from mechanical defects. The comparison result reveals that wavelet filter is more suitable and reliable to detect a weak signature of mechanical impulse-like defect signals, whereas the wavelet decomposition de-noising method can achieve satisfactory results on smooth signal detection. In order to select optimal parameters for the wavelet filter, a two-step optimization process is proposed. 
Minimal Shannon entropy is used to optimize the Morlet wavelet shape factor. A periodicity detection method based on singular value decomposition (SVD) is used to choose the appropriate scale for the wavelet transform. The signal de-noising results from both simulated signals and experimental data are presented and both support the proposed method. r 2005 Elsevier Ltd. All rights reserved. see front matter r 2005 Elsevier Ltd. All rights reserved. jsv.2005.03.007 ding author. Tel.: +1 414 229 3106; fax: +1 414 229 3107. resses: haiqiu@uwm.edu (H. Qiu), jaylee@uwm.edu (J. Lee), jinglin@mail.ioc.ac.cn (J. Lin).", "title": "" }, { "docid": "63f20dd528d54066ed0f189e4c435fe7", "text": "In many specific laboratories the students use only a PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts, in the laboratory works. The hardware part of solution consists in an old plotter, an adapter board, a PLC and a HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be made very easy and can be used in university labs by students, so that they design and test their applications, from low to high complexity [1], [2].", "title": "" }, { "docid": "363a465d626fec38555563722ae92bb1", "text": "A novel reverse-conducting insulated-gate bipolar transistor (RC-IGBT) featuring an oxide trench placed between the n-collector and the p-collector and a floating p-region (p-float) sandwiched between the n-drift and n-collector is proposed. First, the new structure introduces a high-resistance collector short resistor at low current density, which leads to the suppression of the snapback effect. Second, the collector short resistance can be adjusted by varying the p-float length without increasing the collector cell length. Third, the p-float layer also acts as the base of the n-collector/p-float/n-drift transistor which can be activated and offers a low-resistance current path at high current densities, which contributes to the low on-state voltage of the integrated freewheeling diode and the fast turnoff. As simulations show, the proposed RC-IGBT shows snapback-free output characteristics and faster turnoff compared with the conventional RC-IGBT.", "title": "" }, { "docid": "3dfb419706ae85d232753a085dc145f7", "text": "This chapter describes the different steps of designing, building, simulating, and testing an intelligent flight control module for an increasingly popular unmanned aerial vehicle (UAV), known as a quadrotor. It presents an in-depth view of the modeling of the kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a challenging control problem due to its highly unstable nature. An effective control methodology is therefore needed for such a unique airborne vehicle. The chapter starts with a brief overview on the quadrotor's background and its applications, in light of its advantages. Comparisons with other UAVs are made to emphasize the versatile capabilities of this special design. For a better understanding of the vehicle's behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the equations of motion, which are used later as a guideline for developing the proposed intelligent flight control scheme. In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor. 
It has been witnessed that fuzzy logic control offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and modeling uncertainties. Two types of fuzzy inference engines are employed in the design of the flight controller, each of which is explained and evaluated. For testing the designed intelligent flight controller, a simulation environment was first developed. The simulations were made as realistic as possible by incorporating environmental disturbances such as wind gust and the ever-present sensor noise. The proposed controller was then tested on a real test-bed built specifically for this project. Both the simulator and the real quadrotor were later used for conducting different attitude stabilization experiments to evaluate the performance of the proposed control strategy. The controller's performance was also benchmarked against conventional control techniques such as input-output linearization, backstepping and sliding mode control strategies. Conclusions were then drawn based on the conducted experiments and their results.", "title": "" }, { "docid": "50906e5d648b7598c307b09975daf2d8", "text": "Digitization forces industries to adapt to changing market conditions and consumer behavior. Exponential advances in technology, increased consumer power and sharpened competition imply that companies are facing the menace of commoditization. To sustainably succeed in the market, obsolete business models have to be adapted and new business models can be developed. Differentiation and unique selling propositions through innovation as well as holistic stakeholder engagement help companies to master the transformation. To enable companies and start-ups facing the implications of digital change, a tool was created and designed specifically for this demand: the Business Model Builder. This paper investigates the process of transforming the Business Model Builder into a software-supported digitized version. The digital twin allows companies to simulate the iterative adjustment of business models to constantly changing market conditions as well as customer needs on an ongoing basis. The user can modify individual variables, understand interdependencies and see the impact on the result of the business case, i.e. earnings before interest and taxes (EBIT) or economic value added (EVA). The simulation of a business models accordingly provides the opportunity to generate a dynamic view of the business model where any changes of input variables are considered in the result, the business case. Thus, functionality, feasibility and profitability of a business model can be reviewed, tested and validated in the digital simulation tool.", "title": "" }, { "docid": "48eacd86c14439454525e5a570db083d", "text": "RATIONALE, AIMS AND OBJECTIVES\nTotal quality in coagulation testing is a necessary requisite to achieve clinically reliable results. Evidence was provided that poor standardization in the extra-analytical phases of the testing process has the greatest influence on test results, though little information is available so far on prevalence and type of pre-analytical variability in coagulation testing.\n\n\nMETHODS\nThe present study was designed to describe all pre-analytical problems on inpatients routine and stat samples recorded in our coagulation laboratory over a 2-year period and clustered according to their source (hospital departments).\n\n\nRESULTS\nOverall, pre-analytic problems were identified in 5.5% of the specimens. 
Although the highest frequency was observed for paediatric departments, in no case was the comparison of the prevalence among the different hospital departments statistically significant. The more frequent problems could be referred to samples not received in the laboratory following a doctor's order (49.3%), haemolysis (19.5%), clotting (14.2%) and inappropriate volume (13.7%). Specimens not received prevailed in the intensive care unit, surgical and clinical departments, whereas clotted and haemolysed specimens were those most frequently recorded from paediatric and emergency departments, respectively. The present investigation demonstrates a high prevalence of pre-analytical problems affecting samples for coagulation testing.\n\n\nCONCLUSIONS\nFull implementation of a total quality system, encompassing a systematic error tracking system, is a valuable tool to achieve meaningful information on the local pre-analytic processes most susceptible to errors, enabling considerations on specific responsibilities and providing the ideal basis for an efficient feedback within the hospital departments.", "title": "" }, { "docid": "3f6cbad208a819fc8fc6a46208197d59", "text": "The use of visemes as atomic speech units in visual speech analysis and synthesis systems is well-established. Viseme labels are determined using a many-to-one phoneme-to-viseme mapping. However, due to the visual coarticulation effects, an accurate mapping from phonemes to visemes should define a many-to-many mapping scheme. In this research it was found that neither the use of standardized nor speaker-dependent many-to-one viseme labels could satisfy the quality requirements of concatenative visual speech synthesis. Therefore, a novel technique to define a many-to-many phoneme-to-viseme mapping scheme is introduced, which makes use of both treebased and k-means clustering approaches. We show that these many-to-many viseme labels more accurately describe the visual speech information as compared to both phoneme-based and many-toone viseme-based speech labels. In addition, we found that the use of these many-to-many visemes improves the precision of the segment selection phase in concatenative visual speech synthesis using limited speech databases. Furthermore, the resulting synthetic visual speech was both objectively and subjectively found to be of higher quality when the many-to-many visemes are used to describe the speech database as well as the synthesis targets.", "title": "" }, { "docid": "1afdefb31d7b780bb78b59ca8b0d3d8a", "text": "Convolutional Neural Network (CNN) is a very powerful approach to extract discriminative local descriptors for effective image search. Recent work adopts fine-tuned strategies to further improve the discriminative power of the descriptors. Taking a different approach, in this paper, we propose a novel framework to achieve competitive retrieval performance. Firstly, we propose various masking schemes, namely SIFT-mask, SUM-mask, and MAX-mask, to select a representative subset of local convolutional features and remove a large number of redundant features. We demonstrate that this can effectively address the burstiness issue and improve retrieval accuracy. Secondly, we propose to employ recent embedding and aggregating methods to further enhance feature discriminability. 
Extensive experiments demonstrate that our proposed framework achieves state-of-the-art retrieval accuracy.", "title": "" }, { "docid": "07348109c7838032850c039f9a463943", "text": "Ceramics are widely used biomaterials in prosthetic dentistry due to their attractive clinical properties. They are aesthetically pleasing with their color, shade and luster, and they are chemically stable. The main constituents of dental ceramic are Si-based inorganic materials, such as feldspar, quartz, and silica. Traditional feldspar-based ceramics are also referred to as “Porcelain”. The crucial difference between a regular ceramic and a dental ceramic is the proportion of feldspar, quartz, and silica contained in the ceramic. A dental ceramic is a multiphase system, i.e. it contains a dispersed crystalline phase surrounded by a continuous amorphous phase (a glassy phase). Modern dental ceramics contain a higher proportion of the crystalline phase that significantly improves the biomechanical properties of ceramics. Examples of these high crystalline ceramics include lithium disilicate and zirconia.", "title": "" }, { "docid": "affa48f455d5949564302b4c23324458", "text": "MicroRNAs (miRNAs) have within the past decade emerged as key regulators of metabolic homoeostasis. Major tissues in intermediary metabolism important during development of the metabolic syndrome, such as β-cells, liver, skeletal and heart muscle as well as adipose tissue, have all been shown to be affected by miRNAs. In the pancreatic β-cell, a number of miRNAs are important in maintaining the balance between differentiation and proliferation (miR-200 and miR-29 families) and insulin exocytosis in the differentiated state is controlled by miR-7, miR-375 and miR-335. MiR-33a and MiR-33b play crucial roles in cholesterol and lipid metabolism, whereas miR-103 and miR-107 regulates hepatic insulin sensitivity. In muscle tissue, a defined number of miRNAs (miR-1, miR-133, miR-206) control myofibre type switch and induce myogenic differentiation programmes. Similarly, in adipose tissue, a defined number of miRNAs control white to brown adipocyte conversion or differentiation (miR-365, miR-133, miR-455). The discovery of circulating miRNAs in exosomes emphasizes their importance as both endocrine signalling molecules and potentially disease markers. Their dysregulation in metabolic diseases, such as obesity, type 2 diabetes and atherosclerosis stresses their potential as therapeutic targets. This review emphasizes current ideas and controversies within miRNA research in metabolism.", "title": "" }, { "docid": "2795c78d2e81a064173f49887c9b1bb1", "text": "This paper reports a continuously tunable lumped bandpass filter implemented in a third-order coupled resonator configuration. The filter is fabricated on a Borosilicate glass substrate using a surface micromachining technology that offers hightunable passive components. Continuous electrostatic tuning is achieved using three tunable capacitor banks, each consisting of one continuously tunable capacitor and three switched capacitors with pull-in voltage of less than 40 V. The center frequency of the filter is tuned from 1 GHz down to 600 MHz while maintaining a 3-dB bandwidth of 13%-14% and insertion loss of less than 4 dB. The maximum group delay is less than 10 ns across the entire tuning range. The temperature stability of the center frequency from -50°C to 50°C is better than 2%. 
The measured tuning speed of the filter is better than 80 s, and the is better than 20 dBm, which are in good agreement with simulations. The filter occupies a small size of less than 1.5 cm × 1.1 cm. The implemented filter shows the highest performance amongst the fully integrated microelectromechanical systems filters operating at sub-gigahertz range.", "title": "" }, { "docid": "fd7c514e8681a5292bcbf2bbf6e75664", "text": "In modern days, a large no of automobile accidents are caused due to driver fatigue. To address the problem we propose a vision-based real-time driver fatigue detection system based on eye-tracking, which is an active safety system. Eye tracking is one of the key technologies, for, future driver assistance systems since human eyes contain much information about the driver's condition such as gaze, attention level, and fatigue level. Face and eyes of the driver are first localized and then marked in every frame obtained from the video source. The eyes are tracked in real time using correlation function with an automatically generated online template. Additionally, driver’s distraction and conversations with passengers during driving can lead to serious results. A real-time vision-based model for monitoring driver’s unsafe states, including fatigue state is proposed. A time-based eye glance to mitigate driver distraction is proposed. Keywords— Driver fatigue, Eye-Tracking, Template matching,", "title": "" } ]
scidocsrr
f279df399f50407436670d9821df0891
Training with Exploration Improves a Greedy Stack LSTM Parser
[ { "docid": "b5f7511566b902bc206228dc3214c211", "text": "In the imitation learning paradigm algorithms learn from expert demonstrations in order to become able to accomplish a particular task. Daumé III et al. (2009) framed structured prediction in this paradigm and developed the search-based structured prediction algorithm (Searn) which has been applied successfully to various natural language processing tasks with state-of-the-art performance. Recently, Ross et al. (2011) proposed the dataset aggregation algorithm (DAgger) and compared it with Searn in sequential prediction tasks. In this paper, we compare these two algorithms in the context of a more complex structured prediction task, namely biomedical event extraction. We demonstrate that DAgger has more stable performance and faster learning than Searn, and that these advantages are more pronounced in the parameter-free versions of the algorithms.", "title": "" } ]
[ { "docid": "73270e8140d763510d97f7bd2fdd969e", "text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.", "title": "" }, { "docid": "a0db56f55e2d291cb7cf871c064cf693", "text": "It's being very important to listen to social media streams whether it's Twitter, Facebook, Messenger, LinkedIn, email or even company own application. As many customers may be using this streams to reach out to company because they need help. The company have setup social marketing team to monitor this stream. But due to huge volumes of users it's very difficult to analyses each and every social message and take a relevant action to solve users grievances, which lead to many unsatisfied customers or may even lose a customer. This papers proposes a system architecture which will try to overcome the above shortcoming by analyzing messages of each ejabberd users to check whether it's actionable or not. If it's actionable then an automated Chatbot will initiates conversation with that user and help the user to resolve the issue by providing a human way interactions using LUIS and cognitive services. To provide a highly robust, scalable and extensible architecture, this system is implemented on AWS public cloud.", "title": "" }, { "docid": "fe0120f7d74ad63dbee9c3cd5ff81e6f", "text": "Background: Software fault prediction is the process of developing models that can be used by the software practitioners in the early phases of software development life cycle for detecting faulty constructs such as modules or classes. There are various machine learning techniques used in the past for predicting faults. Method: In this study we perform a systematic review studies from January 1991 to October 2013 in the literature that use the machine learning techniques for software fault prediction. We assess the performance capability of the machine learning techniques in existing research for software fault prediction. 
We also compare the performance of the machine learning techniques with the", "title": "" }, { "docid": "4e8040c9336cf7d847d938b905f8f81d", "text": "Many cluster management systems (CMSs) have been proposed to share a single cluster with multiple distributed computing systems. However, none of the existing approaches can handle distributed machine learning (ML) workloads given the following criteria: high resource utilization, fair resource allocation and low sharing overhead. To solve this problem, we propose a new CMS named Dorm, incorporating a dynamically-partitioned cluster management mechanism and an utilization-fairness optimizer. Specifically, Dorm uses the container-based virtualization technique to partition a cluster, runs one application per partition, and can dynamically resize each partition at application runtime for resource efficiency and fairness. Each application directly launches its tasks on the assigned partition without petitioning for resources frequently, so Dorm imposes flat sharing overhead. Extensive performance evaluations showed that Dorm could simultaneously increase the resource utilization by a factor of up to 2.32, reduce the fairness loss by a factor of up to 1.52, and speed up popular distributed ML applications by a factor of up to 2.72, compared to existing approaches. Dorm's sharing overhead is less than 5% in most cases.", "title": "" }, { "docid": "f5a934dc200b27747d3452f5a14c24e5", "text": "Psoriasis vulgaris is a common and often chronic inflammatory skin disease. The incidence of psoriasis in Western industrialized countries ranges from 1.5% to 2%. Patients afflicted with severe psoriasis vulgaris may experience a significant reduction in quality of life. Despite the large variety of treatment options available, surveys have shown that patients still do not received optimal treatments. To optimize the treatment of psoriasis in Germany, the Deutsche Dermatologi sche Gesellschaft (DDG) and the Berufsverband Deutscher Dermatologen (BVDD) have initiated a project to develop evidence-based guidelines for the management of psoriasis. They were first published in 2006 and updated in 2011. The Guidelines focus on induction therapy in cases of mild, moderate and severe plaque-type psoriasis in adults including systemic therapy, UV therapy and topical therapies. The therapeutic recommendations were developed based on the results of a systematic literature search and were finalized during a consensus meeting using structured consensus methods (nominal group process).", "title": "" }, { "docid": "da986950f6bbad36de5e9cc55d04e798", "text": "Digital information is accumulating at an astounding rate, straining our ability to store and archive it. DNA is among the most dense and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing.", "title": "" }, { "docid": "d1f02e2f57cffbc17387de37506fddc9", "text": "The task of matching patterns in graph-structured data has applications in such diverse areas as computer vision, biology, electronics, computer aided design, social networks, and intelligence analysis. Consequently, work on graph-based pattern matching spans a wide range of research communities. 
Due to variations in graph characteristics and application requirements, graph matching is not a single problem, but a set of related problems. This paper presents a survey of existing work on graph matching, describing variations among problems, general and specific solution approaches, evaluation techniques, and directions for further research. An emphasis is given to techniques that apply to general graphs with semantic characteristics.", "title": "" }, { "docid": "b0b2c4c321b5607cd6ebda817258921d", "text": "In recent years, classification of colon biopsy images has become an active research area. Traditionally, colon cancer is diagnosed using microscopic analysis. However, the process is subjective and leads to considerable inter/intra observer variation. Therefore, reliable computer-aided colon cancer detection techniques are in high demand. In this paper, we propose a colon biopsy image classification system, called CBIC, which benefits from discriminatory capabilities of information rich hybrid feature spaces, and performance enhancement based on ensemble classification methodology. Normal and malignant colon biopsy images differ with each other in terms of the color distribution of different biological constituents. The colors of different constituents are sharp in normal images, whereas the colors diffuse with each other in malignant images. In order to exploit this variation, two feature types, namely color components based statistical moments (CCSM) and Haralick features have been proposed, which are color components based variants of their traditional counterparts. Moreover, in normal colon biopsy images, epithelial cells possess sharp and well-defined edges. Histogram of oriented gradients (HOG) based features have been employed to exploit this information. Different combinations of hybrid features have been constructed from HOG, CCSM, and Haralick features. The minimum Redundancy Maximum Relevance (mRMR) feature selection method has been employed to select meaningful features from individual and hybrid feature sets. Finally, an ensemble classifier based on majority voting has been proposed, which classifies colon biopsy images using the selected features. Linear, RBF, and sigmoid SVM have been employed as base classifiers. The proposed system has been tested on 174 colon biopsy images, and improved performance (=98.85%) has been observed compared to previously reported studies. Additionally, the use of mRMR method has been justified by comparing the performance of CBIC on original and reduced feature sets.", "title": "" }, { "docid": "0f9ef379901c686df08dd0d1bb187e22", "text": "This paper studies the minimum achievable source coding rate as a function of blocklength <i>n</i> and probability ϵ that the distortion exceeds a given level <i>d</i> . Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. 
For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by R(d) + √(V(d)/n) Q⁻¹(ϵ), where R(d) is the rate-distortion function, V(d) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and Q⁻¹(·) is the inverse of the standard Gaussian complementary cumulative distribution function.", "title": "" }, { "docid": "1348ee3316643f4269311b602b71d499", "text": "This paper describes our proposed solution for SemEval 2017 Task 1: Semantic Textual Similarity (Daniel Cer and Specia, 2017). The task aims at measuring the degree of equivalence between sentences given in English. Performance is evaluated by computing Pearson Correlation scores between the predicted scores and human judgements. Our proposed system consists of two subsystems and one regression model for predicting STS scores. The two subsystems are designed to learn Paraphrase and Event Embeddings that can take the consideration of paraphrasing characteristics and sentence structures into our system. The regression model associates these embeddings to make the final predictions. The experimental result shows that our system acquires 0.8 of Pearson Correlation Scores in this task.", "title": "" }, { "docid": "49717f07b8b4a3da892c1bb899f7a464", "text": "Single cells were recorded in the visual cortex of monkeys trained to attend to stimuli at one location in the visual field and ignore stimuli at another. When both locations were within the receptive field of a cell in prestriate area V4 or the inferior temporal cortex, the response to the unattended stimulus was dramatically reduced. Cells in the striate cortex were unaffected by attention. The filtering of irrelevant information from the receptive fields of extrastriate neurons may underlie the ability to identify and remember the properties of a particular object out of the many that may be represented on the retina.", "title": "" }, { "docid": "6421979368a138e4b21ab7d9602325ff", "text": "In recent years, despite several risk management models proposed by different researchers, software projects still have a high degree of failures. Improper risk assessment during software development was the major reason behind these unsuccessful projects as risk analysis was done on overall projects. This work attempts in identifying key risk factors and risk types for each of the development phases of SDLC, which would help in identifying the risks at a much early stage of development.", "title": "" }, { "docid": "d76b7b25bce29cdac24015f8fa8ee5bb", "text": "A circularly polarized magnetoelectric dipole antenna with high efficiency based on printed ridge gap waveguide is presented. The antenna gain is improved by using a wideband lens in front of the antennas. The lens consists of three layers dual-polarized mu-near zero (MNZ) inclusions. Each layer consists of a 3 × 4 MNZ unit cell. The measured results indicate that the magnitude of S11 is below −10 dB in the frequency range of 29.5–37 GHz. The resulting 3-dB axial ratio is over a frequency range of 32.5–35 GHz.
The measured realized gain of the antenna is more than 10 dBi over a frequency band of 31–35 GHz achieving a radiation efficiency of 94% at 34 GHz.", "title": "" }, { "docid": "3fa30df910c964bb2bf27a885aa59495", "text": "In an Intelligent Environment, he user and the environment work together in a unique manner; the user expresses what he wishes to do, and the environment recognizes his intentions and helps out however it can. If well-implemented, such an environment allows the user to interact with it in the manner that is most natural for him personally. He should need virtually no time to learn to use it and should be more productive once he has. But to implement a useful and natural Intelligent Environment, he designers are faced with a daunting task: they must design a software system that senses what its users do, understands their intentions, and then responds appropriately. In this paper we argue that, in order to function reasonably in any of these ways, an Intelligent Environment must make use of declarative representations of what the user might do. We present our evidence in the context of the Intelligent Classroom, a facility that aids a speaker in this way and uses its understanding to produce a video of his presentation.", "title": "" }, { "docid": "5b07bc318cb0f5dd7424cdcc59290d31", "text": "The current practice used in the design of physical interactive products (such as handheld devices), often suffers from a divide between exploration of form and exploration of interactivity. This can be attributed, in part, to the fact that working prototypes are typically expensive, take a long time to manufacture, and require specialized skills and tools not commonly available in design studios.We have designed a prototyping tool that, we believe, can significantly reduce this divide. The tool allows designers to rapidly create functioning, interactive, physical prototypes early in the design process using a collection of wireless input components (buttons, sliders, etc.) and a sketch of form. The input components communicate with Macromedia Director to enable interactivity.We believe that this tool can improve the design practice by: a) Improving the designer's ability to explore both the form and interactivity of the product early in the design process, b) Improving the designer's ability to detect problems that emerge from the combination of the form and the interactivity, c) Improving users' ability to communicate their ideas, needs, frustrations and desires, and d) Improving the client's understanding of the proposed design, resulting in greater involvement and support for the design.", "title": "" }, { "docid": "ae3d959972d673d24e6d0b7a0567323e", "text": "Traditional data on influenza vaccination has several limitations: high cost, limited coverage of underrepresented groups, and low sensitivity to emerging public health issues. Social media, such as Twitter, provide an alternative way to understand a population’s vaccination-related opinions and behaviors. In this study, we build and employ several natural language classifiers to examine and analyze behavioral patterns regarding influenza vaccination in Twitter across three dimensions: temporality (by week and month), geography (by US region), and demography (by gender). Our best results are highly correlated official government data, with a correlation over 0.90, providing validation of our approach. 
We then suggest a number of directions for future work.", "title": "" }, { "docid": "ff4c069ab63ced5979cf6718eec30654", "text": "Dowser is a ‘guided’ fuzzer that combines taint tracking, program analysis and symbolic execution to find buffer overflow and underflow vulnerabilities buried deep in a program’s logic. The key idea is that analysis of a program lets us pinpoint the right areas in the program code to probe and the appropriate inputs to do so. Intuitively, for typical buffer overflows, we need consider only the code that accesses an array in a loop, rather than all possible instructions in the program. After finding all such candidate sets of instructions, we rank them according to an estimation of how likely they are to contain interesting vulnerabilities. We then subject the most promising sets to further testing. Specifically, we first use taint analysis to determine which input bytes influence the array index and then execute the program symbolically, making only this set of inputs symbolic. By constantly steering the symbolic execution along branch outcomes most likely to lead to overflows, we were able to detect deep bugs in real programs (like the nginx webserver, the inspircd IRC server, and the ffmpeg videoplayer). Two of the bugs we found were previously undocumented buffer overflows in ffmpeg and the poppler PDF rendering library.", "title": "" }, { "docid": "21925b0a193ebb3df25c676d8683d895", "text": "The use of dialogue systems in vehicles raises the problem of making sure that the dialogue does not distract the driver from the primary task of driving. Earlier studies have indicated that humans are very apt at adapting the dialogue to the traffic situation and the cognitive load of the driver. The goal of this paper is to investigate strategies for interrupting and resuming in, as well as changing topic domain of, spoken human-human in-vehicle dialogue. The results show a large variety of strategies being used, and indicate that the choice of resumption and domain-switching strategy depends partly on the topic domain being resumed, and partly on the role of the speaker (driver or passenger). These results will be used as a basis for the development of dialogue strategies for interruption, resumption and domain-switching in the DICO in-vehicle dialogue system.", "title": "" }, { "docid": "58f1ba92eb199f4d105bf262b30dbbc5", "text": "Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, and so on). One of such approaches is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned resorting to weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category. Thus, discovering and modeling these patterns are critical to improve the recognition performance in this representation. Since the emergence of large data sets, such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from the data. In this paper, we address many limitations of the original SM approach and related works. 
We propose discriminative patch representations using neural networks and further propose a hybrid architecture in which the semantic manifold is built on top of multiscale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. To combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the optimization problem, we analyze global and local approaches, where a top–down hierarchical algorithm has the best performance. Experimental results show that exploiting different types of contextual relations jointly consistently improves the recognition accuracy.", "title": "" }, { "docid": "bbf987eef74d76cf2916ae3080a2b174", "text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. We designed a facial muscle control method and applied it to EveR-4 H33. We develop the actress robot EveR-4A by applying the EveR-4 H33 to the 24 degrees of freedom upper body and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.", "title": "" } ]
scidocsrr
bac3f7c9d829ac0a042e0b35e95ff424
Type-2 fuzzy logic systems for temperature evaluation in ladle furnace
[ { "docid": "fdbca2e02ac52afd687331048ddee7d3", "text": "Type-2 fuzzy sets let us model and minimize the effects of uncertainties in rule-base fuzzy logic systems. However, they are difficult to understand for a variety of reasons which we enunciate. In this paper, we strive to overcome the difficulties by: 1) establishing a small set of terms that let us easily communicate about type-2 fuzzy sets and also let us define such sets very precisely, 2) presenting a new representation for type-2 fuzzy sets, and 3) using this new representation to derive formulas for union, intersection and complement of type-2 fuzzy sets without having to use the Extension Principle.", "title": "" }, { "docid": "c4ccb674a07ba15417f09b81c1255ba8", "text": "Real world environments are characterized by high levels of linguistic and numerical uncertainties. A Fuzzy Logic System (FLS) is recognized as an adequate methodology to handle the uncertainties and imprecision available in real world environments and applications. Since the invention of fuzzy logic, it has been applied with great success to numerous real world applications such as washing machines, food processors, battery chargers, electrical vehicles, and several other domestic and industrial appliances. The first generation of FLSs were type-1 FLSs in which type-1 fuzzy sets were employed. Later, it was found that using type-2 FLSs can enable the handling of higher levels of uncertainties. Recent works have shown that interval type-2 FLSs can outperform type-1 FLSs in the applications which encompass high uncertainty levels. However, the majority of interval type-2 FLSs handle the linguistic and input numerical uncertainties using singleton interval type-2 FLSs that mix the numerical and linguistic uncertainties to be handled only by the linguistic labels type-2 fuzzy sets. This ignores the fact that if input numerical uncertainties were present, they should affect the incoming inputs to the FLS. Even in the papers that employed non-singleton type-2 FLSs, the input signals were assumed to have a predefined shape (mostly Gaussian or triangular) which might not reflect the real uncertainty distribution which can vary with the associated measurement. In this paper, we will present a new approach which is based on an adaptive non-singleton interval type-2 FLS where the numerical uncertainties will be modeled and handled by non-singleton type-2 fuzzy inputs and the linguistic uncertainties will be handled by interval type-2 fuzzy sets to represent the antecedents’ linguistic labels. The non-singleton type-2 fuzzy inputs are dynamic and they are automatically generated from data and they do not assume a specific shape about the distribution associated with the given sensor. We will present several real world experiments using a real world robot which will show how the proposed type-2 non-singleton type-2 FLS will produce a superior performance to its singleton type-1 and type-2 counterparts when encountering high levels of uncertainties.", "title": "" }, { "docid": "20f43c14feaf2da1e8999403bf350855", "text": "In this paper we propose a new approach to genetic optimization of modular neural networks with fuzzy response integration. The architecture of the modular neural network and the structure of the fuzzy system (for response integration) are designed using genetic algorithms. The proposed methodology is applied to the case of human recognition based on three biometric measures, namely iris, ear, and voice. 
Experimental results show that optimal modular neural networks can be designed with the use of genetic algorithms and, as a consequence, the recognition rates of such networks can be improved significantly. In the case of optimization of the fuzzy system for response integration, the genetic algorithm not only adjusts the number of membership functions and rules, but also allows variation in the type of logic (type-1 or type-2) and a change in the inference model (switching between the Mamdani and Sugeno models). Another interesting finding of this work is that when human recognition is performed under noisy conditions, the response integrators of the modular networks constructed by the genetic algorithm are found to be optimal when using type-2 fuzzy logic. This could have been expected, as there is experimental evidence from previous works that type-2 fuzzy logic is better suited to model higher levels of uncertainty.", "title": "" } ]
[ { "docid": "e3f4add37a083f61feda8805478d0729", "text": "The evaluation of the effects of different media ionic strengths and pH on the release of hydrochlorothiazide, a poorly soluble drug, and diltiazem hydrochloride, a cationic and soluble drug, from a gel forming hydrophilic polymeric matrix was the objective of this study. The drug to polymer ratio of formulated tablets was 4:1. Hydrochlorothiazide or diltiazem HCl extended release (ER) matrices containing hypromellose (hydroxypropyl methylcellulose (HPMC)) were evaluated in media with a pH range of 1.2-7.5, using an automated USP type III, Bio-Dis dissolution apparatus. The ionic strength of the media was varied over a range of 0-0.4M to simulate the gastrointestinal fed and fasted states and various physiological pH conditions. Sodium chloride was used for ionic regulation due to its ability to salt out polymers in the midrange of the lyotropic series. The results showed that the ionic strength had a profound effect on the drug release from the diltiazem HCl K100LV matrices. The K4M, K15M and K100M tablets however withstood the effects of media ionic strength and showed a decrease in drug release to occur with an increase in ionic strength. For example, drug release after the 1h mark for the K100M matrices in water was 36%. Drug release in pH 1.2 after 1h was 30%. An increase of the pH 1.2 ionic strength to 0.4M saw a reduction of drug release to 26%. This was the general trend for the K4M and K15M matrices as well. The similarity factor f2 was calculated using drug release in water as a reference. Despite similarity occurring for all the diltiazem HCl matrices in the pH 1.2 media (f2=64-72), increases of ionic strength at 0.2M and 0.4M brought about dissimilarity. The hydrochlorothiazide tablet matrices showed similarity at all the ionic strength tested for all polymers (f2=56-81). The values of f2 however reduced with increasing ionic strengths. DSC hydration results explained the hydrochlorothiazide release from their HPMC matrices. There was an increase in bound water as ionic strengths increased. Texture analysis was employed to determine the gel strength and also to explain the drug release for the diltiazem hydrochloride. This methodology can be used as a valuable tool for predicting potential ionic effects related to in vivo fed and fasted states on drug release from hydrophilic ER matrices.", "title": "" }, { "docid": "d9c514f3e1089f258732eef4a949fe55", "text": "Shading is a tedious process for artists involved in 2D cartoon and manga production given the volume of contents that the artists have to prepare regularly over tight schedule. While we can automate shading production with the presence of geometry, it is impractical for artists to model the geometry for every single drawing. In this work, we aim to automate shading generation by analyzing the local shapes, connections, and spatial arrangement of wrinkle strokes in a clean line drawing. By this, artists can focus more on the design rather than the tedious manual editing work, and experiment with different shading effects under different conditions. To achieve this, we have made three key technical contributions. First, we model five perceptual cues by exploring relevant psychological principles to estimate the local depth profile around strokes. Second, we formulate stroke interpretation as a global optimization model that simultaneously balances different interpretations suggested by the perceptual cues and minimizes the interpretation discrepancy. 
Lastly, we develop a wrinkle-aware inflation method to generate a height field for the surface to support the shading region computation. In particular, we enable the generation of two commonly-used shading styles: 3D-like soft shading and manga-style flat shading.", "title": "" }, { "docid": "2923ea4e17567b06b9d8e0e9f1650e55", "text": "A new compact two-segments dielectric resonator antenna (TSDR) for ultrawideband (UWB) application is presented and studied. The design consists of a thin monopole printed antenna loaded with two dielectric resonators with different dielectric constant. By applying a combination of U-shaped feedline and modified TSDR, proper radiation characteristics are achieved. The proposed antenna provides an ultrawide impedance bandwidth, high radiation efficiency, and compact antenna with an overall size of 18 × 36 × 11 mm . From the measurement results, it is found that the realized dielectric resonator antenna with good radiation characteristics provides an ultrawide bandwidth of about 110%, covering a range from 3.14 to 10.9 GHz, which covers UWB application.", "title": "" }, { "docid": "bcd47a79eeb49a34253d3c0de236f768", "text": "This is the second of five papers in the child survival series. The first focused on continuing high rates of child mortality (over 10 million each year) from preventable causes: diarrhoea, pneumonia, measles, malaria, HIV/AIDS, the underlying cause of undernutrition, and a small group of causes leading to neonatal deaths. We review child survival interventions feasible for delivery at high coverage in low-income settings, and classify these as level 1 (sufficient evidence of effect), level 2 (limited evidence), or level 3 (inadequate evidence). Our results show that at least one level-1 intervention is available for preventing or treating each main cause of death among children younger than 5 years, apart from birth asphyxia, for which a level-2 intervention is available. There is also limited evidence for several other interventions. However, global coverage for most interventions is below 50%. If level 1 or 2 interventions were universally available, 63% of child deaths could be prevented. These findings show that the interventions needed to achieve the millennium development goal of reducing child mortality by two-thirds by 2015 are available, but that they are not being delivered to the mothers and children who need them.", "title": "" }, { "docid": "8d104169f3862bc7c54d5932024ed9f6", "text": "Integer optimization problems are concerned with the efficient allocation of limited resources to meet a desired objective when some of the resources in question can only be divided into discrete parts. In such cases, the divisibility constraints on these resources, which may be people, machines, or other discrete inputs, may restrict the possible alternatives to a finite set. Nevertheless, there are usually too many alternatives to make complete enumeration a viable option for instances of realistic size. For example, an airline may need to determine crew schedules that minimize the total operating cost; an automotive manufacturer may want to determine the optimal mix of models to produce in order to maximize profit; or a flexible manufacturing facility may want to schedule production for a plant without knowing precisely what parts will be needed in future periods. 
In today’s changing and competitive industrial environment, the difference between ad hoc planning methods and those that use sophisticated mathematical models to determine an optimal course of action can determine whether or not a company survives.", "title": "" }, { "docid": "77e2aac8b42b0b9263278280d867cb40", "text": "This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type the goal is to classify images into four categories of normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. Due to the large size of each image in the training dataset, we propose a patch-based technique which consists of two consecutive convolutional neural networks. The first “patch-wise” network acts as an auto-encoder that extracts the most salient features of image patches while the second “image-wise” network performs classification of the whole image. The first network is pre-trained and aimed at extracting local information while the second network obtains global information of an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method yields 95% accuracy on the validation set compared to previously reported 77% accuracy rates in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018.", "title": "" }, { "docid": "8c575ae46ac2969c19a841c7d9a8cb5a", "text": "Constrained Local Models (CLMs) are a well-established family of methods for facial landmark detection. However, they have recently fallen out of favor to cascaded regression-based approaches. This is in part due to the inability of existing CLM local detectors to model the very complex individual landmark appearance that is affected by expression, illumination, facial hair, makeup, and accessories. In our work, we present a novel local detector – Convolutional Experts Network (CEN) – that brings together the advantages of neural architectures and mixtures of experts in an end-to-end framework. We further propose a Convolutional Experts Constrained Local Model (CE-CLM) algorithm that uses CEN as a local detector. We demonstrate that our proposed CE-CLM algorithm outperforms competitive state-of-the-art baselines for facial landmark detection by a large margin, especially on challenging profile images.", "title": "" }, { "docid": "87cfc5cad31751fd89c68dc9557eb33f", "text": "This paper presents a low-voltage (LV) (1.0 V) and low-power (LP) (40 μW) inverter-based operational transconductance amplifier (OTA) using an FGMOS (Floating-Gate MOS) transistor and its application in Gm-C filters. The OTA was designed in a 0.18 μm CMOS process. The simulation results of the proposed OTA demonstrate an open loop gain of 30.2 dB and a unity gain frequency of 942 MHz. In this OTA, a relative tuning range of 50 is achieved. To demonstrate the use of the proposed OTA in practical circuits, a second-order filter was designed. The designed filter has a good tuning range from 100 kHz to 5.6 MHz which is suitable for the wireless specifications of Bluetooth (650 kHz), CDMA2000 (700 kHz) and Wideband CDMA (2.2 MHz).
The active area occupied by the designed filter on the silicon is and the maximum power consumption of this filter is 160 μW.", "title": "" }, { "docid": "6018c84c0e5666b5b4615766a5bb98a9", "text": "We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.", "title": "" }, { "docid": "0b0b313c16697e303522fef245d97ba8", "text": "The development of novel targeted therapies with acceptable safety profiles is critical to successful cancer outcomes with better survival rates. Immunotherapy offers promising opportunities with the potential to induce sustained remissions in patients with refractory disease. Recent dramatic clinical responses in trials with gene modified T cells expressing chimeric antigen receptors (CARs) in B-cell malignancies have generated great enthusiasm. This therapy might pave the way for a potential paradigm shift in the way we treat refractory or relapsed cancers. CARs are genetically engineered receptors that combine the specific binding domains from a tumor targeting antibody with T cell signaling domains to allow specifically targeted antibody redirected T cell activation. Despite current successes in hematological cancers, we are only in the beginning of exploring the powerful potential of CAR redirected T cells in the control and elimination of resistant, metastatic, or recurrent nonhematological cancers. This review discusses the application of the CAR T cell therapy, its challenges, and strategies for successful clinical and commercial translation.", "title": "" }, { "docid": "80a86ff7e26bb29cf919b22433f8b6b4", "text": "Despite the widespread acceptance and use of pornography, much remains unknown about the heterogeneity among consumers of pornography. Using a sample of 457 college students from a midwestern university in the United States, a latent profile analysis was conducted to identify unique classifications of pornography users considering motivations of pornography use, level of pornography use, age of user, degree of pornography acceptance, and religiosity. Results indicated three classes of pornography users: Porn Abstainers (n 1⁄4 285), Auto-Erotic Porn Users (n 1⁄4 85), and Complex Porn Users (n 1⁄4 87). These three classes of pornography use are carefully defined. The odds of membership in these three unique classes of pornography users was significantly distinguished by relationship status, selfesteem, and gender. These results expand what is known about pornography users by providing a more person-centered approach that is more nuanced in understanding pornography use. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit", "title": "" }, { "docid": "5c88fae140f343ae3002685ab96fd848", "text": "Function recovery is a critical step in many binary analysis and instrumentation tasks. 
Existing approaches rely on commonly used function prologue patterns to recognize function starts, and possibly epilogues for the ends. However, this approach is not robust when dealing with different compilers, compiler versions, and compilation switches. Although machine learning techniques have been proposed, the possibility of errors still limits their adoption. In this work, we present a novel function recovery technique that is based on static analysis. Evaluations have shown that we can produce very accurate results that are applicable to a wider set of applications.", "title": "" }, { "docid": "5c31ed81a9c8d6463ce93890e38ad7b5", "text": "IBM Watson is a cognitive computing system capable of question answering in natural languages. It is believed that IBM Watson can understand large corpora and answer relevant questions more effectively than any other question-answering system currently available. To unleash the full power of Watson, however, we need to train its instance with a large number of wellprepared question-answer pairs. Obviously, manually generating such pairs in a large quantity is prohibitively time consuming and significantly limits the efficiency of Watson’s training. Recently, a large-scale dataset of over 30 million question-answer pairs was reported. Under the assumption that using such an automatically generated dataset could relieve the burden of manual question-answer generation, we tried to use this dataset to train an instance of Watson and checked the training efficiency and accuracy. According to our experiments, using this auto-generated dataset was effective for training Watson, complementing manually crafted question-answer pairs. To the best of the authors’ knowledge, this work is the first attempt to use a largescale dataset of automatically generated questionanswer pairs for training IBM Watson. We anticipate that the insights and lessons obtained from our experiments will be useful for researchers who want to expedite Watson training leveraged by automatically generated question-answer pairs.", "title": "" }, { "docid": "1efeab8c3036ad5ec1b4dc63a857b392", "text": "In this paper, we present a motion planning framework for a fully deployed autonomous unmanned aerial vehicle which integrates two sample-based motion planning techniques, Probabilistic Roadmaps and Rapidly Exploring Random Trees. Additionally, we incorporate dynamic reconfigurability into the framework by integrating the motion planners with the control kernel of the UAV in a novel manner with little modification to the original algorithms. The framework has been verified through simulation and in actual flight. Empirical results show that these techniques used with such a framework offer a surprisingly efficient method for dynamically reconfiguring a motion plan based on unforeseen contingencies which may arise during the execution of a plan. The framework is generic and can be used for additional platforms.", "title": "" }, { "docid": "efe74721de3eda130957ce26435375a3", "text": "Internet of Things (IoT) has been given a lot of emphasis since the 90s when it was first proposed as an idea of interconnecting different electronic devices through a variety of technologies. However, during the past decade IoT has rapidly been developed without appropriate consideration of the profound security goals and challenges involved. This study explores the security aims and goals of IoT and then provides a new classification of different types of attacks and countermeasures on security and privacy. 
It then discusses future security directions and challenges that need to be addressed to improve security concerns over such networks and aid in the wider adoption of IoT by masses.", "title": "" }, { "docid": "a81e4b95dfaa7887f66066343506d35f", "text": "The purpose of making a “biobetter” biologic is to improve on the salient characteristics of a known biologic for which there is, minimally, clinical proof of concept or, maximally, marketed product data. There already are several examples in which second-generation or biobetter biologics have been generated by improving the pharmacokinetic properties of an innovative drug, including Neulasta® [a PEGylated, longer-half-life version of Neupogen® (filgrastim)] and Aranesp® [a longer-half-life version of Epogen® (epoetin-α)]. This review describes the use of protein fusion technologies such as Fc fusion proteins, fusion to human serum albumin, fusion to carboxy-terminal peptide, and other polypeptide fusion approaches to make biobetter drugs with more desirable pharmacokinetic profiles.", "title": "" }, { "docid": "d80fc668073878c476bdf3997b108978", "text": "Automotive information services utilizing vehicle data are rapidly expanding. However, there is currently no data centric software architecture that takes into account the scale and complexity of data involving numerous sensors. To address this issue, the authors have developed an in-vehicle datastream management system for automotive embedded systems (eDSMS) as data centric software architecture. Providing the data stream functionalities to drivers and passengers are highly beneficial. This paper describes a vehicle embedded data stream processing platform for Android devices. The platform enables flexible query processing with a dataflow query language and extensible operator functions in the query language on the platform. The platform employs architecture independent of data stream schema in in-vehicle eDSMS to facilitate smoother Android application program development. This paper presents specifications and design of the query language and APIs of the platform, evaluate it, and discuss the results. Keywords—Android, automotive, data stream management system", "title": "" }, { "docid": "d8fc5a8bc075343b2e70a9b441ecf6e5", "text": "With the explosive increase in mobile apps, more and more threats migrate from traditional PC client to mobile device. Compared with traditional Win+Intel alliance in PC, Android+ARM alliance dominates in Mobile Internet, the apps replace the PC client software as the major target of malicious usage. In this paper, to improve the security status of current mobile apps, we propose a methodology to evaluate mobile apps based on cloud computing platform and data mining. We also present a prototype system named MobSafe to identify the mobile app’s virulence or benignancy. Compared with traditional method, such as permission pattern based method, MobSafe combines the dynamic and static analysis methods to comprehensively evaluate an Android app. In the implementation, we adopt Android Security Evaluation Framework (ASEF) and Static Android Analysis Framework (SAAF), the two representative dynamic and static analysis methods, to evaluate the Android apps and estimate the total time needed to evaluate all the apps stored in one mobile app market. 
Based on the real trace from a commercial mobile app market called AppChina, we can collect the statistics of the number of active Android apps, the average number of apps installed in one Android device, and the expanding ratio of mobile apps. As the mobile app market serves as the main line of defence against mobile malware, our evaluation results show that it is practical to use a cloud computing platform and data mining to verify all stored apps routinely to filter out malware apps from mobile app markets. As future work, MobSafe can extensively use machine learning to conduct automotive forensic analysis of mobile apps based on the generated multifaceted data in this stage.", "title": "" }, { "docid": "8c1e70cf4173f9fc48f36c3e94216f15", "text": "Deep learning methods often require large annotated data sets to estimate their high numbers of parameters, which is not practical for many robotic domains. One way to mitigate this issue is to transfer features learned on large datasets to related tasks. In this work, we describe the perception system developed for the entry of team NimbRo Picking into the Amazon Picking Challenge 2016. Object detection and semantic segmentation methods are adapted to the domain, including incorporation of depth measurements. To avoid the need for large training datasets, we make use of pretrained models whenever possible, e.g. CNNs pretrained on ImageNet, and the whole DenseCap captioning pipeline pretrained on the Visual Genome Dataset. Our system performed well at the APC 2016 and reached second and third places for the stow and pick tasks, respectively.", "title": "" } ]
scidocsrr
103788d6f36997cc1e6cd103155e537d
A survey of data mining techniques for analyzing crime patterns
[ { "docid": "f074965ee3a1d6122f1e68f49fd11d84", "text": "Data mining is the extraction of knowledge from large databases. One of the popular data mining techniques is Classification in which different objects are classified into different classes depending on the common properties among them. Decision Trees are widely used in Classification. This paper proposes a tool which applies an enhanced Decision Tree Algorithm to detect the suspicious e-mails about the criminal activities. An improved ID3 Algorithm with enhanced feature selection method and attribute- importance factor is applied to generate a better and faster Decision Tree. The objective is to detect the suspicious criminal activities and minimize them. That's why the tool is named as “Z-Crime” depicting the “Zero Crime” in the society. This paper aims at highlighting the importance of data mining technology to design proactive application to detect the suspicious criminal activities.", "title": "" }, { "docid": "bbdb4a930ef77f91e8d76dd3a7e0f506", "text": "Fast and high-quality document clustering algorithms play an important role in providing intuitive navigation and browsing mechanisms by organizing large amounts of information into a small number of meaningful clusters. In particular, hierarchical clustering solutions provide a view of the data at different levels of granularity, making them ideal for people to visualize and interactively explore large document collections.In this paper we evaluate different partitional and agglomerative approaches for hierarchical clustering. Our experimental evaluation showed that partitional algorithms always lead to better clustering solutions than agglomerative algorithms, which suggests that partitional clustering algorithms are well-suited for clustering large document datasets due to not only their relatively low computational requirements, but also comparable or even better clustering performance. We present a new class of clustering algorithms called constrained agglomerative algorithms that combine the features of both partitional and agglomerative algorithms. Our experimental results showed that they consistently lead to better hierarchical solutions than agglomerative or partitional algorithms alone.", "title": "" } ]
[ { "docid": "3023637fd498bb183dae72135812c304", "text": "computational method for its solution. A Psychological Description of LSA as a Theory of Learning, Memory, and Knowledge We give a more complete description of LSA as a mathematical model later when we use it to simulate lexical acquisition. However, an overall outline is necessary to understand a roughly equivalent psychological theory we wish to present first. The input to LSA is a matrix consisting of rows representing unitary event types by columns representing contexts in which instances of the event types appear. One example is a matrix of unique word types by many individual paragraphs in which the words are encountered, where a cell contains the number of times that a particular word type, say model, appears in a particular paragraph, say this one. After an initial transformation of the cell entries, this matrix is analyzed by a statistical technique called singular value decomposition (SVD) closely akin to factor analysis, which allows event types and individual contexts to be re-represented as points or vectors in a high dimensional abstract space (Golub, Lnk, & Overton, 1981 ). The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or con-space (Golub, Lnk, & Overton, 1981 ). The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or contexts (e.g., word-word, word-paragraph, or paragraph-paragraph similarities). Psychologically, the data that the model starts with are raw, first-order co-occurrence relations between stimuli and the local contexts or episodes in which they occur. The stimuli or event types may be thought of as unitary chunks of perception or memory. The first-order process by which initial pairwise associations are entered and transformed in LSA resembles classical conditioning in that it depends on contiguity or co-occurrence, but weights the result first nonlinearly with local occurrence frequency, then inversely with a function of the number of different contexts in which the particular component is encountered overall and the extent to which its occurrences are spread evenly over contexts. However, there are possibly important differences in the details as currently implemented; in particular, LSA associations are symmetrical; a context is associated with the individual events it contains by the same cell entry as the events are associated with the context. This would not be a necessary feature of the model; it would be possible to make the initial matrix asymmetrical, with a cell indicating the co-occurrence relation, for example, between a word and closely following words. Indeed, Lund and Burgess (in press; Lund, Burgess, & Atchley, 1995), and SchUtze (1992a, 1992b), have explored related models in which such data are the input. The first step of the LSA analysis is to transform each cell entry from the number of times that a word appeared in a particular context to the log of that frequency. This approximates the standard empirical growth functions of simple learning. The fact that this compressive function begins anew with each context also yields a kind of spacing effect; the association of A and B is greater if both appear in two different contexts than if they each appear twice in one context. In a second transformation, all cell entries for a given word are divided by the entropy for that word, Z p log p over all its contexts. 
Roughly speaking, this step accomplishes much the same thing as conditioning rules such as those described by Rescorla & Wagner (1972), in that it makes the primary association better represent the informative relation between the entities rather than the mere fact that they occurred together. Somewhat more formally, the inverse entropy measure estimates the degree to which observing the occurrence of a component specifies what context it is in; the larger the entropy of, say, a word, the less information its observation transmits about the places it has occurred, so the less usage-defined meaning it acquires, and conversely, the less the meaning of a particular context is determined by containing the word. It is interesting to note that automatic information retrieval methods (including LSA when used for the purpose) are greatly improved by transformations of this general form, the present one usually appearing to be the best (Harman, 1986). It does not seem far-fetched to believe that the necessary transform for good information retrieval, retrieval that brings back text corresponding to what a person has in mind when the person offers one or more query words, corresponds to the functional relations in basic associative processes. Anderson (1990) has drawn attention to the analogy between information retrieval in external systems and those in the human mind. It is not clear which way the relationship goes. Does information retrieval in automatic systems work best when it mimics the circumstances that make people think two things are related, or is there a general logic that tends to make them have similar forms? In automatic information retrieval the logic is usually assumed to be that idealized searchers have in mind exactly the same text as they would like the system to find and draw the words in their queries from that text (see Bookstein & Swanson, 1974). [Footnote: Although this exploratory process takes some advantage of chance, there is no reason why any number of dimensions should be much better than any other unless some mechanism like the one proposed is at work. In all cases, the model's remaining parameters were fitted only to its input (training) data and not to the criterion (generalization) test.] Then the system's challenge is to estimate the probability that each text in its store is the one that the searcher was thinking about. This characterization, then, comes full circle to the kind of communicative agreement model we outlined above: The sender issues a word chosen to express a meaning he or she has in mind, and the receiver tries to estimate the probability of each of the sender's possible messages. Gallistel (1990) has argued persuasively for the need to separate local conditioning or associative processes from global representation of knowledge. The LSA model expresses such a separation in a very clear and precise way. The initial matrix after transformation to log frequency divided by entropy represents the product of the local or pairwise processes. The subsequent analysis and dimensionality reduction takes all of the previously acquired local information and turns it into a unified representation of knowledge. Thus, the first processing step of the model, modulo its associational symmetry, is a rough approximation to conditioning or associative processes.
However, the model's next steps, the singular value decomposition and dimensionality optimization, are not contained as such in any extant psychological theory of learning, although something of the kind may be hinted at in some modern discussions of conditioning and, on a smaller scale and differently interpreted, is often implicit and sometimes explicit in many neural net and spreading-activation architectures. This step converts the transformed associative data into a condensed representation. The condensed representation can be seen as achieving several things, although they are at heart the result of only one mechanism. First, the re-representation captures indirect, higher-order associations. That is, if a particular stimulus, X, (e.g., a word) has been associated with some other stimulus, Y, by being frequently found in joint context (i.e., contiguity), and Y is associated with Z, then the condensation can cause X and Z to have similar representations. However, the strength of the indirect XZ association depends on much more than a combination of the strengths of XY and YZ. This is because the relation between X and Z also depends, in a well-specified manner, on the relation of each of the stimuli, X, Y, and Z, to every other entity in the space. In the past, attempts to predict indirect associations by stepwise chaining rules have not been notably successful (see, e.g., Pollio, 1968; Young, 1968). If associations correspond to distances in space, as supposed by LSA, stepwise chaining rules would not be expected to work well; if X is two units from Y and Y is two units from Z, all we know about the distance from X to Z is that it must be between zero and four. But with data about the distances between X, Y, Z, and other points, the estimate of XZ may be greatly improved by also knowing XY and YZ. An alternative view of LSA's effects is the one given earlier, the induction of a latent higher order similarity structure (thus its name) among representations of a large collection of events. Imagine, for example, that every time a stimulus (e.g., a word) is encountered, the distance between its representation and that of every other stimulus that occurs in close proximity to it is adjusted to be slightly smaller. The adjustment is then allowed to percolate through the whole previously constructed structure of relations, each point pulling on its neighbors until all settle into a compromise configuration (physical objects, weather systems, and Hopfield nets do this too; Hopfield, 1982). It is easy to see that the resulting relation between any two representations depends not only on direct experience with them but with everything else ever experienced. Although the current mathematical implementation of LSA does not work in this incremental way, its effects are much the same. The question, then, is whether such a mechanism, when combined with the statistics of experience, produces a faithful reflection of human knowledge. Finally, to anticipate what is developed later, the computational scheme used by LSA for combining and condensing local information into a common", "title": "" }, { "docid": "fe8c27e7ef05816cc4c4e2c68eeaf2f9", "text": "Chassis cavities have recently been proposed as a new mounting position for vehicular antennas. Cavities can be concealed and potentially offer more space for antennas than shark-fin modules mounted on top of the roof. An antenna cavity for the front or rear edge of the vehicle roof is designed, manufactured and measured for 5.9 GHz.
The cavity offers increased radiation in the horizontal plane and to angles below horizon, compared to cavities located in the roof center.", "title": "" }, { "docid": "16c6e41746c451d66b43c5736f622cda", "text": "In this study, we report a multimodal energy harvesting device that combines electromagnetic and piezoelectric energy harvesting mechanism. The device consists of piezoelectric crystals bonded to a cantilever beam. The tip of the cantilever beam has an attached permanent magnet which, oscillates within a stationary coil fixed to the top of the package. The permanent magnet serves two purpose (i) acts as a tip mass for the cantilever beam and lowers the resonance frequency, and (ii) acts as a core which oscillates between the inductive coils resulting in electric current generation through Faraday’s effect. Thus, this design combines the energy harvesting from two different mechanisms, piezoelectric and electromagnetic, on the same platform. The prototype system was optimized using the finite element software, ANSYS, to find the resonance frequency and stress distribution. The power generated from the fabricated prototype was found to be 0.25W using the electromagnetic mechanism and 0.25mW using the piezoelectric mechanism at 35 g acceleration and 20Hz frequency.", "title": "" }, { "docid": "79798f4fbe3cffdf7c90cc5349bf0531", "text": "When a software system starts behaving abnormally during normal operations, system administrators resort to the use of logs, execution traces, and system scanners (e.g., anti-malwares, intrusion detectors, etc.) to diagnose the cause of the anomaly. However, the unpredictable context in which the system runs and daily emergence of new software threats makes it extremely challenging to diagnose anomalies using current tools. Host-based anomaly detection techniques can facilitate the diagnosis of unknown anomalies but there is no common platform with the implementation of such techniques. In this paper, we propose an automated anomaly detection framework (Total ADS) that automatically trains different anomaly detection techniques on a normal trace stream from a software system, raise anomalous alarms on suspicious behaviour in streams of trace data, and uses visualization to facilitate the analysis of the cause of the anomalies. Total ADS is an extensible Eclipse-based open source framework that employs a common trace format to use different types of traces, a common interface to adapt to a variety of anomaly detection techniques (e.g., HMM, sequence matching, etc.). Our case study on a modern Linux server shows that Total ADS automatically detects attacks on the server, shows anomalous paths in traces, and provides forensic insights.", "title": "" }, { "docid": "c7a9efee2b447cbadc149717ad7032ee", "text": "We introduce a novel method to learn a policy from unsupervised demonstrations of a process. Given a model of the system and a set of sequences of outputs, we find a policy that has a comparable performance to the original policy, without requiring access to the inputs of these demonstrations. We do so by first estimating the inputs of the system from observed unsupervised demonstrations. Then, we learn a policy by applying vanilla supervised learning algorithms to the (estimated)input-output pairs. For the input estimation, we present a new adaptive linear estimator (AdaL-IE) that explicitly trades-off variance and bias in the estimation. 
As we show empirically, AdaL-IE produces estimates with lower error compared to the state-of-the-art input estimation method, (UMV-IE) [Gillijns and De Moor, 2007]. Using AdaL-IE in conjunction with imitation learning enables us to successfully learn control policies that consistently outperform those using UMV-IE.", "title": "" }, { "docid": "7f0023af2f3df688aa58ae3317286727", "text": "Time-parameterized queries (TP queries for short) retrieve (i) the actual result at the time that the query is issued, (ii) the validity period of the result given the current motion of the query and the database objects, and (iii) the change that causes the expiration of the result. Due to the highly dynamic nature of several spatio-temporal applications, TP queries are important both as standalone methods, as well as building blocks of more complex operations. However, little work has been done towards their efficient processing. In this paper, we propose a general framework that covers time-parameterized variations of the most common spatial queries, namely window queries, k-nearest neighbors and spatial joins. In particular, each of these TP queries is reduced to nearest neighbor search where the distance functions are defined according to the query type. This reduction allows the application and extension of well-known branch and bound techniques to the current problem. The proposed methods can be applied with mobile queries, mobile objects or both, given a suitable indexing method. Our experimental evaluation is based on R-trees and their extensions for dynamic objects.", "title": "" }, { "docid": "34901b8e3e7667e3a430b70a02595f69", "text": "In the previous NTCIR8-GeoTime task, ABRIR (Appropriate Boolean query Reformulation for Information Retrieval) proved to be one of the most effective systems for retrieving documents with Geographic and Temporal constraints. However, failure analysis showed that the identification of named entities and relationships between these entities and the query is important in improving the quality of the system. In this paper, we propose to use Wikipedia and GeoNames as resources for extracting knowledge about named entities. We also modify our system to use such information.", "title": "" }, { "docid": "dba1a222903031a6b3d064e6db29a108", "text": "Social engineering is a method of attack involving the exploitation of human weakness, gullibility and ignorance. Although related techniques have existed for some time, current awareness of social engineering and its many guises is relatively low and efforts are therefore required to improve the protection of the user community. This paper begins by examining the problems posed by social engineering, and outlining some of the previous efforts that have been made to address the threat. This leads toward the discussion of a new awareness-raising website that has been specifically designed to aid users in understanding and avoiding the risks. Findings from an experimental trial involving 46 participants are used to illustrate that the system served to increase users’ understanding of threat concepts, as well as providing an engaging environment in which they would be likely to persevere with their learning.", "title": "" }, { "docid": "fa0eebbf9c97942a5992ed80fd66cf10", "text": "The increasing popularity of Facebook among adolescents has stimulated research to investigate the relationship between Facebook use and loneliness, which is particularly prevalent in adolescence. 
The aim of the present study was to improve our understanding of the relationship between Facebook use and loneliness. Specifically, we examined how Facebook motives and two relationship-specific forms of adolescent loneliness are associated longitudinally. Cross-lagged analysis based on data from 256 adolescents (64% girls, M(age) = 15.88 years) revealed that peer-related loneliness was related over time to using Facebook for social skills compensation, reducing feelings of loneliness, and having interpersonal contact. Facebook use for making new friends reduced peer-related loneliness over time, whereas Facebook use for social skills compensation increased peer-related loneliness over time. Hence, depending on adolescents' Facebook motives, either the displacement or the stimulation hypothesis is supported. Implications and suggestions for future research are discussed.", "title": "" }, { "docid": "ff14cc28a72827c14aba42f3a036a088", "text": "Employees’ failure to comply with IS security procedures is a key concern for organizations today. A number of socio-cognitive theories have been used to explain this. However, prior studies have not examined the influence of past and automatic behavior on employee decisions to comply. This is an important omission because past behavior has been assumed to strongly affect decision-making. To address this gap, we integrated habit (a routinized form of past behavior) with Protection Motivation Theory (PMT), to explain compliance. An empirical test showed that habitual IS security compliance strongly reinforced the cognitive processes theorized by PMT, as well as employee intention for future compliance. We also found that nearly all components of PMT significantly impacted employee intention to comply with IS security policies. Together, these results highlighted the importance of addressing employees’ past and automatic behavior in order to improve compliance. 2012 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +1 801 361 2531; fax: +1 509 275 0886. E-mail addresses: anthony@vance.name (A. Vance), mikko.siponen@oulu.fi (M. Siponen), seppo.pahnila@oulu.fi (S. Pahnila). URL: http://www.anthonyvance.com 1 http://www.issrc.oulu.fi/.", "title": "" }, { "docid": "03d41408da6babfc97399c64860f50cd", "text": "The nine degrees-of-freedom (DOF) inertial measurement units (IMU) are generally composed of three kinds of sensor: accelerometer, gyroscope and magnetometer. The calibration of these sensor suites not only requires turn-table or purpose-built fixture, but also entails a complex and laborious procedure in data sampling. In this paper, we propose a method to calibrate a 9-DOF IMU by using a set of casually sampled raw sensor measurement. Our sampling procedure allows the sensor suite to move by hand and only requires about six minutes of fast and slow arbitrary rotations with intermittent pauses. It requires neither the specially-designed fixture and equipment, nor the strict sequences of sampling steps. At the core of our method are the techniques of data filtering and a hierarchical scheme for calibration. All the raw sensor measurements are preprocessed by a series of band-pass filters before use. And our calibration scheme makes use of the gravity and the ambient magnetic field as references, and hierarchically calibrates the sensor model parameters towards the minimization of the mis-alignment, scaling and bias errors. 
Moreover, the calibration steps are formulated as a series of function optimization problems and are solved by an evolutionary algorithm. Finally, the performance of our method is experimentally evaluated. The results show that our method can effectively calibrate the sensor model parameters from one set of raw sensor measurement, and yield consistent calibration results.", "title": "" }, { "docid": "8c0cbfc060b3a6aa03fd8305baf06880", "text": "Learning-to-Rank models based on additive ensembles of regression trees have been proven to be very effective for scoring query results returned by large-scale Web search engines. Unfortunately, the computational cost of scoring thousands of candidate documents by traversing large ensembles of trees is high. Thus, several works have investigated solutions aimed at improving the efficiency of document scoring by exploiting advanced features of modern CPUs and memory hierarchies. In this article, we present QuickScorer, a new algorithm that adopts a novel cache-efficient representation of a given tree ensemble, performs an interleaved traversal by means of fast bitwise operations, and supports ensembles of oblivious trees. An extensive and detailed test assessment is conducted on two standard Learning-to-Rank datasets and on a novel very large dataset we made publicly available for conducting significant efficiency tests. The experiments show unprecedented speedups over the best state-of-the-art baselines ranging from 1.9 × to 6.6 × . The analysis of low-level profiling traces shows that QuickScorer efficiency is due to its cache-aware approach in terms of both data layout and access patterns and to a control flow that entails very low branch mis-prediction rates.", "title": "" }, { "docid": "198944af240d732b6fadcee273c1ba18", "text": "This paper presents a fast and energy-efficient current mirror based level shifter with wide shifting range from sub-threshold voltage up to I/O voltage. Small delay and low power consumption are achieved by addressing the non-full output swing and charge sharing issues in the level shifter from [4]. The measurement results show that the proposed level shifter can convert from 0.21V up to 3.3V with significantly improved delay and power consumption over the existing level shifters. Compared with [4], the maximum reduction of delay, switching energy and leakage power are 3X, 19X, 29X respectively when converting 0.3V to a higher voltage between 0.6V and 3.3V.", "title": "" }, { "docid": "24f110f2b34e9da32fbd78ad242808bc", "text": "BACKGROUND\nSurvey research including multiple health indicators requires brief indices for use in cross-cultural studies, which have, however, rarely been tested in terms of their psychometric quality. Recently, the EUROHIS-QOL 8-item index was developed as an adaptation of the WHOQOL-100 and the WHOQOL-BREF. The aim of the current study was to test the psychometric properties of the EUROHIS-QOL 8-item index.\n\n\nMETHODS\nIn a survey on 4849 European adults, the EUROHIS-QOL 8-item index was assessed across 10 countries, with equal samples adjusted for selected sociodemographic data. 
Participants were also investigated with a chronic condition checklist, measures on general health perception, mental health, health-care utilization and social support.\n\n\nRESULTS\nFindings indicated good internal consistencies across a range of countries, showing acceptable convergent validity with physical and mental health measures, and the measure discriminates well between individuals that report having a longstanding condition and healthy individuals across all countries. Differential item functioning was less frequently observed in those countries that were geographically and culturally closer to the UK, but acceptable across all countries. A universal one-factor structure with a good fit in structural equation modelling analyses (SEM) was identified with, however, limitations in model fit for specific countires.\n\n\nCONCLUSIONS\nThe short EUROHIS-QOL 8-item index showed good cross-cultural field study performance and a satisfactory convergent and discriminant validity, and can therefore be recommended for use in public health research. In future studies the measure should also be tested in multinational clinical studies, particularly in order to test its sensitivity.", "title": "" }, { "docid": "1a7cfc19e7e3f9baf15e4a7450338c33", "text": "The degree to which perceptual awareness of threat stimuli and bodily states of arousal modulates neural activity associated with fear conditioning is unknown. We used functional magnetic neuroimaging (fMRI) to study healthy subjects and patients with peripheral autonomic denervation to examine how the expression of conditioning-related activity is modulated by stimulus awareness and autonomic arousal. In controls, enhanced amygdala activity was evident during conditioning to both \"seen\" (unmasked) and \"unseen\" (backward masked) stimuli, whereas insula activity was modulated by perceptual awareness of a threat stimulus. Absent peripheral autonomic arousal, in patients with autonomic denervation, was associated with decreased conditioning-related activity in insula and amygdala. The findings indicate that the expression of conditioning-related neural activity is modulated by both awareness and representations of bodily states of autonomic arousal.", "title": "" }, { "docid": "8b0870c8e975eeff8597eb342cd4f3f9", "text": "We propose a novel recursive partitioning method for identifying subgroups of subjects with enhanced treatment effects based on a differential effect search algorithm. The idea is to build a collection of subgroups by recursively partitioning a database into two subgroups at each parent group, such that the treatment effect within one of the two subgroups is maximized compared with the other subgroup. The process of data splitting continues until a predefined stopping condition has been satisfied. The method is similar to 'interaction tree' approaches that allow incorporation of a treatment-by-split interaction in the splitting criterion. However, unlike other tree-based methods, this method searches only within specific regions of the covariate space and generates multiple subgroups of potential interest. We develop this method and provide guidance on key topics of interest that include generating multiple promising subgroups using different splitting criteria, choosing optimal values of complexity parameters via cross-validation, and addressing Type I error rate inflation inherent in data mining applications using a resampling-based method. 
We evaluate the operating characteristics of the procedure using a simulation study and illustrate the method with a clinical trial example.", "title": "" }, { "docid": "a31287791b12f55adebacbb93a03c8bc", "text": "Emotional adaptation increases pro-social behavior of humans towards robotic interaction partners. Social cues are an important factor in this context. This work investigates, if emotional adaptation still works under absence of human-like facial Action Units. A human-robot dialog scenario is chosen using NAO pretending to work for a supermarket and involving humans providing object names to the robot for training purposes. In a user study, two conditions are implemented with or without explicit emotional adaptation of NAO to the human user in a between-subjects design. Evaluations of user experience and acceptance are conducted based on evaluated measures of human-robot interaction (HRI). The results of the user study reveal a significant increase of helpfulness (number of named objects), anthropomorphism, and empathy in the explicit emotional adaptation condition even without social cues of facial Action Units, but only in case of prior robot contact of the test persons. Otherwise, an opposite effect is found. These findings suggest, that reduction of these social cues can be overcome by robot experience prior to the interaction task, e.g. realizable by an additional bonding phase, confirming the importance of such from previous work. Additionally, an interaction with academic background of the participants is found.", "title": "" }, { "docid": "5a4d88bb879cf441808307961854c58c", "text": "Activity prediction is an essential task in practical human-centered robotics applications, such as security, assisted living, etc., which targets at inferring ongoing human activities based on incomplete observations. To address this challenging problem, we introduce a novel bio-inspired predictive orientation decomposition (BIPOD) approach to construct representations of people from 3D skeleton trajectories. Our approach is inspired by biological research in human anatomy. In order to capture spatio-temporal information of human motions, we spatially decompose 3D human skeleton trajectories and project them onto three anatomical planes (i.e., coronal, transverse and sagittal planes); then, we describe short-term time information of joint motions and encode high-order temporal dependencies. By estimating future skeleton trajectories that are not currently observed, we endow our BIPOD representation with the critical predictive capability. Empirical studies validate that our BIPOD approach obtains promising performance, in terms of accuracy and efficiency, using a physical TurtleBot2 robotic platform to recognize ongoing human activities. Experiments on benchmark datasets further demonstrate that our new BIPOD representation significantly outperforms previous approaches for real-time activity classification and prediction from 3D human skeleton trajectories.", "title": "" }, { "docid": "5ebddfaac62ec66171b65a776c1682b7", "text": "We investigated the reliability of a test assessing quadriceps strength, endurance and fatigability in a single session. We used femoral nerve magnetic stimulation (FMNS) to distinguish central and peripheral factors of neuromuscular fatigue. We used a progressive incremental loading with multiple assessments to limit the influence of subject's cooperation and motivation. Twenty healthy subjects (10 men and 10 women) performed the test on two different days. 
Maximal voluntary strength and evoked quadriceps responses via FMNS were measured before, after each set of 10 submaximal isometric contractions (5-s on/5-s off; starting at 10% of maximal voluntary strength with 10% increments), immediately and 30min after task failure. The test induced progressive peripheral (41±13% reduction in single twitch at task failure) and central fatigue (3±7% reduction in voluntary activation at task failure). Good inter-day reliability was found for the total number of submaximal contractions achieved (i.e. endurance index: ICC=0.83), for reductions in maximal voluntary strength (ICC>0.81) and evoked muscular responses (i.e. fatigue index: ICC>0.85). Significant sex-differences were also detected. This test shows good reliability for strength, endurance and fatigability assessments. Further studies should be conducted to evaluate its feasibility and reliability in patients.", "title": "" } ]
scidocsrr
08e952323708557df37939ab80bf692e
Continuum regression for cross-modal multimedia retrieval
[ { "docid": "6508fc8732fd22fde8c8ac180a2e19e3", "text": "The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task.", "title": "" }, { "docid": "0d292d5c1875845408c2582c182a6eb9", "text": "Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises of regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the observed data is generated by a system or process which is driven by a small number of latent (not directly observed or measured) variables. Projections of the observed data to its latent structure by means of PLS was developed by Herman Wold and coworkers [48, 49, 52]. PLS has received a great amount of attention in the field of chemometrics. The algorithm has become a standard tool for processing a wide spectrum of chemical data problems. The success of PLS in chemometrics resulted in a lot of applications in other scientific areas including bioinformatics, food research, medicine, pharmacology, social sciences, physiology–to name but a few [28, 25, 53, 29, 18, 22]. This chapter introduces the main concepts of PLS and provides an overview of its application to different data analysis problems. Our aim is to present a concise introduction, that is, a valuable guide for anyone who is concerned with data analysis. In its general form PLS creates orthogonal score vectors (also called latent vectors or components) by maximising the covariance between different sets of variables. PLS dealing with two blocks of variables is considered in this chapter, although the PLS extensions to model relations among a higher number of sets exist [44, 46, 47, 48, 39]. PLS is similar to Canonical Correlation Analysis (CCA) where latent vectors with maximal correlation are extracted [24]. There are different PLS techniques to extract latent vectors, and each of them gives rise to a variant of PLS. PLS can be naturally extended to regression problems. The predictor and predicted (response) variables are each considered as a block of variables. PLS then extracts the score vectors which serve as a new predictor representation", "title": "" } ]
[ { "docid": "5a74a585fb58ff09c05d807094523fb9", "text": "Deep learning techniques are famous due to Its capability to cope with large-scale data these days. They have been investigated within various of applications e.g., language, graphical modeling, speech, audio, image recognition, video, natural language and signal processing areas. In addition, extensive researches applying machine-learning methods in Intrusion Detection System (IDS) have been done in both academia and industry. However, huge data and difficulties to obtain data instances are hot challenges to machine-learning-based IDS. We show some limitations of previous IDSs which uses classic machine learners and introduce feature learning including feature construction, extraction and selection to overcome the challenges. We discuss some distinguished deep learning techniques and its application for IDS purposes. Future research directions using deep learning techniques for IDS purposes are briefly summarized.", "title": "" }, { "docid": "e08990fec382e1ba5c089d8bc1629bc5", "text": "Goal-oriented spoken dialogue systems have been the most prominent component in todays virtual personal assistants, which allow users to speak naturally in order to finish tasks more efficiently. The advancement of deep learning technologies has recently risen the applications of neural models to dialogue modeling. However, applying deep learning technologies for building robust and scalable dialogue systems is still a challenging task and an open research area as it requires deeper understanding of the classic pipelines as well as detailed knowledge of the prior work and the recent state-of-the-art work. Therefore, this tutorial is designed to focus on an overview of dialogue system development while describing most recent research for building dialogue systems, and summarizing the challenges, in order to allow researchers to study the potential improvements of the state-of-the-art dialogue systems. The tutorial material is available at http://deepdialogue.miulab.tw. 1 Tutorial Overview With the rising trend of artificial intelligence, more and more devices have incorporated goal-oriented spoken dialogue systems. Among popular virtual personal assistants, Microsoft’s Cortana, Apple’s Siri, Amazon Alexa, and Google Assistant have incorporated dialogue system modules in various devices, which allow users to speak naturally in order to finish tasks more efficiently. Traditional conversational systems have rather complex and/or modular pipelines. The advancement of deep learning technologies has recently risen the applications of neural models to dialogue modeling. Nevertheless, applying deep learning technologies for building robust and scalable dialogue systems is still a challenging task and an open research area as it requires deeper understanding of the classic pipelines as well as detailed knowledge on the benchmark of the models of the prior work and the recent state-of-the-art work. The goal of this tutorial is to provide the audience with the developing trend of dialogue systems, and a roadmap to get them started with the related work. The first section motivates the work on conversationbased intelligent agents, in which the core underlying system is task-oriented dialogue systems. The following section describes different approaches using deep learning for each component in the dialogue system and how it is evaluated. 
The last two sections focus on discussing the recent trends and current challenges on dialogue system technology and summarize the challenges and conclusions. The detailed content is described as follows. 2 Dialogue System Basics This section will motivate the work on conversation-based intelligent agents, in which the core underlying system is task-oriented spoken dialogue systems. The section starts with an overview of the standard pipeline framework for dialogue system illustrated in Figure 1 (Tur and De Mori, 2011). Basic components of a dialog system are automatic speech recognition (ASR), language understanding (LU), dialogue management (DM), and natural language generation (NLG) (Rudnicky et al., 1999; Zue et al., 2000; Zue and Glass, 2000). This tutorial will mainly focus on LU, DM, and NLG parts.", "title": "" }, { "docid": "28531c596a9df30b91d9d1e44d5a7081", "text": "The academic community has published millions of research papers to date, and the number of new papers has been increasing with time. To discover new research, researchers typically rely on manual methods such as keyword-based search, reading proceedings of conferences, browsing publication lists of known experts, or checking the references of the papers they are interested. Existing tools for the literature search are suitable for a first-level bibliographic search. However, they do not allow complex second-level searches. In this paper, we present a web service called TheAdvisor (http://theadvisor.osu.edu) which helps the users to build a strong bibliography by extending the document set obtained after a first-level search. The service makes use of the citation graph for recommendation. It also features diversification, relevance feedback, graphical visualization, venue and reviewer recommendation. In this work, we explain the design criteria and rationale we employed to make the TheAdvisor a useful and scalable web service along with a thorough experimental evaluation.", "title": "" }, { "docid": "7d820e831096dac701e7f0526a8a11da", "text": "We propose a system for easily preparing arbitrary wide-area environments for subsequent real-time tracking with a handheld device. Our system evaluation shows that minimal user effort is required to initialize a camera tracking session in an unprepared environment. We combine panoramas captured using a handheld omnidirectional camera from several viewpoints to create a point cloud model. After the offline modeling step, live camera pose tracking is initialized by feature point matching, and continuously updated by aligning the point cloud model to the camera image. Given a reconstruction made with less than five minutes of video, we achieve below 25 cm translational error and 0.5 degrees rotational error for over 80% of images tested. In contrast to camera-based simultaneous localization and mapping (SLAM) systems, our methods are suitable for handheld use in large outdoor spaces.", "title": "" }, { "docid": "05e754e0567bf6859d7a68446fc81bad", "text": "Bad presentation of medical statistics such as the risks associated with a particular intervention can lead to patients making poor decisions on treatment. Particularly confusing are single event probabilities, conditional probabilities (such as sensitivity and specificity), and relative risks. 
How can doctors improve the presentation of statistical information so that patients can make well informed decisions?", "title": "" }, { "docid": "dd1fd4f509e385ea8086a45a4379a8b5", "text": "As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, and so methods capable of unseen object detection are required. We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test-time. Our method retains the efficiency and effectiveness of YOLO [1] for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-art detection methods to learn discriminative object features to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and leads to improved recall rates for unseen objects. We test our method on PASCAL VOC and MS COCO dataset and observed significant improvements on the average precision of unseen classes.", "title": "" }, { "docid": "1ed93d114804da5714b7b612f40e8486", "text": "Volleyball players are at high risk of overuse shoulder injuries, with spike biomechanics a perceived risk factor. This study compared spike kinematics between elite male volleyball players with and without a history of shoulder injuries. Height, mass, maximum jump height, passive shoulder rotation range of motion (ROM), and active trunk ROM were collected on elite players with (13) and without (11) shoulder injury history and were compared using independent samples t tests (P < .05). The average of spike kinematics at impact and range 0.1 s before and after impact during down-the-line and cross-court spike types were compared using linear mixed models in SPSS (P < .01). No differences were detected between the injured and uninjured groups. Thoracic rotation and shoulder abduction at impact and range of shoulder rotation velocity differed between spike types. The ability to tolerate the differing demands of the spike types could be used as return-to-play criteria for injured athletes.", "title": "" }, { "docid": "d18c77b3d741e1a7ed10588f6a3e75c0", "text": "Given only a few image-text pairs, humans can learn to detect semantic concepts and describe the content. For machine learning algorithms, they usually require a lot of data to train a deep neural network to solve the problem. However, it is challenging for the existing systems to generalize well to the few-shot multi-modal scenario, because the learner should understand not only images and texts but also their relationships from only a few examples. In this paper, we tackle two multi-modal problems, i.e., image captioning and visual question answering (VQA), in the few-shot setting.\n We propose Fast Parameter Adaptation for Image-Text Modeling (FPAIT) that learns to learn jointly understanding image and text data by a few examples. In practice, FPAIT has two benefits. (1) Fast learning ability. FPAIT learns proper initial parameters for the joint image-text learner from a large number of different tasks. 
When a new task comes, FPAIT can use a small number of gradient steps to achieve a good performance. (2) Robust to few examples. In few-shot tasks, the small training data will introduce large biases in Convolutional Neural Networks (CNN) and damage the learner's performance. FPAIT leverages dynamic linear transformations to alleviate the side effects of the small training set. In this way, FPAIT flexibly normalizes the features and thus reduces the biases during training. Quantitatively, FPAIT achieves superior performance on both few-shot image captioning and VQA benchmarks.", "title": "" }, { "docid": "fd6eea8007c3e58664ded211bfbc52f7", "text": "We present our overall third ranking solution for the KDD Cup 2010 on educational data mining. The goal of the competition was to predict a student’s ability to answer questions correctly, based on historic results. In our approach we use an ensemble of collaborative filtering techniques, as used in the field of recommender systems and adopt them to fit the needs of the competition. The ensemble of predictions is finally blended, using a neural network.", "title": "" }, { "docid": "d1c2c0b74caf85f25d761128ed708e6c", "text": "Nearly all our buildings and workspaces are protected against fire breaks, which may occur due to some fault in the electric circuitries and power sources. The immediate alarming and aid to extinguish the fire in such situations of fire breaks are provided using embedded systems installed in the buildings. But as the area being monitored against such fire threats becomes vast, these systems do not provide a centralized solution. For the protection of such a huge area, like a college campus or an industrial park, a centralized wireless fire control system using Wireless sensor network technology is developed. The system developed connects the five dangers prone zones of the campus with a central control room through a ZigBee communication interface such that in case of any fire break in any of the building, a direct communication channel is developed that will send an immediate signal to the control room. In case if any of the emergency zone lies out of reach of the central node, multi hoping technique is adopted for the effective transmitting of the signal. The five nodes maintains a wireless interlink among themselves as well as with the central node for this purpose. Moreover a hooter is attached along with these nodes to notify the occurrence of any fire break such that the persons can leave the building immediately and with the help of the signal received in the control room, the exact building where the fire break occurred is identified and fire extinguishing is done. The real time system developed is implemented in Atmega32 with temperature, fire and humidity sensors and ZigBee module.", "title": "" }, { "docid": "2ff3d496f0174ffc0e3bd21952c8f0ae", "text": "Each time a latency in responding to a stimulus is measured, we owe a debt to F. C. Donders, who in the mid-19th century made the fundamental discovery that the time required to perform a mental computation reveals something fundamental about how the mind works. Donders expressed the idea in the following simple and optimistic statement about the feasibility of measuring the mind: “Will all quantitative treatment of mental processes be out of the question then? By no means! An important factor seemed to be susceptible to measurement: I refer to the time required for simple mental processes” (Donders, 1868/1969, pp. 413–414). 
With particular variations of simple stimuli and subjects’ choices, Donders demonstrated that it is possible to bring order to understanding invisible thought processes by computing the time that elapses between stimulus presentation and response production. A more specific observation he offered lies at the center of our own modern understanding of mental operations:", "title": "" }, { "docid": "f64e65df9db7219336eafb20d38bf8cf", "text": "With predictions that this nursing shortage will be more severe and have a longer duration than has been previously experienced, traditional strategies implemented by employers will have limited success. The aging nursing workforce, low unemployment, and the global nature of this shortage compound the usual factors that contribute to nursing shortages. For sustained change and assurance of an adequate supply of nurses, solutions must be developed in several areas: education, healthcare deliver systems, policy and regulations, and image. This shortage is not solely nursing's issue and requires a collaborative effort among nursing leaders in practice and education, health care executives, government, and the media. This paper poses several ideas of solutions, some already underway in the United States, as a catalyst for readers to initiate local programs.", "title": "" }, { "docid": "a120d11f432017c3080bb4107dd7ea71", "text": "Over the last decade, the zebrafish has entered the field of cardiovascular research as a new model organism. This is largely due to a number of highly successful small- and large-scale forward genetic screens, which have led to the identification of zebrafish mutants with cardiovascular defects. Genetic mapping and identification of the affected genes have resulted in novel insights into the molecular regulation of vertebrate cardiac development. More recently, the zebrafish has become an attractive model to study the effect of genetic variations identified in patients with cardiovascular defects by candidate gene or whole-genome-association studies. Thanks to an almost entirely sequenced genome and high conservation of gene function compared with humans, the zebrafish has proved highly informative to express and study human disease-related gene variants, providing novel insights into human cardiovascular disease mechanisms, and highlighting the suitability of the zebrafish as an excellent model to study human cardiovascular diseases. In this review, I discuss recent discoveries in the field of cardiac development and specific cases in which the zebrafish has been used to model human congenital and acquired cardiac diseases.", "title": "" }, { "docid": "581efb9277c3079a0f2bf59949600739", "text": "Artificial Intelligence methods are becoming very popular in medical applications due to high reliability and ease. From the past decades, Artificial Intelligence techniques such as Artificial Neural Networks, Fuzzy Expert Systems, Robotics etc have found an increased usage in disease diagnosis, patient monitoring, disease risk evaluation, predicting effect of new medicines and robotic handling of surgeries. This paper presents an introduction and survey on different artificial intelligence methods used by researchers for the application of diagnosing or predicting Hypertension. 
Keywords-Hypertension, Artificial Neural Networks, Fuzzy Systems.", "title": "" }, { "docid": "b236003ad282e973b3ebf270894c2c07", "text": "Darier's disease is characterized by dense keratotic lesions in the seborrheic areas of the body such as scalp, forehead, nasolabial folds, trunk and inguinal region. It is a rare genodermatosis, an autosomal dominant inherited disease that may be associated with neuropsichiatric disorders. It is caused by ATPA2 gene mutation, presenting cutaneous and dermatologic expressions. Psychiatric symptoms are depression, suicidal attempts, and bipolar affective disorder. We report a case of Darier's disease in a 48-year-old female patient presenting severe cutaneous and psychiatric manifestations.", "title": "" }, { "docid": "1ad08b9ecc0a08f5e0847547c55ea90d", "text": "Text summarization is the process of creating a shorter version of one or more text documents. Automatic text summarization has become an important way of finding relevant information in large text libraries or in the Internet. Extractive text summarization techniques select entire sentences from documents according to some criteria to form a summary. Sentence scoring is the technique most used for extractive text summarization, today. Depending on the context, however, some techniques may yield better results than some others. This paper advocates the thesis that the quality of the summary obtained with combinations of sentence scoring methods depend on text subject. Such hypothesis is evaluated using three different contexts: news, blogs and articles. The results obtained show the validity of the hypothesis formulated and point at which techniques are more effective in each of those contexts studied.", "title": "" }, { "docid": "acd95dfc27228f107fa44b0dc5039b72", "text": "How to efficiently train recurrent networks remains a challenging and active research topic. Most of the proposed training approaches are based on computational ways to efficiently obtain the gradient of the error function, and can be generally grouped into five major groups. In this study we present a derivation that unifies these approaches. We demonstrate that the approaches are only five different ways of solving a particular matrix equation. The second goal of this paper is develop a new algorithm based on the insights gained from the novel formulation. The new algorithm, which is based on approximating the error gradient, has lower computational complexity in computing the weight update than the competing techniques for most typical problems. In addition, it reaches the error minimum in a much smaller number of iterations. A desirable characteristic of recurrent network training algorithms is to be able to update the weights in an on-line fashion. We have also developed an on-line version of the proposed algorithm, that is based on updating the error gradient approximation in a recursive manner.", "title": "" }, { "docid": "87eed35ce26bf0194573f3ed2e6be7ca", "text": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem, because such visualization can reveal deep insights of complex data. However, most of the existing embedding approaches run on an excessively high precision, even when users want to obtain a brief insight from a visualization of large-scale datasets, ignoring the fact that in the end, the outputs are embedded onto a fixed-range pixel-based screen space. 
Motivated by this observation and directly considering the properties of screen space in an embedding algorithm, we propose Pixel-Aligned Stochastic Neighbor Embedding (PixelSNE), a highly efficient screen resolution-driven 2D embedding method which accelerates Barnes-Hut treebased t-distributed stochastic neighbor embedding (BH-SNE), which is known to be a state-of-the-art 2D embedding method. Our experimental results show a significantly faster running time for PixelSNE compared to BH-SNE for various datasets while maintaining comparable embedding quality.", "title": "" }, { "docid": "9f786e59441784d821da00d07d2fc42e", "text": "Employees are the most important asset of the organization. It’s a major challenge for the organization to retain its workforce as a lot of cost is incurred on them directly or indirectly. In order to have competitive advantage over the other organizations, the focus has to be on the employees. As ultimately the employees are the face of the organization as they are the building blocks of the organization. Thus their retention is a major area of concern. So attempt has been made to reduce the turnover rate of the organization. Therefore this paper attempts to review the various antecedents of turnover which affect turnover intentions of the employees.", "title": "" } ]
scidocsrr
714fb6dba1be46c6082bc417faf4dcbb
Robust 2D/3D face mask presentation attack detection scheme by exploring multiple features and comparison score level fusion
[ { "docid": "db5865f8f8701e949a9bb2f41eb97244", "text": "This paper proposes a method for constructing local image descriptors which efficiently encode texture information and are suitable for histogram based representation of image regions. The method computes a binary code for each pixel by linearly projecting local image patches onto a subspace, whose basis vectors are learnt from natural images via independent component analysis, and by binarizing the coordinates in this basis via thresholding. The length of the binary code string is determined by the number of basis vectors. Image regions can be conveniently represented by histograms of pixels' binary codes. Our method is inspired by other descriptors which produce binary codes, such as local binary pattern and local phase quantization. However, instead of heuristic code constructions, the proposed approach is based on statistics of natural images and this improves its modeling capacity. The experimental results show that our method improves accuracy in texture recognition tasks compared to the state-of-the-art.", "title": "" }, { "docid": "2967df08ad0b9987ce2d6cb6006d3e69", "text": "As a crucial security problem, anti-spoofing in biometrics, and particularly for the face modality, has achieved great progress in the recent years. Still, new threats arrive inform of better, more realistic and more sophisticated spoofing attacks. The objective of the 2nd Competition on Counter Measures to 2D Face Spoofing Attacks is to challenge researchers to create counter measures effectively detecting a variety of attacks. The submitted propositions are evaluated on the Replay-Attack database and the achieved results are presented in this paper.", "title": "" } ]
[ { "docid": "53477003e3c57381201a69e7cc54cfc9", "text": "Twitter - a microblogging service that enables users to post messages (\"tweets\") of up to 140 characters - supports a variety of communicative practices; participants use Twitter to converse with individuals, groups, and the public at large, so when conversations emerge, they are often experienced by broader audiences than just the interlocutors. This paper examines the practice of retweeting as a way by which participants can be \"in a conversation.\" While retweeting has become a convention inside Twitter, participants retweet using different styles and for diverse reasons. We highlight how authorship, attribution, and communicative fidelity are negotiated in diverse ways. Using a series of case studies and empirical data, this paper maps out retweeting as a conversational practice.", "title": "" }, { "docid": "69f853b90b837211e24155a2f55b9a95", "text": "We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2 , for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on three different tasks: (1) object classification, (2) semantic segmentation, and (3) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the PenTree bank dataset, demonstrate the superior performance of our method over the state-ofthe-art methods. Our network has better generalization properties than ShuffleNetv2 when tested on the MSCOCO multi-object classification task and the Cityscapes urban scene semantic segmentation task. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at https://github.com/sacmehta/ESPNetv2.", "title": "" }, { "docid": "630e44732755c47fc70be111e40c7b67", "text": "An algebra for geometric reasoning is developed that is amenable to software implementation. The features of the algebra are chosen to support geometric programming of the variety found in computer graphics and computer aided geometric design applications. The implementation of the algebra in C++ is described, and several examples illustrating the use of this software are given.", "title": "" }, { "docid": "071ba3d1cec138011f398cae8589b77b", "text": "The term ‘vulnerability’ is used in many different ways by various scholarly communities. The resulting disagreement about the appropriate definition of vulnerability is a frequent cause for misunderstanding in interdisciplinary research on climate change and a challenge for attempts to develop formal models of vulnerability. Earlier attempts at reconciling the various conceptualizations of vulnerability were, at best, partly successful. This paper presents a generally applicable conceptual framework of vulnerability that combines a nomenclature of vulnerable situations and a terminology of vulnerability concepts based on the distinction of four fundamental groups of vulnerability factors. This conceptual framework is applied to characterize the vulnerability concepts employed by the main schools of vulnerability research and to review earlier attempts at classifying vulnerability concepts. 
None of these one-dimensional classification schemes reflects the diversity of vulnerability concepts identified in this review. The wide range of policy responses available to address the risks from global climate change suggests that climate impact, vulnerability, and adaptation assessments will continue to apply a variety of vulnerability concepts. The framework presented here provides the much-needed conceptual clarity and facilitates bridging the various approaches to researching vulnerability to climate change. © 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ff91ed2072c93eeae5f254fb3de0d780", "text": "Machine learning requires access to all the data used for training. Recently, Google Research proposed Federated Learning as an alternative, where the training data is distributed over a federation of clients that each only access their own training data; the partially trained model is updated in a distributed fashion to maintain a situation where the data from all participating clients remains unknown. In this research we construct different distributions of the DMOZ dataset over the clients in the network and compare the resulting performance of Federated Averaging when learning a classifier. We find that the difference in spread of topics for each client has a strong correlation with the performance of the Federated Averaging algorithm.", "title": "" }, { "docid": "2382ab2b71be5dfbd1ba9fb4bf6536fc", "text": "A full-bridge converter which employs a coupled inductor to achieve zero-voltage switching of the primary switches in the entire line and load range is described. Because the coupled inductor does not appear as a series inductance in the load current path, it does not cause a loss of duty cycle or severe voltage ringing across the output rectifier. The operation and performance of the proposed converter is verified on a 670-W prototype.", "title": "" }, { "docid": "737bc68c51d2ae7665c47a060da3e25f", "text": "Self-regulatory strategies of goal setting and goal striving are analyzed in three experiments. Experiment 1 uses fantasy realization theory (Oettingen, in: J. Brandstätter, R.M. Lerner (Eds.), Action and Self Development: Theory and Research through the Life Span, Sage Publications Inc, Thousand Oaks, CA, 1999, pp. 315-342) to analyze the self-regulatory processes of turning free fantasies about a desired future into binding goals. School children 8-12 years of age who had to mentally elaborate a desired academic future as well as present reality standing in its way, formed stronger goal commitments than participants solely indulging in the desired future or merely dwelling on present reality (Experiment 1). Effective implementation of set goals is addressed in the second and third experiments (Gollwitzer, Am. Psychol. 54 (1999) 493-503). Adolescents who had to furnish a set educational goal with relevant implementation intentions (specifying where, when, and how they would start goal pursuit) were comparatively more successful in meeting the goal (Experiment 2). Linking anticipated situations with goal-directed behaviors (i.e., if-then plans) rather than the mere thinking about good opportunities to act makes implementation intentions facilitate action initiation (Experiment 3). © 2001 Elsevier Science Ltd. All rights reserved. Successful goal attainment demands completing two different tasks. 
People have to first turn their desires into binding goals, and second they have to attain the set goal. Both tasks benefit from selfregulatory strategies. In this article we describe a series of experiments with children, adolescents, and young adults that investigate self-regulatory processes facilitating effective goal setting and successful goal striving. The experimental studies investigate (1) different routes to goal setting depending on how", "title": "" }, { "docid": "3c8ac7bd31d133b4d43c0d3a0f08e842", "text": "How we teach and learn is undergoing a revolution, due to changes in technology and connectivity. Education may be one of the best application areas for advanced NLP techniques, and NLP researchers have much to contribute to this problem, especially in the areas of learning to write, mastery learning, and peer learning. In this paper I consider what happens when we convert natural language processors into natural language coaches. 1 Why Should You Care, NLP Researcher? There is a revolution in learning underway. Students are taking Massive Open Online Courses as well as online tutorials and paid online courses. Technology and connectivity makes it possible for students to learn from anywhere in the world, at any time, to fit their schedules. And in today’s knowledge-based economy, going to school only in one’s early years is no longer enough; in future most people are going to need continuous, lifelong education. Students are changing too — they expect to interact with information and technology. Fortunately, pedagogical research shows significant benefits of active learning over passive methods. The modern view of teaching means students work actively in class, talk with peers, and are coached more than graded by their instructors. In this new world of education, there is a great need for NLP research to step in and help. I hope in this paper to excite colleagues about the possibilities and suggest a few new ways of looking at them. I do not attempt to cover the field of language and learning comprehensively, nor do I claim there is no work in the field. In fact there is quite a bit, such as a recent special issue on language learning resources (Sharoff et al., 2014), the long running ACL workshops on Building Educational Applications using NLP (Tetreault et al., 2015), and a recent shared task competition on grammatical error detection for second language learners (Ng et al., 2014). But I hope I am casting a few interesting thoughts in this direction for those colleagues who are not focused on this particular topic.", "title": "" }, { "docid": "893437dbc30509dc5a1133ab74d4b78b", "text": "Light scattered from multiple surfaces can be used to retrieve information of hidden environments. However, full three-dimensional retrieval of an object hidden from view by a wall has only been achieved with scanning systems and requires intensive computational processing of the retrieved data. Here we use a non-scanning, single-photon single-pixel detector in combination with a deep convolutional artificial neural network: this allows us to locate the position and to also simultaneously provide the actual identity of a hidden person, chosen from a database of people (N = 3). 
Artificial neural networks applied to specific computational imaging problems can therefore enable novel imaging capabilities with hugely simplified hardware and processing times.", "title": "" }, { "docid": "5b55b1c913aa9ec461c6c51c3d00b11b", "text": "Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.", "title": "" }, { "docid": "da4d3534f0f8cf463d4dfff9760b68f4", "text": "While recommendation approaches exploiting different input sources have started to proliferate in the literature, an explicit study of the effect of the combination of heterogeneous inputs is still missing. On the other hand, in this context there are sides to recommendation quality requiring further characterisation and methodological research –a gap that is acknowledged in the field. We present a comparative study on the influence that different types of information available in social systems have on item recommendation. Aiming to identify which sources of user interest evidence –tags, social contacts, and user-item interaction data– are more effective to achieve useful recommendations, and in what aspect, we evaluate a number of content-based, collaborative filtering, and social recommenders on three datasets obtained from Delicious, Last.fm, and MovieLens. Aiming to determine whether and how combining such information sources may enhance over individual recommendation approaches, we extend the common accuracy-oriented evaluation practice with various metrics to measure further recommendation quality dimensions, namely coverage, diversity, novelty, overlap, and relative diversity between ranked item recommendations. We report empiric observations showing that exploiting tagging information by content-based recommenders provides high coverage and novelty, and combining social networking and collaborative filtering information by hybrid recommenders results in high accuracy and diversity. This, along with the fact that recommendation lists from the evaluated approaches had low overlap and relative diversity values between them, gives insights that meta-hybrid recommenders combining the above strategies may provide valuable, balanced item suggestions in terms of performance and non-performance metrics.", "title": "" }, { "docid": "729b29b5ab44102541f3ebf8d24efec3", "text": "In the cognitive neuroscience literature on the distinction between categorical and coordinate spatial relations, it has often been observed that categorical spatial relations are referred to linguistically by words like English prepositions, many of which specify binary oppositions-e.g., above/below, left/right, on/off, in/out. However, the actual semantic content of English prepositions, and of comparable word classes in other languages, has not been carefully considered. This paper has three aims. 
The first and most important aim is to inform cognitive neuroscientists interested in spatial representation about relevant research on the kinds of categorical spatial relations that are encoded in the 6000+ languages of the world. Emphasis is placed on cross-linguistic similarities and differences involving deictic relations, topological relations, and projective relations, the last of which are organized around three distinct frames of reference--intrinsic, relative, and absolute. The second aim is to review what is currently known about the neuroanatomical correlates of linguistically encoded categorical spatial relations, with special focus on the left supramarginal and angular gyri, and to suggest ways in which cross-linguistic data can help guide future research in this area of inquiry. The third aim is to explore the interface between language and other mental systems, specifically by summarizing studies which suggest that although linguistic and perceptual/cognitive representations of space are at least partially distinct, language nevertheless has the power to bring about not only modifications of perceptual sensitivities but also adjustments of cognitive styles.", "title": "" }, { "docid": "4e2b0d647da57a96085786c5aa2d15d9", "text": "We present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropyregularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the hereby proposed technique can interpolate and/or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards.", "title": "" }, { "docid": "e0217457b00d4c1ba86fc5d9faede342", "text": "This paper reviews the first challenge on efficient perceptual image enhancement with the focus on deploying deep learning models on smartphones. The challenge consisted of two tracks. In the first one, participants were solving the classical image super-resolution problem with a bicubic downscaling factor of 4. The second track was aimed at real-world photo enhancement, and the goal was to map low-quality photos from the iPhone 3GS device to the same photos captured with a DSLR camera. The target metric used in this challenge combined the runtime, PSNR scores and solutions’ perceptual results measured in the user study. To ensure the efficiency of the submitted models, we additionally measured their runtime and memory requirements on Android smartphones. The proposed solutions significantly improved baseline results defining the state-of-the-art for image enhancement on smartphones.", "title": "" }, { "docid": "02138b6fea0d80a6c365cafcc071e511", "text": "Quantum scrambling is the dispersal of local information into many-body quantum entanglements and correlations distributed throughout an entire system. 
This concept accompanies the dynamics of thermalization in closed quantum systems, and has recently emerged as a powerful tool for characterizing chaos in black holes1–4. However, the direct experimental measurement of quantum scrambling is difficult, owing to the exponential complexity of ergodic many-body entangled states. One way to characterize quantum scrambling is to measure an out-of-time-ordered correlation function (OTOC); however, because scrambling leads to their decay, OTOCs do not generally discriminate between quantum scrambling and ordinary decoherence. Here we implement a quantum circuit that provides a positive test for the scrambling features of a given unitary process5,6. This approach conditionally teleports a quantum state through the circuit, providing an unambiguous test for whether scrambling has occurred, while simultaneously measuring an OTOC. We engineer quantum scrambling processes through a tunable three-qubit unitary operation as part of a seven-qubit circuit on an ion trap quantum computer. Measured teleportation fidelities are typically about 80 per cent, and enable us to experimentally bound the scrambling-induced decay of the corresponding OTOC measurement. A quantum circuit in an ion-trap quantum computer provides a positive test for the scrambling features of a given unitary process.", "title": "" }, { "docid": "8321eecac6f8deb25ffd6c1b506c8ee3", "text": "Propelled by a fast evolving landscape of techniques and datasets, data science is growing rapidly. Against this background, topological data analysis (TDA) has carved itself a niche for the analysis of datasets that present complex interactions and rich structures. Its distinctive feature, topology, allows TDA to detect, quantify and compare the mesoscopic structures of data, while also providing a language able to encode interactions beyond networks. Here we briefly present the TDA paradigm and some applications, in order to highlight its relevance to the data science community.", "title": "" }, { "docid": "db2e7cc9ea3d58e0c625684248e2ef80", "text": "PURPOSE\nTo review applications of Ajzen's theory of planned behavior in the domain of health and to verify the efficiency of the theory to explain and predict health-related behaviors.\n\n\nMETHODS\nMost material has been drawn from Current Contents (Social and Behavioral Sciences and Clinical Medicine) from 1985 to date, together with all peer-reviewed articles cited in the publications thus identified.\n\n\nFINDINGS\nThe results indicated that the theory performs very well for the explanation of intention; an averaged R2 of .41 was observed. Attitude toward the action and perceived behavioral control were most often the significant variables responsible for this explained variation in intention. The prediction of behavior yielded an averaged R2 of .34. Intention remained the most important predictor, but in half of the studies reviewed perceived behavioral control significantly added to the prediction.\n\n\nCONCLUSIONS\nThe efficiency of the model seems to be quite good for explaining intention, perceived behavioral control being as important as attitude across health-related behavior categories. 
The efficiency of the theory, however, varies between health-related behavior categories.", "title": "" }, { "docid": "06e04aec6dccf454b63c98b4c5e194e3", "text": "Existing measures of peer pressure and conformity may not be suitable for screening large numbers of adolescents efficiently, and few studies have differentiated peer pressure from theoretically related constructs, such as conformity or wanting to be popular. We developed and validated short measures of peer pressure, peer conformity, and popularity in a sample ( n= 148) of adolescent boys and girls in grades 11 to 13. Results showed that all measures constructed for the study were internally consistent. Although all measures of peer pressure, conformity, and popularity were intercorrelated, peer pressure and peer conformity were stronger predictors of risk behaviors than measures assessing popularity, general conformity, or dysphoria. Despite a simplified scoring format, peer conformity vignettes were equal to if not better than the peer pressure measures in predicting risk behavior. Findings suggest that peer pressure and peer conformity are potentially greater risk factors than a need to be popular, and that both peer pressure and peer conformity can be measured with short scales suitable for large-scale testing.", "title": "" }, { "docid": "6c532169b4e169b9060ab9e17cb42602", "text": "The complete nucleotide sequence of tomato infectious chlorosis virus (TICV) was determined and compared with those of other members of the genus Crinivirus. RNA 1 is 8,271 nucleotides long with three open reading frames and encodes proteins involved in replication. RNA 2 is 7,913 nucleotides long and encodes eight proteins common within the genus Crinivirus that are involved in genome protection, movement and other functions yet to be identified. Similarity between TICV and other criniviruses varies throughout the genome but TICV is related more closely to lettuce infectious yellows virus than to any other crinivirus, thus identifying a third group within the genus.", "title": "" } ]
scidocsrr
35dc79435be5fb76fe57d5813197c79b
A Discourse-Driven Content Model for Summarising Scientific Articles Evaluated in a Complex Question Answering Task
[ { "docid": "565941db0284458e27485d250493fd2a", "text": "Identifying background (context) information in scientific articles can help scholars understand major contributions in their research area more easily. In this paper, we propose a general framework based on probabilistic inference to extract such context information from scientific papers. We model the sentences in an article and their lexical similarities as aMarkov Random Fieldtuned to detect the patterns that context data create, and employ a Belief Propagationmechanism to detect likely context sentences. We also address the problem of generating surveys of scientific papers. Our experiments show greater pyramid scores for surveys generated using such context information rather than citation sentences alone.", "title": "" } ]
[ { "docid": "be8815170248d7635a46f07c503e32a3", "text": "ÐStochastic discrimination is a general methodology for constructing classifiers appropriate for pattern recognition. It is based on combining arbitrary numbers of very weak components, which are usually generated by some pseudorandom process, and it has the property that the very complex and accurate classifiers produced in this way retain the ability, characteristic of their weak component pieces, to generalize to new data. In fact, it is often observed, in practice, that classifier performance on test sets continues to rise as more weak components are added, even after performance on training sets seems to have reached a maximum. This is predicted by the underlying theory, for even though the formal error rate on the training set may have reached a minimum, more sophisticated measures intrinsic to this method indicate that classifier performance on both training and test sets continues to improve as complexity increases. In this paper, we begin with a review of the method of stochastic discrimination as applied to pattern recognition. Through a progression of examples keyed to various theoretical issues, we discuss considerations involved with its algorithmic implementation. We then take such an algorithmic implementation and compare its performance, on a large set of standardized pattern recognition problems from the University of California Irvine, and Statlog collections, to many other techniques reported on in the literature, including boosting and bagging. In doing these studies, we compare our results to those reported in the literature by the various authors for the other methods, using the same data and study paradigms used by them. Included in this paper is an outline of the underlying mathematical theory of stochastic discrimination and a remark concerning boosting, which provides a theoretical justification for properties of that method observed in practice, including its ability to generalize. Index TermsÐPattern recognition, classification algorithms, stochastic discrimination, SD.", "title": "" }, { "docid": "78c89f8aec24989737575c10b6bbad90", "text": "News topics, which are constructed from news stories using the techniques of Topic Detection and Tracking (TDT), bring convenience to users who intend to see what is going on through the Internet. However, it is almost impossible to view all the generated topics, because of the large amount. So it will be helpful if all topics are ranked and the top ones, which are both timely and important, can be viewed with high priority. Generally, topic ranking is determined by two primary factors. One is how frequently and recently a topic is reported by the media; the other is how much attention users pay to it. Both media focus and user attention varies as time goes on, so the effect of time on topic ranking has already been included. However, inconsistency exists between both factors. In this paper, an automatic online news topic ranking algorithm is proposed based on inconsistency analysis between media focus and user attention. News stories are organized into topics, which are ranked in terms of both media focus and user attention. Experiments performed on practical Web datasets show that the topic ranking result reflects the influence of time, the media and users. The main contributions of this paper are as follows. 
First, we present the quantitative measure of the inconsistency between media focus and user attention, which provides a basis for topic ranking and an experimental evidence to show that there is a gap between what the media provide and what users view. Second, to the best of our knowledge, it is the first attempt to synthesize the two factors into one algorithm for automatic online topic ranking.", "title": "" }, { "docid": "e43d32bdad37002f70d797dd3d5bd5eb", "text": "Standard model-free deep reinforcement learning (RL) algorithms sample a new initial state for each trial, allowing them to optimize policies that can perform well even in highly stochastic environments. However, problems that exhibit considerable initial state variation typically produce high-variance gradient estimates for model-free RL, making direct policy or value function optimization challenging. In this paper, we develop a novel algorithm that instead partitions the initial state space into “slices”, and optimizes an ensemble of policies, each on a different slice. The ensemble is gradually unified into a single policy that can succeed on the whole state space. This approach, which we term divide-and-conquer RL, is able to solve complex tasks where conventional deep RL methods are ineffective. Our results show that divide-and-conquer RL greatly outperforms conventional policy gradient methods on challenging grasping, manipulation, and locomotion tasks, and exceeds the performance of a variety of prior methods. Videos of policies learned by our algorithm can be viewed at https://sites.google.com/view/dnc-rl/.", "title": "" }, { "docid": "ce901f6509da9ab13d66056319c15bd8", "text": "In this survey we overview graph-based clustering and its applications in computational linguistics. We summarize graph-based clustering as a five-part story: hypothesis, modeling, measure, algorithm and evaluation. We then survey three typical NLP problems in which graph-based clustering approaches have been successfully applied. Finally, we comment on the strengths and weaknesses of graph-based clustering and envision that graph-based clustering is a promising solution for some emerging NLP problems.", "title": "" }, { "docid": "2eaa686e4808b3c613a5061dc5bb14a7", "text": "To date, there is little information on the impact of more aggressive treatment regimen such as BEACOPP (bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, and prednisone) on the fertility of male patients with Hodgkin lymphoma (HL). We evaluated the impact of BEACOPP regimen on fertility status in 38 male patients with advanced-stage HL enrolled into trials of the German Hodgkin Study Group (GHSG). Before treatment, 6 (23%) patients had normozoospermia and 20 (77%) patients had dysspermia. After treatment, 34 (89%) patients had azoospermia, 4 (11%) had other dysspermia, and no patients had normozoospermia. There was no difference in azoospermia rate between patients treated with BEACOPP baseline and those given BEACOPP escalated (93% vs 87%, respectively; P > .999). After treatment, most of patients (93%) had abnormal values of follicle-stimulating hormone, whereas the number of patients with abnormal levels of testosterone and luteinizing hormone was less pronounced-57% and 21%, respectively. In univariate analysis, none of the evaluated risk factors (ie, age, clinical stage, elevated erythrocyte sedimentation rate, B symptoms, large mediastinal mass, extranodal disease, and 3 or more lymph nodes) was statistically significant. 
Male patients with HL are at high risk of infertility after treatment with BEACOPP.", "title": "" }, { "docid": "ee07cf061a1a3b7283c22434dcabd4eb", "text": "Over the past decade, machine learning techniques and in particular predictive modeling and pattern recognition in biomedical sciences, from drug delivery systems to medical imaging, have become one of the most important methods of assisting researchers in gaining a deeper understanding of issues in their entirety and solving complex medical problems. Deep learning is a powerful machine learning algorithm in classification that extracts low- to high-level features. In this paper, we employ a convolutional neural network to distinguish an Alzheimer's brain from a normal, healthy brain. The importance of classifying this type of medical data lies in its potential to develop a predictive model or system in order to recognize the symptoms of Alzheimer's disease when compared with normal subjects and to estimate the stages of the disease. Classification of clinical data for medical conditions such as Alzheimer's disease has always been challenging, and the most problematic aspect has always been selecting the strongest discriminative features. Using the Convolutional Neural Network (CNN) and the famous architecture LeNet-5, we successfully classified functional MRI data of Alzheimer's subjects from normal controls, where the accuracy of testing data reached 96.85%. This experiment suggests that the shift and scale invariant features extracted by CNN followed by deep learning classification represents the most powerful method of distinguishing clinical data from healthy data in fMRI. This approach also allows for expansion of the methodology to predict more complicated systems.", "title": "" }, { "docid": "89bcf5b0af2f8bf6121e28d36ca78e95", "text": "3 Relating modules to external clinical traits; 3.a Quantifying module–trait associations; 3.b Gene relationship to trait and important modules: Gene Significance and Module Membership; 3.c Intramodular analysis: identifying genes with high GS and MM; 3.d Summary output of network analysis results", "title": "" }, { "docid": "ff0d9abbfce64e83576d7e0eb235a46b", "text": "For multi-copter unmanned aerial vehicles (UAVs) sensing of the actual altitude is an important task. Many functions providing increased flight safety and easy maneuverability rely on altitude data. Commonly used sensors provide the altitude only relative to the starting position, or are limited in range and/or resolution. With the 77 GHz FMCW radar-based altimeter presented in this paper not only the actual altitude over ground but also obstacles such as trees and bushes can be detected. The capability of this solution is verified by measurements over different terrain and vegetation.", "title": "" }, { "docid": "06ba81270357c9bcf1dd8f1871741537", "text": "The ability of a normal human listener to recognize objects in the environment from only the sounds they produce is extraordinarily robust with regard to characteristics of the acoustic environment and of other competing sound sources. In contrast, computer systems designed to recognize sound sources function precariously, breaking down whenever the target sound is degraded by reverberation, noise, or competing sounds.
Robust listening requires extensive contextual knowledge, but the potential contribution of sound-source recognition to the process of auditory scene analysis has largely been neglected by researchers building computational models of the scene analysis process. This thesis proposes a theory of sound-source recognition, casting recognition as a process of gathering information to enable the listener to make inferences about objects in the environment or to predict their behavior. In order to explore the process, attention is restricted to isolated sounds produced by a small class of sound sources, the non-percussive orchestral musical instruments. Previous research on the perception and production of orchestral instrument sounds is reviewed from a vantage point based on the excitation and resonance structure of the sound-production process, revealing a set of perceptually salient acoustic features. A computer model of the recognition process is developed that is capable of “listening” to a recording of a musical instrument and classifying the instrument as one of 25 possibilities. The model is based on current models of signal processing in the human auditory system. It explicitly extracts salient acoustic features and uses a novel improvisational taxonomic architecture (based on simple statistical pattern-recognition techniques) to classify the sound source. The performance of the model is compared directly to that of skilled human listeners, using", "title": "" }, { "docid": "e85e66b6ad6324a07ca299bf4f3cd447", "text": "To date, the majority of ad hoc routing protocol research has been done using simulation only. One of the most motivating reasons to use simulation is the difficulty of creating a real implementation. In a simulator, the code is contained within a single logical component, which is clearly defined and accessible. On the other hand, creating an implementation requires use of a system with many components, including many that have little or no documentation. The implementation developer must understand not only the routing protocol, but all the system components and their complex interactions. Further, since ad hoc routing protocols are significantly different from traditional routing protocols, a new set of features must be introduced to support the routing protocol. In this paper we describe the event triggers required for AODV operation, the design possibilities and the decisions for our ad hoc on-demand distance vector (AODV) routing protocol implementation, AODV-UCSB. This paper is meant to aid researchers in developing their own on-demand ad hoc routing protocols and assist users in determining the implementation design that best fits their needs.", "title": "" }, { "docid": "149ffd270f39a330f4896c7d3aa290be", "text": "The pathogenesis underlining many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing since the past decade. 
The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.", "title": "" }, { "docid": "68420190120449343006879e23be8789", "text": "Recent findings suggest that consolidation of emotional memories is influenced by menstrual phase in women. In contrast to other phases, in the mid-luteal phase when progesterone levels are elevated, cortisol levels are increased and correlated with emotional memory. This study examined the impact of progesterone on cortisol and memory consolidation of threatening stimuli under stressful conditions. Thirty women were recruited for the high progesterone group (in the mid-luteal phase) and 26 for the low progesterone group (in non-luteal phases of the menstrual cycle). Women were shown a series of 20 neutral or threatening images followed immediately by either a stressor (cold pressor task) or control condition. Participants returned two days later for a surprise free recall test of the images and salivary cortisol responses were monitored. High progesterone levels were associated with higher baseline and stress-evoked cortisol levels, and enhanced memory of negative images when stress was received. A positive correlation was found between stress-induced cortisol levels and memory recall of threatening images. These findings suggest that progesterone mediates cortisol responses to stress and subsequently predicts memory recall for emotionally arousing stimuli.", "title": "" }, { "docid": "1bea3fdeb0ca47045a64771bd3925e11", "text": "The goal of Word Sense Disambiguation (WSD) is to identify the correct meaning of a word in the particular context. Traditional supervised methods only use labeled data (context), while missing rich lexical knowledge such as the gloss which defines the meaning of a word sense. Recent studies have shown that incorporating glosses into neural networks for WSD has made significant improvement. However, the previous models usually build the context representation and gloss representation separately. In this paper, we find that the learning for the context and gloss representation can benefit from each other. Gloss can help to highlight the important words in the context, thus building a better context representation. Context can also help to locate the key words in the gloss of the correct word sense. Therefore, we introduce a co-attention mechanism to generate co-dependent representations for the context and gloss. Furthermore, in order to capture both word-level and sentence-level information, we extend the attention mechanism in a hierarchical fashion. Experimental results show that our model achieves the state-of-the-art results on several standard English all-words WSD test datasets.", "title": "" }, { "docid": "2acb16f1e67f141220dc05b90ac23385", "text": "By combining patch-clamp methods with two-photon microscopy, it is possible to target recordings to specific classes of neurons in vivo. 
Here we describe methods for imaging and recording from the soma and dendrites of neurons identified using genetically encoded probes such as green fluorescent protein (GFP) or functional indicators such as Oregon Green BAPTA-1. Two-photon targeted patching can also be adapted for use with wild-type brains by perfusing the extracellular space with a membrane-impermeable dye to visualize the cells by their negative image and target them for electrical recordings, a technique termed \"shadowpatching.\" We discuss how these approaches can be adapted for single-cell electroporation to manipulate specific cells genetically. These approaches thus permit the recording and manipulation of rare genetically, morphologically, and functionally distinct subsets of neurons in the intact nervous system.", "title": "" }, { "docid": "df679dcd213842a786c1ad9587c66f77", "text": "The statistics of professional sports, including players and teams, provide numerous opportunities for research. Cricket is one of the most popular team sports, with billions of fans all over the world. In this thesis, we address two problems related to the One Day International (ODI) format of the game. First, we propose a novel method to predict the winner of ODI cricket matches using a team-composition based approach at the start of the match. Second, we present a method to quantitatively assess the performances of individual players in a match of ODI cricket which incorporates the game situations under which the players performed. The player performances are further used to predict the player of the match award. Players are the fundamental unit of a team. Players of one team work against the players of the opponent team in order to win a match. The strengths and abilities of the players of a team play a key role in deciding the outcome of a match. However, a team changes its composition depending on the match conditions, venue, and opponent team, etc. Therefore, we propose a novel dynamic approach which takes into account the varying strengths of the individual players and reflects the changes in player combinations over time. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual players’ batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. Using the relative strength of one team versus the other, along with two player-independent features, namely, the toss outcome and the venue of the match, we evaluate multiple supervised machine learning algorithms to predict the winner of the match. We show that, for our approach, the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers. Players have multiple roles in a game of cricket, predominantly as batsmen and bowlers. Over the generations, statistics such as batting and bowling averages, and strike and economy rates have been used to judge the performance of individual players. These measures, however, do not take into consideration the context of the game in which a player performed across the course of a match. Further, these types of statistics are incapable of comparing the performance of players across different roles. Therefore, we present an approach to quantitatively assess the performances of individual players in a single match of ODI cricket. 
We have developed a new measure, called the Work Index, which represents the amount of work that is yet to be done by a team to achieve its target. Our approach incorporates game situations and the team strengths to measure the player contributions. This not only helps us in", "title": "" }, { "docid": "9c857daee24f793816f1cee596e80912", "text": "Introduction Since the introduction of a new UK Ethics Committee Authority (UKECA) in 2004 and the setting up of the Central Office for Research Ethics Committees (COREC), research proposals have come under greater scrutiny than ever before. The era of self-regulation in UK research ethics has ended (Kerrison and Pollock, 2005). The UKECA recognise various committees throughout the UK that can approve proposals for research in NHS facilities (National Patient Safety Agency, 2007), and the scope of research for which approval must be sought is defined by the National Research Ethics Service, which has superceded COREC. Guidance on sample size (Central Office for Research Ethics Committees, 2007: 23) requires that 'the number should be sufficient to achieve worthwhile results, but should not be so high as to involve unnecessary recruitment and burdens for participants'. It also suggests that formal sample estimation size should be based on the primary outcome, and that if there is more than one outcome then the largest sample size should be chosen. Sample size is a function of three factors – the alpha level, beta level and magnitude of the difference (effect size) hypothesised. Referring to the expected size of effect, COREC (2007: 23) guidance states that 'it is important that the difference is not unrealistically high, as this could lead to an underestimate of the required sample size'. In this paper, issues of alpha, beta and effect size will be considered from a practical perspective. A freely-available statistical software package called GPower (Buchner et al, 1997) will be used to illustrate concepts and provide practical assistance to novitiate researchers and members of research ethics committees. There are a wide range of freely available statistical software packages, such as PS (Dupont and Plummer, 1997) and STPLAN (Brown et al, 2000). Each has features worth exploring, but GPower was chosen because of its ease of use and the wide range of study designs for which it caters. Using GPower, sample size and power can be estimated or checked by those with relatively little technical knowledge of statistics. Alpha and beta errors and power Researchers begin with a research hypothesis – a 'hunch' about the way that the world might be. For example, that treatment A is better than treatment B. There are logical reasons why this can never be demonstrated as absolutely true, but evidence that it may or may not be true can be obtained by …", "title": "" }, { "docid": "6d329c1fa679ac201387c81f59392316", "text": "Mosquitoes represent the major arthropod vectors of human disease worldwide transmitting malaria, lymphatic filariasis, and arboviruses such as dengue virus and Zika virus. Unfortunately, no treatment (in the form of vaccines or drugs) is available for most of these diseases andvectorcontrolisstillthemainformofprevention. Thelimitationsoftraditionalinsecticide-based strategies, particularly the development of insecticide resistance, have resulted in significant efforts to develop alternative eco-friendly methods. 
Biocontrol strategies aim to be sustainable and target a range of different mosquito species to reduce the current reliance on insecticide-based mosquito control. In this review, we outline non-insecticide based strategies that have been implemented or are currently being tested. We also highlight the use of mosquito behavioural knowledge that can be exploited for control strategies.", "title": "" }, { "docid": "b0eec6d5b205eafc6fcfc9710e9cf696", "text": "The reflectarray antenna is a substitution of reflector antennas by making use of planar phased array techniques [1]. The array elements are specially designed, providing proper phase compensations to the spatial feed through various techniques [2–4]. The bandwidth limitation due to microstrip structures has led to various multi-band designs [5–6]. In these designs, the multi-band performance is realized through multi-layer structures, causing additional volume requirement and fabrication cost. An alternative approach is provided in [7–8], where single-layer structures are adopted. The former [7] implements a dual-band linearly polarized reflectarray whereas the latter [8] establishes a single-layer tri-band concept with circular polarization (CP). In this paper, a prototype based on the conceptual structure in [8] is designed, fabricated, and measured. The prototype is composed of three sub-arrays on a single layer. They have pencil beam patterns at 32 GHz (Ka-band), 8.4 GHz (X-band), and 7.1 GHz (C-band), respectively. Considering the limited area, two phase compensation techniques are adopted by these sub-arrays. The varied element size (VES) technique is applied to the C-band, whereas the element rotation (ER) technique is used in both X-band and Ka-band.", "title": "" }, { "docid": "42db85c2e0e243c5e31895cfc1f03af6", "text": "This survey presents recent progress on Affective Computing (AC) using mobile devices. AC has been one of the most active research topics for decades. The primary limitation of traditional AC research refers to as impermeable emotions. This criticism is prominent when emotions are investigated outside social contexts. It is problematic because some emotions are directed at other people and arise from interactions with them. The development of smart mobile wearable devices (e.g., Apple Watch, Google Glass, iPhone, Fitbit) enables the wild and natural study for AC in the aspect of computer science. This survey emphasizes the AC study and system using smart wearable devices. Various models, methodologies and systems are discussed in order to examine the state of the art. Finally, we discuss remaining challenges and future works.", "title": "" }, { "docid": "0506a05ff43ae7590809015bfb37cf01", "text": "The balanced business scorecard is a widely-used management framework for optimal measurement of organizational performance. Explains that the scorecard originated in an attempt to address the problem of systems apparently not working. However, the problem proved to be less the information systems than the broader organizational systems, specifically business performance measurement. Discusses the fundamental points to cover in implementation of the scorecard. Presents ten “golden rules” developed as a means of bringing the framework closer to practical application. The Nolan Norton Institute developed the balanced business scorecard in 1990, resulting in the much-referenced Harvard Business Review article, “Measuring performance in the organization of the future”, by Robert Kaplan and David Norton.
The balanced scorecard supplemented traditional financial measures with three additional perspectives: customers, internal business processes and learning and growth. Currently, the balanced business scorecard is a powerful and widely-accepted framework for defining performance measures and communicating objectives and vision to the organization. Many companies around the world have worked with the balanced business scorecard but experiences vary. Based on practical experiences of clients of Nolan, Norton & Co. and KPMG in putting the balanced business scorecard to work, the following ten golden rules for its implementation have been determined: 1 There are no standard solutions: all businesses differ. 2 Top management support is essential. 3 Strategy is the starting point. 4 Determine a limited and balanced number of objectives and measures. 5 No in-depth analyses up front, but refine and learn by doing. 6 Take a bottom-up and top-down approach. 7 It is not a systems issue, but systems are an issue. 8 Consider delivery systems at the start. 9 Consider the effect of performance indicators on behaviour. 10 Not all measures can be quantified.", "title": "" } ]
scidocsrr
6facc49979ae27f41164bba62992f4c6
Emotional Human Machine Conversation Generation Based on SeqGAN
[ { "docid": "f7696fca636f8959a1d0fbeba9b2fb67", "text": "With the rise in popularity of artificial intelligence, the technology of verbal communication between man and machine has received an increasing amount of attention, but generating a good conversation remains a difficult task. The key factor in human-machine conversation is whether the machine can give good responses that are appropriate not only at the content level (relevant and grammatical) but also at the emotion level (consistent emotional expression). In our paper, we propose a new model based on long short-term memory, which is used to achieve an encoder-decoder framework, and we address the emotional factor of conversation generation by changing the model’s input using a series of input transformations: a sequence without an emotional category, a sequence with an emotional category for the input sentence, and a sequence with an emotional category for the output responses. We perform a comparison between our work and related work and find that we can obtain slightly better results with respect to emotion consistency. Although in terms of content coherence our result is lower than those of related work, in the present stage of research, our method can generally generate emotional responses in order to control and improve the user’s emotion. Our experiment shows that through the introduction of emotional intelligence, our model can generate responses appropriate not only in content but also in emotion.", "title": "" }, { "docid": "9b9181c7efd28b3e407b5a50f999840a", "text": "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines. Introduction Generating sequential synthetic data that mimics the real one is an important problem in unsupervised learning. Recently, recurrent neural networks (RNNs) with long shortterm memory (LSTM) cells (Hochreiter and Schmidhuber 1997) have shown excellent performance ranging from natural language generation to handwriting generation (Wen et al. 2015; Graves 2013). The most common approach to training an RNN is to maximize the log predictive likelihood of each true token in the training sequence given the previous observed tokens (Salakhutdinov 2009). However, as argued in (Bengio et al. 
2015), the maximum likelihood approaches suffer from so-called exposure bias in the inference stage: the model generates a sequence iteratively and predicts next token conditioned on its previously predicted ones that may be never observed in the training data. Such a discrepancy between training and inference can incur accumulatively along with the sequence and will become prominent as the length of sequence increases. To address this problem, (Bengio et al. 2015) proposed a training strategy called scheduled sampling (SS), where the generative model is partially fed with its own synthetic data as prefix (observed tokens) rather than the true data when deciding the next token in the training stage. Nevertheless, (Huszár 2015) showed that SS is an inconsistent training strategy and fails to address the problem fundamentally. Another possible solution of the training/inference discrepancy problem is to build the loss function on the entire generated sequence instead of each transition. For instance, in the application of machine translation, a task specific sequence score/loss, bilingual evaluation understudy (BLEU) (Papineni et al. 2002), can be adopted to guide the sequence generation. However, in many other practical applications, such as poem generation (Zhang and Lapata 2014) and chatbot (Hingston 2009), a task specific loss may not be directly available to score a generated sequence accurately. Generative adversarial net (GAN) proposed by (Goodfellow and others 2014) is a promising framework for alleviating the above problem. Specifically, in GAN a discriminative net D learns to distinguish whether a given data instance is real or not, and a generative net G learns to confuse D by generating high quality data. This approach has been successful and been mostly applied in computer vision tasks of generating samples of natural images (Denton et al. 2015). Unfortunately, applying GAN to generating sequences has two problems. Firstly, GAN is designed for generating real-valued, continuous data but has difficulties in directly generating sequences of discrete tokens, such as texts (Huszár 2015). The reason is that in GANs, the generator starts with random sampling first and then a deterministic transform, governed by the model parameters. As such, the gradient of the loss from D w.r.t. the outputs by G is used to guide the generative model G (parameters) to slightly change the generated value to make it more realistic. If the generated data is based on discrete tokens, the “slight change” guidance from the discriminative net makes little sense because there is probably no corresponding token for such slight change in the limited dictionary space (Goodfellow 2016). Secondly, GAN can only give the score/loss for an entire sequence when it has been generated; for a partially generated sequence, it is non-trivial to balance how good as it is now and the future score as the entire sequence. In this paper, to address the above two issues, we follow (Bachman and Precup 2015; Bahdanau et al.
2016) that requires a task-specific sequence score, such as BLEU in machine translation, to give the reward, we employ a discriminator to evaluate the sequence and feedback the evaluation to guide the learning of the generative model. To solve the problem that the gradient cannot pass back to the generative model when the output is discrete, we regard the generative model as a stochastic parametrized policy. In our policy gradient, we employ Monte Carlo (MC) search to approximate the state-action value. We directly train the policy (generative model) via policy gradient (Sutton et al. 1999), which naturally avoids the differentiation difficulty for discrete data in a conventional GAN. Extensive experiments based on synthetic and real data are conducted to investigate the efficacy and properties of the proposed SeqGAN. In our synthetic data environment, SeqGAN significantly outperforms the maximum likelihood methods, scheduled sampling and PG-BLEU. In three realworld tasks, i.e. poem generation, speech language generation and music generation, SeqGAN significantly outperforms the compared baselines in various metrics including human expert judgement. Related Work Deep generative models have recently drawn significant attention, and the ability of learning over large (unlabeled) data endows them with more potential and vitality (Salakhutdinov 2009; Bengio et al. 2013). (Hinton, Osindero, and Teh 2006) first proposed to use the contrastive divergence algorithm to efficiently training deep belief nets (DBN). (Bengio et al. 2013) proposed denoising autoencoder (DAE) that learns the data distribution in a supervised learning fashion. Both DBN and DAE learn a low dimensional representation (encoding) for each data instance and generate it from a decoding network. Recently, variational autoencoder (VAE) that combines deep learning with statistical inference intended to represent a data instance in a latent hidden space (Kingma and Welling 2014), while still utilizing (deep) neural networks for non-linear mapping. The inference is done via variational methods. All these generative models are trained by maximizing (the lower bound of) training data likelihood, which, as mentioned by (Goodfellow and others 2014), suffers from the difficulty of approximating intractable probabilistic computations. (Goodfellow and others 2014) proposed an alternative training methodology to generative models, i.e. GANs, where the training procedure is a minimax game between a generative model and a discriminative model. This framework bypasses the difficulty of maximum likelihood learning and has gained striking successes in natural image generation (Denton et al. 2015). However, little progress has been made in applying GANs to sequence discrete data generation problems, e.g. natural language generation (Huszár 2015). This is due to the generator network in GAN is designed to be able to adjust the output continuously, which does not work on discrete data generation (Goodfellow 2016). On the other hand, a lot of efforts have been made to generate structured sequences. Recurrent neural networks can be trained to produce sequences of tokens in many applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014). The most popular way of training RNNs is to maximize the likelihood of each token in the training data whereas (Bengio et al. 
2015) pointed out that the discrepancy between training and generating makes the maximum likelihood estimation suboptimal and proposed scheduled sampling strategy (SS). Later (Huszár 2015) theorized that the objective function underneath SS is improper and explained the reason why GANs tend to generate natural-looking samples in theory. Consequently, the GANs have great potential but are not practically feasible to discrete probabilistic models currently. As pointed out by (Bachman and Precup 2015), the sequence data generation can be formulated as a sequential decision making process, which can be potentially be solved by reinforcement learning techniques. Modeling the sequence generator as a policy of picking the next token, policy gradient methods (Sutton et al. 1999) can be adopted to optimize the generator once there is an (implicit) reward function to guide the policy. For most practical sequence generation tasks, e.g. machine translation (Sutskever, Vinyals, and Le 2014), the reward signal is meaningful only for the entire sequence, for instance in the game of Go (Silver et al. 2016), the reward signal is only set at the end of the game. In", "title": "" }, { "docid": "33468c214408d645651871bd8018ed82", "text": "In this paper, we carry out two experiments on the TIMIT speech corpus with bidirectional and unidirectional Long Short Term Memory (LSTM) networks. In the first experiment (framewise phoneme classification) we find that bidirectional LSTM outperforms both unidirectional LSTM and conventional Recurrent Neural Networks (RNNs). In the second (phoneme recognition) we find that a hybrid BLSTM-HMM system improves on an equivalent traditional HMM system, as well as unidirectional LSTM-HMM.", "title": "" } ]
[ { "docid": "d38e5fa4adadc3e979c5de812599c78a", "text": "The convergence properties of a nearest neighbor rule that uses an editing procedure to reduce the number of preclassified samples and to improve the performance of the rule are developed. Editing of the preclassified samples using the three-nearest neighbor rule followed by classification using the single-nearest neighbor rule with the remaining preclassified samples appears to produce a decision procedure whose risk approaches the Bayes' risk quite closely in many problems with only a few preclassified samples. The asymptotic risk of the nearest neighbor rules and the nearest neighbor rules using edited preclassified samples is calculated for several problems.", "title": "" }, { "docid": "affbc18a3ba30c43959e37504b25dbdc", "text": "ion for Falsification Thomas Ball , Orna Kupferman , and Greta Yorsh 3 1 Microsoft Research, Redmond, WA, USA. Email: tball@microsoft.com, URL: research.microsoft.com/ ∼tball 2 Hebrew University, School of Eng. and Comp. Sci., Jerusalem 91904, Israel. Email: orna@cs.huji.ac.il, URL: www.cs.huji.ac.il/ ∼orna 3 Tel-Aviv University, School of Comp. Sci., Tel-Aviv 69978, Israel. Email:gretay@post.tau.ac.il, URL: www.math.tau.ac.il/ ∼gretay Microsoft Research Technical Report MSR-TR-2005-50 Abstract. Abstraction is traditionally used in the process of verification. There, an abstraction of a concrete system is sound if properties of the abstract system also hold in the conAbstraction is traditionally used in the process of verification. There, an abstraction of a concrete system is sound if properties of the abstract system also hold in the concrete system. Specifically, if an abstract state satisfies a property ψ thenall the concrete states that correspond to a satisfyψ too. Since the ideal goal of proving a system correct involves many obstacles, the primary use of formal methods nowadays is fal ification. There, as intesting, the goal is to detect errors, rather than to prove correctness. In the falsification setting, we can say that an abstraction is sound if errors of the abstract system exist also in the concrete system. Specifically, if an abstract state a violates a propertyψ, thenthere existsa concrete state that corresponds to a and violatesψ too. An abstraction that is sound for falsification need not be sound for verification. This suggests that existing frameworks for abstraction for verification may be too restrictive when used for falsification, and that a new framework is needed in order to take advantage of the weaker definition of soundness in the falsification setting. We present such a framework, show that it is indeed stronger (than other abstraction frameworks designed for verification), demonstrate that it can be made even stronger by parameterizing its transitions by predicates, and describe how it can be used for falsification of branching-time and linear-time temporal properties, as well as for generating testing goals for a concrete system by reasoning about its abstraction.", "title": "" }, { "docid": "fbecc8c4a8668d403df85b4e52348f6e", "text": "Honeypots are more and more used to collect data on malicious activities on the Internet and to better understand the strategies and techniques used by attackers to compromise target systems. Analysis and modeling methodologies are needed to support the characterization of attack processes based on the data collected from the honeypots. 
This paper presents some empirical analyses based on the data collected from the Leurré.com honeypot platforms deployed on the Internet and presents some preliminary modeling studies aimed at fulfilling such objectives.", "title": "" }, { "docid": "f00b9a311fb8b14100465c187c9e4659", "text": "We propose a framework for solving combinatorial optimization problems of which the output can be represented as a sequence of input elements. As an alternative to the Pointer Network, we parameterize a policy by a model based entirely on (graph) attention layers, and train it efficiently using REINFORCE with a simple and robust baseline based on a deterministic (greedy) rollout of the best policy found during training. We significantly improve over state-of-the-art results for learning algorithms for the 2D Euclidean TSP, reducing the optimality gap for a single tour construction by more than 75% (to 0.33%) and 50% (to 2.28%) for instances with 20 and 50 nodes respectively.", "title": "" }, { "docid": "fba0ff24acbe07e1204b5fe4c492ab72", "text": "To ensure high quality software, it is crucial that non‐functional requirements (NFRs) are well specified and thoroughly tested in parallel with functional requirements (FRs). Nevertheless, in requirement specification the focus is mainly on FRs, even though NFRs have a critical role in the success of software projects. This study presents a systematic literature review of the NFR specification in order to identify the current state of the art and needs for future research. The systematic review summarizes the 51 relevant papers found and discusses them within seven major sub categories with “combination of other approaches” being the one with most prior results.", "title": "" }, { "docid": "f43ed3feda4e243a1cb77357b435fb52", "text": "Existing text generation methods tend to produce repeated and “boring” expressions. To tackle this problem, we propose a new text generation model, called Diversity-Promoting Generative Adversarial Network (DP-GAN). The proposed model assigns low reward for repeatedly generated text and high reward for “novel” and fluent text, encouraging the generator to produce diverse and informative text. Moreover, we propose a novel languagemodel based discriminator, which can better distinguish novel text from repeated text without the saturation problem compared with existing classifier-based discriminators. The experimental results on review generation and dialogue generation tasks demonstrate that our model can generate substantially more diverse and informative text than existing baselines.1", "title": "" }, { "docid": "d90a66cf63abdc1d0caed64812de7043", "text": "BACKGROUND/AIMS\nEnd-stage liver disease accounts for one in forty deaths worldwide. Chronic infections with hepatitis B virus (HBV) and hepatitis C virus (HCV) are well-recognized risk factors for cirrhosis and liver cancer, but estimates of their contributions to worldwide disease burden have been lacking.\n\n\nMETHODS\nThe prevalence of serologic markers of HBV and HCV infections among patients diagnosed with cirrhosis or hepatocellular carcinoma (HCC) was obtained from representative samples of published reports. Attributable fractions of cirrhosis and HCC due to these infections were estimated for 11 WHO-based regions.\n\n\nRESULTS\nGlobally, 57% of cirrhosis was attributable to either HBV (30%) or HCV (27%) and 78% of HCC was attributable to HBV (53%) or HCV (25%). Regionally, these infections usually accounted for >50% of HCC and cirrhosis. 
Applied to 2002 worldwide mortality estimates, these fractions represent 929,000 deaths due to chronic HBV and HCV infections, including 446,000 cirrhosis deaths (HBV: n=235,000; HCV: n=211,000) and 483,000 liver cancer deaths (HBV: n=328,000; HCV: n=155,000).\n\n\nCONCLUSIONS\nHBV and HCV infections account for the majority of cirrhosis and primary liver cancer throughout most of the world, highlighting the need for programs to prevent new infections and provide medical management and treatment for those already infected.", "title": "" }, { "docid": "955376cf6d04373c407987613d1c2bd1", "text": "Active learning (AL) is an increasingly popular strategy for mitigating the amount of labeled data required to train classifiers, thereby reducing annotator effort. We describe a real-world, deployed application of AL to the problem of biomedical citation screening for systematic reviews at the Tufts Medical Center's Evidence-based Practice Center. We propose a novel active learning strategy that exploits a priori domain knowledge provided by the expert (specifically, labeled features)and extend this model via a Linear Programming algorithm for situations where the expert can provide ranked labeled features. Our methods outperform existing AL strategies on three real-world systematic review datasets. We argue that evaluation must be specific to the scenario under consideration. To this end, we propose a new evaluation framework for finite-pool scenarios, wherein the primary aim is to label a fixed set of examples rather than to simply induce a good predictive model. We use a method from medical decision theory for eliciting the relative costs of false positives and false negatives from the domain expert, constructing a utility measure of classification performance that integrates the expert preferences. Our findings suggest that the expert can, and should, provide more information than instance labels alone. In addition to achieving strong empirical results on the citation screening problem, this work outlines many important steps for moving away from simulated active learning and toward deploying AL for real-world applications.", "title": "" }, { "docid": "56fa6f96657182ff527e42655bbd0863", "text": "Nootropics or smart drugs are well-known compounds or supplements that enhance the cognitive performance. They work by increasing the mental function such as memory, creativity, motivation, and attention. Recent researches were focused on establishing a new potential nootropic derived from synthetic and natural products. The influence of nootropic in the brain has been studied widely. The nootropic affects the brain performances through number of mechanisms or pathways, for example, dopaminergic pathway. Previous researches have reported the influence of nootropics on treating memory disorders, such as Alzheimer's, Parkinson's, and Huntington's diseases. Those disorders are observed to impair the same pathways of the nootropics. Thus, recent established nootropics are designed sensitively and effectively towards the pathways. Natural nootropics such as Ginkgo biloba have been widely studied to support the beneficial effects of the compounds. 
Present review is concentrated on the main pathways, namely, dopaminergic and cholinergic system, and the involvement of amyloid precursor protein and secondary messenger in improving the cognitive performance.", "title": "" }, { "docid": "c26eabb377db5f1033ec6d354d890a6f", "text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.", "title": "" }, { "docid": "a712b6efb5c869619864cd817c2e27e1", "text": "We measure the value of promotional activities and referrals by content creators to an online platform of user-generated content. To do so, we develop a modeling approach that explains individual-level choices of visiting the platform, creating, and purchasing content, as a function of consumer characteristics and marketing activities, allowing for the possibility of interdependence of decisions within and across users. Empirically, we apply our model to Hewlett-Packard’s (HP) print-on-demand service of user-created magazines, named MagCloud. We use two distinct data sets to show the applicability of our approach: an aggregate-level data set from Google Analytics, which is a widely available source of data to managers, and an individual-level data set from HP. Our results compare content creator activities, which include referrals and word-ofmouth efforts, with firm-based actions, such as price promotions and public relations. We show that price promotions have strong effects, but limited to the purchase decisions, while content creator referrals and public relations have broader effects which impact all consumer decisions at the platform. We provide recommendations to the level of the firm’s investments when “free” promotional activities by content creators exist. These “free” marketing campaigns are likely to have a substantial presence in most online services of user-generated content.", "title": "" }, { "docid": "1072728cf72fe02d3e1f3c45bfc877b5", "text": "The annihilating filter-based low-rank Hanel matrix approach (ALOHA) is one of the state-of-the-art compressed sensing approaches that directly interpolates the missing k-space data using low-rank Hankel matrix completion. Inspired by the recent mathematical discovery that links deep neural networks to Hankel matrix decomposition using data-driven framelet basis, here we propose a fully data-driven deep learning algorithm for k-space interpolation. Our network can be also easily applied to non-Cartesian k-space trajectories by simply adding an additional re-gridding layer. 
Extensive numerical experiments show that the proposed deep learning method significantly outperforms the existing image-domain deep learning approaches.", "title": "" }, { "docid": "dc4a08d2b98f1e099227c4f80d0b84df", "text": "We address action temporal localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in action temporal localization via multi-stage segment-based 3D ConvNets: (1) a proposal stage identifies candidate segments in a long video that may contain actions; (2) a classification stage learns one-vs-all action classification model to serve as initialization for the localization stage; and (3) a localization stage fine-tunes on the model learnt in the classification stage to localize each action instance. We propose a novel loss function for the localization stage to explicitly consider temporal overlap and therefore achieve high temporal localization accuracy. On two large-scale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7% to 7.4% on MEXaction2 and increased from 15.0% to 19.0% on THUMOS 2014, when the overlap threshold for evaluation is set to 0.5.", "title": "" }, { "docid": "c21e999407da672be5bac4eaba950168", "text": "Software engineers are frequently faced with tasks that can be expressed as optimization problems. To support them with automation, search-based model-driven engineering combines the abstraction power of models with the versatility of meta-heuristic search algorithms. While current approaches in this area use genetic algorithms with fixed mutation operators to explore the solution space, the efficiency of these operators may heavily depend on the problem at hand. In this work, we propose FitnessStudio, a technique for generating efficient problem-tailored mutation operators automatically based on a two-tier framework. The lower tier is a regular meta-heuristic search whose mutation operator is trained by an upper-tier search using a higher-order model transformation. We implemented this framework using the Henshin transformation language and evaluated it in a benchmark case, where the generated mutation operators enabled an improvement to the state of the art in terms of result quality, without sacrificing performance.", "title": "" }, { "docid": "5950aadef33caa371f0de304b2b4869d", "text": "Responding to a 2015 MISQ call for research on service innovation, this study develops a conceptual model of service innovation in higher education academic libraries. Digital technologies have drastically altered the delivery of information services in the past decade, raising questions about critical resources, their interaction with digital technologies, and the value of new services and their measurement. Based on new product development (NPD) and new service development (NSD) processes and the service-dominant logic (SDL) perspective, this research-in-progress presents a conceptual model that theorizes interactions between critical resources and digital technologies in an iterative process for delivery of service innovation in academic libraries.
The study also suggests future research paths to confirm, expand, and validate the new service innovation model.", "title": "" }, { "docid": "1b063dfecff31de929383b8ab74f7f6b", "text": "This paper studies a class of adaptive gradient based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as the “Adam-type”, includes the popular algorithms such as Adam (Kingma & Ba, 2014) , AMSGrad (Reddi et al., 2018) , AdaGrad (Duchi et al., 2011). Despite their popularity in training deep neural networks (DNNs), the convergence of these algorithms for solving non-convex problems remains an open question. In this paper, we develop an analysis framework and a set of mild sufficient conditions that guarantee the convergence of the Adam-type methods, with a convergence rate of order O(log T/ √ T ) for non-convex stochastic optimization. Our convergence analysis applies to a new algorithm called AdaFom (AdaGrad with First Order Momentum). We show that the conditions are essential, by identifying concrete examples in which violating the conditions makes an algorithm diverge. Besides providing one of the first comprehensive analysis for Adam-type methods in the non-convex setting, our results can also help the practitioners to easily monitor the progress of algorithms and determine their convergence behavior.", "title": "" }, { "docid": "8c03df6650b3e400bc5447916d01820a", "text": "People called night owls habitually have late bedtimes and late times of arising, sometimes suffering a heritable circadian disturbance called delayed sleep phase syndrome (DSPS). Those with DSPS, those with more severe progressively-late non-24-hour sleep-wake cycles, and those with bipolar disorder may share genetic tendencies for slowed or delayed circadian cycles. We searched for polymorphisms associated with DSPS in a case-control study of DSPS research participants and a separate study of Sleep Center patients undergoing polysomnography. In 45 participants, we resequenced portions of 15 circadian genes to identify unknown polymorphisms that might be associated with DSPS, non-24-hour rhythms, or bipolar comorbidities. We then genotyped single nucleotide polymorphisms (SNPs) in both larger samples, using Illumina Golden Gate assays. Associations of SNPs with the DSPS phenotype and with the morningness-eveningness parametric phenotype were computed for both samples, then combined for meta-analyses. Delayed sleep and \"eveningness\" were inversely associated with loci in circadian genes NFIL3 (rs2482705) and RORC (rs3828057). A group of haplotypes overlapping BHLHE40 was associated with non-24-hour sleep-wake cycles, and less robustly, with delayed sleep and bipolar disorder (e.g., rs34883305, rs34870629, rs74439275, and rs3750275 were associated with n=37, p=4.58E-09, Bonferroni p=2.95E-06). Bright light and melatonin can palliate circadian disorders, and genetics may clarify the underlying circadian photoperiodic mechanisms. After further replication and identification of the causal polymorphisms, these findings may point to future treatments for DSPS, non-24-hour rhythms, and possibly bipolar disorder or depression.", "title": "" }, { "docid": "b8dfe30c07f0caf46b3fc59406dbf017", "text": "We describe an extensible approach to generating questions for the purpose of reading comprehension assessment and practice. 
Our framework for question generation composes general-purpose rules to transform declarative sentences into questions, is modular in that existing NLP tools can be leveraged, and includes a statistical component for scoring questions based on features of the input, output, and transformations performed. In an evaluation in which humans rated questions according to several criteria, we found that our implementation achieves 43.3% precisionat-10 and generates approximately 6.8 acceptable questions per 250 words of source text.", "title": "" }, { "docid": "139f750d4e53b86bc785785b7129e6ee", "text": "Enterprise Resource Planning (ERP) systems hold great promise for integrating business processes and have proven their worth in a variety of organizations. Yet the gains that they have enabled in terms of increased productivity and cost savings are often achieved in the face of daunting usability problems. While one frequently hears anecdotes about the difficulties involved in using ERP systems, there is little documentation of the types of problems typically faced by users. The purpose of this study is to begin addressing this gap by categorizing and describing the usability issues encountered by one division of a Fortune 500 company in the first years of its large-scale ERP implementation. This study also demonstrates the promise of using collaboration theory to evaluate usability characteristics of existing systems and to design new systems. Given the impressive results already achieved by some corporations with these systems, imagine how much more would be possible if understanding how to use them weren’t such an", "title": "" }, { "docid": "7b1b0e31384cb99caf0f3d8cf8134a53", "text": "Toxic epidermal necrolysis (TEN) is one of the most threatening adverse reactions to various drugs. No case of concomitant occurrence TEN and severe granulocytopenia following the treatment with cefuroxime has been reported to date. Herein we present a case of TEN that developed eighteen days of the initiation of cefuroxime axetil therapy for urinary tract infection in a 73-year-old woman with chronic renal failure and no previous history of allergic diathesis. The condition was associated with severe granulocytopenia and followed by gastrointestinal hemorrhage, severe sepsis and multiple organ failure syndrome development. Despite intensive medical treatment the patient died. The present report underlines the potential of cefuroxime to simultaneously induce life threatening adverse effects such as TEN and severe granulocytopenia. Further on, because the patient was also taking furosemide for chronic renal failure, the possible unfavorable interactions between the two drugs could be hypothesized. Therefore, awareness of the possible drug interaction is necessary, especially when given in conditions of their altered pharmacokinetics as in case of chronic renal failure.", "title": "" } ]
scidocsrr
abbd4694897bb5c4fd5866f00de2d593
Aesthetics and credibility in web site design
[ { "docid": "e7c8abf3387ba74ca0a6a2da81a26bc4", "text": "An experiment was conducted to test the relationships between users' perceptions of a computerized system's beauty and usability. The experiment used a computerized application as a surrogate for an Automated Teller Machine (ATM). Perceptions were elicited before and after the participants used the system. Pre-experimental measures indicate strong correlations between system's perceived aesthetics and perceived usability. Post-experimental measures indicated that the strong correlation remained intact. A multivariate analysis of covariance revealed that the degree of system's aesthetics affected the post-use perceptions of both aesthetics and usability, whereas the degree of actual usability had no such effect. The results resemble those found by social psychologists regarding the effect of physical attractiveness on the valuation of other personality attributes. The ®ndings stress the importance of studying the aesthetic aspect of human±computer interaction (HCI) design and its relationships to other design dimensions. q 2000 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "36a615660b8f0c60bef06b5a57887bd1", "text": "Quantum cryptography is an emerging technology in which two parties can secure network communications by applying the phenomena of quantum physics. The security of these transmissions is based on the inviolability of the laws of quantum mechanics. Quantum cryptography was born in the early seventies when Steven Wiesner wrote \"Conjugate Coding\", which took more than ten years to end this paper. The quantum cryptography relies on two important elements of quantum mechanics - the Heisenberg Uncertainty principle and the principle of photon polarization. The Heisenberg Uncertainty principle states that, it is not possible to measure the quantum state of any system without distributing that system. The principle of photon polarization states that, an eavesdropper can not copy unknown qubits i.e. unknown quantum states, due to no-cloning theorem which was first presented by Wootters and Zurek in 1982. This research paper concentrates on the theory of  quantum cryptography, and how this technology contributes to the network security. This research paper summarizes the current state of quantum cryptography, and the real–world application implementation of this technology, and finally the future direction in which the quantum cryptography is headed forwards.", "title": "" }, { "docid": "dfa5334f77bba5b1eeb42390fed1bca3", "text": "Personality was studied as a conditioner of the effects of stressful life events on illness onset. Two groups of middle and upper level executives had comparably high degrees of stressful life events in the previous 3 years, as measured by the Holmes and Rahe Schedule of Recent Life Events. One group (n = 86) suffered high stress without falling ill, whereas the other (n = 75) reported becoming sick after their encounter with stressful life events. Illness was measured by the Wyler, Masuda, and Holmes Seriousness of Illness Survey. Discriminant function analysis, run on half of the subjects in each group and cross-validated on the remaining cases, supported the prediction that high stress/low illness executives show, by comparison with high stress/high illness executives, more hardiness, that is, have a stronger commitment to self, an attitude of vigorousness toward the environment, a sense of meaningfulness, and an internal locus of control.", "title": "" }, { "docid": "bf08d673b40109d6d6101947258684fd", "text": "More and more medicinal mushrooms have been widely used as a miraculous herb for health promotion, especially by cancer patients. Here we report screening thirteen mushrooms for anti-cancer cell activities in eleven different cell lines. Of the herbal products tested, we found that the extract of Amauroderma rude exerted the highest activity in killing most of these cancer cell lines. Amauroderma rude is a fungus belonging to the Ganodermataceae family. The Amauroderma genus contains approximately 30 species widespread throughout the tropical areas. Since the biological function of Amauroderma rude is unknown, we examined its anti-cancer effect on breast carcinoma cell lines. We compared the anti-cancer activity of Amauroderma rude and Ganoderma lucidum, the most well-known medicinal mushrooms with anti-cancer activity and found that Amauroderma rude had significantly higher activity in killing cancer cells than Ganoderma lucidum. 
We then examined the effect of Amauroderma rude on breast cancer cells and found that at low concentrations, Amauroderma rude could inhibit cancer cell survival and induce apoptosis. Treated cancer cells also formed fewer and smaller colonies than the untreated cells. When nude mice bearing tumors were injected with Amauroderma rude extract, the tumors grew at a slower rate than the control. Examination of these tumors revealed extensive cell death, decreased proliferation rate as stained by Ki67, and increased apoptosis as stained by TUNEL. Suppression of c-myc expression appeared to be associated with these effects. Taken together, Amauroderma rude represented a powerful medicinal mushroom with anti-cancer activities.", "title": "" }, { "docid": "f285815e47ea0613fb1ceb9b69aee7df", "text": "Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.", "title": "" }, { "docid": "aa418cfd93eaba0d47084d0b94be69b8", "text": "Single-trial classification of Event-Related Potentials (ERPs) is needed in many real-world brain-computer interface (BCI) applications. However, because of individual differences, the classifier needs to be calibrated by using some labeled subject specific training samples, which may be inconvenient to obtain. In this paper we propose a weighted adaptation regularization (wAR) approach for offline BCI calibration, which uses data from other subjects to reduce the amount of labeled data required in offline single-trial classification of ERPs. Our proposed model explicitly handles class-imbalance problems which are common in many real-world BCI applications. War can improve the classification performance, given the same number of labeled subject-specific training samples, or, equivalently, it can reduce the number of labeled subject-specific training samples, given a desired classification accuracy. To reduce the computational cost of wAR, we also propose a source domain selection (SDS) approach. Our experiments show that wARSDS can achieve comparable performance with wAR but is much less computationally intensive. We expect wARSDS to find broad applications in offline BCI calibration.", "title": "" }, { "docid": "35b82263484452d83519c68a9dfb2778", "text": "S Music and the Moving Image Conference May 27th 29th, 2016 1. 
Loewe Friday, May 27, 2016, 9:30AM – 11:00AM MUSIC EDITING: PROCESS TO PRACTICE—BRIDGING THE VARIOUS PERSPECTIVES IN FILMMAKING AND STORY-TELLING Nancy Allen, Film Music Editor While the technical aspects of music editing and film-making continue to evolve, the fundamental nature of story-telling remains the same. Ideally, the role of the music editor exists at an intersection between the Composer, Director, and Picture Editor, where important creative decisions are made. This privileged position allows the Music Editor to better explore how to tell the story through music and bring the evolving vision of the film into tighter focus. 2. Loewe Friday, May 27, 2016, 11:30 AM – 1:00 PM GREAT EXPECTATIONS? THE CHANGING ROLE OF AUDIOVISUAL INCONGRUENCE IN CONTEMPORARY MULTIMEDIA Dave Ireland, University of Leeds Film-music moments that are perceived to be incongruent, misfitting or inappropriate have often been described as highly memorable. These claims can in part be explained by the separate processing of sonic and visual information that can occur when incongruent combinations subvert expectations of an audiovisual pairing in which the constituent components share a greater number of properties. Drawing upon a sequence from the TV sitcom Modern Family in which images of violent destruction are juxtaposed with performance of tranquil classical music, this paper highlights the increasing prevalence of such uses of audiovisual difference in contemporary multimedia. Indeed, such principles even now underlie a form of Internet meme entitled ‘Whilst I play unfitting music’. Such examples serve to emphasize the evolving functions of incongruence, emphasizing the ways in which such types of audiovisual pairing now also serve as a marker of authorial style and a source of intertextual parody. Drawing upon psychological theories of expectation and ideas from semiotics that facilitate consideration of the potential disjunction between authorial intent and perceiver response, this paper contends that such forms of incongruence should be approached from a psycho-semiotic perspective. Through consideration of the aforementioned examples, it will be demonstrated that this approach allows for: more holistic understanding of evolving expectations and attitudes towards audiovisual incongruence that may shape perceiver response; and a more nuanced mode of analyzing factors that may influence judgments of film-music fit and appropriateness. MUSICAL META-MORPHOSIS: BREAKING THE FOURTH WALL THROUGH DIEGETIC-IZING AND METACAESURA Rebecca Eaton, Texas State University In “The Fantastical Gap,” Stilwell suggests that metadiegetic music—which puts the audience “inside a character’s head”— begets such a strong spectator bond that it becomes “a kind of musical ‘direct address,’ threatening to break the fourth wall that is the screen.” While Stillwell theorizes a breaking of the fourth wall through audience over-identification, in this paper I define two means of film music transgression that potentially unsuture an audience, exposing film qua film: “diegetic-izing” and “metacaesura.” While these postmodern techniques 1) reveal film as a constructed artifact, and 2) thus render the spectator a more, not less, “troublesome viewing subject,” my analyses demonstrate that these breaches of convention still further the narrative aims of their respective films. Both Buhler and Stilwell analyze music that gradually dissolves from non-diegetic to diegetic. 
“Diegeticizing” unexpectedly reveals what was assumed to be nondiegetic as diegetic, subverting Gorbman’s first principle of invisibility. In parodies including Blazing Saddles and Spaceballs, this reflexive uncloaking plays for laughs. The Truman Show and the Hunger Games franchise skewer live soundtrack musicians and timpani—ergo, film music itself—as tools of emotional manipulation or propaganda. “Metacaesura” serves as another means of breaking the fourth wall. Metacaesura arises when non-diegetic music cuts off in media res. While diegeticizing renders film music visible, metacaesura renders it audible (if only in hindsight). In Honda’s “Responsible You,” Pleasantville, and The Truman Show, the dramatic cessation of nondiegetic music compels the audience to acknowledge the constructedness of both film and their own worlds. Partial Bibliography Brown, Tom. Breaking the Fourth Wall: Direct Address in the Cinema. Edinburgh: Edinburgh University Press, 2012. Buhler, James. “Analytical and Interpretive Approaches to Film Music (II): Interpreting Interactions of Music and Film.” In Film Music: An Anthology of Critical Essays, edited by K.J. Donnelly, 39-61. Edinburgh University Press, 2001. Buhler, James, Anahid Kassabian, David Neumeyer, and Robynn Stillwell. “Roundtable on Film Music.” Velvet Light Trap 51 (Spring 2003): 73-91. Buhler, James, Caryl Flinn, and David Neumeyer, eds. Music and Cinema. Hanover: Wesleyan/University Press of New England, 2000. Eaton, Rebecca M. Doran. “Unheard Minimalisms: The Function of the Minimalist Technique in Film Scores.” PhD diss., The University of Texas at Austin, 2008. Gorbman, Claudia. Unheard Melodies: Narrative Film Music. Bloomington: University of Indiana Press, 1987. Harries, Dan. Film Parody. London: British Film Institute, 2000. Kassabian, Anahid. Hearing Film: Tracking Identifications in Contemporary Hollywood Film Music. New York: Routledge, 2001. Neumeyer, David. “Diegetic/nondiegetic: A Theoretical Model.” Music and the Moving Image 2.1 (2009): 26–39. Stilwell, Robynn J. “The Fantastical Gap Between Diegetic and Nondiegetic.” In Beyond the Soundtrack, edited by Daniel Goldmark, Lawrence Kramer, and Richard Leppert, 184202. Berkeley: The University of California Press, 2007. REDEFINING PERSPECTIVE IN ATONEMENT: HOW MUSIC SET THE STAGE FOR MODERN MEDIA CONSUMPTION Lillie McDonough, New York University One of the most striking narrative devices in Joe Wright’s film adaptation of Atonement (2007) is in the way Dario Marianelli’s original score dissolves the boundaries between diagetic and non-diagetic music at key moments in the drama. I argue that these moments carry us into a liminal state where the viewer is simultaneously in the shoes of a first person character in the world of the film and in the shoes of a third person viewer aware of the underscore as a hallmark of the fiction of a film in the first place. This reflects the experience of Briony recalling the story, both as participant and narrator, at the metalevel of the audience. The way the score renegotiates the customary musical playing space creates a meta-narrative that resembles one of the fastest growing forms of digital media of today: videogames. At their core, video games work by placing the player in a liminal state of both a viewer who watches the story unfold and an agent who actively takes part in the story’s creation. 
In fact, the growing trend towards hyperrealism and virtual reality intentionally progressively erodes the boundaries between the first person agent in real the world and agent on screen in the digital world. Viewed through this lens, the philosophy behind the experience of Atonement’s score and sound design appears to set the stage for way our consumption of media has developed since Atonement’s release in 2007. Mainly, it foreshadows and highlights a prevalent desire to progressively blur the lines between media and life. 3. Room 303, Friday, May 27, 2016, 11:30 AM – 1:00 PM HOLLYWOOD ORCHESTRATORS AND GHOSTWRITERS OF THE 1960s AND 1970s: THE CASE OF MOACIR SANTOS Lucas Bonetti, State University of Campinas In Hollywood in the 1960s and 1970s, freelance film composers trying to break into the market saw ghostwriting as opportunities to their professional networks. Meanwhile, more renowned composers saw freelancers as means of easing their work burdens. The phenomenon was so widespread that freelancers even sometimes found themselves ghostwriting for other ghostwriters. Ghostwriting had its limitations, though: because freelancers did not receive credit, they could not grow their resumes. Moreover, their music often had to follow such strict guidelines that they were not able to showcase their own compositional voices. Being an orchestrator raised fewer questions about authorship, and orchestrators usually did not receive credit for their work. Typically, composers provided orchestrators with detailed sketches, thereby limiting their creative possibilities. This story would suggest that orchestrators were barely more than copyists—though with more intense workloads. This kind of thankless work was especially common in scoring for episodic television series of the era, where the fast pace of the industry demanded more agility and productivity. Brazilian composer Moacir Santos worked as a Hollywood ghostwriter and orchestrator starting in 1968. His experiences exemplify the difficulties of these professions during this era. In this paper I draw on an interview-based research I conducted in the Los Angeles area to show how Santos’s experiences showcase the difficulties of being a Hollywood outsider at the time. In particular, I examine testimony about racial prejudice experienced by Santos, and how misinformation about his ghostwriting activity has led to misunderstandings among scholars about his contributions. SING A SONG!: CHARITY BAILEY AND INTERRACIAL MUSIC EDUCATION ON 1950s NYC TELEVISION Melinda Russell, Carleton College Rhode Island native Charity Bailey (1904-1978) helped to define a children’s music market in print and recordings; in each instance the contents and forms she developed are still central to American children’s musical culture and practice. After study at Juilliard and Dalcroze, Bailey taught music at the Little Red School House in Greenwich Village from 1943-1954, where her students included Mary Travers and Eric Weissberg. Bailey’s focus on African, African-American, and Car", "title": "" }, { "docid": "bdfb48fcd7ef03d913a41ca8392552b6", "text": "Recent advance of large scale similarity search involves using deeply learned representations to improve the search accuracy and use vector quantization methods to increase the search speed. However, how to learn deep representations that strongly preserve similarities between data pairs and can be accurately quantized via vector quantization remains a challenging task. 
Existing methods simply leverage quantization loss and similarity loss, which result in unexpectedly biased back-propagating gradients and affect the search performance. To this end, we propose a novel gradient snapping layer (GSL) to directly regularize the back-propagating gradient towards a neighboring codeword; the generated gradients are unbiased for reducing similarity loss and also propel the learned representations to be accurately quantized. Joint deep representation and vector quantization learning can be easily performed by alternately optimizing the quantization codebook and the deep neural network. The proposed framework is compatible with various existing vector quantization approaches. Experimental results demonstrate that the proposed framework is effective, flexible and outperforms the state-of-the-art large scale similarity search methods.", "title": "" }, { "docid": "dd51e9bed7bbd681657e8742bb5bf280", "text": "Automated negotiation systems with self-interested agents are becoming increasingly important. One reason for this is the technology push of a growing standardized communication infrastructure (Internet, WWW, NII, EDI, KQML, FIPA, Concordia, Voyager, Odyssey, Telescript, Java, etc.) over which separately designed agents belonging to different organizations can interact in an open environment in real time and safely carry out transactions. The second reason is strong application pull for computer support for negotiation at the operative decision-making level. For example, we are witnessing the advent of small-transaction electronic commerce on the Internet for purchasing goods, information and communication bandwidth. There is also an industrial trend toward virtual enterprises: dynamic alliances of small, agile enterprises which together can take advantage of economies of scale when available (e.g., respond to more diverse orders than individual agents can) but do not suffer from diseconomies of scale. Multiagent technology facilitates such negotiation at the operative decision-making level. This automation can save labor time of human negotiators, but in addition other savings are possible because computational agents can be more effective at finding beneficial short-term contracts than humans are in strategically and combinatorially complex settings. This chapter discusses multiagent negotiation in situations where agents may have different goals, and each agent is trying to maximize its own good without concern for the global good. Such self-interest naturally prevails in negotiations among independent businesses or individuals. In building computer support for negotiation in such settings, the issue of self-interest has to be dealt with. In cooperative distributed problem solving, the system designer imposes an interaction protocol and a strategy (a mapping from state history to action) a", "title": "" }, { "docid": "ed0d2151f5f20a233ed8f1051bc2b56c", "text": "This paper discloses the development and evaluation of die attach materials using base metals (Cu and Sn) in three different types of composite. 
Mixing them into paste or sheet shape for die attach, we have confirmed that one of Sn-Cu components having IMC network near its surface has major role to provide robust interconnect especially for high temperature applications beyond 200°C after sintering.", "title": "" }, { "docid": "852c85ecbed639ea0bfe439f69fff337", "text": "In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables. These two quantities are complementary and are jointly applied to information behavior analysis in most cases. The uncertainty property in information asserts a fundamental trade-off between Fisher information and Shannon information, which enlightens us the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the FisherShannon plane, and demonstrate that the representation learning and the log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide with a better comprehension of VAEs in tasks such as highresolution reconstruction, and representation learning in the perspective of Fisher information and Shannon information. We further propose a variant of VAEs, termed as Fisher auto-encoder (FAE), for practical needs to balance Fisher information and Shannon information. Our experimental results have demonstrated its promise in improving the reconstruction accuracy and avoiding the non-informative latent code as occurred in previous works.", "title": "" }, { "docid": "30db2040ab00fd5eec7b1eb08526f8e8", "text": "We formulate an equivalence between machine learning and the formulation of statistical data assimilation as used widely in physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. This connection has been noted in the machine learning literature. We add a perspective that expands on how methods from statistical physics and aspects of Lagrangian and Hamiltonian dynamics play a role in how networks can be trained and designed. Within the discussion of this equivalence, we show that adding more layers (making the network deeper) is analogous to adding temporal resolution in a data assimilation framework. Extending this equivalence to recurrent networks is also discussed. We explore how one can find a candidate for the global minimum of the cost functions in the machine learning context using a method from data assimilation. Calculations on simple models from both sides of the equivalence are reported. Also discussed is a framework in which the time or layer label is taken to be continuous, providing a differential equation, the Euler-Lagrange equation and its boundary conditions, as a necessary condition for a minimum of the cost function. This shows that the problem being solved is a two-point boundary value problem familiar in the discussion of variational methods. The use of continuous layers is denoted “deepest learning.” These problems respect a symplectic symmetry in continuous layer phase space. Both Lagrangian versions and Hamiltonian versions of these problems are presented. Their well-studied implementation in a discrete time/layer, while respecting the symplectic structure, is addressed. 
The Hamiltonian version provides a direct rationale for backpropagation as a solution method for a certain two-point boundary value problem.", "title": "" }, { "docid": "19f604732dd88b01e1eefea1f995cd54", "text": "Power electronic transformer (PET) technology is one of the promising technology for medium/high power conversion systems. With the cutting-edge improvements in the power electronics and magnetics, makes it possible to substitute conventional line frequency transformer traction (LFTT) technology with the PET technology. Over the past years, research and field trial studies are conducted to explore the technical challenges associated with the operation, functionalities, and control of PET-based traction systems. This paper aims to review the essential requirements, technical challenges, and the existing state of the art of PET traction system architectures. Finally, this paper discusses technical considerations and introduces the new research possibilities especially in the power conversion stages, PET design, and the power switching devices.", "title": "" }, { "docid": "d9950f75380758d0a0f4fd9d6e885dfd", "text": "In recent decades, the interactive whiteboard (IWB) has become a relatively common educational tool in Western schools. The IWB is essentially a large touch screen, that enables the user to interact with digital content in ways that are not possible with an ordinary computer-projector-canvas setup. However, the unique possibilities of IWBs are rarely leveraged to enhance teaching and learning beyond the primary school level. This is particularly noticeable in high school physics. We describe how a high school physics teacher learned to use an IWB in a new way, how she planned and implemented a lesson on the topic of orbital motion of planets, and what tensions arose in the process. We used an ethnographic approach to account for the teacher’s and involved students’ perspectives throughout the process of teacher preparation, lesson planning, and the implementation of the lesson. To interpret the data, we used the conceptual framework of activity theory. We found that an entrenched culture of traditional white/blackboard use in physics instruction interferes with more technologically innovative and more student-centered instructional approaches that leverage the IWB’s unique instructional potential. Furthermore, we found that the teacher’s confidence in the mastery of the IWB plays a crucial role in the teacher’s willingness to transfer agency within the lesson to the students.", "title": "" }, { "docid": "b1e2326ebdf729e5b55822a614b289a9", "text": "The work presented in this paper is targeted at the first phase of the test and measurements product life cycle, namely standardisation. During this initial phase of any product, the emphasis is on the development of standards that support new technologies while leaving the scope of implementations as open as possible. To allow the engineer to freely create and invent tools that can quickly help him simulate or emulate his ideas are paramount. 
Within this scope, a traffic generation system has been developed for IEC 61850 Sampled Values which will help in the evaluation of the data models, data acquisition, data fusion, data integration and data distribution between the various devices and components that use this complex set of evolving standards in Smart Grid systems.", "title": "" }, { "docid": "4a72f9b04ba1515c0d01df0bc9b60ed7", "text": "Distributed generators (DGs) sometimes provide the lowest cost solution to handling low-voltage or overload problems. In conjunction with handling such problems, a DG can be placed for optimum efficiency or optimum reliability. Such optimum placements of DGs are investigated. The concept of segments, which has been applied in previous reliability studies, is used in the DG placement. The optimum locations are sought for time-varying load patterns. It is shown that the circuit reliability is a function of the loading level. The difference of DG placement between optimum efficiency and optimum reliability varies under different load conditions. Observations and recommendations concerning DG placement for optimum reliability and efficiency are provided in this paper. Economic considerations are also addressed.", "title": "" }, { "docid": "91bf842f809dd369644ffd2b10b9c099", "text": "We tackle the problem of multi-label classification of fashion images, learning from noisy data with minimal human supervision. We present a new dataset of full body poses, each with a set of 66 binary labels corresponding to the information about the garments worn in the image obtained in an automatic manner. As the automatically-collected labels contain significant noise, we manually correct the labels for a small subset of the data, and use these correct labels for further training and evaluation. We build upon a recent approach that both cleans the noisy labels and learns to classify, and introduce simple changes that can significantly improve the performance.", "title": "" }, { "docid": "4fea653dd0dd8cb4ac941b2368ceb78f", "text": "During present study the antibacterial activity of black pepper (Piper nigrum Linn.) and its mode of action on bacteria were done. The extracts of black pepper were evaluated for antibacterial activity by disc diffusion method. The minimum inhibitory concentration (MIC) was determined by tube dilution method and mode of action was studied on membrane leakage of UV260 and UV280 absorbing material spectrophotometrically. The diameter of the zone of inhibition against various Gram positive and Gram negative bacteria was measured. The MIC was found to be 50-500ppm. Black pepper altered the membrane permeability resulting the leakage of the UV260 and UV280 absorbing material i.e., nucleic acids and proteins into the extra cellular medium. The results indicate excellent inhibition on the growth of Gram positive bacteria like Staphylococcus aureus, followed by Bacillus cereus and Streptococcus faecalis. Among the Gram negative bacteria Pseudomonas aeruginosa was more susceptible followed by Salmonella typhi and Escherichia coli.", "title": "" }, { "docid": "e812bed02753b807d1e03a2e05e87cb8", "text": "ion level. It is especially useful in the case of expert-based estimation, where it is easier for experts to embrace and estimate smaller pieces of project work. Moreover, the increased level of detail during estimation—for instance, by breaking down software products and processes—implies higher transparency of estimates. 
In practice, there is a good chance that the bottom estimates would be mixed below and above the actual effort. As a consequence, estimation errors at the bottom level will cancel each other out, resulting in smaller estimation error than if a top-down approach were used. This phenomenon is related to the mathematical law of large numbers. However, the more granular the individual estimates, the more time-consuming the overall estimation process becomes. In industrial practice, a top-down strategy usually provides reasonably accurate estimates at relatively low overhead and without too much technical expertise. Although bottom-up estimation usually provides more accurate estimates, it requires the estimators involved to have expertise regarding the bottom activities and related product components that they estimate directly. In principle, applying bottom-up estimation pays off when the decomposed tasks can be estimated more accurately than the whole task. For instance, a bottom-up strategy proved to provide better results when applied to high-uncertainty or complex estimation tasks, which are usually underestimated when considered as a whole. Furthermore, it is often easy to forget activities and/or underestimate the degree of unexpected events, which leads to underestimation of total effort. However, from the mathematical point of view (law of large numbers mentioned), dividing the project into smaller work packages provides better data for estimation and reduces overall estimation error. Experiences presented by Jørgensen (2004b) suggest that in the context of expert-based estimation, software companies should apply a bottom-up strategy unless the estimators have experience from, or access to, very similar projects. In the context of estimation based on human judgment, typical threats of individual and group estimation should be considered. Refer to Sect. 6.4 for an overview of the strengths and weaknesses of estimation based on human judgment.", "title": "" }, { "docid": "17611b0521b69ad2b22eeadc10d6d793", "text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.", "title": "" } ]
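The estimation passage above argues that unbiased errors on many small work packages partially cancel, so bottom-up totals tend to be closer to actual effort than a single top-level guess. The simulation below is an illustrative sketch of that law-of-large-numbers effect; the task counts, effort values and error range are assumptions chosen for illustration, not figures from the source.

```python
# Monte Carlo sketch: relative error of a project total estimated as the sum
# of n noisy but unbiased work-package estimates (n = 1 mimics one top-down guess).
import random

random.seed(42)

def mean_relative_error(n_tasks, n_projects=10_000, task_effort=10.0, rel_error=0.4):
    errors = []
    for _ in range(n_projects):
        actual = n_tasks * task_effort
        # Each bottom-level estimate is unbiased but noisy (+/- rel_error).
        estimate = sum(task_effort * (1 + random.uniform(-rel_error, rel_error))
                       for _ in range(n_tasks))
        errors.append(abs(estimate - actual) / actual)
    return sum(errors) / len(errors)

for n in (1, 5, 20, 100):
    print(f"{n:>3} work packages -> mean relative error {mean_relative_error(n):.1%}")
```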
scidocsrr
2577cdc082a2d03bd66bf2e56128a68b
Making Learning and Web 2.0 Technologies Work for Higher Learning Institutions in Africa
[ { "docid": "b9e7fedbc42f815b35351ec9a0c31b33", "text": "Proponents have marketed e-learning by focusing on its adoption as the right thing to do while disregarding, among other things, the concerns of the potential users, the adverse effects on users and the existing research on the use of e-learning or related innovations. In this paper, the e-learning-adoption proponents are referred to as the technopositivists. It is argued that most of the technopositivists in the higher education context are driven by a personal agenda, with the aim of propagating a technopositivist ideology to stakeholders. The technopositivist ideology is defined as a ‘compulsive enthusiasm’ about e-learning in higher education that is being created, propagated and channelled repeatedly by the people who are set to gain without giving the educators the time and opportunity to explore the dangers and rewards of e-learning on teaching and learning. Ten myths on e-learning that the technopositivists have used are presented with the aim of initiating effective and constructive dialogue, rather than merely criticising the efforts being made. Introduction The use of technology, and in particular e-learning, in higher education is becoming increasingly popular. However, Guri-Rosenblit (2005) and Robertson (2003) propose that educational institutions should step back and reflect on critical questions regarding the use of technology in teaching and learning. The focus of Guri-Rosenblit’s article is on diverse issues of e-learning implementation in higher education, while Robertson focuses on the teacher. Both papers show that there is a change in the ‘euphoria towards eLearning’ and that a dose of techno-negativity or techno-scepticism is required so that the gap between rhetoric in the literature (with all the promises) and actual implementation can be bridged for an informed stance towards e-learning adoption. British Journal of Educational Technology Vol 41 No 2 2010 199–212 doi:10.1111/j.1467-8535.2008.00910.x © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Technology in teaching and learning has been marketed or presented to its intended market with a lot of promises, benefits and opportunities. This technopositivist ideology has denied educators and educational researchers the much needed opportunities to explore the motives, power, rewards and sanctions of information and communication technologies (ICTs), as well as time to study the impacts of the new technologies on learning and teaching. Educational research cannot cope with the speed at which technology is advancing (Guri-Rosenblit, 2005; Robertson, 2003; Van Dusen, 1998; Watson, 2001). Indeed there has been no clear distinction between teaching with and teaching about technology and therefore the relevance of such studies has not been brought to the fore. Much of the focus is on the actual educational technology as it advances, rather than its educational functions or the effects it has on the functions of teaching and learning. The teaching profession has been affected by the implementation and use of ICT through these optimistic views, and the ever-changing teaching and learning culture (Kompf, 2005; Robertson, 2003). It is therefore necessary to pause and ask the question to the technopositivist ideologists: whether in e-learning the focus is on the ‘e’ or on the learning. 
The opportunities and dangers brought about by the ‘e’ in e-learning should be soberly examined. As Gandolfo (1998, p. 24) suggests: [U]ndoubtedly, there is opportunity; the effective use of technology has the potential to improve and enhance learning. Just as assuredly there is the danger that the wrong headed adoption of various technologies apart from a sound grounding in educational research and practice will result, and indeed in some instances has already resulted, in costly additions to an already expensive enterprise without any value added. That is, technology applications must be consonant with what is known about the nature of learning and must be assessed to ensure that they are indeed enhancing learners’ experiences. Technopositivist ideology is a ‘compulsory enthusiasm’ about technology that is being created, propagated and channelled repeatedly by the people who stand to gain either economically, socially, politically or otherwise in due disregard of the trade-offs associated with the technology to the target audience (Kompf, 2005; Robertson, 2003). In e-learning, the beneficiaries of the technopositivist market are doing so by presenting it with promises that would dismiss the judgement of many. This is aptly illustrated by Robertson (2003, pp. 284–285): Information technology promises to deliver more (and more important) learning for every student accomplished in less time; to ensure ‘individualization’ no matter how large and diverse the class; to obliterate the differences and disadvantages associated with race, gender, and class; to vary and yet standardize the curriculum; to remove subjectivity from student evaluation; to make reporting and record keeping a snap; to keep discipline problems to a minimum; to enhance professional learning and discourse; and to transform the discredited teacher-centered classroom into that paean of pedagogy: the constructivist, student-centered classroom, On her part, Guri-Rosenblit (2005, p. 14) argues that the proponents and marketers of e-learning present it as offering multiple uses that do not have a clear relationship with a current or future problem. She asks two ironic, vital and relevant questions: ‘If it ain’t broken, why fix it?’ and ‘Technology is the answer—but what are the questions?’ The enthusiasm to use technology for endless possibilities has led to the belief that providing 200 British Journal of Educational Technology Vol 41 No 2 2010 © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. information automatically leads to meaningful knowledge creation; hence blurring and confusing the distinction between information and knowledge. This is one of the many misconceptions that emerged with e-learning. There has been a great deal of confusion both in the marketing of and language used in the advocating of the ICTs in teaching and learning. As an example, Guri-Rosenblit (2005, p. 6) identified a list of 15 words used to describe the environment for teaching and learning with technology from various studies: ‘web-based learning, computermediated instruction, virtual classrooms, online education, e-learning, e-education, computer-driven interactive communication, open and distance learning, I-Campus, borderless education, cyberspace learning environments, distributed learning, flexible learning, blended learning, mobile-learning’. The list could easily be extended with many more words. Presented with this array of words, most educators are not sure of what e-learning is. 
Could it be synonymous to distance education? Is it just the use of online tools to enhance or enrich the learning experiences? Is it stashing the whole courseware or parts of it online for students to access? Or is it a new form of collaborative or cooperative learning? Clearly, any of these questions could be used to describe an aspect of e-learning and quite often confuse the uninformed educator. These varied words, with as many definitions, show the degree to which e-learning is being used in different cultures and in different organisations. Unfortunately, many of these uses are based on popular assumptions and myths. While the myths that will be discussed in this paper are generic, and hence applicable to e-learning use in most cultures and organisations, the paper’s focus is on higher education, because it forms part of a larger e-learning research project among higher education institutions (HEIs) and also because of the popularity of e-learning use in HEIs. Although there is considerable confusion around the term e-learning, for the purpose of this paper it will be considered as referring to the use of electronic technology and content in teaching and learning. It includes, but is not limited to, the use of the Internet; television; streaming video and video conferencing; online text and multimedia; and mobile technologies. From the nomenclature, also comes the crafting of the language for selling the technologies to the educators. Robertson (2003, p. 280) shows the meticulous choice of words by the marketers where ‘research’ is transformed into a ‘belief system’ and the past tense (used to communicate research findings) is substituted for the present and future tense, for example “Technology ‘can and will’ rather than ‘has and does’ ” in a quote from Apple’s comment: ‘At Apple, we believe the effective integration of technology into classroom instruction can and will result in higher levels of student achievement’. Similar quotes are available in the market and vendors of technology products for teaching and learning. This, however, is not limited to the market; some researchers have used similar quotes: ‘It is now conventional wisdom that those countries which fail to move from the industrial to the Information Society will not be able to compete in the globalised market system made possible by the new technologies’ (Mac Keogh, 2001, p. 223). The role of research should be to question the conventional wisdom or common sense and offer plausible answers, rather than dancing to the fine tunes of popular or mass e-Learning myths 201 © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. wisdom. It is also interesting to note that Mac Keogh (2001, p. 233) concludes that ‘[w]hen issues other than costs and performance outcomes are considered, the rationale for introducing ICTs in education is more powerful’. Does this mean that irrespective of whether ICTs ", "title": "" } ]
[ { "docid": "90d33a2476534e542e2722d7dfa26c91", "text": "Despite some notable and rare exceptions and after many years of relatively neglect (particularly in the ‘upper echelons’ of IS research), there appears to be some renewed interest in Information Systems Ethics (ISE). This paper reflects on the development of ISE by assessing the use and development of ethical theory in contemporary IS research with a specific focus on the ‘leading’ IS journals (according to the Association of Information Systems). The focus of this research is to evaluate if previous calls for more theoretically informed work are permeating the ‘upper echelons’ of IS research and if so, how (Walsham 1996; Smith and Hasnas 1999; Bell and Adam 2004). For the purposes of scope, this paper follows on from those previous studies and presents a detailed review of the leading IS publications between 2005to2007 inclusive. After several processes, a total of 32 papers are evaluated. This review highlights that whilst ethical topics are becoming increasingly popular in such influential media, most of the research continues to neglect considerations of ethical theory with preferences for a range of alternative approaches. Finally, this research focuses on some of the papers produced and considers how the use of ethical theory could contribute.", "title": "" }, { "docid": "ed176e79496053f1c4fdee430d1aa7fc", "text": "Event recognition systems rely on knowledge bases of event definitions to infer occurrences of events in time. Using a logical framework for representing and reasoning about events offers direct connections to machine learning, via Inductive Logic Programming (ILP), thus allowing to avoid the tedious and error-prone task of manual knowledge construction. However, learning temporal logical formalisms, which are typically utilized by logic-based event recognition systems is a challenging task, which most ILP systems cannot fully undertake. In addition, event-based data is usually massive and collected at different times and under various circumstances. Ideally, systems that learn from temporal data should be able to operate in an incremental mode, that is, revise prior constructed knowledge in the face of new evidence. In this work we present an incremental method for learning and revising event-based knowledge, in the form of Event Calculus programs. The proposed algorithm relies on abductive–inductive learning and comprises a scalable clause refinement methodology, based on a compressive summarization of clause coverage in a stream of examples. We present an empirical evaluation of our approach on real and synthetic data from activity recognition and city transport applications.", "title": "" }, { "docid": "ab2c4d5317d2e10450513283c21ca6d3", "text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. 
Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.", "title": "" }, { "docid": "90563706ada80e880b7fcf25489f9b27", "text": "We describe the large vocabulary automatic speech recognition system developed for Modern Standard Arabic by the SRI/Nightingale team, and used for the 2007 GALE evaluation as part of the speech translation system. We show how system performance is affected by different development choices, ranging from text processing and lexicon to decoding system architecture design. Word error rate results are reported on broadcast news and conversational data from the GALE development and evaluation test sets.", "title": "" }, { "docid": "1bc33dcf86871e70bd3b7856fd3c3857", "text": "A framework for clustered-dot color halftone watermarking is proposed. Watermark patterns are embedded in the color halftone on per-separation basis. For typical CMYK printing systems, common desktop RGB color scanners are unable to provide the individual colorant halftone separations, which confounds per-separation detection methods. Not only does the K colorant consistently appear in the scanner channels as it absorbs uniformly across the spectrum, but cross-couplings between CMY separations are also observed in the scanner color channels due to unwanted absorptions. We demonstrate that by exploiting spatial frequency and color separability of clustered-dot color halftones, estimates of the individual colorant halftone separations can be obtained from scanned RGB images. These estimates, though not perfect, allow per-separation detection to operate efficiently. The efficacy of this methodology is demonstrated using continuous phase modulation for the embedding of per-separation watermarks.", "title": "" }, { "docid": "0c88535a3696fe9e2c82f8488b577284", "text": "Touch gestures can be a very important aspect when developing mobile applications with enhanced reality. The main purpose of this research was to determine which touch gestures were most frequently used by engineering students when using a simulation of a projectile motion in a mobile AR applica‐ tion. A randomized experimental design was given to students, and the results showed the most commonly used gestures to visualize are: zoom in “pinch open”, zoom out “pinch closed”, move “drag” and spin “rotate”.", "title": "" }, { "docid": "04e9383039f64bf5ef90e59ba451e45f", "text": "The current generation of manufacturing systems relies on monolithic control software which provides real-time guarantees but is hard to adapt and reuse. These qualities are becoming increasingly important for meeting the demands of a global economy. Ongoing research and industrial efforts therefore focus on service-oriented architectures (SOA) to increase the control software’s flexibility while reducing development time, effort and cost. With such encapsulated functionality, system behavior can be expressed in terms of operations on data and the flow of data between operators. In this thesis we consider industrial real-time systems from the perspective of distributed data processing systems. Data processing systems often must be highly flexible, which can be achieved by a declarative specification of system behavior. In such systems, a user expresses the properties of an acceptable solution while the system determines a suitable execution plan that meets these requirements. 
Applied to the real-time control domain, this means that the user defines an abstract workflow model with global timing constraints from which the system derives an execution plan that takes the underlying system environment into account. The generation of a suitable execution plan often is NP-hard and many data processing systems rely on heuristic solutions to quickly generate high quality plans. We utilize heuristics for finding real-time execution plans. Our evaluation shows that heuristics were successful in finding a feasible execution plan in 99% of the examined test cases. Lastly, data processing systems are engineered for an efficient exchange of data and therefore are usually built around a direct data flow between the operators without a mediating entity in between. Applied to SOA-based automation, the same principle is realized through service choreographies with direct communication between the individual services instead of employing a service orchestrator which manages the invocation of all services participating in a workflow. These three principles outline the main contributions of this thesis: A flexible reconfiguration of SOA-based manufacturing systems with verifiable real-time guarantees, fast heuristics based planning, and a peer-to-peer execution model for SOAs with clear semantics. We demonstrate these principles within a demonstrator that is close to a real-world industrial system.", "title": "" }, { "docid": "ad6dc9f74e0fa3c544c4123f50812e14", "text": "An ultra-wideband transition from microstrip to stripline in PCB technology is presented applying only through via holes for simple fabrication. The design is optimized using full-wave EM simulations. A prototype is manufactured and measured achieving a return loss better than 8.7dB and an insertion loss better than 1.2 dB in the FCC frequency range. A meander-shaped delay line in stripline technique is presented as an example of application.", "title": "" }, { "docid": "0382ad43b6d31a347d9826194a7261ce", "text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.", "title": "" }, { "docid": "ed282d88b5f329490f390372c502f238", "text": "Extracting opinion expressions from text is an essential task of sentiment analysis, which is usually treated as one of the word-level sequence labeling problems. In such problems, compositional models with multiplicative gating operations provide efficient ways to encode the contexts, as well as to choose critical information. Thus, in this paper, we adopt Long Short-Term Memory (LSTM) recurrent neural networks to address the task of opinion expression extraction and explore the internal mechanisms of the model. The proposed approach is evaluated on the Multi-Perspective Question Answering (MPQA) opinion corpus. The experimental results demonstrate improvement over previous approaches, including the state-of-the-art method based on simple recurrent neural networks. 
We also provide a novel micro perspective to analyze the run-time processes and gain new insights into the advantages of LSTM selecting the source of information with its flexible connections and multiplicative gating operations.", "title": "" }, { "docid": "e87617852de3ce25e1955caf1f4c7a21", "text": "Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. Image edge detection significantly reduces the amount of data and filters out useless information, while preserving the important structural properties in an image. Since edge detection is in the forefront of image processing for object detection, it is crucial to have a good understanding of edge detection algorithms. In this paper the comparative analysis of various image edge detection techniques is presented. The software is developed using MATLAB 7.0. It has been shown that Canny's edge detection algorithm performs better than all these operators under almost all scenarios. Evaluation of the images showed that under noisy conditions Canny, LoG (Laplacian of Gaussian), Roberts, Prewitt and Sobel exhibit better performance, respectively. It has been observed that Canny's edge detection algorithm is computationally more expensive compared to the LoG (Laplacian of Gaussian), Sobel, Prewitt and Roberts operators.
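As a companion to the edge-detection survey abstract above, the sketch below produces Sobel, LoG-style and Canny edge maps for one grayscale image so the operators can be compared visually. It assumes OpenCV and NumPy are installed and uses a placeholder file name; the survey itself was carried out in MATLAB 7.0, so this is only an approximate re-creation of the comparison, not the original code.

```python
# Compare three classic edge detectors on a grayscale image (input.png is a placeholder).
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise SystemExit("Could not read input.png")

# Sobel: gradient magnitude from horizontal and vertical first derivatives.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))

# LoG-style: Gaussian smoothing followed by the Laplacian.
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)
log_edges = cv2.convertScaleAbs(cv2.Laplacian(blurred, cv2.CV_64F))

# Canny: smoothing, gradients, non-maximum suppression and hysteresis thresholds.
canny = cv2.Canny(img, 100, 200)

for name, result in (("sobel", sobel), ("log", log_edges), ("canny", canny)):
    cv2.imwrite(f"edges_{name}.png", result)
```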
{ "docid": "b2e493de6e09766c4ddbac7de071e547", "text": "In this paper we describe and evaluate some recently innovated coupling metrics for object-oriented (OO) design. The Coupling Between Objects (CBO) metric of Chidamber and Kemerer (C&K) is evaluated empirically using five OO systems and compared with an alternative OO design metric called NAS, which measures the Number of Associations between a class and its peers. The NAS metric is directly collectible from design documents such as the Object Model of OMT. Results from all systems studied indicate a strong relationship between CBO and NAS, suggesting that they are not orthogonal. We hypothesised that coupling would be related to understandability, the number of errors and error density. No relationships were found for any of the systems between class understandability and coupling. However, we did find partial support for our hypothesis linking increased coupling to increased error density. The work described in this paper is part of the Metrics for OO Programming Systems (MOOPS) project, whose aims are to evaluate existing OO metrics, and to innovate and evaluate new OO analysis and design metrics aimed specifically at the early stages of development.", "title": "" }, { "docid": "49f21df66ac901e5f37cff022353ed20", "text": "This paper presents the implementation of an interval type-2 fuzzy system to control the production process of high-strength low-alloy (HSLA) steel in a secondary metallurgy process in a simple way. The proposal evaluates fuzzy techniques to ensure the accuracy of the model; the most important advantage is that the system does not need pretreatment of the historical data, which is used as it is. The system is a multiple-input single-output (MISO) system, and the main goal of this paper is the proposal of a system that optimizes the resources: computational, time, among others.", "title": "" }, { "docid": "c070020d88fb77f768efa5f5ac2eb343", "text": "This paper provides a critical overview of the theoretical, analytical, and practical questions most prevalent in the study of the structural and the sociolinguistic dimensions of code-switching (CS). In doing so, it reviews a range of empirical studies from around the world. The paper first looks at the linguistic research on the structural features of CS focusing in particular on the code-switching versus borrowing distinction, and the syntactic constraints governing its operation. It then critically reviews sociological, anthropological, and linguistic perspectives dominating the sociolinguistic research on CS over the past three decades. Major empirical studies on the discourse functions of CS are discussed, noting the similarities and differences between socially motivated CS and style-shifting. Finally, directions for future research on CS are discussed, giving particular emphasis to the methodological issue of its applicability to the analysis of bilingual classroom interaction.", "title": "" }, { "docid": "77796f30d8d1604c459fb3f3fe841515", "text": "The overall focus of this research is to demonstrate the savings potential generated by the integration of the design of strategic global supply chain networks with the determination of tactical production–distribution allocations and transfer prices.
The logistics systems design problem is defined as follows: given a set of potential suppliers, potential manufacturing facilities, and distribution centers with multiple possible configurations, and customers with deterministic demands, determine the configuration of the production–distribution system and the transfer prices between various subsidiaries of the corporation such that seasonal customer demands and service requirements are met and the after tax profit of the corporation is maximized. The after tax profit is the difference between the sales revenue minus the total system cost and taxes. The total cost is defined as the sum of supply, production, transportation, inventory, and facility costs. Two models and their associated solution algorithms will be introduced. The savings opportunities created by designing the system with a methodology that integrates strategic and tactical decisions rather than in a hierarchical fashion are demonstrated with two case studies. The first model focuses on the setting of transfer prices in a global supply chain with the objective of maximizing the after tax profit of an international corporation. The constraints mandated by the national taxing authorities create a bilinear programming formulation. We will describe a very efficient heuristic iterative solution algorithm, which alternates between the optimization of the transfer prices and the material flows. Performance and bounds for the heuristic algorithms will be discussed. The second model focuses on the production and distribution allocation in a single country system, when the customers have seasonal demands. This model also needs to be solved as a subproblem in the heuristic solution of the global transfer price model. The research develops an integrated design methodology based on primal decomposition methods for the mixed integer programming formulation. The primal decomposition allows a natural split of the production and transportation decisions and the research identifies the necessary information flows between the subsystems. The primal decomposition method also allows a very efficient solution algorithm for this general class of large mixed integer programming models. Data requirements and solution times will be discussed for a real life case study in the packaging industry. © 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "885a51f55d5dfaad7a0ee0c56a64ada3", "text": "This paper presents a new method, Minimax Tree Optimization (MMTO), to learn a heuristic evaluation function of a practical alpha-beta search program. The evaluation function may be a linear or non-linear combination of weighted features, and the weights are the parameters to be optimized. To control the search results so that the move decisions agree with the game records of human experts, a well-modeled objective function to be minimized is designed. Moreover, a numerical iterative method is used to find local minima of the objective function, and more than forty million parameters are adjusted by using a small number of hyper parameters.
This method was applied to shogi, a major variant of chess in which the evaluation function must handle a larger state space than in chess. Experimental results show that the large-scale optimization of the evaluation function improves the playing strength of shogi programs, and the new method performs significantly better than other methods. Implementation of the new method in our shogi program Bonanza made substantial contributions to the program’s first-place finish in the 2013 World Computer Shogi Championship. Additionally, we present preliminary evidence of broader applicability of our method to other two-player games such as chess.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "15886d83be78940609c697b30eb73b13", "text": "Why is corruption—the misuse of public office for private gain— perceived to be more widespread in some countries than others? Different theories associate this with particular historical and cultural traditions, levels of economic development, political institutions, and government policies. This article analyzes several indexes of “perceived corruption” compiled from business risk surveys for the 1980s and 1990s. Six arguments find support. Countries with Protestant traditions, histories of British rule, more developed economies, and (probably) higher imports were less \"corrupt\". Federal states were more \"corrupt\". While the current degree of democracy was not significant, long exposure to democracy predicted lower corruption.", "title": "" }, { "docid": "9b7ff8a7dec29de5334f3de8d1a70cc3", "text": "The paper introduces a complete offline programming toolbox for remote laser welding (RLW) which provides a semi-automated method for computing close-to-optimal robot programs. A workflow is proposed for the complete planning process, and new models and algorithms are presented for solving the optimization problems related to each step of the workflow: the sequencing of the welding tasks, path planning, workpiece placement, calculation of inverse kinematics and the robot trajectory, as well as for generating the robot program code. The paper summarizes the results of an industrial case study on the assembly of a car door using RLW technology, which illustrates the feasibility and the efficiency of the proposed approach.", "title": "" }, { "docid": "1d29f224933954823228c25e5e99980e", "text": "This study was carried out in a Turkish university with 216 undergraduate students of computer technology as respondents. The study aimed to develop a scale (UECUBS) to determine the unethical computer use behavior. A factor analysis of the related items revealed that the factors were can be divided under five headings; intellectual property, social impact, safety and quality, net integrity and information integrity. 2005 Elsevier Ltd. All rights reserved.", "title": "" } ]
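Note on the MMTO passage above: it describes tuning evaluation-function weights so that minimax move decisions agree with expert game records, by iteratively minimizing a well-modeled objective. The sketch below only illustrates the shape of that per-position update, assuming a linear evaluation, a sigmoid loss on the score gap between the expert move and each alternative, and a fixed step size; the feature dimension, loss and learning rate are invented for illustration and are not the paper's actual formulation, which optimizes millions of positions scored by shallow searches.

```python
import numpy as np

# Toy MMTO-style update: nudge linear evaluation weights so the expert's
# chosen move scores at least as high as every alternative move.
# Features, loss and step size here are illustrative assumptions.

rng = np.random.default_rng(0)
n_features = 8
w = np.zeros(n_features)                      # evaluation weights to learn

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def evaluate(features, w):
    """Linear evaluation of the position reached by a move."""
    return features @ w

def mmto_step(expert_feat, other_feats, w, lr=0.01):
    """One gradient step on sum_m sigmoid(score(other_m) - score(expert))."""
    grad = np.zeros_like(w)
    s_expert = evaluate(expert_feat, w)
    for feat in other_feats:
        gap = evaluate(feat, w) - s_expert    # > 0 means we disagree with the expert
        g = sigmoid(gap) * (1.0 - sigmoid(gap))   # derivative of sigmoid w.r.t. the gap
        grad += g * (feat - expert_feat)
    return w - lr * grad

# One synthetic training position: the expert move plus 20 alternatives.
expert_feat = rng.normal(size=n_features)
other_feats = rng.normal(size=(20, n_features))

for _ in range(200):
    w = mmto_step(expert_feat, other_feats, w)

scores = other_feats @ w
print("expert score:", expert_feat @ w, "best alternative:", scores.max())
```

Gradient descent on the summed sigmoid losses pushes the expert move's score above the alternatives; the actual method additionally relies on a small number of hyper-parameters to keep the forty-million-weight optimization stable.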
scidocsrr
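The global supply chain passage earlier in this record mentions a bilinear programming formulation for transfer prices and a heuristic that alternates between optimizing the transfer prices and the material flows. The toy below sketches that alternation for a single manufacturer-distributor pair serving two markets; all prices, bounds, capacities and tax rates are invented for illustration, and the models in the paper are much larger mixed-integer programs.

```python
import numpy as np

# Alternating heuristic for a toy bilinear transfer-pricing problem:
# fix the flows and choose the transfer price, then fix the price and
# re-optimize the flows, and repeat. All data are illustrative assumptions.

cap = 100.0                 # production capacity of the manufacturing subsidiary
c = 4.0                     # unit production cost
p = np.array([9.0, 7.5])    # market prices in two destination markets
demand = np.array([70.0, 60.0])
tau_m, tau_d = 0.30, 0.15   # tax rates of manufacturer / distributor country
t_lo, t_hi = 4.5, 8.0       # transfer-price bounds allowed by the tax authorities

def after_tax_profit(t, x):
    return ((t - c) * x.sum() * (1 - tau_m)       # manufacturer margin
            + ((p - t) * x).sum() * (1 - tau_d))  # distributor margin

t, x = t_lo, np.zeros(2)
for _ in range(10):
    # 1) flows fixed -> profit is linear in t, so a bound is optimal
    #    (ties with zero flow are broken toward t_hi, which is arbitrary)
    t_new = t_hi if (tau_d - tau_m) * x.sum() >= 0 else t_lo
    # 2) transfer price fixed -> fill capacity on markets with positive margin
    margin = (t_new - c) * (1 - tau_m) + (p - t_new) * (1 - tau_d)
    x_new, left = np.zeros(2), cap
    for i in np.argsort(-margin):
        if margin[i] > 0:
            x_new[i] = min(demand[i], left)
            left -= x_new[i]
    if t_new == t and np.allclose(x_new, x):
        break
    t, x = t_new, x_new

print("transfer price:", t, "flows:", x,
      "after-tax profit:", round(after_tax_profit(t, x), 2))
```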
64f762aaf0e35b18b6c5c9804f5fcf45
HAGP: A Hub-Centric Asynchronous Graph Processing Framework for Scale-Free Graph
[ { "docid": "216d4c4dc479588fb91a27e35b4cb403", "text": "At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), that leads to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices.\n We present techniques to distribute storage, computation, and communication of hubs for extreme scale graphs in distributed memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and PageRank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%.", "title": "" }, { "docid": "e9b89400c6bed90ac8c9465e047538e7", "text": "Myriad of graph-based algorithms in machine learning and data mining require parsing relational data iteratively. These algorithms are implemented in a large-scale distributed environment to scale to massive data sets. To accelerate these large-scale graph-based iterative computations, we propose delta-based accumulative iterative computation (DAIC). Different from traditional iterative computations, which iteratively update the result based on the result from the previous iteration, DAIC updates the result by accumulating the “changes” between iterations. By DAIC, we can process only the “changes” to avoid the negligible updates. Furthermore, we can perform DAIC asynchronously to bypass the high-cost synchronous barriers in heterogeneous distributed environments. Based on the DAIC model, we design and implement an asynchronous graph processing framework, Maiter. We evaluate Maiter on local cluster as well as on Amazon EC2 Cloud. The results show that Maiter achieves as much as 60 × speedup over Hadoop and outperforms other state-of-the-art frameworks.", "title": "" } ]
[ { "docid": "3f5f7b099dff64deca2a265c89ff481e", "text": "We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We evaluate several different regression methods: ridge regression, relevance vector machine (RVM) regression, and support vector machine (SVM) regression over both linear and kernel bases. The RVMs provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The loss of depth and limb labeling information often makes the recovery of 3D pose from single silhouettes ambiguous. To handle this, the method is embedded in a novel regressive tracking framework, using dynamics from the previous state estimate together with a learned regression value to disambiguate the pose. We show that the resulting system tracks long sequences stably. For realism and good generalization over a wide range of viewpoints, we train the regressors on images resynthesized from real human motion capture data. The method is demonstrated for several representations of full body pose, both quantitatively on independent but similar test data and qualitatively on real image sequences. Mean angular errors of 4-6/spl deg/ are obtained for a variety of walking motions.", "title": "" }, { "docid": "176dc8d5d0ed24cc9822924ae2b8ca9b", "text": "Detection of image forgery is an important part of digital forensics and has attracted a lot of attention in the past few years. Previous research has examined residual pattern noise, wavelet transform and statistics, image pixel value histogram and other features of images to authenticate the primordial nature. With the development of neural network technologies, some effort has recently applied convolutional neural networks to detecting image forgery to achieve high-level image representation. This paper proposes to build a convolutional neural network different from the related work in which we try to understand extracted features from each convolutional layer and detect different types of image tampering through automatic feature learning. The proposed network involves five convolutional layers, two full-connected layers and a Softmax classifier. Our experiment has utilized CASIA v1.0, a public image set that contains authentic images and splicing images, and its further reformed versions containing retouching images and re-compressing images as the training data. Experimental results can clearly demonstrate the effectiveness and adaptability of the proposed network.", "title": "" }, { "docid": "5c0a3aa0a50487611a64905655164b89", "text": "Cloud radio access network (C-RAN) refers to the visualization of base station functionalities by means of cloud computing. This results in a novel cellular architecture in which low-cost wireless access points, known as radio units or remote radio heads, are centrally managed by a reconfigurable centralized \"cloud\", or central, unit. C-RAN allows operators to reduce the capital and operating expenses needed to deploy and maintain dense heterogeneous networks. 
This critical advantage, along with spectral efficiency, statistical multiplexing and load balancing gains, make C-RAN well positioned to be one of the key technologies in the development of 5G systems. In this paper, a succinct overview is presented regarding the state of the art on the research on C-RAN with emphasis on fronthaul compression, baseband processing, medium access control, resource allocation, system-level considerations and standardization efforts.", "title": "" }, { "docid": "95bbe5d13f3ca5f97d01f2692a9dc77a", "text": "Moringa oleifera Lam. (family; Moringaceae), commonly known as drumstick, have been used for centuries as a part of the Ayurvedic system for several diseases without having any scientific data. Demineralized water was used to prepare aqueous extract by maceration for 24 h and complete metabolic profiling was performed using GC-MS and HPLC. Hypoglycemic properties of extract have been tested on carbohydrate digesting enzyme activity, yeast cell uptake, muscle glucose uptake, and intestinal glucose absorption. Type 2 diabetes was induced by feeding high-fat diet (HFD) for 8 weeks and a single injection of streptozotocin (STZ, 45 mg/kg body weight, intraperitoneally) was used for the induction of type 1 diabetes. Aqueous extract of M. oleifera leaf was given orally at a dose of 100 mg/kg to STZ-induced rats and 200 mg/kg in HFD mice for 3 weeks after diabetes induction. Aqueous extract remarkably inhibited the activity of α-amylase and α-glucosidase and it displayed improved antioxidant capacity, glucose tolerance and rate of glucose uptake in yeast cell. In STZ-induced diabetic rats, it produces a maximum fall up to 47.86% in acute effect whereas, in chronic effect, it was 44.5% as compared to control. The fasting blood glucose, lipid profile, liver marker enzyme level were significantly (p < 0.05) restored in both HFD and STZ experimental model. Multivariate principal component analysis on polar and lipophilic metabolites revealed clear distinctions in the metabolite pattern in extract and in blood after its oral administration. Thus, the aqueous extract can be used as phytopharmaceuticals for the management of diabetes by using as adjuvants or alone.", "title": "" }, { "docid": "af973255ab5f85a5dfb8dd73c19891a0", "text": "I use the example of the 2000 US Presidential election to show that political controversies with technical underpinnings are not resolved by technical means. Then, drawing from examples such as climate change, genetically modified foods, and nuclear waste disposal, I explore the idea that scientific inquiry is inherently and unavoidably subject to becoming politicized in environmental controversies. I discuss three reasons for this. First, science supplies contesting parties with their own bodies of relevant, legitimated facts about nature, chosen in part because they help make sense of, and are made sensible by, particular interests and normative frameworks. Second, competing disciplinary approaches to understanding the scientific bases of an environmental controversy may be causally tied to competing value-based political or ethical positions. The necessity of looking at nature through a variety of disciplinary lenses brings with it a variety of normative lenses, as well. 
Third, it follows from the foregoing that scientific uncertainty, which so often occupies a central place in environmental controversies, can be understood not as a lack of scientific understanding but as the lack of coherence among competing scientific understandings, amplified by the various political, cultural, and institutional contexts within which science is carried out. In light of these observations, I briefly explore the problem of why some types of political controversies become “scientized” and others do not, and conclude that the value bases of disputes underlying environmental controversies must be fully articulated and adjudicated through political means before science can play an effective role in resolving environmental problems. © 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6b6b4de917de527351939c3493581275", "text": "Several studies have used the Edinburgh Postnatal Depression Scale (EPDS), developed to screen new mothers, also for new fathers. This study aimed to further contribute to this knowledge by comparing assessment of possible depression in fathers and associated demographic factors by the EPDS and the Gotland Male Depression Scale (GMDS), developed for \"male\" depression screening. The study compared EPDS score ≥10 and ≥12, corresponding to minor and major depression, respectively, in relation to GMDS score ≥13. At 3-6 months after child birth, a questionnaire was sent to 8,011 fathers of whom 3,656 (46%) responded. The detection of possibly depressed fathers by EPDS was 8.1% at score ≥12, comparable to the 8.6% detected by the GMDS. At score ≥10, the proportion detected by EPDS increased to 13.3%. Associations with possible risk factors were analyzed for fathers detected by one or both scales. A low income was associated with depression in all groups. Fathers detected by EPDS alone were at higher risk if they had three or more children, or lower education. Fathers detected by EPDS alone at score ≥10, or by both scales at EPDS score ≥12, more often were born in a foreign country. Seemingly, the EPDS and the GMDS are associated with different demographic risk factors. The EPDS score appears critical since 5% of possibly depressed fathers are excluded at EPDS cutoff 12. These results suggest that neither scale alone is sufficient for depression screening in new fathers, and that the decision of EPDS cutoff is crucial.", "title": "" }, { "docid": "4d2c5785e60fa80febb176165622fca7", "text": "In this paper, we propose a new algorithm to compute intrinsic means of organ shapes from 3D medical images. More specifically, we explore the feasibility of Karcher means in the framework of the large deformations by diffeomorphisms (LDDMM). This setting preserves the topology of the averaged shapes and has interesting properties to quantitatively describe their anatomical variability. Estimating Karcher means requires to perform multiple registrations between the averaged template image and the set of reference 3D images. Here, we use a recent algorithm based on an optimal control method to satisfy the geodesicity of the deformations at any step of each registration. We also combine this algorithm with organ specific metrics. We demonstrate the efficiency of our methodology with experimental results on different groups of anatomical 3D images. We also extensively discuss the convergence of our method and the bias due to the initial guess. 
A direct perspective of this work is the computation of 3D+time atlases.", "title": "" }, { "docid": "5946378b291a1a0e1fb6df5cd57d716f", "text": "Robots are being deployed in an increasing variety of environments for longer periods of time. As the number of robots grows, they will increasingly need to interact with other robots. Additionally, the number of companies and research laboratories producing these robots is increasing, leading to the situation where these robots may not share a common communication or coordination protocol. While standards for coordination and communication may be created, we expect that robots will need to additionally reason intelligently about their teammates with limited information. This problem motivates the area of ad hoc teamwork in which an agent may potentially cooperate with a variety of teammates in order to achieve a shared goal. This article focuses on a limited version of the ad hoc teamwork problem in which an agent knows the environmental dynamics and has had past experiences with other teammates, though these experiences may not be representative of the current teammates. To tackle this problem, this article introduces a new general-purpose algorithm, PLASTIC, that reuses knowledge learned from previous teammates or provided by experts to quickly adapt to new teammates. This algorithm is instantiated in two forms: 1) PLASTIC–Model – which builds models of previous teammates’ behaviors and plans behaviors online using these models and 2) PLASTIC–Policy – which learns policies for cooperating with previous teammates and selects among these policies online. We evaluate PLASTIC on two benchmark tasks: the pursuit domain and robot soccer in the RoboCup 2D simulation domain. Recognizing that a key requirement of ad hoc teamwork is adaptability to previously unseen agents, the tests use more than 40 previously unknown teams on the first task and 7 previously unknown teams on the second. While PLASTIC assumes that there is some degree of similarity between the current and past teammates’ behaviors, no steps are taken in the experimental setup to make sure this assumption holds.", "title": "" }, { "docid": "a27b626618e225b03bec1eea8327be4d", "text": "As a fundamental preprocessing of various multimedia applications, object proposal aims to detect the candidate windows possibly containing arbitrary objects in images with two typical strategies, window scoring and grouping. In this paper, we first analyze the feasibility of improving object proposal performance by integrating window scoring and grouping strategies. Then, we propose a novel object proposal method for RGB-D images, named elastic edge boxes. The initial bounding boxes of candidate object regions are efficiently generated by edge boxes, and further adjusted by grouping the super-pixels within elastic range to obtain more accurate candidate windows. To validate the proposed method, we construct the largest RGB-D image data set NJU1800 for object proposal with balanced object number distribution.
The experimental results show that our method can effectively and efficiently generate the candidate windows of object regions and it outperforms the state-of-the-art methods considering both accuracy and efficiency.", "title": "" }, { "docid": "8654b1d03f46c1bb94b237977c92ff02", "text": "Many studies suggest using coverage concepts, such as branch coverage, as the starting point of testing, while others as the most prominent test quality indicator. Yet the relationship between coverage and fault-revelation remains unknown, yielding uncertainty and controversy. Most previous studies rely on the Clean Program Assumption, that a test suite will obtain similar coverage for both faulty and fixed ('clean') program versions. This assumption may appear intuitive, especially for bugs that denote small semantic deviations. However, we present evidence that the Clean Program Assumption does not always hold, thereby raising a critical threat to the validity of previous results. We then conducted a study using a robust experimental methodology that avoids this threat to validity, from which our primary finding is that strong mutation testing has the highest fault revelation of four widely-used criteria. Our findings also revealed that fault revelation starts to increase significantly only once relatively high levels of coverage are attained.", "title": "" }, { "docid": "897fb39d295defc4b6e495236a2c74b1", "text": "Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, to model global and local image statistics respectively, suggest hybrid models combining the strengths of both models. Our contribution is to train such hybrid models using an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder. In contrast, prior work on such hybrid models needed to limit the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and only rely on autoregressive modeling. Our approach results in models with meaningful latent variable representations, and which rely on powerful autoregressive decoders to model image details. Our model generates qualitatively convincing samples, and yields stateof-the-art quantitative results.", "title": "" }, { "docid": "1e9e3fce7ae4e980658997c2984f05cb", "text": "BACKGROUND\nMotivation in learning behaviour and education is well-researched in general education, but less in medical education.\n\n\nAIM\nTo answer two research questions, 'How has the literature studied motivation as either an independent or dependent variable? How is motivation useful in predicting and understanding processes and outcomes in medical education?' in the light of the Self-determination Theory (SDT) of motivation.\n\n\nMETHODS\nA literature search performed using the PubMed, PsycINFO and ERIC databases resulted in 460 articles. The inclusion criteria were empirical research, specific measurement of motivation and qualitative research studies which had well-designed methodology. Only studies related to medical students/school were included.\n\n\nRESULTS\nFindings of 56 articles were included in the review. Motivation as an independent variable appears to affect learning and study behaviour, academic performance, choice of medicine and specialty within medicine and intention to continue medical study. 
Motivation as a dependent variable appears to be affected by age, gender, ethnicity, socioeconomic status, personality, year of medical curriculum and teacher and peer support, all of which cannot be manipulated by medical educators. Motivation is also affected by factors that can be influenced, among which are, autonomy, competence and relatedness, which have been described as the basic psychological needs important for intrinsic motivation according to SDT.\n\n\nCONCLUSION\nMotivation is an independent variable in medical education influencing important outcomes and is also a dependent variable influenced by autonomy, competence and relatedness. This review finds some evidence in support of the validity of SDT in medical education.", "title": "" }, { "docid": "7b341e406c28255d3cb4df5c4665062d", "text": "We propose MRU (Multi-Range Reasoning Units), a new fast compositional encoder for machine comprehension (MC). Our proposed MRU encoders are characterized by multi-ranged gating, executing a series of parameterized contractand-expand layers for learning gating vectors that benefit from long and short-term dependencies. The aims of our approach are as follows: (1) learning representations that are concurrently aware of long and short-term context, (2) modeling relationships between intra-document blocks and (3) fast and efficient sequence encoding. We show that our proposed encoder demonstrates promising results both as a standalone encoder and as well as a complementary building block. We conduct extensive experiments on three challenging MC datasets, namely RACE, SearchQA and NarrativeQA, achieving highly competitive performance on all. On the RACE benchmark, our model outperforms DFN (Dynamic Fusion Networks) by 1.5% − 6% without using any recurrent or convolution layers. Similarly, we achieve competitive performance relative to AMANDA [17] on the SearchQA benchmark and BiDAF [23] on the NarrativeQA benchmark without using any LSTM/GRU layers. Finally, incorporating MRU encoders with standard BiLSTM architectures further improves performance, achieving state-of-the-art results.", "title": "" }, { "docid": "d78609519636e288dae4b1fce36cb7a6", "text": "Intelligent vehicles have increased their capabilities for highly and, even fully, automated driving under controlled environments. Scene information is received using onboard sensors and communication network systems, i.e., infrastructure and other vehicles. Considering the available information, different motion planning and control techniques have been implemented to autonomously driving on complex environments. The main goal is focused on executing strategies to improve safety, comfort, and energy optimization. However, research challenges such as navigation in urban dynamic environments with obstacle avoidance capabilities, i.e., vulnerable road users (VRU) and vehicles, and cooperative maneuvers among automated and semi-automated vehicles still need further efforts for a real environment implementation. This paper presents a review of motion planning techniques implemented in the intelligent vehicles literature. A description of the technique used by research teams, their contributions in motion planning, and a comparison among these techniques is also presented. Relevant works in the overtaking and obstacle avoidance maneuvers are presented, allowing the understanding of the gaps and challenges to be addressed in the next years. 
Finally, an overview of future research direction and applications is given.", "title": "" }, { "docid": "c404e6ecb21196fec9dfeadfcb5d4e4b", "text": "The goal of leading indicators for safety is to identify the potential for an accident before it occurs. Past efforts have focused on identifying general leading indicators, such as maintenance backlog, that apply widely in an industry or even across industries. Other recommendations produce more system-specific leading indicators, but start from system hazard analysis and thus are limited by the causes considered by the traditional hazard analysis techniques. Most rely on quantitative metrics, often based on probabilistic risk assessments. This paper describes a new and different approach to identifying system-specific leading indicators and provides guidance in designing a risk management structure to generate, monitor and use the results. The approach is based on the STAMP (SystemTheoretic Accident Model and Processes) model of accident causation and tools that have been designed to build on that model. STAMP extends current accident causality to include more complex causes than simply component failures and chains of failure events or deviations from operational expectations. It incorporates basic principles of systems thinking and is based on systems theory rather than traditional reliability theory.", "title": "" }, { "docid": "d2c0e71db2957621eca42bdc221ffb8f", "text": "Financial time sequence analysis has been a popular research topic in the field of finance, data science and machine learning. It is a highly challenging due to the extreme complexity within the sequences. Mostly existing models are failed to capture its intrinsic information, factor and tendency. To improve the previous approaches, in this paper, we propose a Hidden Markov Model (HMMs) based approach to analyze the financial time sequence. The fluctuation of financial time sequence was predicted through introducing a dual-state HMMs. Dual-state HMMs models the sequence and produces the features which will be delivered to SVMs for prediction. Note that we cast a financial time sequence prediction problem to a classification problem. To evaluate the proposed approach, we use Shanghai Composite Index as the dataset for empirically experiments. The dataset was collected from 550 consecutive trading days, and is randomly split to the training set and test set. The extensively experimental results show that: when analyzing financial time sequence, the mean-square error calculated with HMMs was obviously smaller error than the compared GARCH approach. Therefore, when using HMM to predict the fluctuation of financial time sequence, it achieves higher accuracy and exhibits several attractive advantageous over GARCH approach.", "title": "" }, { "docid": "ff83e090897ed7b79537392801078ffb", "text": "Component-based software engineering has had great impact in the desktop and server domain and is spreading to other domains as well, such as embedded systems. Agile software development is another approach which has gained much attention in recent years, mainly for smaller-scale production of less critical systems. Both of them promise to increase system quality, development speed and flexibility, but so far little has been published on the combination of the two approaches. This paper presents a comprehensive analysis of the applicability of the agile approach in the development processes of 1) COTS components and 2) COTS-based systems. 
The study method is a systematic theoretical examination and comparison of the fundamental concepts and characteristics of these approaches. The contributions are: first, an enumeration of identified contradictions between the approaches, and suggestions how to bridge these incompatibilities to some extent. Second, the paper provides some more general comments, considerations, and application guidelines concerning the introduction of agile principles into the development of COTS components or COTS-based systems. This study thus forms a framework which will guide further empirical studies.", "title": "" }, { "docid": "6562b9b46d17bf983bcef7f486ecbc36", "text": "Upper-extremity venous thrombosis often presents as unilateral arm swelling. The differential diagnosis includes lesions compressing the veins and causing a functional venous obstruction, venous stenosis, an infection causing edema, obstruction of previously functioning lymphatics, or the absence of sufficient lymphatic channels to ensure effective drainage. The following recommendations are made with the understanding that venous disease, specifically venous thrombosis, is the primary diagnosis to be excluded or confirmed in a patient presenting with unilateral upper-extremity swelling. Contrast venography remains the best reference-standard diagnostic test for suspected upper-extremity acute venous thrombosis and may be needed whenever other noninvasive strategies fail to adequately image the upper-extremity veins. Duplex, color flow, and compression ultrasound have also established a clear role in evaluation of the more peripheral veins that are accessible to sonography. Gadolinium contrast-enhanced MRI is routinely used to evaluate the status of the central veins. Delayed CT venography can often be used to confirm or exclude more central vein venous thrombi, although substantial contrast loads are required. The ACR Appropriateness Criteria(®) are evidence-based guidelines for specific clinical conditions that are reviewed every 2 years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer-reviewed journals and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances in which evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment.", "title": "" }, { "docid": "eb6643fba28b6b84b4d51a565fc97be0", "text": "The spiral antenna is a well known kind of wideband antenna. The challenges to improve its design are numerous, such as creating a compact wideband matched feeding or controlling the radiation pattern. Here we propose a self matched and compact slot spiral antenna providing a unidirectional pattern.", "title": "" } ]
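One of the passages in the list above outlines a pipeline in which a two-state HMM summarises a financial time sequence and the resulting features are passed to an SVM that classifies the direction of the next move. The sketch below shows one way to wire that up; the synthetic price series, the choice of posterior-probability features, the label definition and the chronological split are all assumptions (the paper uses the Shanghai Composite Index and a random split), and the hmmlearn package is assumed to be available.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed dependency: pip install hmmlearn
from sklearn.svm import SVC

# Dual-state-HMM -> SVM sketch: a two-state Gaussian HMM summarises daily
# returns, and its state posteriors become features for an SVM that
# classifies whether the next day's return is positive. Data is synthetic.

rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=550)))  # ~550 trading days
returns = np.diff(np.log(prices)).reshape(-1, 1)

hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100, random_state=1)
hmm.fit(returns)                                   # learn the two latent regimes
posteriors = hmm.predict_proba(returns)            # P(state | observed return)

# Feature for day t: regime posteriors plus the raw return; label: sign of day t+1.
X = np.hstack([posteriors[:-1], returns[:-1]])
y = (returns[1:, 0] > 0).astype(int)

split = int(0.8 * len(X))                          # chronological split here
clf = SVC(kernel="rbf", C=1.0).fit(X[:split], y[:split])
print("held-out accuracy:", round(clf.score(X[split:], y[split:]), 3))
```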
scidocsrr
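The image-forgery passage in the record that ends here describes a network with five convolutional layers, two fully connected layers and a softmax classifier trained to separate authentic from tampered images. The PyTorch sketch below follows only that layer count; the channel widths, the 128x128 input size, the pooling schedule and the loss are assumptions rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn

# Sketch of a five-conv / two-FC tampering classifier as described above.
# Channel widths, patch size and pooling are illustrative assumptions.

class ForgeryNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        chans = [3, 32, 32, 64, 64, 128]            # five conv stages
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Sequential(             # two fully connected layers
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, n_classes),                # softmax applied inside the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ForgeryNet()
patches = torch.randn(8, 3, 128, 128)                # a batch of RGB patches
logits = model(patches)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
print(logits.shape, float(loss))
```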
adfbcfeacce9b78d0ea346b8d9b3fb52
Map-supervised road detection
[ { "docid": "add9821c4680fab8ad8dfacd8ca4236e", "text": "In this paper, we propose to fuse the LIDAR and monocular image in the framework of conditional random field to detect the road robustly in challenging scenarios. LIDAR points are aligned with pixels in image by cross calibration. Then boosted decision tree based classifiers are trained for image and point cloud respectively. The scores of the two kinds of classifiers are treated as the unary potentials of the corresponding pixel nodes of the random field. The fused conditional random field can be solved efficiently with graph cut. Extensive experiments tested on KITTI-Road benchmark show that our method reaches the state-of-the-art.", "title": "" }, { "docid": "fa88e0d0610f60522fc1140b39fc2972", "text": "The majority of current image-based road following algorithms operate, at least in part, by assuming the presence of structural or visual cues unique to the roadway. As a result, these algorithms are poorly suited to the task of tracking unstructured roads typical in desert environments. In this paper, we propose a road following algorithm that operates in a selfsupervised learning regime, allowing it to adapt to changing road conditions while making no assumptions about the general structure or appearance of the road surface. An application of optical flow techniques, paired with one-dimensional template matching, allows identification of regions in the current camera image that closely resemble the learned appearance of the road in the recent past. The algorithm assumes the vehicle lies on the road in order to form templates of the road’s appearance. A dynamic programming variant is then applied to optimize the 1-D template match results while enforcing a constraint on the maximum road curvature expected. Algorithm output images, as well as quantitative results, are presented for three distinct road types encountered in actual driving video acquired in the California Mojave Desert.", "title": "" } ]
[ { "docid": "745bbe075634f40e6c66716a6b877619", "text": "Collaborative filtering, a widely-used user-centric recommendation technique, predicts an item’s rating by aggregating its ratings from similar users. User similarity is usually calculated by cosine similarity or Pearson correlation coefficient. However, both of them consider only the direction of rating vectors, and suffer from a range of drawbacks. To solve these issues, we propose a novel Bayesian similarity measure based on the Dirichlet distribution, taking into consideration both the direction and length of rating vectors. Further, our principled method reduces correlation due to chance. Experimental results on six real-world data sets show that our method achieves superior accuracy.", "title": "" }, { "docid": "4ba95fbd89f88bdd6277eff955681d65", "text": "In this paper, new dense dielectric (DD) patch array antenna prototype operating at 28 GHz for the future fifth generation (5G) short-range wireless communications applications is presented. This array antenna is proposed and designed with a standard printed circuit board (PCB) process to be suitable for integration with radio-frequency/microwave circuitry. The proposed structure employs four circular shaped DD patch radiator antenna elements fed by a l-to-4 Wilkinson power divider surrounded by an electromagnetic bandgap (EBG) structure. The DD patch shows better radiation and total efficiencies compared with the metallic patch radiator. For further gain improvement, a dielectric layer of a superstrate is applied above the array antenna. The calculated impedance bandwidth of proposed array antenna ranges from 27.1 GHz to 29.5 GHz for reflection coefficient (Sn) less than -1OdB. The proposed design exhibits good stable radiation patterns over the whole frequency band of interest with a total realized gain more than 16 dBi. Due to the remarkable performance of the proposed array, it can be considered as a good candidate for 5G communication applications.", "title": "" }, { "docid": "2e93d2ba94e0c468634bf99be76706bb", "text": "Entheses are sites where tendons, ligaments, joint capsules or fascia attach to bone. Inflammation of the entheses (enthesitis) is a well-known hallmark of spondyloarthritis (SpA). As entheses are associated with adjacent, functionally related structures, the concepts of an enthesis organ and functional entheses have been proposed. This is important in interpreting imaging findings in entheseal-related diseases. Conventional radiographs and CT are able to depict the chronic changes associated with enthesitis but are of very limited use in early disease. In contrast, MRI is sensitive for detecting early signs of enthesitis and can evaluate both soft-tissue changes and intraosseous abnormalities of active enthesitis. It is therefore useful for the early diagnosis of enthesitis-related arthropathies and monitoring therapy. Current knowledge and typical MRI features of the most commonly involved entheses of the appendicular skeleton in patients with SpA are reviewed. The MRI appearances of inflammatory and degenerative enthesopathy are described. New options for imaging enthesitis, including whole-body MRI and high-resolution microscopy MRI, are briefly discussed.", "title": "" }, { "docid": "853375477bf531499067eedfe64e6e2d", "text": "Each July since 2003, the author has directed summer camps that introduce middle school boys and girls to the basic ideas of computer programming. Prior to 2009, the author used Alice 2.0 to introduce object-based computing. 
In 2009, the author decided to offer these camps using Scratch, primarily to engage repeat campers but also for variety. This paper provides a detailed overview of this outreach, and documents its success at providing middle school girls with a positive, engaging computing experience. It also discusses the merits of Alice and Scratch for such outreach efforts; and the use of these visually oriented programs by students with disabilities, including blind students.", "title": "" }, { "docid": "8bc221213edc863f8cba6f9f5d9a9be0", "text": "Introduction The literature on business process re-engineering, benchmarking, continuous improvement and many other approaches of modern management is very abundant. One thing which is noticeable, however, is the growing usage of the word “process” in everyday business language. This suggests that most organizations adopt a process-based approach to managing their operations and that business process management (BPM) is a well-established concept. Is this really what takes place? On examination of the literature which refers to BPM, it soon emerged that the use of this concept is not really pervasive and what in fact has been acknowledged hitherto as prevalent business practice is no more than structural changes, the use of systems such as EN ISO 9000 and the management of individual projects.", "title": "" }, { "docid": "3a5d43d86d39966aca2d93d1cf66b13d", "text": "In the current context of increased surveillance and security, more sophisticated and robust surveillance systems are needed. One idea relies on the use of pairs of video (visible spectrum) and thermal infrared (IR) cameras located around premises of interest. To automate the system, a robust person detection algorithm and the development of an efficient technique enabling the fusion of the information provided by the two sensors becomes necessary and these are described in this chapter. Recently, multi-sensor based image fusion system is a challenging task and fundamental to several modern day image processing applications, such as security systems, defence applications, and intelligent machines. Image fusion techniques have been actively investigated and have wide application in various fields. It is often a vital pre-processing procedure to many computer vision and image processing tasks which are dependent on the acquisition of imaging data via sensors, such as IR and visible. One such task is that of human detection. To detect humans with an artificial system is difficult for a number of reasons as shown in Figure 1 (Gavrila, 2001). The main challenge for a vision-based pedestrian detector is the high degree of variability with the human appearance due to articulated motion, body size, partial occlusion, inconsistent cloth texture, highly cluttered backgrounds and changing lighting conditions.", "title": "" }, { "docid": "33b37422ace8a300d53d4896de6bbb6f", "text": "Digital investigations of the real world through point clouds and derivatives are changing how curators, cultural heritage researchers and archaeologists work and collaborate. To progressively aggregate expertise and enhance the working proficiency of all professionals, virtual reconstructions demand adapted tools to facilitate knowledge dissemination. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. 
In this paper, we review the state of the art of point cloud integration within archaeological applications, giving an overview of 3D technologies for heritage, digital exploitation and case studies showing the assimilation status within 3D GIS. Identified issues and new perspectives are addressed through a knowledge-based point cloud processing framework for multi-sensory data, and illustrated on mosaics and quasi-planar objects. A new acquisition, pre-processing, segmentation and ontology-based classification method on hybrid point clouds from both terrestrial laser scanning and dense image matching is proposed to enable reasoning for information extraction. Experiments in detection and semantic enrichment show promising results of 94% correct semantization. Then, we integrate the metadata in an archaeological smart point cloud data structure allowing spatio-semantic queries related to CIDOC-CRM. Finally, a WebGL prototype is presented that leads to efficient communication between actors by proposing optimal 3D data visualizations as a basis on which interaction can grow.", "title": "" }, { "docid": "cb0021ec58487e3dabc445f75918c974", "text": "This document includes supplementary material for the semi-supervised approach towards framesemantic parsing for unknown predicates (Das and Smith, 2011). We include the names of the test documents used in the study, plot the results for framesemantic parsing while varying the hyperparameter that is used to determine the number of top frames to be selected from the posterior distribution over each target of a constructed graph and argue why the semi-supervised self-training baseline did not perform well on the task.", "title": "" }, { "docid": "90bf5834a6e78ed946a6c898f1c1905e", "text": "Many grid connected power electronic systems, such as STATCOMs, UPFCs, and distributed generation system interfaces, use a voltage source inverter (VSI) connected to the supply network through a filter. This filter, typically a series inductance, acts to reduce the switching harmonics entering the distribution network. An alternative filter is a LCL network, which can achieve reduced levels of harmonic distortion at lower switching frequencies and with less inductance, and therefore has potential benefits for higher power applications. However, systems incorporating LCL filters require more complex control strategies and are not commonly presented in literature. This paper proposes a robust strategy for regulating the grid current entering a distribution network from a three-phase VSI system connected via a LCL filter. The strategy integrates an outer loop grid current regulator with inner capacitor current regulation to stabilize the system. A synchronous frame PI current regulation strategy is used for the outer grid current control loop. Linear analysis, simulation, and experimental results are used to verify the stability of the control algorithm across a range of operating conditions. Finally, expressions for “harmonic impedance” of the system are derived to study the effects of supply voltage distortion on the harmonic performance of the system.", "title": "" }, { "docid": "c1c9f0a61b8ec92d4904fa0fd84a4073", "text": "This work presents a Brain-Computer Interface (BCI) based on the Steady-State Visual Evoked Potential (SSVEP) that can discriminate four classes once per second. A statistical test is used to extract the evoked response and a decision tree is used to discriminate the stimulus frequency. 
Using a BCI designed according to this approach, volunteers were able to operate it online with hit rates varying from 60% to 100%. Moreover, one of the volunteers could guide a robotic wheelchair through an indoor environment using this BCI. As an additional feature, the BCI incorporates visual feedback, which is essential for improving the performance of the whole system. All of these aspects make it possible to use this BCI to command a robotic wheelchair efficiently.", "title": "" }, { "docid": "909d9d1b9054586afc4b303e94acae73", "text": "Humans learn to solve tasks of increasing complexity by building on top of previously acquired knowledge. Typically, there exists a natural progression in the tasks that we learn – most do not require completely independent solutions, but can be broken down into simpler subtasks. We propose to represent a solver for each task as a neural module that calls existing modules (solvers for simpler tasks) in a program-like manner. Lower modules are a black box to the calling module, and communicate only via a query and an output. Thus, a module for a new task learns to query existing modules and composes their outputs in order to produce its own output. Each module also contains a residual component that learns to solve aspects of the new task that lower modules cannot solve. Our model effectively combines previous skill-sets, does not suffer from forgetting, and is fully differentiable. We test our model in learning a set of visual reasoning tasks, and demonstrate state-of-the-art performance in Visual Question Answering, the highest-level task in our task set. By evaluating the reasoning process using non-expert human judges, we show that our model is more interpretable than an attention-based baseline.", "title": "" }, { "docid": "fd29a4adc5eba8025da48eb174bc0817", "text": "Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. 
These results offer an evidence-based roadmap for achieving the most accurate face identification possible.", "title": "" }, { "docid": "0a5ae1eb45404d6a42678e955c23116c", "text": "This study assessed the validity of the Balance Scale by examining: how Scale scores related to clinical judgements and self-perceptions of balance, laboratory measures of postural sway and external criteria reflecting balancing ability; if scores could predict falls in the elderly; and how they related to motor and functional performance in stroke patients. Elderly residents (N = 113) were assessed for functional performance and balance regularly over a nine-month period. Occurrence of falls was monitored for a year. Acute stroke patients (N = 70) were periodically rated for functional independence, motor performance and balance for over three months. Thirty-one elderly subjects were assessed by clinical and laboratory indicators reflecting balancing ability. The Scale correlated moderately with caregiver ratings, self-ratings and laboratory measures of sway. Differences in mean Scale scores were consistent with the use of mobility aids by elderly residents and differentiated stroke patients by location of follow-up. Balance scores predicted the occurrence of multiple falls among elderly residents and were strongly correlated with functional and motor performance in stroke patients.", "title": "" }, { "docid": "c1cdc9bb29660e910ccead445bcc896d", "text": "This paper describes an efficient technique for computing a hierarchical representation of the objects contained in a complex 3D scene. First, an adjacency graph keeping the costs of grouping the different pairs of objects in the scene is built. Then the minimum spanning tree (MST) of that graph is determined. A binary clustering tree (BCT) is obtained from the MST. Finally, a merging stage joins the adjacent nodes in the BCT which have similar costs. The final result is an n-ary tree which defines an intuitive clustering of the objects of the scene at different levels of abstraction. Experimental results with synthetic 3D scenes are presented.", "title": "" }, { "docid": "6a6bd93714e6e77a7b9834e8efee943a", "text": "Many information systems involve data about people. In order to reliably associate data with particular individuals, it is necessary that an effective and efficient identification scheme be established and maintained. There is remarkably little in the information technology literature concerning human identification. This paper seeks to overcome that deficiency, by undertaking a survey of human identity and human identification. The techniques discussed include names, codes, knowledge-based and token-based id, and biometrics. The key challenge to management is identified as being to devise a scheme which is practicable and economic, and of sufficiently high integrity to address the risks the organisation confronts in its dealings with people. It is proposed that much greater use be made of schemes which are designed to afford people anonymity, or enable them to use multiple identities or pseudonyms, while at the same time protecting the organisation's own interests. Multi-purpose and inhabitant registration schemes are described, and the recurrence of proposals to implement and extend them is noted. Public policy issues are identified. Of especial concern is the threat to personal privacy that the general-purpose use of an inhabitant registrant scheme represents. 
It is speculated that, where such schemes are pursued energetically, the reaction may be strong enough to threaten the social fabric.", "title": "" }, { "docid": "cbc6bd586889561cc38696f758ad97d2", "text": "Introducing a new hobby for other people may inspire them to join with you. Reading, as one of mutual hobby, is considered as the very easy hobby to do. But, many people are not interested in this hobby. Why? Boring is the reason of why. However, this feel actually can deal with the book and time of you reading. Yeah, one that we will refer to break the boredom in reading is choosing design of experiments statistical principles of research design and analysis as the reading material.", "title": "" }, { "docid": "0f5511aaed3d6627671a5e9f68df422a", "text": "As people document more of their lives online, some recent systems are encouraging people to later revisit those recordings, a practice we're calling technology-mediated reflection (TMR). Since we know that unmediated reflection benefits psychological well-being, we explored whether and how TMR affects well-being. We built Echo, a smartphone application for recording everyday experiences and reflecting on them later. We conducted three system deployments with 44 users who generated over 12,000 recordings and reflections. We found that TMR improves well-being as assessed by four psychological metrics. By analyzing the content of these entries we discovered two mechanisms that explain this improvement. We also report benefits of very long-term TMR.", "title": "" }, { "docid": "5dcbebce421097f887f43669e1294b6f", "text": "The paper syncretizes the fundamental concept of the Sea Computing model in Internet of Things and the routing protocol of the wireless sensor network, and proposes a new routing protocol CASCR (Context-Awareness in Sea Computing Routing Protocol) for Internet of Things, based on context-awareness which belongs to the key technologies of Internet of Things. Furthermore, the paper describes the details on the protocol in the work flow, data structure and quantitative algorithm and so on. Finally, the simulation is given to analyze the work performance of the protocol CASCR. Theoretical analysis and experiment verify that CASCR has higher energy efficient and longer lifetime than the congeneric protocols. The paper enriches the theoretical foundation and makes some contribution for wireless sensor network transiting to Internet of Things in this research phase.", "title": "" }, { "docid": "c581d1300bf07663fcfd8c704450db09", "text": "This research aimed at the case of customers’ default payments in Taiwan and compares the predictive accuracy of probability of default among six data mining methods. From the perspective of risk management, the result of predictive accuracy of the estimated probability of default will be more valuable than the binary result of classification credible or not credible clients. Because the real probability of default is unknown, this study presented the novel ‘‘Sorting Smoothing Method” to estimate the real probability of default. With the real probability of default as the response variable (Y), and the predictive probability of default as the independent variable (X), the simple linear regression result (Y = A + BX) shows that the forecasting model produced by artificial neural network has the highest coefficient of determination; its regression intercept (A) is close to zero, and regression coefficient (B) to one. 
Therefore, among the six data mining techniques, artificial neural network is the only one that can accurately estimate the real probability of default.", "title": "" }, { "docid": "5fc8afbe7d55af3274d849d1576d3b13", "text": "It is a difficult task to classify images with multiple class labels using only a small number of labeled examples, especially when the label (class) distribution is imbalanced. Emotion classification is such an example of imbalanced label distribution, because some classes of emotions like disgusted are relatively rare comparing to other labels like happy or sad. In this paper, we propose a data augmentation method using generative adversarial networks (GAN). It can complement and complete the data manifold and find better margins between neighboring classes. Specifically, we design a framework using a CNN model as the classifier and a cycle-consistent adversarial networks (CycleGAN) as the generator. In order to avoid gradient vanishing problem, we employ the least-squared loss as adversarial loss. We also propose several evaluation methods on three benchmark datasets to validate GAN’s performance. Empirical results show that we can obtain 5%∼10% increase in the classification accuracy after employing the GAN-based data augmentation techniques.", "title": "" } ]
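The credit-default passage that closes the list above describes its "Sorting Smoothing Method" only in outline: sort cases by predicted default probability, smooth the observed 0/1 outcomes over neighbouring cases to estimate a "real" probability, then regress the smoothed values on the predictions (Y = A + BX). The Python sketch below is one plausible reading of that outline on synthetic data; the window half-width n, the synthetic classifier, and all function names are assumptions made here for illustration, not details taken from the cited study.

import numpy as np

def sorting_smoothing(pred_prob, outcome, n=50):
    """pred_prob: predicted default probabilities; outcome: observed 0/1 defaults.
    Returns the sorted predictions and a smoothed estimate of the 'real' probability."""
    order = np.argsort(pred_prob)
    p_sorted = pred_prob[order]
    y_sorted = outcome[order].astype(float)
    smoothed = np.empty_like(y_sorted)
    for i in range(len(y_sorted)):
        lo, hi = max(0, i - n), min(len(y_sorted), i + n + 1)
        smoothed[i] = y_sorted[lo:hi].mean()  # moving average over up to 2n+1 neighbours
    return p_sorted, smoothed

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    true_p = rng.uniform(0.0, 0.6, size=5000)                    # latent default probabilities
    outcome = (rng.uniform(size=5000) < true_p).astype(int)      # observed defaults
    pred = np.clip(true_p + 0.05 * rng.normal(size=5000), 0, 1)  # an imperfect classifier
    x, y = sorting_smoothing(pred, outcome)
    B, A = np.polyfit(x, y, 1)                                   # fit Y = A + B*X
    print(f"intercept A = {A:.3f}, slope B = {B:.3f}")

A slope near one and an intercept near zero would, as the passage argues, indicate that the model's predicted probabilities track the smoothed estimates closely.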
scidocsrr
8c6fc852e3da449c0d2023434f4e7e03
Improving Neural Network Quantization without Retraining using Outlier Channel Splitting
[ { "docid": "54d3d5707e50b979688f7f030770611d", "text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.", "title": "" }, { "docid": "5dca1e55bd6475ff352db61580dec807", "text": "Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed as “WAGE” to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.", "title": "" }, { "docid": "6fc6167d1ef6b96d239fea03b9653865", "text": "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. In order to reduce this cost, several quantization schemes have gained attention recently with some focusing on weight quantization, and others focusing on quantizing activations. This paper proposes novel techniques that target weight and activation quantizations separately resulting in an overall quantized neural network (QNN). The activation quantization technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. The weight quantization scheme, statistics-aware weight binning (SAWB), finds the optimal scaling factor that minimizes the quantization error based on the statistical characteristics of the distribution of weights without the need for an exhaustive search. The combination of PACT and SAWB results in a 2-bit QNN that achieves state-of-the-art classification accuracy (comparable to full precision networks) across a range of popular models and datasets.", "title": "" } ]
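The query title for this record names outlier channel splitting, while the passages above describe low-precision training and clipping-based quantization. As a rough, self-contained illustration of the splitting idea only (not code from any of the cited works), the NumPy sketch below duplicates the input channels of a linear layer that carry the largest-magnitude weights and halves the duplicated columns; the layer output is unchanged, while the value range a uniform quantizer has to cover shrinks. The 4-bit symmetric quantizer and every name in the sketch are illustrative assumptions.

import numpy as np

def split_outlier_channels(W, num_split):
    """W: (out_features, in_features) weights of a linear layer.
    Returns the expanded matrix and the input channels whose activations must be duplicated."""
    channel_peak = np.max(np.abs(W), axis=0)         # largest weight magnitude per input channel
    dup_idx = np.argsort(channel_peak)[-num_split:]  # channels holding the outliers
    halves = W[:, dup_idx] / 2.0
    W_split = W.copy()
    W_split[:, dup_idx] = halves                     # original column keeps half the value
    return np.concatenate([W_split, halves], axis=1), dup_idx

def expand_inputs(x, dup_idx):
    """Duplicate the activations of the split channels so shapes match W_split."""
    return np.concatenate([x, x[dup_idx]], axis=0)

def quantize_sym(W, bits=4):
    """Plain symmetric uniform quantization to the nearest grid point."""
    scale = np.max(np.abs(W)) / (2 ** (bits - 1) - 1)
    return np.round(W / scale) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 16))
    W[3, 5] = 12.0                                   # inject a single outlier weight
    x = rng.normal(size=16)
    W_split, dup_idx = split_outlier_channels(W, num_split=2)
    x_split = expand_inputs(x, dup_idx)
    assert np.allclose(W @ x, W_split @ x_split)     # splitting preserves the layer output
    err_plain = np.linalg.norm(quantize_sym(W) @ x - W @ x)
    err_split = np.linalg.norm(quantize_sym(W_split) @ x_split - W @ x)
    print(f"quantization error without splitting: {err_plain:.4f}")
    print(f"quantization error with splitting:    {err_split:.4f}")

On this toy example the split matrix typically quantizes with noticeably lower output error, since the injected outlier no longer dictates the quantization scale.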
[ { "docid": "d35623e1c73a30c2879a1750df295246", "text": "Online human textual interaction often carries important emotional meanings inaccessible to computers. We propose an approach to textual emotion recognition in the context of computer-mediated communication. The proposed recognition approach works at the sentence level and uses the standard Ekman emotion classification. It is grounded in a refined keyword-spotting method that employs: a WordNet-based word lexicon, a lexicon of emoticons, common abbreviations and colloquialisms, and a set of heuristic rules. The approach is implemented through the Synesketch software system. Synesketch is published as a free, open source software library. Several Synesketch-based applications presented in the paper, such as the the emotional visual chat, stress the practical value of the approach. Finally, the evaluation of the proposed emotion recognition algorithm shows high accuracy and promising results for future research and applications.", "title": "" }, { "docid": "129e01910a1798c69d01d0642a4f6bf4", "text": "We show that Tobin's q, as proxied by the ratio of the firm's market value to its book value, increases with the firm's systematic equity risk and falls with the firm's unsystematic equity risk. Further, an increase in the firm's total equity risk is associated with a fall in q. The negative relation between the change in total risk and the change in q is robust through time for the whole sample, but it does not hold for the largest firms.", "title": "" }, { "docid": "5cc3d79d7bd762e8cfd9df658acae3fc", "text": "With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container.", "title": "" }, { "docid": "28ba4e921cb942c8022c315561abf526", "text": "Metamaterials have attracted more and more research attentions recently. Metamaterials for electromagnetic applications consist of sub-wavelength structures designed to exhibit particular responses to an incident EM (electromagnetic) wave. Traditional EM (electromagnetic) metamaterial is constructed from thick and rigid structures, with the form-factor suitable for applications only in higher frequencies (above GHz) in microwave band. In this paper, we developed a thin and flexible metamaterial structure with small-scale unit cell that gives EM metamaterials far greater flexibility in numerous applications. By incorporating ferrite materials, the thickness and size of the unit cell of metamaterials have been effectively scaled down. The design, mechanism and development of flexible ferrite loaded metamaterials for microwave applications is described, with simulation as well as measurements. Experiments show that the ferrite film with permeability of 10 could reduce the resonant frequency. The thickness of the final metamaterials is only 0.3mm. 
This type of ferrite loaded metamaterials offers opportunities for various sub-GHz microwave applications, such as cloaks, absorbers, and frequency selective surfaces.", "title": "" }, { "docid": "68c1cf9be287d2ccbe8c9c2ed675b39e", "text": "The primary task of the peripheral vasculature (PV) is to supply the organs and extremities with blood, which delivers oxygen and nutrients, and to remove metabolic waste products. In addition, peripheral perfusion provides the basis of local immune response, such as wound healing and inflammation, and furthermore plays an important role in the regulation of body temperature. To adequately serve its many purposes, blood flow in the PV needs to be under constant tight regulation, both on a systemic level through nervous and hormonal control, as well as by local factors, such as metabolic tissue demand and hydrodynamic parameters. As a matter of fact, the body does not retain sufficient blood volume to fill the entire vascular space, and only 25% of the capillary bed is in use during resting state. The importance of microvascular control is clearly illustrated by the disastrous effects of uncontrolled blood pooling in the extremities, such as occurring during certain types of shock. Peripheral vascular disease (PVD) is the general name for a host of pathologic conditions of disturbed PV function. Peripheral vascular disease includes occlusive diseases of the arteries and the veins. An example is peripheral arterial occlusive disease (PAOD), which is the result of a buildup of plaque on the inside of the arterial walls, inhibiting proper blood supply to the organs. Symptoms include pain and cramping in extremities, as well as fatigue; ultimately, PAOD threatens limb vitality. The PAOD is often indicative of atherosclerosis of the heart and brain, and is therefore associated with an increased risk of myocardial infarction or cerebrovascular accident (stroke). Venous occlusive disease is the forming of blood clots in the veins, usually in the legs. Clots pose a risk of breaking free and traveling toward the lungs, where they can cause pulmonary embolism. In the legs, thromboses interfere with the functioning of the venous valves, causing blood pooling in the leg (postthrombotic syndrome) that leads to swelling and pain. Other causes of disturbances in peripheral perfusion include pathologies of the autoregulation of the microvasculature, such as in Reynaud’s disease or as a result of diabetes. To monitor vascular function, and to diagnose and monitor PVD, it is important to be able to measure and evaluate basic vascular parameters, such as arterial and venous blood flow, arterial blood pressure, and vascular compliance. Many peripheral vascular parameters can be assessed with invasive or minimally invasive procedures. Examples are the use of arterial catheters for blood pressure monitoring and the use of contrast agents in vascular X ray imaging for the detection of blood clots. Although they are sensitive and accurate, invasive methods tend to be more cumbersome to use, and they generally bear a greater risk of adverse effects compared to noninvasive techniques. These factors, in combination with their usually higher cost, limit the use of invasive techniques as screening tools. Another drawback is their restricted use in clinical research because of ethical considerations. 
Although many of the drawbacks of invasive techniques are overcome by noninvasive methods, the latter typically are more challenging because they are indirect measures, that is, they rely on external measurements to deduce internal physiologic parameters. Noninvasive techniques often make use of physical and physiologic models, and one has to be mindful of imperfections in the measurements and the models, and their impact on the accuracy of results. Noninvasive methods therefore require careful validation and comparison to accepted, direct measures, which is the reason why these methods typically undergo long development cycles. Even though the genesis of many noninvasive techniques reaches back as far as the late nineteenth century, it was the technological advances of the second half of the twentieth century in such fields as micromechanics, microelectronics, and computing technology that led to the development of practical implementations. The field of noninvasive vascular measurements has undergone a developmental explosion over the last two decades, and it is still very much a field of ongoing research and development. This article describes the most important and most frequently used methods for noninvasive assessment of the peripheral vasculature.", "title": "" }, { "docid": "e8366d4e7f59fc32da001d3513cf8eee", "text": "Multiview LSA (MVLSA) is a generalization of Latent Semantic Analysis (LSA) that supports the fusion of arbitrary views of data and relies on Generalized Canonical Correlation Analysis (GCCA). We present an algorithm for fast approximate computation of GCCA, which when coupled with methods for handling missing values, is general enough to approximate some recent algorithms for inducing vector representations of words. Experiments across a comprehensive collection of test-sets show our approach to be competitive with the state of the art.", "title": "" }, { "docid": "724388aac829af9671a90793b1b31197", "text": "We present a statistical phrase-based translation model that uses hierarchical phrases (phrases that contain subphrases). The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntax-based translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrase-based model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system.", "title": "" }, { "docid": "d3501679c9652df1faaaff4c391be567", "text": "This paper presents a demonstration of how AI can be useful in the game design and development process of a modern board game. By using an artificial intelligence algorithm to play a substantial number of matches of the Ticket to Ride board game and collecting data, we can analyze several features of the gameplay as well as of the game board. Results revealed loopholes in the game’s rules and pointed towards trends in how the game is played. 
We are then led to the conclusion that large scale simulation utilizing artificial intelligence can offer valuable information regarding modern board games and their designs that would ordinarily be prohibitively expensive or time-consuming to discover manually.", "title": "" }, { "docid": "23ff4a40f9a62c8a26f3cc3f8025113d", "text": "In the early ages of implantable devices, radio frequency (RF) technologies were not commonplace due to the challenges stemming from the inherent nature of biological tissue boundaries. As technology improved and our understanding matured, the benefit of RF in biomedical applications surpassed the implementation challenges and is thus becoming more widespread. The fundamental challenge is due to the significant electromagnetic (EM) effects of the body at high frequencies. The EM absorption and impedance boundaries of biological tissue result in significant reduction of power and signal integrity for transcutaneous propagation of RF fields. Furthermore, the dielectric properties of the body tissue surrounding the implant must be accounted for in the design of its RF components, such as antennas and inductors, and the tissue is often heterogeneous and the properties are highly variable. Additional challenges for implantable applications include the need for miniaturization, power minimization, and often accounting for a conductive casing due to biocompatibility and hermeticity requirements [1]?[3]. Today, wireless technologies are essentially a must have in most electrical implants due to the need to communicate with the device and even transfer usable energy to the implant [4], [5]. Low-frequency wireless technologies face fewer challenges in this implantable setting than its higher frequency, or RF, counterpart, but are limited to much lower communication speeds and typically have a very limited operating distance. The benefits of high-speed communication and much greater communication distances in biomedical applications have spawned numerous wireless standards committees, and the U.S. Federal Communications Commission (FCC) has allocated numerous frequency bands for medical telemetry as well as those to specifically target implantable applications. The development of analytical models, advanced EM simulation software, and representative RF human phantom recipes has significantly facilitated design and optimization of RF components for implantable applications.", "title": "" }, { "docid": "00bcce935ca2e4d443941b7e90d644c9", "text": "Nairovirus, one of five bunyaviral genera, includes seven species. Genomic sequence information is limited for members of the Dera Ghazi Khan, Hughes, Qalyub, Sakhalin, and Thiafora nairovirus species. We used next-generation sequencing and historical virus-culture samples to determine 14 complete and nine coding-complete nairoviral genome sequences to further characterize these species. Previously unsequenced viruses include Abu Mina, Clo Mor, Great Saltee, Hughes, Raza, Sakhalin, Soldado, and Tillamook viruses. In addition, we present genomic sequence information on additional isolates of previously sequenced Avalon, Dugbe, Sapphire II, and Zirqa viruses. Finally, we identify Tunis virus, previously thought to be a phlebovirus, as an isolate of Abu Hammad virus. Phylogenetic analyses indicate the need for reassignment of Sapphire II virus to Dera Ghazi Khan nairovirus and reassignment of Hazara, Tofla, and Nairobi sheep disease viruses to novel species. 
We also propose new species for the Kasokero group (Kasokero, Leopards Hill, Yogue viruses), the Ketarah group (Gossas, Issyk-kul, Keterah/soft tick viruses) and the Burana group (Wēnzhōu tick virus, Huángpí tick virus 1, Tǎchéng tick virus 1). Our analyses emphasize the sister relationship of nairoviruses and arenaviruses, and indicate that several nairo-like viruses (Shāyáng spider virus 1, Xīnzhōu spider virus, Sānxiá water strider virus 1, South Bay virus, Wǔhàn millipede virus 2) require establishment of novel genera in a larger nairovirus-arenavirus supergroup.", "title": "" }, { "docid": "0c57dd3ce1f122d3eb11a98649880475", "text": "Insulin resistance plays a major role in the pathogenesis of the metabolic syndrome and type 2 diabetes, and yet the mechanisms responsible for it remain poorly understood. Magnetic resonance spectroscopy studies in humans suggest that a defect in insulin-stimulated glucose transport in skeletal muscle is the primary metabolic abnormality in insulin-resistant patients with type 2 diabetes. Fatty acids appear to cause this defect in glucose transport by inhibiting insulin-stimulated tyrosine phosphorylation of insulin receptor substrate-1 (IRS-1) and IRS-1-associated phosphatidylinositol 3-kinase activity. A number of different metabolic abnormalities may increase intramyocellular and intrahepatic fatty acid metabolites; these include increased fat delivery to muscle and liver as a consequence of either excess energy intake or defects in adipocyte fat metabolism, and acquired or inherited defects in mitochondrial fatty acid oxidation. Understanding the molecular and biochemical defects responsible for insulin resistance is beginning to unveil novel therapeutic targets for the treatment of the metabolic syndrome and type 2 diabetes.", "title": "" }, { "docid": "e0f89b22f215c140f69a22e6b573df41", "text": "In this paper, a 10-bit 0.5V 100 kS/s successive approximation register (SAR) analog-to-digital converter (ADC) with a new fully dynamic rail-to-rail comparator is presented. The proposed comparator enhances the input signal range to the rail-to-rail mode, and hence, improves the signal-to-noise ratio (SNR) of the ADC in low supply voltages. The effect of the latch offset voltage is reduced by providing a higher voltage gain in the regenerative latch. To reduce the ADC power consumption further, the binary-weighted capacitive array with an attenuation capacitor (BWA) is employed as the DAC in this design. The ADC is designed and simulated in a 90 nm CMOS process with a single 0.5V power supply. Spectre simulation results show that the average power consumption of the proposed ADC is about 400 nW and the peak signal-to-noise plus distortion ratio (SNDR) is 56 dB. By considering 10% increase in total ADC power consumption due to the parasitics and a loss of 0.22 LSB in ENOB due to the DAC capacitors mismatch, the achieved figure of merit (FoM) is 11.4 fJ/conversion-step.", "title": "" }, { "docid": "759831bb109706b6963b21984a59d2d1", "text": "Workflow management systems will change the architecture of future information systems dramatically. The explicit representation of business procedures is one of the main issues when introducing a workflow management system. In this paper we focus on a class of Petri nets suitable for the representation, validation and verification of these procedures. We will show that the correctness of a procedure represented by such a Petri net can be verified by using standard Petri-net-based techniques. 
Based on this result we provide a comprehensive set of transformation rules which can be used to construct and modify correct procedures.", "title": "" }, { "docid": "a7287ea0f78500670fb32fc874968c54", "text": "Image captioning is a challenging task where the machine automatically describes an image by sentences or phrases. It often requires a large number of paired image-sentence annotations for training. However, a pre-trained captioning model can hardly be applied to a new domain in which some novel object categories exist, i.e., the objects and their description words are unseen during model training. To correctly caption the novel object, it requires professional human workers to annotate the images by sentences with the novel words. It is labor expensive and thus limits its usage in real-world applications. In this paper, we introduce the zero-shot novel object captioning task where the machine generates descriptions without extra training sentences about the novel object. To tackle the challenging problem, we propose a Decoupled Novel Object Captioner (DNOC) framework that can fully decouple the language sequence model from the object descriptions. DNOC has two components. 1) A Sequence Model with the Placeholder (SM-P) generates a sentence containing placeholders. The placeholder represents an unseen novel object. Thus, the sequence model can be decoupled from the novel object descriptions. 2) A key-value object memory built upon the freely available detection model, contains the visual information and the corresponding word for each object. A query generated from the SM-P is used to retrieve the words from the object memory. The placeholder will further be filled with the correct word, resulting in a caption with novel object descriptions. The experimental results on the held-out MSCOCO dataset demonstrate the ability of DNOC in describing novel concepts.", "title": "" }, { "docid": "477be87ed75b8245de5e084a366b7a6d", "text": "This paper addresses the problem of using unmanned aerial vehicles for the transportation of suspended loads. The proposed solution introduces a novel control law capable of steering the aerial robot to a desired reference while simultaneously limiting the sway of the payload. The stability of the equilibrium is proven rigorously through the application of the nested saturation formalism. Numerical simulations demonstrating the effectiveness of the controller are provided.", "title": "" }, { "docid": "c26e9f486621e37d66bf0925d8ff2a3e", "text": "We report the first two Malaysian children with partial deletion 9p syndrome, a well delineated but rare clinical entity. Both patients had trigonocephaly, arching eyebrows, anteverted nares, long philtrum, abnormal ear lobules, congenital heart lesions and digital anomalies. In addition, the first patient had underdeveloped female genitalia and anterior anus. The second patient had hypocalcaemia and high arched palate and was initially diagnosed with DiGeorge syndrome. Chromosomal analysis revealed a partial deletion at the short arm of chromosome 9. Karyotyping should be performed in patients with craniostenosis and multiple abnormalities as an early syndromic diagnosis confers prognostic, counselling and management implications.", "title": "" }, { "docid": "d76d09ca1e87eb2e08ccc03428c62be0", "text": "Face recognition has the perception of a solved problem, however when tested at the million-scale exhibits dramatic variation in accuracies across the different algorithms [11]. Are the algorithms very different? 
Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos created with the goal to level playing field for large scale face recognition. We contrast our results with findings from the other two large-scale benchmarks MegaFace Challenge and MS-Celebs-1M where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms, trained on MF2, were able to achieve state of the art and comparable results to algorithms trained on massive private sets, 2) some outperformed themselves once trained on MF2, 3) invariance to aging suffers from low accuracies as in MegaFace, identifying the need for larger age variations possibly within identities or adjustment of algorithms in future testing.", "title": "" }, { "docid": "ce7d164774826897e9d7386ec9159bba", "text": "The homomorphic encryption problem has been an open one for three decades. Recently, Gentry has proposed a full solution. Subsequent works have made improvements on it. However, the time complexities of these algorithms are still too high for practical use. For example, Gentry’s homomorphic encryption scheme takes more than 900 seconds to add two 32 bit numbers, and more than 67000 seconds to multiply them. In this paper, we develop a non-circuit based symmetric-key homomorphic encryption scheme. It is proven that the security of our encryption scheme is equivalent to the large integer factorization problem, and it can withstand an attack with up to lnpoly chosen plaintexts for any predetermined , where is the security parameter. Multiplication, encryption, and decryption are almost linear in , and addition is linear in . Performance analyses show that our algorithm runs multiplication in 108 milliseconds and addition in a tenth of a millisecond for = 1024 and = 16. We further consider practical multiple-user data-centric applications. Existing homomorphic encryption schemes only consider one master key. To allow multiple users to retrieve data from a server, all users need to have the same key. In this paper, we propose to transform the master encryption key into different user keys and develop a protocol to support correct and secure communication between the users and the server using different user keys. In order to prevent collusion between some user and the server to derive the master key, one or more key agents can be added to mediate the interaction.", "title": "" } ]
scidocsrr
819cab6856ab332744e87d70cdd04247
A Supervised Patch-Based Approach for Human Brain Labeling
[ { "docid": "3342e2f79a6bb555797224ac4738e768", "text": "Regions in three-dimensional magnetic resonance (MR) brain images can be classified using protocols for manually segmenting and labeling structures. For large cohorts, time and expertise requirements make this approach impractical. To achieve automation, an individual segmentation can be propagated to another individual using an anatomical correspondence estimate relating the atlas image to the target image. The accuracy of the resulting target labeling has been limited but can potentially be improved by combining multiple segmentations using decision fusion. We studied segmentation propagation and decision fusion on 30 normal brain MR images, which had been manually segmented into 67 structures. Correspondence estimates were established by nonrigid registration using free-form deformations. Both direct label propagation and an indirect approach were tested. Individual propagations showed an average similarity index (SI) of 0.754+/-0.016 against manual segmentations. Decision fusion using 29 input segmentations increased SI to 0.836+/-0.009. For indirect propagation of a single source via 27 intermediate images, SI was 0.779+/-0.013. We also studied the effect of the decision fusion procedure using a numerical simulation with synthetic input data. The results helped to formulate a model that predicts the quality improvement of fused brain segmentations based on the number of individual propagated segmentations combined. We demonstrate a practicable procedure that exceeds the accuracy of previous automatic methods and can compete with manual delineations.", "title": "" }, { "docid": "6df12ee53551f4a3bd03bca4ca545bf1", "text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.", "title": "" } ]
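The positive passages above label a target brain image by propagating segmentations from registered atlases and fusing them. The toy NumPy sketch below shows a heavily simplified patch-based variant of that idea, in the spirit of the query title: every target voxel collects label votes from atlas patches inside a small search window, weighted by patch similarity. It assumes the atlases are already aligned to the target, uses brute-force loops and a single bandwidth h, and is an illustration of the general approach rather than the cited papers' registration-plus-decision-fusion pipelines; all names are made up for the example.

import numpy as np

def patch(vol, x, y, z, r):
    """Extract the (2r+1)^3 intensity patch around voxel (x, y, z)."""
    return vol[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1]

def patch_label_fusion(target, atlas_imgs, atlas_labs, r=1, search=2, h=0.5):
    """target: 3D intensity volume; atlas_imgs/atlas_labs: lists of aligned 3D volumes.
    Returns a 3D volume of fused labels (border voxels are left as 0)."""
    out = np.zeros(target.shape, dtype=atlas_labs[0].dtype)
    lo, hi = r + search, np.array(target.shape) - r - search
    for x in range(lo, hi[0]):
        for y in range(lo, hi[1]):
            for z in range(lo, hi[2]):
                p = patch(target, x, y, z, r)
                votes = {}
                for img, lab in zip(atlas_imgs, atlas_labs):
                    # Search a small neighbourhood to absorb residual misalignment.
                    for dx in range(-search, search + 1):
                        for dy in range(-search, search + 1):
                            for dz in range(-search, search + 1):
                                q = patch(img, x + dx, y + dy, z + dz, r)
                                w = np.exp(-np.sum((p - q) ** 2) / (h ** 2))
                                l = lab[x + dx, y + dy, z + dz]
                                votes[l] = votes.get(l, 0.0) + w
                out[x, y, z] = max(votes, key=votes.get)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Tiny synthetic example: a bright cube on a dark background, plus two noisy atlases.
    truth = np.zeros((12, 12, 12))
    truth[4:8, 4:8, 4:8] = 1
    target = truth + 0.1 * rng.normal(size=truth.shape)
    atlas_imgs = [truth + 0.1 * rng.normal(size=truth.shape) for _ in range(2)]
    atlas_labs = [truth.astype(np.int32)] * 2
    fused = patch_label_fusion(target, atlas_imgs, atlas_labs)
    inner = (slice(3, 9),) * 3
    acc = np.mean(fused[inner] == truth[inner].astype(np.int32))
    print(f"voxel-wise agreement with the synthetic truth (inner region): {acc:.2f}")

On real MR volumes one would feed in intensity-normalized images and tune the patch radius r, the search window, and h; the structure of the voting loop stays the same.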
[ { "docid": "097e2c17a34db96ba37f68e28058ceba", "text": "The healing properties of compassion have been written about for centuries. The Dalai Lama often stresses that if you want others to be happy – focus on compassion; if you want to be happy yourself – focus on compassion (Dalai Lama 1995, 2001). Although all clinicians agree that compassion is central to the doctor–patient and therapist–client relationship, recently the components of compassion have been looked at through the lens of Western psychological science and research (2003a,b). Compassion can be thought of as a skill that one can train in, with increasing evidence that focusing on and practising compassion can influence neurophysiological and immune systems (Davidson 2003; Lutz 2008). Compassion-focused therapy refers to the underpinning theory and process of applying a compassion model to psychotherapy. Compassionate mind training refers to specific activities designed to develop compassionate attributes and skills, particularly those that influence affect regulation. Compassion-focused therapy adopts the philosophy that our understanding of psychological and neurophysiological processes is developing at such a rapid pace that we are now moving beyond 'schools of psychotherapy' towards a more integrated, biopsychosocial science of psychotherapy (Gilbert 2009). Compassion-focused therapy and compassionate mind training arose from a number of observations. First, people with high levels of shame and self-criticism can have enormous difficulty in being kind to themselves, feeling self-warmth or being self-compassionate. Second, it has long been known that problems of shame and self-criticism are often rooted in histories of abuse, bullying, high expressed emotion in the family, neglect and/or lack of affection. Individuals subjected to early experiences of this type can become highly sensitive to threats of rejection or criticism from the outside world and can quickly become self-attacking: they experience both their external and internal worlds as easily turning hostile. Third, it has been recognised that working with shame and self-criticism requires a therapeutic focus on memories of such early experiences. And fourth, there are clients who engage with the cognitive and behavioural tasks of a therapy, and become skilled at generating (say) alternatives for their negative thoughts and beliefs, but who still do poorly in therapy (Rector 2000). They are likely to say, 'I understand the logic of my alternative thinking but it doesn't really help me feel much better' or 'I know I'm not to blame for the abuse but I still feel that I …", "title": "" }, { "docid": "660fe15405c2006e20bcf0e4358c7283", "text": "We introduce a framework for feature selection based on dependence maximization between the selected features and the labels of an estimation problem, using the Hilbert-Schmidt Independence Criterion. The key idea is that good features should be highly dependent on the labels. Our approach leads to a greedy procedure for feature selection. We show that a number of existing feature selectors are special cases of this framework. Experiments on both artificial and real-world data show that our feature selector works well in practice.", "title": "" }, { "docid": "c6a25dc466e4a22351359f17bd29916c", "text": "We consider practical methods for adding constraints to the K-Means clustering algorithm in order to avoid local solutions with empty clusters or clusters having very few points. 
We often observe this phenomenon when applying K-Means to datasets where the number of dimensions is n ≥ 10 and the number of desired clusters is k ≥ 20. We propose explicitly adding k constraints to the underlying clustering optimization problem requiring that each cluster have at least a minimum number of points in it. We then investigate the resulting cluster assignment step. Preliminary numerical tests on real datasets indicate the constrained approach is less prone to poor local solutions, producing a better summary of the underlying data.", "title": "" }, { "docid": "7ffbc12161510aa8ef01d804df9c5648", "text": "Networks represent relationships between entities in many complex systems, spanning from online social interactions to biological cell development and brain connectivity. In many cases, relationships between entities are unambiguously known: are two users “friends” in a social network? Do two researchers collaborate on a published article? Do two road segments in a transportation system intersect? These are directly observable in the system in question. In most cases, relationships between nodes are not directly observable and must be inferred: Does one gene regulate the expression of another? Do two animals who physically co-locate have a social bond? Who infected whom in a disease outbreak in a population?\n Existing approaches for inferring networks from data are found across many application domains and use specialized knowledge to infer and measure the quality of inferred network for a specific task or hypothesis. However, current research lacks a rigorous methodology that employs standard statistical validation on inferred models. In this survey, we examine (1) how network representations are constructed from underlying data, (2) the variety of questions and tasks on these representations over several domains, and (3) validation strategies for measuring the inferred network’s capability of answering questions on the system of interest.", "title": "" }, { "docid": "ef8d88d57858706ba269a8f3aaa989f3", "text": "The mid-20th century witnessed some serious attempts in studies of play and games with an emphasis on their importance within culture. Most prominently, Johan Huizinga (1944) maintained in his book Homo Ludens that the earliest stage of culture is in the form of play and that culture proceeds in the shape and the mood of play. He also claimed that some elements of play crystallised as knowledge such as folklore, poetry and philosophy as culture advanced.", "title": "" }, { "docid": "43fc8ff9339780cc91762a28e36aaad7", "text": "The Internet of Things (IoT) has brought the vision of a smarter world into reality and, including healthcare, it has many application domains. The convergence of IoT and cloud can play a significant role in smart healthcare by offering better insight into healthcare content to support affordable, quality patient care. In this paper, we propose a model that allows sensors to monitor the patient's symptoms. The collected data are transmitted to the gateway via Bluetooth and then to the cloud server through a Docker container over the internet. This enables the physician to diagnose and monitor health problems wherever the patient is. 
Also, we address several challenges related to health monitoring and management using IoT.", "title": "" }, { "docid": "a1fe64aacbbe80a259feee2874645f09", "text": "Database consolidation is gaining wide acceptance as a means to reduce the cost and complexity of managing database systems. However, this new trend poses many interesting challenges for understanding and predicting system performance. The consolidated databases in multi-tenant settings share resources and compete with each other for these resources. In this work we present an experimental study to highlight how these interactions can be fairly complex. We argue that individual database staging or workload profiling is not an adequate approach to understanding the performance of the consolidated system. Our initial investigations suggest that machine learning approaches that use monitored data to model the system can work well for important tasks.", "title": "" }, { "docid": "39cde8c4da81d72d7a0ff058edb71409", "text": "One glaring weakness of Java for numerical programming is its lack of support for complex numbers. Simply creating a Complex number class leads to poor performance relative to Fortran. We show in this paper, however, that the combination of such a Complex class and a compiler that understands its semantics does indeed lead to Fortran-like performance. This performance gain is achieved while leaving the Java language completely unchanged and maintaining full compatibility with existing Java Virtual Machines. We quantify the effectiveness of our approach through experiments with linear algebra, electromagnetics, and computational fluid-dynamics kernels.", "title": "" }, { "docid": "231365d1de30f3529752510ec718dd38", "text": "The lack of reliability of gliding contacts in highly constrained environments induces manufacturers to develop contactless transmission power systems such as rotary transformers. The following paper proposes an optimal design methodology for rotary transformers supplied from a low-voltage source at high temperatures. The method is based on an accurate multidisciplinary analysis model divided into magnetic, thermal and electrical parts, optimized thanks to a sequential quadratic programming method. The technique is used to discuss the design particularities of rotary transformers. Two optimally designed structures of rotary transformers: an iron silicon coaxial one and a ferrite pot core one, are compared.", "title": "" }, { "docid": "e94183f4200b8c6fef1f18ec0e340869", "text": "Hoon Sohn Engineering Sciences & Applications Division, Engineering Analysis Group, M/S C926 Los Alamos National Laboratory, Los Alamos, NM 87545 e-mail: sohn@lanl.gov Charles R. Farrar Engineering Sciences & Applications Division, Engineering Analysis Group, M/S C946 e-mail: farrar@lanl.gov Norman F. Hunter Engineering Sciences & Applications Division, Measurement Technology Group, M/S C931 e-mail: hunter@lanl.gov Keith Worden Department of Mechanical Engineering University of Sheffield Mappin St. Sheffield S1 3JD, United Kingdom e-mail: k.worden@sheffield.ac.uk", "title": "" }, { "docid": "e677799d3bee1b25e74dc6c547c1b6c2", "text": "Street View serves millions of Google users daily with panoramic imagery captured in hundreds of cities in 20 countries across four continents. A team of Google researchers describes the technical challenges involved in capturing, processing, and serving street-level imagery on a global scale.", "title": "" }, { "docid": "daac9ee402eebc650fe4f98328a7965d", "text": "5.1. 
Detection Formats 475 5.2. Food Quality and Safety Analysis 477 5.2.1. Pathogens 477 5.2.2. Toxins 479 5.2.3. Veterinary Drugs 479 5.2.4. Vitamins 480 5.2.5. Hormones 480 5.2.6. Diagnostic Antibodies 480 5.2.7. Allergens 481 5.2.8. Proteins 481 5.2.9. Chemical Contaminants 481 5.3. Medical Diagnostics 481 5.3.1. Cancer Markers 481 5.3.2. Antibodies against Viral Pathogens 482 5.3.3. Drugs and Drug-Induced Antibodies 483 5.3.4. Hormones 483 5.3.5. Allergy Markers 483 5.3.6. Heart Attack Markers 484 5.3.7. Other Molecular Biomarkers 484 5.4. Environmental Monitoring 484 5.4.1. Pesticides 484 5.4.2. 2,4,6-Trinitrotoluene (TNT) 485 5.4.3. Aromatic Hydrocarbons 485 5.4.4. Heavy Metals 485 5.4.5. Phenols 485 5.4.6. Polychlorinated Biphenyls 487 5.4.7. Dioxins 487 5.5. Summary 488 6. Conclusions 489 7. Abbreviations 489 8. Acknowledgment 489 9. References 489", "title": "" }, { "docid": "96d90b5e2046b4629f1625649256ecaa", "text": "Today's smartphones are equipped with precise motion sensors like accelerometer and gyroscope, which can measure tiny motion and rotation of devices. While they make mobile applications more functional, they also bring risks of leaking users' privacy. Researchers have found that tap locations on screen can be roughly inferred from motion data of the device. They mostly utilized this side-channel for inferring short input like PIN numbers and passwords, with repeated attempts to boost accuracy. In this work, we study further for longer input inference, such as chat record and e-mail content, anything a user ever typed on a soft keyboard. Since people increasingly rely on smartphones for daily activities, their inputs directly or indirectly expose privacy about them. Thus, it is a serious threat if their input text is leaked.\n To make our attack practical, we utilize the shared memory side-channel for detecting window events and tap events of a soft keyboard. The up or down state of the keyboard helps triggering our Trojan service for collecting accelerometer and gyroscope data. Machine learning algorithms are used to roughly predict the input text from the raw data and language models are used to further correct the wrong predictions. We performed experiments on two real-life scenarios, which were writing emails and posting Twitter messages, both through mobile clients. Based on the experiments, we show the feasibility of inferring long user inputs to readable sentences from motion sensor data. By applying text mining technology on the inferred text, more sensitive information about the device owners can be exposed.", "title": "" }, { "docid": "a5e960a4b20959a1b4a85e08eebab9d3", "text": "This paper presents a new class of dual-, tri- and quad-band BPF by using proposed open stub-loaded shorted stepped-impedance resonator (OSLSSIR). The OSLSSIR consists of a two-end-shorted three-section stepped-impedance resistor (SIR) with two identical open stubs loaded at its impedance junctions. Two 50- Ω tapped lines are directly connected to two shorted sections of the SIR to serve as I/O ports. As the electrical lengths of two identical open stubs increase, many more transmission poles (TPs) and transmission zeros (TZs) can be shifted or excited within the interested frequency range. The TZs introduced by open stubs divide the TPs into multiple groups, which can be applied to design a multiple-band bandpass filter (BPF). 
In order to increase many more design freedoms for tuning filter performance, a high-impedance open stub and the narrow/broad side coupling are introduced as perturbations in all filters design, which can tune the even- and odd-mode TPs separately. In addition, two branches of I/O coupling and open stub-loaded shorted microstrip line are employed in tri- and quad-band BPF design. As examples, two dual-wideband BPFs, one tri-band BPF, and one quad-band BPF have been successfully developed. The fabricated four BPFs have merits of compact sizes, low insertion losses, and high band-to-band isolations. The measured results are in good agreement with the full-wave simulated results.", "title": "" }, { "docid": "b2e02a1818f862357cf5764afa7fa197", "text": "The goal of this paper is the automatic identification of characters in TV and feature film material. In contrast to standard approaches to this task, which rely on the weak supervision afforded by transcripts and subtitles, we propose a new method requiring only a cast list. This list is used to obtain images of actors from freely available sources on the web, providing a form of partial supervision for this task. In using images of actors to recognize characters, we make the following three contributions: (i) We demonstrate that an automated semi-supervised learning approach is able to adapt from the actor’s face to the character’s face, including the face context of the hair; (ii) By building voice models for every character, we provide a bridge between frontal faces (for which there is plenty of actor-level supervision) and profile (for which there is very little or none); and (iii) by combining face context and speaker identification, we are able to identify characters with partially occluded faces and extreme facial poses. Results are presented on the TV series ‘Sherlock’ and the feature film ‘Casablanca’. We achieve the state-of-the-art on the Casablanca benchmark, surpassing previous methods that have used the stronger supervision available from transcripts.", "title": "" }, { "docid": "b9d25bdbb337a9d16a24fa731b6b479d", "text": "The implementation of effective strategies to manage leaks represents an essential goal for all utilities involved with drinking water supply in order to reduce water losses affecting urban distribution networks. This study concerns the early detection of leaks occurring in small-diameter customers’ connections to water supply networks. An experimental campaign was carried out in a test bed to investigate the sensitivity of Acoustic Emission (AE) monitoring to water leaks. Damages were artificially induced on a polyethylene pipe (length 28 m, outer diameter 32 mm) at different distances from an AE transducer. Measurements were performed in both unburied and buried pipe conditions. The analysis permitted the identification of a clear correlation between three monitored parameters (namely total Hits, Cumulative Counts and Cumulative Amplitude) and the characteristics of the examined leaks.", "title": "" }, { "docid": "afce201838e658aac3e18c2f26cff956", "text": "With the current set of design tools and methods available to game designers, vast portions of the space of possible games are not currently reachable. In the past, technological advances such as improved graphics and new controllers have driven the creation of new forms of gameplay, but games have still not made great strides into new gameplay experiences. 
We argue that the development of innovative artificial intelligence (AI) systems plays a crucial role in the exploration of currently unreachable spaces. To aid in exploration, we suggest a practice called AI-based game design, an iterative design process that deeply integrates the affordances of an AI system within the context of game design. We have applied this process in our own projects, and in this paper we present how it has pushed the boundaries of current game genres and experiences, as well as discuss the future AI-based game design.", "title": "" }, { "docid": "37e552e4352cd5f8c76dcefd856e0fc8", "text": "Following the increasing popularity of mobile ecosystems, cybercriminals have increasingly targeted them, designing and distributing malicious apps that steal information or cause harm to the device’s owner. Aiming to counter them, detection techniques based on either static or dynamic analysis that model Android malware, have been proposed. While the pros and cons of these analysis techniques are known, they are usually compared in the context of their limitations e.g., static analysis is not able to capture runtime behaviors, full code coverage is usually not achieved during dynamic analysis, etc. Whereas, in this paper, we analyze the performance of static and dynamic analysis methods in the detection of Android malware and attempt to compare them in terms of their detection performance, using the same modeling approach. To this end, we build on MAMADROID, a state-of-the-art detection system that relies on static analysis to create a behavioral model from the sequences of abstracted API calls. Then, aiming to apply the same technique in a dynamic analysis setting, we modify CHIMP, a platform recently proposed to crowdsource human inputs for app testing, in order to extract API calls’ sequences from the traces produced while executing the app on a CHIMP virtual device. We call this system AUNTIEDROID and instantiate it by using both automated (Monkey) and user-generated inputs. We find that combining both static and dynamic analysis yields the best performance, with F -measure reaching 0.92. We also show that static analysis is at least as effective as dynamic analysis, depending on how apps are stimulated during execution, and, finally, investigate the reasons for inconsistent misclassifications across methods.", "title": "" }, { "docid": "eb7eb6777a68fd594e2e94ac3cba6be9", "text": "Cellulosic plant material represents an as-of-yet untapped source of fermentable sugars for significant industrial use. Many physio-chemical structural and compositional factors hinder the enzymatic digestibility of cellulose present in lignocellulosic biomass. The goal of any pretreatment technology is to alter or remove structural and compositional impediments to hydrolysis in order to improve the rate of enzyme hydrolysis and increase yields of fermentable sugars from cellulose or hemicellulose. These methods cause physical and/or chemical changes in the plant biomass in order to achieve this result. Experimental investigation of physical changes and chemical reactions that occur during pretreatment is required for the development of effective and mechanistic models that can be used for the rational design of pretreatment processes. Furthermore, pretreatment processing conditions must be tailored to the specific chemical and structural composition of the various, and variable, sources of lignocellulosic biomass. 
This paper reviews process parameters and their fundamental modes of action for promising pretreatment methods.", "title": "" }, { "docid": "036cbf58561de8bfa01ddc4fa8d7b8f2", "text": "The purpose of this paper is to discover a semi-optimal set of trading rules and to investigate its effectiveness as applied to Egyptian Stocks. The aim is to mix different categories of technical trading rules and let an automatic evolution process decide which rules are to be used for particular time series. This difficult task can be achieved by using genetic algorithms (GA's), they permit the creation of artificial experts taking their decisions from an optimal subset of the a given set of trading rules. The GA's based on the survival of the fittest, do not guarantee a global optimum but they are known to constitute an effective approach in optimizing non-linear functions. Selected liquid stocks are tested and GA trading rules were compared with other conventional and well known technical analysis rules. The Proposed GA system showed clear better average profit and in the same high sharpe ratio, which indicates not only good profitability but also better risk-reward trade-off", "title": "" } ]
scidocsrr
1ccc0bff27f008ea979adef174ec6e93
Authenticated Key Exchange over Bitcoin
[ { "docid": "32ca9711622abd30c7c94f41b91fa3f6", "text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.", "title": "" }, { "docid": "bc8b40babfc2f16144cdb75b749e3a90", "text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.", "title": "" } ]
[ { "docid": "21031b55206dd330852b8d11e8e6a84a", "text": "To predict the most salient regions of complex natural scenes, saliency models commonly compute several feature maps (contrast, orientation, motion...) and linearly combine them into a master saliency map. Since feature maps have different spatial distribution and amplitude dynamic ranges, determining their contributions to overall saliency remains an open problem. Most state-of-the-art models do not take time into account and give feature maps constant weights across the stimulus duration. However, visual exploration is a highly dynamic process shaped by many time-dependent factors. For instance, some systematic viewing patterns such as the center bias are known to dramatically vary across the time course of the exploration. In this paper, we use maximum likelihood and shrinkage methods to dynamically and jointly learn feature map and systematic viewing pattern weights directly from eye-tracking data recorded on videos. We show that these weights systematically vary as a function of time, and heavily depend upon the semantic visual category of the videos being processed. Our fusion method allows taking these variations into account, and outperforms other stateof-the-art fusion schemes using constant weights over time. The code, videos and eye-tracking data we used for this study are available online.", "title": "" }, { "docid": "d8e7c9b871f542cd40835b131eedb60a", "text": "Attribute-based encryption (ABE) systems allow encrypting to uncertain receivers by means of an access policy specifying the attributes that the intended receivers should possess. ABE promises to deliver fine-grained access control of encrypted data. However, when data are encrypted using an ABE scheme, key management is difficult if there is a large number of users from various backgrounds. In this paper, we elaborate ABE and propose a new versatile cryptosystem referred to as ciphertext-policy hierarchical ABE (CPHABE). In a CP-HABE scheme, the attributes are organized in a matrix and the users having higher-level attributes can delegate their access rights to the users at a lower level. These features enable a CP-HABE system to host a large number of users from different organizations by delegating keys, e.g., enabling efficient data sharing among hierarchically organized large groups. We construct a CP-HABE scheme with short ciphertexts. The scheme is proven secure in the standard model under non-interactive assumptions.", "title": "" }, { "docid": "d8190669434b167500312091d1a4bf30", "text": "Path analysis was used to test the predictive and mediational role of self-efficacy beliefs in mathematical problem solving. Results revealed that math self-efficacy was more predictive of problem solving than was math self-concept, perceived usefulness of mathematics, prior experience with mathematics, or gender (N = 350). Self-efficacy also mediated the effect of gender and prior experience on self-concept, perceived usefulness, and problem solving. Gender and prior experience influenced self-concept, perceived usefulness, and problem solving largely through the mediational role of self-efficacy. Men had higher performance, self-efficacy, and self-concept and lower anxiety, but these differences were due largely to the influence of self-efficacy, for gender had a direct effect only on self-efficacy and a prior experience variable. Results support the hypothesized role of self-efficacy in A. 
Bandura's (1986) social cognitive theory.", "title": "" }, { "docid": "05bc787d000ecf26c8185b084f8d2498", "text": "Recommendation system is a type of information filtering systems that recommend various objects from a vast variety and quantity of items which are of the user interest. This results in guiding an individual in personalized way to interesting or useful objects in a large space of possible options. Such systems also help many businesses to achieve more profits to sustain in their filed against their rivals. But looking at the amount of information which a business holds it becomes difficult to identify the items of user interest. Therefore personalization or user profiling is one of the challenging tasks that give access to user relevant information which can be used in solving the difficult task of classification and ranking items according to an individual’s interest. Profiling can be done in various ways such as supervised or unsupervised, individual or group profiling, distributive or and non-distributive profiling. Our focus in this paper will be on the dataset which we will use, we identify some interesting facts by using Weka Tool that can be used for recommending the items from dataset .Our aim is to present a novel technique to achieve user profiling in recommendation system. KeywordsMachine Learning; Information Retrieval; User Profiling", "title": "" }, { "docid": "fa0883f4adf79c65a6c13c992ae08b3f", "text": "Being able to keep the graph scale small while capturing the properties of the original social graph, graph sampling provides an efficient, yet inexpensive solution for social network analysis. The challenge is how to create a small, but representative sample out of the massive social graph with millions or even billions of nodes. Several sampling algorithms have been proposed in previous studies, but there lacks fair evaluation and comparison among them. In this paper, we analyze the state-of art graph sampling algorithms and evaluate their performance on some widely recognized graph properties on directed graphs using large-scale social network datasets. We evaluate not only the commonly used node degree distribution, but also clustering coefficient, which quantifies how well connected are the neighbors of a node in a graph. Through the comparison we have found that none of the algorithms is able to obtain satisfied sampling results in both of these properties, and the performance of each algorithm differs much in different kinds of datasets.", "title": "" }, { "docid": "4f6c7e299b8c7e34778d5c7c10e5a034", "text": "This study presents an online multiparameter estimation scheme for interior permanent magnet motor drives that exploits the switching ripple of finite control set (FCS) model predictive control (MPC). The combinations consist of two, three, and four parameters are analysed for observability at different operating states. Most of the combinations are rank deficient without persistent excitation (PE) of the system, e.g. by signal injection. This study shows that high frequency current ripples by MPC with FCS are sufficient to create PE in the system. This study also analyses parameter coupling in estimation that results in wrong convergence and propose a decoupling technique. The observability conditions for all the combinations are experimentally validated. 
Finally, a full parameter estimation along with the decoupling technique is tested at different operating conditions.", "title": "" }, { "docid": "5ba721a06c17731458ef1ecb6584b311", "text": "BACKGROUND\nPrimary and tension-free closure of a flap is often required after particular surgical procedures (e.g., guided bone regeneration). Other times, flap advancement may be desired for situations such as root coverage.\n\n\nMETHODS\nThe literature was searched for articles that addressed techniques, limitations, and complications associated with flap advancement. These articles were used as background information. In addition, reference information regarding anatomy was cited as necessary to help describe surgical procedures.\n\n\nRESULTS\nThis article describes techniques to advance mucoperiosteal flaps, which facilitate healing. Methods are presented for a variety of treatment scenarios, ranging from minor to major coronal tissue advancement. Anatomic landmarks are identified that need to be considered during surgery. In addition, management of complications associated with flap advancement is discussed.\n\n\nCONCLUSIONS\nTension-free primary closure is attainable. The technique is dependent on the extent that the flap needs to be advanced.", "title": "" }, { "docid": "bb02c3a2c02cce6325fe542f006dde9c", "text": "In this paper, we argue for a theoretical separation of the free-energy principle from Helmholtzian accounts of the predictive brain. The free-energy principle is a theoretical framework capturing the imperative for biological self-organization in information-theoretic terms. The free-energy principle has typically been connected with a Bayesian theory of predictive coding, and the latter is often taken to support a Helmholtzian theory of perception as unconscious inference. If our interpretation is right, however, a Helmholtzian view of perception is incompatible with Bayesian predictive coding under the free-energy principle. We argue that the free energy principle and the ecological and enactive approach to mind and life make for a much happier marriage of ideas. We make our argument based on three points. First we argue that the free energy principle applies to the whole animal–environment system, and not only to the brain. Second, we show that active inference, as understood by the free-energy principle, is incompatible with unconscious inference understood as analagous to scientific hypothesis-testing, the main tenet of a Helmholtzian view of perception. Third, we argue that the notion of inference at work in Bayesian predictive coding under the free-energy principle is too weak to support a Helmholtzian theory of perception. Taken together these points imply that the free energy principle is best understood in ecological and enactive terms set out in this paper.", "title": "" }, { "docid": "9098d40a9e16a1bd1ed0a9edd96f3258", "text": "The filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) is being studied by many researchers as a key enabler for the fifth-generation air interface. In this paper, a hybrid peak-to-average power ratio (PAPR) reduction scheme is proposed for FBMC/OQAM signals by utilizing multi data block partial transmit sequence (PTS) and tone reservation (TR). In the hybrid PTS-TR scheme, the data blocks signal is divided into several segments, and the number of data blocks in each segment is determined by the overlapping factor. 
In each segment, we select the optimal data block to transmit and jointly consider the adjacent overlapped data block to achieve minimum signal power. Then, the peak reduction tones are utilized to cancel the peaks of the segment FBMC/OQAM signals. Simulation results and analysis show that the proposed hybrid PTS-TR scheme could provide better PAPR reduction than conventional PTS and TR schemes in FBMC/OQAM systems. Furthermore, we propose another multi data block hybrid PTS-TR scheme by exploiting the adjacent multi overlapped data blocks, called as the multi hybrid (M-hybrid) scheme. Simulation results show that the M-hybrid scheme can achieve about 0.2-dB PAPR performance better than the hybrid PTS-TR scheme.", "title": "" }, { "docid": "50b316a52bdfacd5fe319818d0b22962", "text": "Artificial neural networks (ANN) are used to predict 1) degree program completion, 2) earned hours, and 3) GPA for college students. The feed forward neural net architecture is used with the back propagation learning function and the logistic activation function. The database used for training and validation consisted of 17,476 student transcripts from Fall 1983 through Fall 1994. It is shown that age, gender, race, ACT scores, and reading level are significant in predicting the degree program completion, earned hours, and GPA. Of the three, earned hours proved the most difficult to predict.", "title": "" }, { "docid": "ef8292e79b8c9f463281f2a9c5c410ef", "text": "In real-time applications, the computer is often required to service programs in response to external signals, and to guarantee that each such program is completely processed within a specified interval following the occurrence of the initiating signal. Such programs are referred to in this paper as time-critical processes, or TCPs.", "title": "" }, { "docid": "0e9e6c1f21432df9dfac2e7205105d46", "text": "This paper summarises the COSET shared task organised as part of the IberEval workshop. The aim of this task is to classify the topic discussed in a tweet into one of five topics related to the Spanish 2015 electoral cycle. A new dataset was curated for this task and hand-labelled by experts on the task. Moreover, the results of the 17 participants of the task and a review of their proposed systems are presented. In a second phase evaluation, we provided the participants with 15.8 millions tweets in order to test the scalability of their systems.", "title": "" }, { "docid": "e9b8787e5bb1f099e914db890e04dc23", "text": "This paper presents the design of a compact UHF-RFID tag antenna with several miniaturization techniques including meandering technique and capacitive tip-loading structure. Additionally, T-matching technique is also utilized in the antenna design for impedance matching. This antenna was designed on Rogers 5880 printed circuit board (PCB) with the dimension of 43 × 26 × 0.787 mm3 and relative permittivity, □r of 2.2. The performance of the proposed antenna was analyzed in terms of matched impedance, antenna gain, return loss and tag reading range through the simulation in CST Microwave Studio software. As a result, the proposed antenna obtained a gain of 0.97dB and a maximum reading range of 5.15 m at 921 MHz.", "title": "" }, { "docid": "1abcf9480879b3d29072f09d5be8609d", "text": "Warm restart techniques on training deep neural networks often achieve better recognition accuracies and can be regarded as easy methods to obtain multiple neural networks with no additional training cost from a single training process. 
Ensembles of intermediate neural networks obtained by warm restart techniques can provide higher accuracy than a single neural network obtained finally by a whole training process. However, existing methods on both of warm restart and its ensemble techniques use fixed cyclic schedules and have little degree of parameter adaption. This paper extends a class of possible schedule strategies of warm restart, and clarifies their effectiveness for recognition performance. Specifically, we propose parameterized functions and various cycle schedules to improve recognition accuracies by the use of deep neural networks with no additional training cost. Experiments on CIFAR-10 and CIFAR-100 show that our methods can achieve more accurate rates than the existing cyclic training and ensemble methods.", "title": "" }, { "docid": "1f6e92bc8239e358e8278d13ced4a0a9", "text": "This paper proposes a method for hand pose estimation from RGB images that uses both external large-scale depth image datasets and paired depth and RGB images as privileged information at training time. We show that providing depth information during training significantly improves performance of pose estimation from RGB images during testing. We explore different ways of using this privileged information: (1) using depth data to initially train a depth-based network, (2) using the features from the depthbased network of the paired depth images to constrain midlevel RGB network weights, and (3) using the foreground mask, obtained from the depth data, to suppress the responses from the background area. By using paired RGB and depth images, we are able to supervise the RGB-based network to learn middle layer features that mimic that of the corresponding depth-based network, which is trained on large-scale, accurately annotated depth data. During testing, when only an RGB image is available, our method produces accurate 3D hand pose predictions. Our method is also tested on 2D hand pose estimation. Experiments on three public datasets show that the method outperforms the state-of-the-art methods for hand pose estimation using RGB image input.", "title": "" }, { "docid": "1106cd6413b478fd32d250458a2233c5", "text": "Submitted: Aug 7, 2013; Accepted: Sep 18, 2013; Published: Sep 25, 2013 Abstract: This article reviews the common used forecast error measurements. All error measurements have been joined in the seven groups: absolute forecasting errors, measures based on percentage errors, symmetric errors, measures based on relative errors, scaled errors, relative measures and other error measures. The formulas are presented and drawbacks are discussed for every accuracy measurements. To reduce the impact of outliers, an Integral Normalized Mean Square Error have been proposed. Due to the fact that each error measure has the disadvantages that can lead to inaccurate evaluation of the forecasting results, it is impossible to choose only one measure, the recommendations for selecting the appropriate error measurements are given.", "title": "" }, { "docid": "81fc9abd3e2ad86feff7bd713cff5915", "text": "With the advance of the Internet, e-commerce systems have become extremely important and convenient to human being. More and more products are sold on the web, and more and more people are purchasing products online. As a result, an increasing number of customers post product reviews at merchant websites and express their opinions and experiences in any network space such as Internet forums, discussion groups, and blogs. 
So there is a large amount of data records related to products on the Web, which are useful for both manufacturers and customers. Mining product reviews becomes a hot research topic, and prior researches mostly base on product features to analyze the opinions. So mining product features is the first step to further reviews processing. In this paper, we present how to mine product features. The proposed extraction approach is different from the previous methods because we only mine the features of the product in opinion sentences which the customers have expressed their positive or negative experiences on. In order to find opinion sentence, a SentiWordNet-based algorithm is proposed. There are three steps to perform our task: (1) identifying opinion sentences in each review which is positive or negative via SentiWordNet; (2) mining product features that have been commented on by customers from opinion sentences; (3) pruning feature to remove those incorrect features. Compared to previous work, our experimental result achieves higher precision and recall.", "title": "" }, { "docid": "0cd2da131bf78526c890dae72514a8f0", "text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4172a0c101756ea8207b65b0dfbbe8ce", "text": "Inspired by ACTORS [7, 17], we have implemented an interpreter for a LISP-like language, SCHEME, based on the lambda calculus [2], but extended for side effects, multiprocessing, and process synchronization. The purpose of this implementation is tutorial. We wish to: 1. alleviate the confusion caused by Micro-PLANNER, CONNIVER, etc., by clarifying the embedding of non-recursive control structures in a recursive host language like LISP. 2. explain how to use these control structures, independent of such issues as pattern matching and data base manipulation. 3. have a simple concrete experimental domain for certain issues of programming semantics and style. This paper is organized into sections. The first section is a short “reference manual” containing specifications for all the unusual features of SCHEME. 
Next, we present a sequence of programming examples which illustrate various programming styles, and how to use them. This will raise certain issues of semantics which we will try to clarify with lambda calculus in the third section. In the fourth section we will give a general discussion of the issues facing an implementor of an interpreter for a language based on lambda calculus. Finally, we will present a completely annotated interpreter for SCHEME, written in MacLISP [13], to acquaint programmers with the tricks of the trade of implementing non-recursive control structures in a recursive language like LISP. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory’s artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C0643. 1. The SCHEME Reference Manual SCHEME is essentially a full-funarg LISP. LAMBDAexpressions need not be QUOTEd, FUNCTIONed, or *FUNCTIONed when passed as arguments or returned as values; they will evaluate to closures of themselves. All LISP functions (i.e.,EXPRs,SUBRs, andLSUBRs, butnotFEXPRs,FSUBRs, orMACROs) are primitive operators in SCHEME, and have the same meaning as they have in LISP. Like LAMBDAexpressions, primitive operators and numbers are self-evaluating (they evaluate to trivial closures of themselves). There are a number of special primitives known as AINTs which are to SCHEME as FSUBRs are to LISP. We will enumerate them here. IF This is the primitive conditional operator. It takes three arguments. If the first evaluates to non-NIL , it evaluates the second expression, and otherwise the third. QUOTE As in LISP, this quotes the argument form so that it will be passed verbatim as data. The abbreviation “ ’FOO” may be used instead of “ (QUOTE FOO) ”. 406 SUSSMAN AND STEELE DEFINE This is analogous to the MacLISP DEFUNprimitive (but note that theLAMBDA must appear explicitly!). It is used for defining a function in the “global environment” permanently, as opposed to LABELS(see below), which is used for temporary definitions in a local environment.DEFINE takes a name and a lambda expression; it closes the lambda expression in the global environment and stores the closure in the LISP value cell of the name (which is a LISP atom). LABELS We have decided not to use the traditional LABEL primitive in this interpreter because it is difficult to define several mutually recursive functions using only LABEL. The solution, which Hewitt [17] also uses, is to adopt an ALGOLesque block syntax: (LABELS <function definition list> <expression>) This has the effect of evaluating the expression in an environment where all the functions are defined as specified by the definitions list. Furthermore, the functions are themselves closed in that environment, and not in the outer environment; this allows the functions to call themselvesand each otherecursively. For example, consider a function which counts all the atoms in a list structure recursively to all levels, but which doesn’t count the NIL s which terminate lists (but NIL s in theCARof some list count). 
In order to perform this we use two mutually recursive functions, one to count the car and one to count the cdr, as follows: (DEFINE COUNT (LAMBDA (L) (LABELS ((COUNTCAR (LAMBDA (L) (IF (ATOM L) 1 (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L)))))) (COUNTCDR (LAMBDA (L) (IF (ATOM L) (IF (NULL L) 0 1) (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L))))))) (COUNTCDR L)))) ;Note: COUNTCDR is defined here. ASET This is the side effect primitive. It is analogous to the LISP function SET. For example, to define a cell [17], we may useASETas follows: (DEFINE CONS-CELL (LAMBDA (CONTENTS) (LABELS ((THE-CELL (LAMBDA (MSG) (IF (EQ MSG ’CONTENTS?) CONTENTS (IF (EQ MSG ’CELL?) ’YES (IF (EQ (CAR MSG) ’<-) (BLOCK (ASET ’CONTENTS (CADR MSG)) THE-CELL) (ERROR ’|UNRECOGNIZED MESSAGE CELL| MSG ’WRNG-TYPE-ARG))))))) THE-CELL))) INTERPRETER FOR EXTENDED LAMBDA CALCULUS 407 Those of you who may complain about the lack of ASETQare invited to write(ASET’ foo bar) instead of(ASET ’foo bar) . EVALUATE This is similar to the LISP functionEVAL. It evaluates its argument, and then evaluates the resulting s-expression as SCHEME code. CATCH This is the “escape operator” which gives the user a handle on the control structure of the interpreter. The expression: (CATCH <identifier> <expression>) evaluates<expression> in an environment where <identifier> is bound to a continuation which is “just about to return from the CATCH”; that is, if the continuation is called as a function of one argument, then control proceeds as if the CATCHexpression had returned with the supplied (evaluated) argument as its value. For example, consider the following obscure definition of SQRT(Sussman’s favorite style/Steele’s least favorite): (DEFINE SQRT (LAMBDA (X EPSILON) ((LAMBDA (ANS LOOPTAG) (CATCH RETURNTAG (PROGN (ASET ’LOOPTAG (CATCH M M)) ;CREATE PROG TAG (IF (< (ABS (-$ (*$ ANS ANS) X)) EPSILON) (RETURNTAG ANS) ;RETURN NIL) ;JFCL (ASET ’ANS (//$ (+$ (//$ X ANS) ANS) 2.0)) (LOOPTAG LOOPTAG)))) ;GOTO 1.0 NIL))) Anyone who doesn’t understand how this manages to work probably should not attempt to useCATCH. As another example, we can define a THROWfunction, which may then be used with CATCHmuch as they are in LISP: (DEFINE THROW (LAMBDA (TAG RESULT) (TAG RESULT))) CREATE!PROCESS This is the process generator for multiprocessing. It takes one argument, an expression to be evaluated in the current environment as a separate parallel process. If the expression ever returns a value, the process automatically terminates. The value ofCREATE!PROCESSis a process id for the newly generated process. Note that the newly created process will not actually run until it is explicitly started. START!PROCESS This takes one argument, a process id, and starts up that process. It then runs. 408 SUSSMAN AND STEELE STOP!PROCESS This also takes a process id, but stops the process. The stopped process may be continued from where it was stopped by using START!PROCESSagain on it. The magic global variable**PROCESS** always contains the process id of the currently running process; thus a process can stop itself by doing (STOP!PROCESS **PROCESS**) . A stopped process is garbage collected if no live process has a pointer to its process id. EVALUATE!UNINTERRUPTIBLY This is the synchronization primitive. It evaluates an expression uninterruptibly; i.e., no other process may run until the expression has returned a value. 
Note that if a funarg is returned from the scope of an EVALUATE!UNINTERRUPTIBLY, then that funarg will be uninterruptible when it is applied; that is, the uninterruptibility property follows the rules of variable scoping. For example, consider the following function: (DEFINE SEMGEN (LAMBDA (SEMVAL) (LIST (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (ASET’ SEMVAL (+ SEMVAL 1)))) (LABELS (P (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (IF (PLUSP SEMVAL) (ASET’ SEMVAL (SEMVAL 1)) (P))))) P)))) This returns a pair of functions which are V and P operations on a newly created semaphore. The argument to SEMGENis the initial value for the semaphore. Note that P busy-waits by iterating if necessary; because EVALUATE!UNINTERRUPTIBLYuses variable-scoping rules, other processes have a chance to get in at the beginning of each iteration. This busy-wait can be made much more efficient by replacing the expression (P) in the definition ofP with ((LAMBDA (ME) (BLOCK (START!PROCESS (CREATE!PROCESS ’(START!PROCESS ME))) (STOP!PROCESS ME) (P))) **PROCESS**) Let’s see you figure this one out! Note that a STOP!PROCESSwithin anEVALUATE! UNINTERRUPTIBLYforces the process to be swapped out even if it is the current one, and so other processes get to run; but as soon as it gets swapped in again, others are locked out as before. Besides theAINTs, SCHEME has a class of primitives known as AMACRO s These are similar to MacLISPMACROs, in that they are expanded into equivalent code before being executed. Some AMACRO s supplied with the SCHEME interpreter: INTERPRETER FOR EXTENDED LAMBDA CALCULUS 409 COND This is like the MacLISPCONDstatement, except that singleton clauses (where the result of the predicate is the returned value) are not allowed. AND, OR These are also as in MacLISP. BLOCK This is like the MacLISPPROGN, but arranges to evaluate its last argument without an extra net control frame (explained later), so that the last argument may involved in an iteration. Note that in SCHEME, unlike MacLISP, the body of a LAMBDAexpression is not an implicit PROGN. DO This is like the MacLISP “new-style” DO; old-styleDOis not supported. AMAPCAR , AMAPLIST These are likeMAPCARandMAPLIST, but they expect a SCHEME lambda closure for the first argument. To use SCHEME, simply incant at DDT (on MIT-AI): 3", "title": "" } ]
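The SCHEME passage above explains two ideas with inline Lisp listings: lexical closures ("full funargs", as in its CONS-CELL example) and CATCH as an escape operator. The following is a minimal Python sketch of those two ideas only, assuming nothing beyond the standard library; it is not a translation of the paper's interpreter, and exceptions here model just the escape use of CATCH, not full first-class continuations.

def cons_cell(contents):
    # Analogue of the passage's CONS-CELL: a closure acting as a one-slot message-passing cell.
    state = {"contents": contents}
    def the_cell(msg):
        if msg == "contents?":
            return state["contents"]
        if msg == "cell?":
            return "yes"
        if isinstance(msg, tuple) and msg[0] == "<-":
            state["contents"] = msg[1]
            return the_cell
        raise ValueError("unrecognized message", msg)
    return the_cell

class _Escape(Exception):
    def __init__(self, value):
        self.value = value

def catch(body):
    # Analogue of (CATCH tag expr): body receives a tag; calling the tag returns from catch.
    def tag(value):
        raise _Escape(value)
    try:
        return body(tag)
    except _Escape as e:
        return e.value

def sqrt_newton(x, eps=1e-9):
    # Mirrors the passage's SQRT example: iterate, then escape through the tag with the answer.
    def body(return_tag):
        ans = 1.0
        while True:
            if abs(ans * ans - x) < eps:
                return_tag(ans)
            ans = (x / ans + ans) / 2.0
    return catch(body)

cell = cons_cell(41)
cell(("<-", 42))
assert cell("contents?") == 42
assert abs(sqrt_newton(2.0) - 2.0 ** 0.5) < 1e-6

Calling the returned cell with ("<-", value) mirrors the message-passing update in the passage, and sqrt_newton mirrors its SQRT example, leaving the loop through the tag once the guess is close enough.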
scidocsrr
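The second positive passage for this query studies statistical properties of the Bitcoin transaction graph. The sketch below is a toy stand-in for that kind of bookkeeping, under the assumption that the third-party networkx package is available; the transfers are wholly invented and merely illustrate the per-address tallies (degrees, inflow, outflow) such an analysis starts from.

# Toy transaction-graph bookkeeping -- invented data, not the paper's full-history pipeline.
import networkx as nx

# Invented sample transfers: (sender address, receiver address, amount)
transfers = [
    ("A", "B", 5.0),
    ("A", "C", 2.5),
    ("B", "C", 1.0),
    ("C", "A", 0.5),
    ("B", "D", 3.0),
]

G = nx.MultiDiGraph()
for src, dst, amount in transfers:
    G.add_edge(src, dst, value=amount)

received = {n: 0.0 for n in G.nodes}
spent = {n: 0.0 for n in G.nodes}
for src, dst, data in G.edges(data=True):
    spent[src] += data["value"]
    received[dst] += data["value"]

for addr in sorted(G.nodes):
    net = received[addr] - spent[addr]   # net inflow; real balances need UTXO accounting
    print(f"{addr}: in-degree={G.in_degree(addr)}, out-degree={G.out_degree(addr)}, "
          f"received={received[addr]:.1f}, spent={spent[addr]:.1f}, net={net:.1f}")

Analyses of the real history additionally need entity resolution — grouping the many addresses controlled by one user — before per-user statistics such as balances or spending behavior become meaningful.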