| Column | Type |
|---|---|
| query_id | string (length 32) |
| query | string (length 6 to 5.38k) |
| positive_passages | list (1 to 17 items) |
| negative_passages | list (9 to 100 items) |
| subset | string (7 classes) |
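Each record below pairs one query with relevant (positive) and irrelevant (negative) passages. As a minimal sketch of how a dump with this schema might be loaded and inspected, the snippet below assumes a Hugging Face-style `datasets` layout; the dataset identifier and split name are placeholders, not taken from this page.

```python
# Minimal sketch, assuming a Hugging Face-style dataset with the schema above.
# "example-org/retrieval-mix" and the split name are placeholders, not real identifiers.
from datasets import load_dataset

ds = load_dataset("example-org/retrieval-mix", split="train")

row = ds[0]
print(row["query_id"], row["subset"])
print("query:", row["query"][:80])
print("num positives:", len(row["positive_passages"]))
print("num negatives:", len(row["negative_passages"]))

# Each passage appears to be a dict with "docid", "text", and "title" keys.
first_pos = row["positive_passages"][0]
print(first_pos["docid"], first_pos["text"][:80])
```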
query_id: 898e5eea174ad3804fed1f09e0dc820a
query: The Critical Importance of Retrieval--and Spacing--for Learning.
positive_passages:
[
{
"docid": "aca43e4fa3ad889aca212783a0984454",
"text": "Two studies examined undergraduates' metacognitive awareness of six empirically-supported learning strategies. Study 1 results overall suggested an inability to predict the learning outcomes of educational scenarios describing the strategies of dual-coding, static-media presentations, low-interest extraneous details, testing, and spacing; there was, however, weak endorsement of the strategy of generating one's own study materials. In addition, an independent measure of metacognitive self-regulation was correlated with scenario performance. Study 2 demonstrated higher prediction accuracy for students who had received targeted instruction on applied memory topics in their psychology courses, and the best performance for those students directly exposed to the original empirical studies from which the scenarios were derived. In sum, this research suggests that undergraduates are largely unaware of several specific strategies that could benefit memory for course information; further, training in applied learning and memory topics has the potential to improve metacognitive judgments in these domains.",
"title": ""
},
{
"docid": "876bbee05b7838f4de218b424d895887",
"text": "Although it is commonplace to assume that the type or level of processing during the input of a verbal item determines the representation of that item in memory, which in turn influences later attempts to store, recognize, or recall that item or similar items, it is much less common to assume that the way in which an item is retrieved from memory is also a potent determiner of that item's subsequent representation in memory. Retrieval from memory is often assumed, implicitly or explicitly, as a process analogous to the way in which the contents of a memory location in a computer are read out, that is, as a process that does not, by itself, modify the state of the retrieved item in memory. In my opinion, however, there is ample evidence for a kind of Heisenberg principle with respect to retrieval processes: an item can seldom, if ever, be retrieved from memory without modifying the representation of that item in memory in significant ways. It is both appropriate and productive, I think, to analyze retrieval processes within the same kind of levels-of-processing framework formulated by Craik and Lockhart ( 1972) with respect to input processes; this chapter is an attempt to do so. In the first of the two main sections below, I explore the extent to which negative-recency phenomena in the long-term recall of a list of items is attributable to differences in levels of retrieval during initial recall. In the second section I present some recent results from ex-",
"title": ""
},
{
"docid": "3d3e728e5587fe9fd686fca09a6a06f4",
"text": "Knowing how to manage one's own learning has become increasingly important in recent years, as both the need and the opportunities for individuals to learn on their own outside of formal classroom settings have grown. During that same period, however, research on learning, memory, and metacognitive processes has provided evidence that people often have a faulty mental model of how they learn and remember, making them prone to both misassessing and mismanaging their own learning. After a discussion of what learners need to understand in order to become effective stewards of their own learning, we first review research on what people believe about how they learn and then review research on how people's ongoing assessments of their own learning are influenced by current performance and the subjective sense of fluency. We conclude with a discussion of societal assumptions and attitudes that can be counterproductive in terms of individuals becoming maximally effective learners.",
"title": ""
},
{
"docid": "db28ae27e5c88f995c61d94f3bfcc4da",
"text": "Testing in school is usually done for purposes of assessment, to assign students grades (from tests in classrooms) or rank them in terms of abilities (in standardized tests). Yet tests can serve other purposes in educational settings that greatly improve performance; this chapter reviews 10 other benefits of testing. Retrieval practice occurring during tests can greatly enhance retention of the retrieved information (relative to no testing or even to restudying). Furthermore, besides its durability, such repeated retrieval produces knowledge that can be retrieved flexibly and transferred to other situations. On open-ended assessments (such as essay tests), retrieval practice required by tests can help students organize information and form a coherent knowledge base. Retrieval of some information on a test can also lead to easier retrieval of related information, at least on PsychologyofLearningandMotivation, Volume 55 # 2011 Elsevier Inc. ISSN 0079-7421, DOI 10.1016/B978-0-12-387691-1.00001-6 All rights reserved.",
"title": ""
},
{
"docid": "2490ad05628f62881e16338914135d17",
"text": "The authors examined the hypothesis that judgments of learning (JOL), if governed by processing fluency during encoding, should be insensitive to the anticipated retention interval. Indeed, neither item-by-item nor aggregate JOLs exhibited \"forgetting\" unless participants were asked to estimate recall rates for several different retention intervals, in which case their estimates mimicked closely actual recall rates. These results and others reported suggest that participants can access their knowledge about forgetting but only when theory-based predictions are made, and then only when the notion of forgetting is accentuated either by manipulating retention interval within individuals or by framing recall predictions in terms of forgetting rather than remembering. The authors interpret their findings in terms of the distinction between experience-based and theory-based JOLs.",
"title": ""
},
{
"docid": "2c853123a29d27c3713c8159d13c3728",
"text": "Retrieval practice is a potent technique for enhancing learning, but how often do students practice retrieval when they regulate their own learning? In 4 experiments the subjects learned foreign-language items across multiple study and test periods. When items were assigned to be repeatedly tested, repeatedly studied, or removed after they were recalled, repeated retrieval produced powerful effects on learning and retention. However, when subjects were given control over their own learning and could choose to test, study, or remove items, many subjects chose to remove items rather than practice retrieval, leading to poor retention. In addition, when tests were inserted in the learning phase, attempting retrieval improved learning by enhancing subsequent encoding during study. But when students were given control over their learning they did not attempt retrieval as early or as often as they should to promote the best learning. The experiments identify a compelling metacognitive illusion that occurs during self-regulated learning: Once students can recall an item they tend to believe they have \"learned\" it. This leads students to terminate practice rather than practice retrieval, a strategy choice that ultimately results in poor retention.",
"title": ""
}
]
negative_passages:
[
{
"docid": "72f9891b711ebc261fc081a0b356c31b",
"text": "This paper presents a flat, high gain, wide scanning, broadband continuous transverse stub (CTS) array. The design procedure, the fabrication, and an exhaustive antenna characterization are described in details. The array comprises 16 radiating slots and is fed by a corporate-feed network in hollow parallel plate waveguide (PPW) technology. A pillbox-based linear source illuminates the corporate network and allows for beam steering. The antenna is designed by using an ad hoc mode matching code recently developed for CTS arrays, providing design guidelines. The assembly technique ensures the electrical contact among the various stages of the network without using any electromagnetic choke and any bonding process. The main beam of the antenna is mechanically steered over ±40° in elevation, by moving a compact horn within the focal plane of the pillbox feeding system. Excellent performances are achieved. The features of the beam are stable within the design 27.5-31 GHz band and beyond, in the entire Ka-band (26.5-40 GHz). An antenna gain of about 29 dBi is measured at broadside at 29.25 GHz and scan losses lower than 2 dB are reported at ±40°. The antenna efficiency exceeds 80% in the whole scan range. The very good agreement between measurements and simulations validates the design procedure. The proposed design is suitable for Satcom Ka-band terminals in moving platforms, e.g., trains and planes, and also for mobile ground stations, as a multibeam sectorial antenna.",
"title": ""
},
{
"docid": "8e654ace264f8062caee76b0a306738c",
"text": "We present a fully fledged practical working application for a rule-based NLG system that is able to create non-trivial, human sounding narrative from structured data, in any language (e.g., English, German, Arabic and Finnish) and for any topic.",
"title": ""
},
{
"docid": "bb603491b2adbf26f1663a8567362ae1",
"text": "Nurses in an Armed Force Hospital (AFH) expose to stronger stress than those in a civil hospital, especially in an emergency department (ED). Ironically, stresses of these nurses received few if any attention in academic research in the past. This study collects 227 samples from the emergency departments of four armed force hospitals in central and southern Taiwan. The research indicates that the top five stressors are a massive casualty event, delayed physician support, overloads of routine work, overloads of assignments, and annoying paper work. Excessive work loading was found to be the primary source of stress. Nurses who were perceived to have greater stress levels were more inclined to deploy emotion-oriented approaches and more likely to seek job rotations. Professional stressors and problem-oriented approaches were positively correlated. Unlike other local studies, this study concludes that the excessive work-loading is more stressful in an AFH. Keywords—Emergency nurse; Job stressor; Coping behavior; Armed force hospital.",
"title": ""
},
{
"docid": "6f0d9f383c0142b43ea440e6efb2a59a",
"text": "OBJECTIVES\nTo evaluate the effect of a probiotic product in acute self-limiting gastroenteritis in dogs.\n\n\nMETHODS\nThirty-six dogs suffering from acute diarrhoea or acute diarrhoea and vomiting were included in the study. The trial was performed as a randomised, double blind and single centre study with stratified parallel group design. The animals were allocated to equal looking probiotic or placebo treatment by block randomisation with a fixed block size of six. The probiotic cocktail consisted of thermo-stabilised Lactobacillus acidophilus and live strains of Pediococcus acidilactici, Bacillus subtilis, Bacillus licheniformis and Lactobacillus farciminis.\n\n\nRESULTS\nThe time from initiation of treatment to the last abnormal stools was found to be significantly shorter (P = 0.04) in the probiotic group compared to placebo group, the mean time was 1.3 days and 2.2 days, respectively. The two groups were found nearly equal with regard to time from start of treatment to the last vomiting episode.\n\n\nCLINICAL SIGNIFICANCE\nThe probiotic tested may reduce the convalescence time in acute self-limiting diarrhoea in dogs.",
"title": ""
},
{
"docid": "a5f80f6f36f8db1673ccc57de9044b5e",
"text": "Nowadays, many modern applications, e.g. autonomous system, and cloud data services need to capture and process a big amount of raw data at runtime that ultimately necessitates a high-performance computing model. Deep Neural Network (DNN) has already revealed its learning capabilities in runtime data processing for modern applications. However, DNNs are becoming more deep sophisticated models for gaining higher accuracy which require a remarkable computing capacity. Considering high-performance cloud infrastructure as a supplier of required computational throughput is often not feasible. Instead, we intend to find a near-sensor processing solution which will lower the need for network bandwidth and increase privacy and power efficiency, as well as guaranteeing worst-case response-times. Toward this goal, we introduce ADONN framework, which aims to automatically design a highly robust DNN architecture for embedded devices as the closest processing unit to the sensors. ADONN adroitly searches the design space to find improved neural architectures. Our proposed framework takes advantage of a multi-objective evolutionary approach, which exploits a pruned design space inspired by a dense architecture. Unlike recent works that mainly have tried to generate highly accurate networks, ADONN also considers the network size factor as the second objective to build a highly optimized network fitting with limited computational resource budgets while delivers comparable accuracy level. In comparison with the best result on CIFAR-10 dataset, a generated network by ADONN presents up to 26.4 compression rate while loses only 4% accuracy. In addition, ADONN maps the generated DNN on the commodity programmable devices including ARM Processor, High-Performance CPU, GPU, and FPGA.",
"title": ""
},
{
"docid": "5e756f85b15812daf80221c8b9ae6a96",
"text": "PURPOSE\nRural-dwelling cancer survivors (CSs) are at risk for decrements in health and well-being due to decreased access to health care and support resources. This study compares the impact of cancer in rural- and urban-dwelling adult CSs living in 2 regions of the Pacific Northwest.\n\n\nMETHODS\nA convenience sample of posttreatment adult CSs (N = 132) completed the Impact of Cancer version 2 (IOCv2) and the Memorial Symptom Assessment Scale-short form. High and low scorers on the IOCv2 participated in an in-depth interview (n = 19).\n\n\nFINDINGS\nThe sample was predominantly middle-aged (mean age 58) and female (84%). Mean time since treatment completion was 6.7 years. Cancer diagnoses represented included breast (56%), gynecologic (9%), lymphoma (8%), head and neck (6%), and colorectal (5%). Comparisons across geographic regions show statistically significant differences in body concerns, worry, negative impact, and employment concerns. Rural-urban differences from interview data include access to health care, care coordination, connecting/community, thinking about death and dying, public/private journey, and advocacy.\n\n\nCONCLUSION\nThe insights into the differences and similarities between rural and urban CSs challenge the prevalent assumptions about rural-dwelling CSs and their risk for negative outcomes. A common theme across the study findings was community. Access to health care may not be the driver of the survivorship experience. Findings can influence health care providers and survivorship program development, building on the strengths of both rural and urban living and the engagement of the survivorship community.",
"title": ""
},
{
"docid": "a910a28224ac10c8b4d2781a73849499",
"text": "The computing machine Z3, buHt by Konrad Zuse from 1938 to 1941, could only execute fixed sequences of floating-point arithmetical operations (addition, subtraction, multiplication, division and square root) coded in a punched tape. We show in this paper that a single program loop containing this type of instructions can simulate any Turing machine whose tape is of bounded size. This is achieved by simulating conditional branching and indirect addressing by purely arithmetical means. Zuse's Z3 is therefore, at least in principle, as universal as today's computers which have a bounded memory size. This result is achieved at the cost of blowing up the size of the program stored on punched tape. Universal Machines and Single Loops Nobody has ever built a universal computer. The reason is that a universal computer consists, in theory, of a fixed processor and a memory of unbounded size. This is the case of Turing machines with their unbounded tapes. In the theory of general recursive functions there is also a small set of rules and some predefined functions, but there is no upper bound on the size of intermediate reduction terms. Modern computers are only potentially universal: They can perform any computation that a Turing machine with a bounded tape can perform. If more storage is required, more can be added without having to modify the processor (provided that the extra memory is still addressable).",
"title": ""
},
{
"docid": "72a6001b54359139b565f0056bd0cfe2",
"text": "Porous CuO nanosheets were prepared on alumina tubes using a facile hydrothermal method, and their morphology, microstructure, and gas-sensing properties were investigated. The monoclinic CuO nanosheets had an average thickness of 62.5 nm and were embedded with numerous holes with diameters ranging from 5 to 17 nm. The porous CuO nanosheets were used to fabricate gas sensors to detect hydrogen sulfide (H2S) operating at room temperature. The sensor showed a good response sensitivity of 1.25 with respond/recovery times of 234 and 76 s, respectively, when tested with the H2S concentrations as low as 10 ppb. It also showed a remarkably high selectivity to the H2S, but only minor responses to other gases such as SO2, NO, NO2, H2, CO, and C2H5OH. The working principle of the porous CuO nanosheet based sensor to detect the H2S was identified to be the phase transition from semiconducting CuO to a metallic conducting CuS.",
"title": ""
},
{
"docid": "ccd663355ff6070b3668580150545cea",
"text": "In this paper, the user effects on mobile terminal antennas at 28 GHz are statistically investigated with the parameters of body loss, coverage efficiency, and power in the shadow. The data are obtained from the measurements of 12 users in data and talk modes, with the antenna placed on the top and bottom of the chassis. In the measurements, the users hold the phone naturally. The radiation patterns and shadowing regions are also studied. It is found that a significant amount of power can propagate into the shadow of the user by creeping waves and diffractions. A new metric is defined to characterize this phenomenon. A mean body loss of 3.2–4 dB is expected in talk mode, which is also similar to the data mode with the bottom antenna. A body loss of 1 dB is expected in data mode with the top antenna location. The variation of the body loss between the users at 28 GHz is less than 2 dB, which is much smaller than that of the conventional cellular bands below 3 GHz. The coverage efficiency is significantly reduced in talk mode, but only slightly affected in data mode.",
"title": ""
},
{
"docid": "717ea3390ffe3f3132d4e2230e645ee5",
"text": "Much of what is known about physiological systems has been learned using linear system theory. However, many biomedical signals are apparently random or aperiodic in time. Traditionally, the randomness in biological signals has been ascribed to noise or interactions between very large numbers of constituent components. One of the most important mathematical discoveries of the past few decades is that random behavior can arise in deterministic nonlinear systems with just a few degrees of freedom. This discovery gives new hope to providing simple mathematical models for analyzing, and ultimately controlling, physiological systems. The purpose of this chapter is to provide a brief pedagogic survey of the main techniques used in nonlinear time series analysis and to provide a MATLAB tool box for their implementation. Mathematical reviews of techniques in nonlinear modeling and forecasting can be found in Refs. 1-5. Biomedical signals that have been analyzed using these techniques include heart rate [6-8], nerve activity [9], renal flow [10], arterial pressure [11], electroencephalogram [12], and respiratory waveforms [13]. Section 2 provides a brief overview of dynamical systems theory including phase space portraits, Poincare surfaces of section, attractors, chaos, Lyapunov exponents, and fractal dimensions. The forced Duffing-Van der Pol oscillator (a ubiquitous model in engineering problems) is investigated as an illustrative example. Section 3 outlines the theoretical tools for time series analysis using dynamical systems theory. Reliability checks based on forecasting and surrogate data are also described. The time series methods are illustrated using data from the time evolution of one of the dynamical variables of the forced Duffing-Van der Pol oscillator. Section 4 concludes with a discussion of possible future directions for applications of nonlinear time series analysis in biomedical processes.",
"title": ""
},
{
"docid": "719c945e9f45371f8422648e0e81178f",
"text": "As technology in the cloud increases, there has been a lot of improvements in the maturity and firmness of cloud storage technologies. Many end-users and IT managers are getting very excited about the potential benefits of cloud storage, such as being able to store and retrieve data in the cloud and capitalizing on the promise of higher-performance, more scalable and cut-price storage. In this thesis, we present a typical Cloud Storage system architecture, a referral Cloud Storage model and Multi-Tenancy Cloud Storage model, value the past and the state-ofthe-art of Cloud Storage, and examine the Edge and problems that must be addressed to implement Cloud Storage. Use cases in diverse Cloud Storage offerings were also abridged. KEYWORDS—Cloud Storage, Cloud Computing, referral model, Multi-Tenancy, survey",
"title": ""
},
{
"docid": "f83f5eaa47f4634311297886b8e2228c",
"text": "Purpose of this study is to determine whether cash flow impacts business failure prediction using the BP models (Altman z-score, or Neural Network, or any of the BP models which could be implemented having objective to predict the financial distress or more complex financial failure-bankruptcy of the banks or companies). Units of analysis are financial ratios derived from raw financial data: B/S, P&L statements (income statements) and cash flow statements of both failed and non-failed companies/corporates that have been collected from the auditing resources and reports performed. A number of these studies examined whether a cash flow improve the prediction of business failure. The authors would have the objective to show the evidence and usefulness and efficacy of statistical models such as Altman Z-score discriminant analysis bankruptcy predictive models to assess client on going concern status. Failed and non-failed companies were selected for analysis to determine whether the cash flow improves the business failure prediction aiming to proof that the cash flow certainly makes better financial distress and bankruptcy prediction possible. Key-Words: bankruptcy prediction, financial distress, financial crisis, transition economy, auditing statement, balance sheet, profit and loss accounts, income statements",
"title": ""
},
{
"docid": "e373e44d5d4445ca56a45b4800b93740",
"text": "In recent years a great deal of research efforts in ship hydromechanics have been devoted to practical navigation problems in moving larger ships safely into existing harbours and inland waterways and to ease congestion in existing shipping routes. The starting point of any navigational or design analysis lies in the accurate determination of the hydrodynamic forces generated on the ship hull moving in confined waters. The analysis of such ship motion should include the effects of shallow water. An area of particular interest is the determination of ship resistance in shallow or restricted waters at different speeds, forming the basis for the power calculation and design of the propulsion system. The present work describes the implementation of CFD techniques for determining the shallow water resistance of a river-sea ship at different speeds. The ship hull flow is analysed for different ship speeds in shallow water conditions. The results obtained from CFD analysis are compared with available standard results.",
"title": ""
},
{
"docid": "7cb61609adf6e3c56c762d6fe322903c",
"text": "In this paper, we give an overview of the BitBlaze project, a new approach to computer security via binary analysis. In particular, BitBlaze focuses on building a unified binary analysis platform and using it to provide novel solutions to a broad spectrum of different security problems. The binary analysis platform is designed to enable accurate analysis, provide an extensible architecture, and combines static and dynamic analysis as well as program verification techniques to satisfy the common needs of security applications. By extracting security-related properties from binary programs directly, BitBlaze enables a principled, root-cause based approach to computer security, offering novel and effective solutions, as demonstrated with over a dozen different security applications.",
"title": ""
},
{
"docid": "185ae8a2c89584385a810071c6003c15",
"text": "In this paper, we propose a free viewpoint image rendering method combined with filter based alpha matting for improving the image quality of image boundaries. When we synthesize a free viewpoint image, blur around object boundaries in an input image spills foreground/background color in the synthesized image. To generate smooth boundaries, alpha matting is a solution. In our method based on filtering, we make a boundary map from input images and depth maps, and then feather the map by using guided filter. In addition, we extend view synthesis method to deal the alpha channel. Experiment results show that the proposed method synthesizes 0.4 dB higher quality images than the conventional method without the matting. Also the proposed method synthesizes 0.2 dB higher quality images than the conventional method of robust matting. In addition, the computational cost of the proposed method is 100x faster than the conventional matting.",
"title": ""
},
{
"docid": "13e8fd8e8462e4bbb267f909403f9872",
"text": "Ergative case, the special case of transitive subjects, rai ses questions not only for the theory of case but also for theories of subjectho od and transitivity. This paper analyzes the case system of Nez Perce, a ”three-way erg tiv ” language, with an eye towards a formalization of the category of transitive subject . I show that it is object agreement that is determinative of transitivity, an d hence of ergative case, in Nez Perce. I further show that the transitivity condition on ergative case must be coupled with a criterion of subjecthood that makes reference to participation in subject agreement, not just to origin in a high argument-structural position. These two results suggest a formalization of the transitive subject as that ar gument uniquely accessing both high and low agreement information, the former through its (agreement-derived) connection with T and the latter through its origin in the spe cifi r of a head associated with object agreement (v). In view of these findings, I ar gue that ergative case morphology should be analyzed not as the expression of a synt ctic primitive but as the morphological spell-out of subject agreement and objec t agreement on a nominal.",
"title": ""
},
{
"docid": "25bd1930de4141a4e80441d7a1ae5b89",
"text": "Since the release of Bitcoins as crypto currency, Bitcoin has played a prominent part in the media. However, not Bitcoin but the underlying technology blockchain offers the possibility to innovatively change industries. The decentralized structure of the blockchain is particularly suitable for implementing control and business processes in microgrids, using smart contracts and decentralized applications. This paper provides a state of the art survey overview of current blockchain technology based projects with the potential to revolutionize microgrids and provides a first attempt to technically characterize different start-up approaches. The most promising use case from the microgrid perspective is peer-to-peer trading, where energy is exchanged and traded locally between consumers and prosumers. An application concept for distributed PV generation is provided in this promising area.",
"title": ""
},
{
"docid": "b5f22614e5cd76a66b754fd79299493a",
"text": "We present the architecture behind Twitter's real-time related query suggestion and spelling correction service. Although these tasks have received much attention in the web search literature, the Twitter context introduces a real-time \"twist\": after significant breaking news events, we aim to provide relevant results within minutes. This paper provides a case study illustrating the challenges of real-time data processing in the era of \"big data\". We tell the story of how our system was built twice: our first implementation was built on a typical Hadoop-based analytics stack, but was later replaced because it did not meet the latency requirements necessary to generate meaningful real-time results. The second implementation, which is the system deployed in production today, is a custom in-memory processing engine specifically designed for the task. This experience taught us that the current typical usage of Hadoop as a \"big data\" platform, while great for experimentation, is not well suited to low-latency processing, and points the way to future work on data analytics platforms that can handle \"big\" as well as \"fast\" data.",
"title": ""
},
{
"docid": "22445127362a9a2b16521a4a48f24686",
"text": "This work introduces the engineering design of a device capable to detect serum turbidity. We hypothesized that an electronic, portable, and low cost device that can provide objective, quantitative measurements of serum turbidity might have the potential to improve the early detection of neonatal sepsis. The design features, testing methodologies, and the obtained results are described. The final electronic device was evaluated in two experiments. The first one consisted in recording the turbidity value measured by the device for different solutions with known concentrations and different degrees of turbidity. The second analysis demonstrates a positive correlation between visual turbidity estimation and electronic turbidity measurement. Furthermore, our device demonstrated high turbidity in serum from two neonates with sepsis (one with a confirmed positive blood culture; the other one with a clinical diagnosis). We conclude that our electronic device may effectively measure serum turbidity at the bedside. Future studies will widen the possibility of additional clinical implications.",
"title": ""
},
{
"docid": "04a074377c86a19f1d429704ee6ff3f3",
"text": "The nature of wireless network transmission and the emerging attacks are continuously creating or exploiting more vulnerabilities. Despite the fact that the security mechanisms and protocols are constantly upgraded and enhanced, the Small Office/Home Office (SOHO) environments that cannot afford a separate authentication system, and generally adopt the IEEE 802.11 Wi-Fi-Protected-Access-2/Pre-Shared-Key (WPA2-PSK) are still exposed to some attack categories such as de-authentication attacks that aim to push wireless client to re-authenticate to the Access Point (AP) and try to capture the keys exchanged during the handshake to compromise the network security. This kind of attack is impossible to detect or prevent in spite of having an Intrusion Detection and Prevention System (IDPS) installed on the client or on the AP, especially when the attack is not repetitive and is targeting only one client. This paper proposes a novel method which can mitigate and eliminate the risk of exposing the PSK to be captured during the re-authentication process by introducing a novel re-authentication protocol relying on an enhanced four-way handshake which does not require any hardware upgrade or heavy-weight cryptography affecting the network flexibility and performances.",
"title": ""
}
]
subset: scidocsrr
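A record like the one above is the shape typically used for contrastive training or reranker evaluation: one query, a handful of relevant passages, and a larger pool of irrelevant ones. The following is a small sketch, under the assumption that rows follow the schema shown here, of flattening a record into (query, positive, negative) text triplets; the function name and the negative-sampling choice are illustrative, not part of the dataset.

```python
import random
from typing import Dict, List, Tuple

def row_to_triplets(row: Dict, max_negatives: int = 4, seed: int = 0) -> List[Tuple[str, str, str]]:
    """Flatten one record into (query, positive_text, negative_text) triplets.

    Assumes the record matches the schema shown above: a "query" string plus
    "positive_passages" / "negative_passages" lists of {"docid", "text", "title"} dicts.
    """
    rng = random.Random(seed)
    pool = row["negative_passages"]
    negatives = rng.sample(pool, k=min(max_negatives, len(pool)))
    return [
        (row["query"], pos["text"], neg["text"])
        for pos in row["positive_passages"]
        for neg in negatives
    ]

# Usage with a hand-built record shaped like the ones in this dump (texts shortened).
example_row = {
    "query": "The Critical Importance of Retrieval--and Spacing--for Learning.",
    "positive_passages": [{"docid": "p1", "text": "Retrieval practice enhances retention.", "title": ""}],
    "negative_passages": [{"docid": "n1", "text": "A flat, high gain CTS antenna array.", "title": ""}],
}
print(row_to_triplets(example_row))
```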
query_id: 062f7684afddc733806155de5506fbd2
query: Recurrent neural network training with dark knowledge transfer
positive_passages:
[
{
"docid": "35625f248c81ebb5c20151147483f3f6",
"text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.",
"title": ""
},
{
"docid": "db433a01dd2a2fd80580ffac05601f70",
"text": "While depth tends to improve network performances, it also m akes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed a t obtaining small and fast-to-execute models, and it has shown that a student netw ork could imitate the soft output of a larger teacher network or ensemble of networ ks. In this paper, we extend this idea to allow the training of a student that is d eeper and thinner than the teacher, using not only the outputs but also the inte rmediate representations learned by the teacher as hints to improve the traini ng process and final performance of the student. Because the student intermedia te hidden layer will generally be smaller than the teacher’s intermediate hidde n layer, additional parameters are introduced to map the student hidden layer to th e prediction of the teacher hidden layer. This allows one to train deeper studen s that can generalize better or run faster, a trade-off that is controlled by the ch osen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teache r network.",
"title": ""
},
{
"docid": "46fba65ad6ad888bb3908d75f0bcc029",
"text": "Deep neural network (DNN) obtains significant accuracy improvements on many speech recognition tasks and its power comes from the deep and wide network structure with a very large number of parameters. It becomes challenging when we deploy DNN on devices which have limited computational and storage resources. The common practice is to train a DNN with a small number of hidden nodes and a small senone set using the standard training process, leading to significant accuracy loss. In this study, we propose to better address these issues by utilizing the DNN output distribution. To learn a DNN with small number of hidden nodes, we minimize the Kullback–Leibler divergence between the output distributions of the small-size DNN and a standard large-size DNN by utilizing a large number of un-transcribed data. For better senone set generation, we cluster the senones in the large set into a small one by directly relating the clustering process to DNN parameters, as opposed to decoupling the senone generation and DNN training process in the standard training. Evaluated on a short message dictation task, the proposed two methods get 5.08% and 1.33% relative word error rate reduction from the standard training method, respectively.",
"title": ""
}
]
negative_passages:
[
{
"docid": "8b067b1115d4bc7c8656564bc6963d7b",
"text": "Sentence Function: Indicating the conversational purpose of speakers • Interrogative: Acquire further information from the user • Imperative: Make requests, instructions or invitations to elicit further information • Declarative: Make statements to state or explain something Response Generation Task with Specified Sentence Function • Global Control: Plan different types of words globally • Compatibility: Controllable sentence function + informative content",
"title": ""
},
{
"docid": "16c58710e1285a55d75f996c2816b9b0",
"text": "Face morphing is an effect that shows a transition from one face image to another face image smoothly. It has been widely used in various fields of work, such as animation, movie production, games, and mobile applications. Two types of methods have been used to conduct face morphing. Semi automatic mapping methods, which allow users to map corresponding pixels between two face images, can produce a smooth transition of result images. Mapping the corresponding pixel between two human face images is usually not trivial. Fully automatic methods have also been proposed for morphing between two images having similar face properties, where the results depend on the similarity of the input face images. In this project, we apply a critical point filter to determine facial features for automatically mapping the correspondence of the input face images. The critical point filters can be used to extract the main features of input face images, including color, position and edge of each facial component in the input images. An energy function is also proposed for mapping the corresponding pixels between pixels of the input face images. The experimental results show that position of each face component plays a more important role than the edge and color of the face. We can summarize that, using the critical point filter, the proposed method to generate face morphing can produce a smooth image transition with our adjusted weight function.",
"title": ""
},
{
"docid": "969b49b20271f2714ad96d739bf79f08",
"text": "Control of a robot manipulator in contact with the environment is usually conducted by the direct feedback control system using a force-torque sensor or the indirect impedance control scheme. Although these methods have been successfully applied to many applications, simultaneous control of force and position cannot be achieved. Furthermore, collision safety has been of primary concern in recent years with emergence of service robots in direct contact with humans. To cope with such problems, redundant actuation has been used to enhance the performance of a position/force controller. In this paper, the novel design of a double actuator unit (DAU) composed of double actuators and a planetary gear train is proposed to provide the capability of simultaneous control of position and force as well as the improved collision safety. Since one actuator controls position and the other actuator modulates stiffness, DAU can control the position and stiffness simultaneously at the same joint. The torque exerted on the joint can be estimated without an expensive torque/force sensor. DAU is capable of detecting dynamic collision by monitoring the speed of the stiffness modulator. Upon detection of dynamic collision, DAU immediately reduces its joint stiffness according to the collision magnitude, thus providing the optimum collision safety. It is shown from various experiments that DAU can provide good performance of position tracking, force estimation and collision safety.",
"title": ""
},
{
"docid": "166b16222ecc15048972e535dbf4cb38",
"text": "Fingerprint matching systems generally use four types of representation schemes: grayscale image, phase image, skeleton image, and minutiae, among which minutiae-based representation is the most widely adopted one. The compactness of minutiae representation has created an impression that the minutiae template does not contain sufficient information to allow the reconstruction of the original grayscale fingerprint image. This belief has now been shown to be false; several algorithms have been proposed that can reconstruct fingerprint images from minutiae templates. These techniques try to either reconstruct the skeleton image, which is then converted into the grayscale image, or reconstruct the grayscale image directly from the minutiae template. However, they have a common drawback: Many spurious minutiae not included in the original minutiae template are generated in the reconstructed image. Moreover, some of these reconstruction techniques can only generate a partial fingerprint. In this paper, a novel fingerprint reconstruction algorithm is proposed to reconstruct the phase image, which is then converted into the grayscale image. The proposed reconstruction algorithm not only gives the whole fingerprint, but the reconstructed fingerprint contains very few spurious minutiae. Specifically, a fingerprint image is represented as a phase image which consists of the continuous phase and the spiral phase (which corresponds to minutiae). An algorithm is proposed to reconstruct the continuous phase from minutiae. The proposed reconstruction algorithm has been evaluated with respect to the success rates of type-I attack (match the reconstructed fingerprint against the original fingerprint) and type-II attack (match the reconstructed fingerprint against different impressions of the original fingerprint) using a commercial fingerprint recognition system. Given the reconstructed image from our algorithm, we show that both types of attacks can be successfully launched against a fingerprint recognition system.",
"title": ""
},
{
"docid": "0de069da5fd8e5d36c399ef3da013320",
"text": "This paper explores the contrasting notions of \"permanance and disposability,\" \"the digital and the physical,\" and \"symbolism and function\" in the context of interaction design. Drawing from diverse streams of knowledge, we describe a novel design direction for enduring computational heirlooms based on the marriage of decentralized, trustless software and durable mobile hardware. To justify this concept, we review prior research; attempt to redefine the notion of \"material;\" propose blockchain-based software as a particular digital material to serve as a substrate for computational heirlooms; and argue for the use of mobile artifacts, informed in terms of their materials and formgiving practices by mechanical wristwatches, as its physical embodiment and functional counterpart. This integration is meant to enable mobile and ubiquitous interactive systems for the storing, experiencing, and exchanging value throughout multiple human lifetimes; showcasing the feats of computational sciences and crafts; and enabling novel user experiences.",
"title": ""
},
{
"docid": "b0a1cdf37eb1d78262ed663974a36793",
"text": "OBJECTIVE\nThe present study aimed at examining the time course and topography of oscillatory brain activity and event-related potentials (ERPs) in response to laterally presented affective pictures.\n\n\nMETHODS\nElectroencephalography was recorded from 129 electrodes in 10 healthy university students during presentation of pictures from the international affective picture system. Frequency measures and ERPs were obtained for pleasant, neutral, and unpleasant pictures.\n\n\nRESULTS\nIn accordance with previous reports, a modulation of the late positive ERP wave at parietal recording sites was found as a function of emotional arousal. Early mid gamma band activity (GBA; 30-45 Hz) at 80 ms post-stimulus was enhanced in response to aversive stimuli only, whereas the higher GBA (46-65 Hz) at 500 ms showed an enhancement of arousing, compared to neutral pictures. ERP and late gamma effects showed a pronounced right-hemisphere preponderance, but differed in terms of topographical distribution.\n\n\nCONCLUSIONS\nLate gamma activity may represent a correlate of widespread cortical networks processing different aspects of emotionally arousing visual objects. In contrast, differences between affective categories in early gamma activity might reflect fast detection of aversive stimulus features.",
"title": ""
},
{
"docid": "3c2684e27bfcceebb1ea093e60b18577",
"text": "Studies have explored the predictors of selfie-posting, but rarely investigated selfie-editing, a virtual makeover for online self-presentation. This study, based on social comparison theory, examined a psychological pathway from individual characteristics to selfie-editing behavior through social comparison. It was hypothesized that selfie-taking, public self-consciousness, social media use, and satisfaction with facial appearance would indirectly influence selfie-editing through social comparison of appearance (with friends or social media influencers/celebrities). A two-wave longitudinal online survey was conducted in South Korea among female smartphone users aged 20 to 39 (N 1⁄4 1064 at Wave 1 and 782 at Wave 2). The results revealed that frequent selfie-taking, higher levels of public self-consciousness, and more use of social media at Wave 1 were associated with social comparison with friends at Wave 1, which increased selfie-editing behavior at Wave 2. However, those three independent variables did not have indirect effects on selfie-editing at Wave 2 through social comparison with influencers/celebrities. Also, satisfaction with facial appearance had neither direct nor indirect effect on selfie-editing at Wave 2. The findings suggest that individuals engage in social comparison and resulting selfie-editing not because of their dissatisfaction with appearance, but because of the desire for more ideal online self-",
"title": ""
},
{
"docid": "415423f706491c5ec3df6a3b3bf48743",
"text": "The realm of human uniqueness steadily shrinks; reflecting this, other primates suffer from states closer to depression or anxiety than 'depressive-like' or 'anxiety-like behavior'. Nonetheless, there remain psychiatric domains unique to humans. Appreciating these continuities and discontinuities must inform the choice of neurobiological approach used in studying any animal model of psychiatric disorders. More fundamentally, the continuities reveal how aspects of psychiatric malaise run deeper than our species' history.",
"title": ""
},
{
"docid": "46a47931c51a3b5580580d27a9a6d132",
"text": "In airline service industry, it is difficult to collect data about customers' feedback by questionnaires, but Twitter provides a sound data source for them to do customer sentiment analysis. However, little research has been done in the domain of Twitter sentiment classification about airline services. In this paper, an ensemble sentiment classification strategy was applied based on Majority Vote principle of multiple classification methods, including Naive Bayes, SVM, Bayesian Network, C4.5 Decision Tree and Random Forest algorithms. In our experiments, six individual classification approaches, and the proposed ensemble approach were all trained and tested using the same dataset of 12864 tweets, in which 10 fold evaluation is used to validate the classifiers. The results show that the proposed ensemble approach outperforms these individual classifiers in this airline service Twitter dataset. Based on our observations, the ensemble approach could improve the overall accuracy in twitter sentiment classification for other services as well.",
"title": ""
},
{
"docid": "0512987d091d29681eb8ba38a1079cff",
"text": "Deep convolutional neural networks (CNNs) have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks on large and sparse datasets is still challenging and can require large amounts of computation and memory. In this work, we address the task of performing semantic segmentation on large data sets, such as three-dimensional medical images. We propose an adaptive sampling scheme that uses a-posterior error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. Our contribution is threefold: 1) We give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images. 2) We propose a deep dual path CNN that captures information at fine and coarse scales, resulting in a network with a large field of view and high resolution outputs. 3) We show that our method is able to attain new state-of-the-art results on the VISCERAL Anatomy benchmark.",
"title": ""
},
{
"docid": "56b2d8ffe74108d5b757c62eb7a7d31d",
"text": "Multi-label classification is an important machine learning task wherein one assigns a subset of candidate labels to an object. In this paper, we propose a new multi-label classification method based on Conditional Bernoulli Mixtures. Our proposed method has several attractive properties: it captures label dependencies; it reduces the multi-label problem to several standard binary and multi-class problems; it subsumes the classic independent binary prediction and power-set subset prediction methods as special cases; and it exhibits accuracy and/or computational complexity advantages over existing approaches. We demonstrate two implementations of our method using logistic regressions and gradient boosted trees, together with a simple training procedure based on Expectation Maximization. We further derive an efficient prediction procedure based on dynamic programming, thus avoiding the cost of examining an exponential number of potential label subsets. Experimental results show the effectiveness of the proposed method against competitive alternatives on benchmark datasets.",
"title": ""
},
{
"docid": "1d41e6f55521cdba4fc73febd09d2eb4",
"text": "1.",
"title": ""
},
{
"docid": "86de6e4d945f0d1fa7a0b699064d7bd5",
"text": "BACKGROUND\nTo increase understanding of the relationships among sexual violence, paraphilias, and mental illness, the authors assessed the legal and psychiatric features of 113 men convicted of sexual offenses.\n\n\nMETHOD\n113 consecutive male sex offenders referred from prison, jail, or probation to a residential treatment facility received structured clinical interviews for DSM-IV Axis I and II disorders, including sexual disorders. Participants' legal, sexual and physical abuse, and family psychiatric histories were also evaluated. We compared offenders with and without paraphilias.\n\n\nRESULTS\nParticipants displayed high rates of lifetime Axis I and Axis II disorders: 96 (85%) had a substance use disorder; 84 (74%), a paraphilia; 66 (58%), a mood disorder (40 [35%], a bipolar disorder and 27 [24%], a depressive disorder); 43 (38%), an impulse control disorder; 26 (23%), an anxiety disorder; 10 (9%), an eating disorder; and 63 (56%), antisocial personality disorder. Presence of a paraphilia correlated positively with the presence of any mood disorder (p <.001), major depression (p =.007), bipolar I disorder (p =.034), any anxiety disorder (p=.034), any impulse control disorder (p =.006), and avoidant personality disorder (p =.013). Although offenders without paraphilias spent more time in prison than those with paraphilias (p =.019), paraphilic offenders reported more victims (p =.014), started offending at a younger age (p =.015), and were more likely to perpetrate incest (p =.005). Paraphilic offenders were also more likely to be convicted of (p =.001) or admit to (p <.001) gross sexual imposition of a minor. Nonparaphilic offenders were more likely to have adult victims exclusively (p =.002), a prior conviction for theft (p <.001), and a history of juvenile offenses (p =.058).\n\n\nCONCLUSIONS\nSex offenders in the study population displayed high rates of mental illness, substance abuse, paraphilias, personality disorders, and comorbidity among these conditions. Sex offenders with paraphilias had significantly higher rates of certain types of mental illness and avoidant personality disorder. Moreover, paraphilic offenders spent less time in prison but started offending at a younger age and reported more victims and more non-rape sexual offenses against minors than offenders without paraphilias. On the basis of our findings, we assert that sex offenders should be carefully evaluated for the presence of mental illness and that sex offender management programs should have a capacity for psychiatric treatment.",
"title": ""
},
{
"docid": "7170a9d4943db078998e1844ad67ae9e",
"text": "Privacy has become increasingly important to the database community which is reflected by a noteworthy increase in research papers appearing in the literature. While researchers often assume that their definition of “privacy” is universally held by all readers, this is rarely the case; so many papers addressing key challenges in this domain have actually produced results that do not consider the same problem, even when using similar vocabularies. This paper provides an explicit definition of data privacy suitable for ongoing work in data repositories such as a DBMS or for data mining. The work contributes by briefly providing the larger context for the way privacy is defined legally and legislatively but primarily provides a taxonomy capable of thinking of data privacy technologically. We then demonstrate the taxonomy’s utility by illustrating how this perspective makes it possible to understand the important contribution made by researchers to the issue of privacy. The conclusion of this paper is that privacy is indeed multifaceted so no single current research effort adequately addresses the true breadth of the issues necessary to fully understand the scope of this important issue.",
"title": ""
},
{
"docid": "c7435dedf3733e3dd2285b1b04533b1c",
"text": "Deciding whether a claim is true or false often requires a deeper understanding of the evidence supporting and contradicting the claim. However, when presented with many evidence documents, users do not necessarily read and trust them uniformly. Psychologists and other researchers have shown that users tend to follow and agree with articles and sources that hold viewpoints similar to their own, a phenomenon known as confirmation bias. This suggests that when learning about a controversial topic, human biases and viewpoints about the topic may affect what is considered “trustworthy” or credible. It is an interesting challenge to build systems that can help users overcome this bias and help them decide the truthfulness of claims. In this article, we study various factors that enable humans to acquire additional information about controversial claims in an unbiased fashion. Specifically, we designed a user study to understand how presenting evidence with contrasting viewpoints and source expertise ratings affect how users learn from the evidence documents. We find that users do not seek contrasting viewpoints by themselves, but explicitly presenting contrasting evidence helps them get a well-rounded understanding of the topic. Furthermore, explicit knowledge of the credibility of the sources and the context in which the source provides the evidence document not only affects what users read but also whether they perceive the document to be credible. Introduction",
"title": ""
},
{
"docid": "f2f7b7152de3b83cc476e38eb6265fdf",
"text": "The discrimination of textures is a critical aspect of identi\"cation in digital imagery. Texture features generated by Gabor \"lters have been increasingly considered and applied to image analysis. Here, a comprehensive classi\"cation and segmentation comparison of di!erent techniques used to produce texture features using Gabor \"lters is presented. These techniques are based on existing implementations as well as new, innovative methods. The functional characterization of the \"lters as well as feature extraction based on the raw \"lter outputs are both considered. Overall, using the Gabor \"lter magnitude response given a frequency bandwidth and spacing of one octave and orientation bandwidth and spacing of 303 augmented by a measure of the texture complexity generated preferred results. ( 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9327ab4f9eba9a32211ddb39463271b1",
"text": "We investigate techniques for visualizing time series data and evaluate their effect in value comparison tasks. We compare line charts with horizon graphs - a space-efficient time series visualization technique - across a range of chart sizes, measuring the speed and accuracy of subjects' estimates of value differences between charts. We identify transition points at which reducing the chart height results in significantly differing drops in estimation accuracy across the compared chart types, and we find optimal positions in the speed-accuracy tradeoff curve at which viewers performed quickly without attendant drops in accuracy. Based on these results, we propose approaches for increasing data density that optimize graphical perception.",
"title": ""
},
{
"docid": "0f59dd09af90b911688d584292e262ed",
"text": "This article is on defining and measuring of organizational culture and its impact on the organizational performance, through an analysis of existing empirical studies and models link with the organizational culture and performance. The objective of this article is to demonstrate conceptualization, measurement and examine various concepts on organization culture and performance. After analysis of wide literature, it is found that organizational culture has deep impact on the variety of organizations process, employees and its performance. This also describes the different dimensions of the culture. Research shows that if employee are committed and having the same norms and value as per organizations have, can increase the performance toward achieving the overall organization goals. Balance Scorecard is suggested tool to measure the performance in the performance management system. More research can be done in this area to understand the nature and ability of the culture in manipulating performance of the organization. Managers and leaders are recommended to develop the strong culture in the organization to improve the overall performance of the employees and organization.",
"title": ""
},
{
"docid": "db8c9f9ba2c0bfca3a3172c915c86c1f",
"text": "In this brief, the output reachable estimation and safety verification problems for multilayer perceptron (MLP) neural networks are addressed. First, a conception called maximum sensitivity is introduced, and for a class of MLPs whose activation functions are monotonic functions, the maximum sensitivity can be computed via solving convex optimization problems. Then, using a simulation-based method, the output reachable set estimation problem for neural networks is formulated into a chain of optimization problems. Finally, an automated safety verification is developed based on the output reachable set estimation result. An application to the safety verification for a robotic arm model with two joints is presented to show the effectiveness of the proposed approaches.",
"title": ""
},
{
"docid": "a671673f330bd2b1ec14aaca9f75981a",
"text": "The aim of this study was to contrast the validity of two opposing explanatory hypotheses about the effect of online communication on adolescents' well-being. The displacement hypothesis predicts that online communication reduces adolescents' well-being because it displaces time spent with existing friends, thereby reducing the quality of these friendships. In contrast, the stimulation hypothesis states that online communication stimulates well-being via its positive effect on time spent with existing friends and the quality of these friendships. We conducted an online survey among 1,210 Dutch teenagers between 10 and 17 years of age. Using mediation analyses, we found support for the stimulation hypothesis but not for the displacement hypothesis. We also found a moderating effect of type of online communication on adolescents' well-being: Instant messaging, which was mostly used to communicate with existing friends, positively predicted well-being via the mediating variables (a) time spent with existing friends and (b) the quality of these friendships. Chat in a public chatroom, which was relatively often used to talk with strangers, had no effect on adolescents' wellbeing via the mediating variables.",
"title": ""
}
] |
scidocsrr
|
2a8be2c15aa2ccd0c22908c8e305952e
|
Whoo.ly: facilitating information seeking for hyperlocal communities using social media
|
[
{
"docid": "3a4da0cf9f4fdcc1356d25ea1ca38ca4",
"text": "Almost all of the existing work on Named Entity Recognition (NER) consists of the following pipeline stages – part-of-speech tagging, segmentation, and named entity type classification. The requirement of hand-labeled training data on these stages makes it very expensive to extend to different domains and entity classes. Even with a large amount of hand-labeled data, existing techniques for NER on informal text, such as social media, perform poorly due to a lack of reliable capitalization, irregular sentence structure and a wide range of vocabulary. In this paper, we address the lack of hand-labeled training data by taking advantage of weak super vision signals. We present our approach in two parts. First, we propose a novel generative model that combines the ideas from Hidden Markov Model (HMM) and n-gram language models into what we call an N-gram Language Markov Model (NLMM). Second, we utilize large-scale weak supervision signals from sources such as Wikipedia titles and the corresponding click counts to estimate parameters in NLMM. Our model is simple and can be implemented without the use of Expectation Maximization or other expensive iterative training techniques. Even with this simple model, our approach to NER on informal text outperforms existing systems trained on formal English and matches state-of-the-art NER systems trained on hand-labeled Twitter messages. Because our model does not require hand-labeled data, we can adapt our system to other domains and named entity classes very easily. We demonstrate the flexibility of our approach by successfully applying it to the different domain of extracting food dishes from restaurant reviews with very little extra work.",
"title": ""
},
{
"docid": "81387b0f93b68e8bd6a56a4fd81477e9",
"text": "We analyze microblog posts generated during two recent, concurrent emergency events in North America via Twitter, a popular microblogging service. We focus on communications broadcast by people who were \"on the ground\" during the Oklahoma Grassfires of April 2009 and the Red River Floods that occurred in March and April 2009, and identify information that may contribute to enhancing situational awareness (SA). This work aims to inform next steps for extracting useful, relevant information during emergencies using information extraction (IE) techniques.",
"title": ""
}
] |
[
{
"docid": "7e848e98909c69378f624ce7db31dbfa",
"text": "Phenotypically identical cells can dramatically vary with respect to behavior during their lifespan and this variation is reflected in their molecular composition such as the transcriptomic landscape. Single-cell transcriptomics using next-generation transcript sequencing (RNA-seq) is now emerging as a powerful tool to profile cell-to-cell variability on a genomic scale. Its application has already greatly impacted our conceptual understanding of diverse biological processes with broad implications for both basic and clinical research. Different single-cell RNA-seq protocols have been introduced and are reviewed here-each one with its own strengths and current limitations. We further provide an overview of the biological questions single-cell RNA-seq has been used to address, the major findings obtained from such studies, and current challenges and expected future developments in this booming field.",
"title": ""
},
{
"docid": "3d8fb085a0470b2c06336642436e9523",
"text": "The recent changes in climate have increased the importance of environmental monitoring, making it a topical and highly active research area. This field is based on remote sensing and on wireless sensor networks for gathering data about the environment. Recent advancements, such as the vision of the Internet of Things (IoT), the cloud computing model, and cyber-physical systems, provide support for the transmission and management of huge amounts of data regarding the trends observed in environmental parameters. In this context, the current work presents three different IoT-based wireless sensors for environmental and ambient monitoring: one employing User Datagram Protocol (UDP)-based Wi-Fi communication, one communicating through Wi-Fi and Hypertext Transfer Protocol (HTTP), and a third one using Bluetooth Smart. All of the presented systems provide the possibility of recording data at remote locations and of visualizing them from every device with an Internet connection, enabling the monitoring of geographically large areas. The development details of these systems are described, along with the major differences and similarities between them. The feasibility of the three developed systems for implementing monitoring applications, taking into account their energy autonomy, ease of use, solution complexity, and Internet connectivity facility, was analyzed, and revealed that they make good candidates for IoT-based solutions.",
"title": ""
},
{
"docid": "01ccb35abf3eed71191dc8638e58f257",
"text": "In this paper we describe several fault attacks on the Advanced Encryption Standard (AES). First, using optical fault induction attacks as recently publicly presented by Skorobogatov and Anderson [SA], we present an implementation independent fault attack on AES. This attack is able to determine the complete 128-bit secret key of a sealed tamper-proof smartcard by generating 128 faulty cipher texts. Second, we present several implementationdependent fault attacks on AES. These attacks rely on the observation that due to the AES's known timing analysis vulnerability (as pointed out by Koeune and Quisquater [KQ]), any implementation of the AES must ensure a data independent timing behavior for the so called AES's xtime operation. We present fault attacks on AES based on various timing analysis resistant implementations of the xtime-operation. Our strongest attack in this direction uses a very liberal fault model and requires only 256 faulty encryptions to determine a 128-bit key.",
"title": ""
},
{
"docid": "13da78e7868baf04fce64ff02690b0f0",
"text": "Industrial IoT (IIoT) refers to the application of IoT in industrial management to improve the overall operational efficiency. With IIoT that accelerates the industrial automation process by enrolling thousands of IoT devices, strong security foundations are to be deployed befitting the distributed connectivity and constrained functionalities of the IoT devices. Recent years witnessed severe attacks exploiting the vulnerabilities in the devices of IIoT networks. Moreover, attackers can use the relations among the vulnerabilities to penetrate deep into the network. This paper addresses the security issues in IIoT network because of the vulnerabilities existing in its devices. As graphs are efficient in representing relations among entities, we propose a graphical model representing the vulnerability relations in the IIoT network. This helps to formulate the security issues in the network as graph-theoretic problems. The proposed model acts as a security framework for the risk assessment of the network. Furthermore, we propose a set of risk mitigation strategies to improve the overall security of the network. The strategies include detection and removal of the attack paths with high risk and low hop-length. We also discuss a method to identify the strongly connected vulnerabilities referred as hot-spots. A use-case is discussed and various security parameters are evaluated. The simulation results with graphs of different sizes and structures are presented for the performance evaluation of the proposed techniques against the changing dynamics of the IIoT networks.",
"title": ""
},
{
"docid": "8709706ffafdadfc2fb9210794dfa782",
"text": "The increasing availability and affordability of wireless building and home automation networks has increased interest in residential and commercial building energy management. This interest has been coupled with an increased awareness of the environmental impact of energy generation and usage. Residential appliances and equipment account for 30% of all energy consumption in OECD countries and indirectly contribute to 12% of energy generation related carbon dioxide (CO2) emissions (International Energy Agency, 2003). The International Energy Association also predicts that electricity usage for residential appliances would grow by 12% between 2000 and 2010, eventually reaching 25% by 2020. These figures highlight the importance of managing energy use in order to improve stewardship of the environment. They also hint at the potential gains that are available through smart consumption strategies targeted at residential and commercial buildings. The challenge is how to achieve this objective without negatively impacting people’s standard of living or their productivity. The three primary purposes of building energy management are the reduction/management of building energy use; the reduction of electricity bills while increasing occupant comfort and productivity; and the improvement of environmental stewardship without adversely affecting standards of living. Building energy management systems provide a centralized platform for managing building energy usage. They detect and eliminate waste, and enable the efficient use electricity resources. The use of widely dispersed sensors enables the monitoring of ambient temperature, lighting, room occupancy and other inputs required for efficient management of climate control (heating, ventilation and air conditioning), security and lighting systems. Lighting and HVAC account for 50% of commercial and 40% of residential building electricity expenditure respectively, indicating that efficiency improvements in these two areas can significantly reduce energy expenditure. These savings can be made through two avenues: the first is through the use of energy-efficient lighting and HVAC systems; and the second is through the deployment of energy management systems which utilize real time price information to schedule loads to minimize energy bills. The latter scheme requires an intelligent power grid or smart grid which can provide bidirectional data flows between customers and utility companies. The smart grid is characterized by the incorporation of intelligenceand bidirectional flows of information and electricity throughout the power grid. These enhancements promise to revolutionize the grid by enabling customers to not only consume but also supply power.",
"title": ""
},
{
"docid": "d34cc5c09e882c167b3ff273f5c52159",
"text": "Received: 23 May 2011 Revised: 20 February 2012 2nd Revision: 7 September 2012 3rd Revision: 6 November 2012 Accepted: 7 November 2012 Abstract Competitive pressures are forcing organizations to be flexible. Being responsive to changing environmental conditions is an important factor in determining corporate performance. Earlier research, focusing primarily on IT infrastructure, has shown that organizational flexibility is closely related to IT infrastructure flexibility. Using real-world cases, this paper explores flexibility in the broader context of the IS function. An empirically derived framework for better understanding and managing IS flexibility is developed using grounded theory and content analysis. A process model for managing flexibility is presented; it includes steps for understanding contextual factors, recognizing reasons why flexibility is important, evaluating what needs to be flexible, identifying flexibility categories and stakeholders, diagnosing types of flexibility needed, understanding synergies and tradeoffs between them, and prescribing strategies for proactively managing IS flexibility. Three major flexibility categories, flexibility in IS operations, flexibility in IS systems & services development and deployment, and flexibility in IS management, containing 10 IS flexibility types are identified and described. European Journal of Information Systems (2014) 23, 151–184. doi:10.1057/ejis.2012.53; published online 8 January 2013",
"title": ""
},
{
"docid": "f0bbe4e6d61a808588153c6b5fc843aa",
"text": "The development of Information and Communications Technologies (ICT) has affected various fields including the automotive industry. Therefore, vehicle network protocols such as Controller Area Network (CAN), Local Interconnect Network (LIN), and FlexRay have been introduced. Although CAN is the most widely used for vehicle network protocol, its security issue is not properly addressed. In this paper, we propose a security gateway, an improved version of existing CAN gateways, to protect CAN from spoofing and DoS attacks. We analyze sequence of messages based on the driver’s behavior to resist against spoofing attack and utilize a temporary ID and SipHash algorithm to resist against DoS attack. For the verification of our proposed method, OMNeT++ is used. The suggested method shows high detection rate and low increase of traffic. Also, analysis of frame drop rate during DoS attack shows that our suggested method can defend DoS attack.",
"title": ""
},
{
"docid": "0024e332c0ce1adee2d29a0d2b4b6408",
"text": "Vehicles equipped with intelligent systems designed to prevent accidents, such as collision warning systems (CWSs) or lane-keeping assistance (LKA), are now on the market. The next step in reducing road accidents is to coordinate such vehicles in advance not only to avoid collisions but to improve traffic flow as well. To this end, vehicle-to-infrastructure (V2I) communications are essential to properly manage traffic situations. This paper describes the AUTOPIA approach toward an intelligent traffic management system based on V2I communications. A fuzzy-based control algorithm that takes into account each vehicle's safe and comfortable distance and speed adjustment for collision avoidance and better traffic flow has been developed. The proposed solution was validated by an IEEE-802.11p-based communications study. The entire system showed good performance in testing in real-world scenarios, first by computer simulation and then with real vehicles.",
"title": ""
},
{
"docid": "67fe4b931c2495c6833da493707e58d1",
"text": "Alan N. Steinberg Technical Director, Data Fusion ERIM International, Inc. 1101 Wilson Blvd Arlington, VA 22209 (703)528-5250 x4109 steinberg@erim-int.com Christopher L. Bowman Data Fusion and Neural Networks 1643 Hemlock Way Broomfield, CO 80020 (303)469-9828 cbowman@indra.com Franklin E. White Director, Program Development SPAWAR Systems Center San Diego, CA 92152 Chair, Data Fusion Group (619) 553-4036 whitefe@spawar.navy.mil",
"title": ""
},
{
"docid": "f7f5a0bedb0cae6f2d9fda528dfffcb9",
"text": "This paper focuses on the recognition of Activities of Daily Living (ADL) applying pattern recognition techniques to the data acquired by the accelerometer available in the mobile devices. The recognition of ADL is composed by several stages, including data acquisition, data processing, and artificial intelligence methods. The artificial intelligence methods used are related to pattern recognition, and this study focuses on the use of Artificial Neural Networks (ANN). The data processing includes data cleaning, and the feature extraction techniques to define the inputs for the ANN. Due to the low processing power and memory of the mobile devices, they should be mainly used to acquire the data, applying an ANN previously trained for the identification of the ADL. The main purpose of this paper is to present a new method implemented with ANN for the identification of a defined set of ADL with a reliable accuracy. This paper also presents a comparison of different types of ANN in order to choose the type for the implementation of the final method. Results of this research probes that the best accuracies are achieved with Deep Learning techniques with an accuracy higher than 80%.",
"title": ""
},
{
"docid": "66876eb3710afda075b62b915a2e6032",
"text": "In this paper we analyze the CS Principles project, a proposed Advanced Placement course, by focusing on the second pilot that took place in 2011-2012. In a previous publication the first pilot of the course was explained, but not in a context related to relevant educational research and philosophy. In this paper we analyze the content and the pedagogical approaches used in the second pilot of the project. We include information about the third pilot being conducted in 2012-2013 and the portfolio exam that is part of that pilot. Both the second and third pilots provide evidence that the CS Principles course is succeeding in changing how computer science is taught and to whom it is taught.",
"title": ""
},
{
"docid": "b08023089abd684d26fabefb038cc9fa",
"text": "IMSI catching is a problem on all generations of mobile telecommunication networks, i.e., 2G (GSM, GPRS), 3G (HDSPA, EDGE, UMTS) and 4G (LTE, LTE+). Currently, the SIM card of a mobile phone has to reveal its identity over an insecure plaintext transmission, before encryption is enabled. This identifier (the IMSI) can be intercepted by adversaries that mount a passive or active attack. Such identity exposure attacks are commonly referred to as 'IMSI catching'. Since the IMSI is uniquely identifying, unauthorized exposure can lead to various location privacy attacks. We propose a solution, which essentially replaces the IMSIs with changing pseudonyms that are only identifiable by the home network of the SIM's own network provider. Consequently, these pseudonyms are unlinkable by intermediate network providers and malicious adversaries, and therefore mitigate both passive and active attacks, which we also formally verified using ProVerif. Our solution is compatible with the current specifications of the mobile standards and therefore requires no change in the infrastructure or any of the already massively deployed network equipment. The proposed method only requires limited changes to the SIM and the authentication server, both of which are under control of the user's network provider. Therefore, any individual (virtual) provider that distributes SIM cards and controls its own authentication server can deploy a more privacy friendly mobile network that is resilient against IMSI catching attacks.",
"title": ""
},
{
"docid": "700d3e2cb64624df33ef411215d073ab",
"text": "A novel type of learning machine called support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there are comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.",
"title": ""
},
{
"docid": "70c33dda7076e182ab2440e1f37186f7",
"text": "A loss of subchannel orthogonality due to timevariant multipath channels in orthogonal frequency-division multiplexing (OFDM) systems leads to interchannel interference (ICI) which increases the error floor in proportion to the Doppler frequency. In this paper, a simple frequency-domain equalization technique which can compensate for the effect of ICI in a multipath fading channel is proposed. In this technique, the equalization of the received OFDM signal is achieved by using the assumption that the channel impulse response (CIR) varies in a linear fashion during a block period and by compensating for the ICI terms that significantly affect the bit-error rate (BER) performance.",
"title": ""
},
{
"docid": "f1f574734a9a3ba579067e3ef7ce9649",
"text": "This paper presents an integrated control approach for autonomous driving comprising a corridor path planner that determines constraints on vehicle position, and a linear time-varying model predictive controller combining path planning and tracking in a road-aligned coordinate frame. The capabilities of the approach are illustrated in obstacle-free curved road-profile tracking, in an application coupling adaptive cruise control (ACC) with obstacle avoidance (OA), and in a typical driving maneuver on highways. The vehicle is modeled as a nonlinear dynamic bicycle model with throttle, brake pedal position, and steering angle as control inputs. Proximity measurements are assumed to be available within a given range field surrounding the vehicle. The proposed general feedback control architecture includes an estimator design for fusion of database information (maps), exteroceptive as well as proprioceptive measurements, a geometric corridor planner based on graph theory for the avoidance of multiple, potentially dynamically moving objects, and a spatial-based predictive controller. Switching rules for transitioning between four different driving modes, i.e., ACC, OA, obstacle-free road tracking (RT), and controlled braking (Brake), are discussed. The proposed method is evaluated on test cases, including curved and highway two-lane road tracks with static as well as moving obstacles.",
"title": ""
},
{
"docid": "cfa58ab168beb2d52fe6c2c47488e93a",
"text": "In this paper we present our approach to automatically identify the subjectivity, polarity and irony of Italian Tweets. Our system which reaches and outperforms the state of the art in Italian is well adapted for different domains since it uses abstract word features instead of bag of words. We also present experiments carried out to study how Italian Sentiment Analysis systems react to domain changes. We show that bag of words approaches commonly used in Sentiment Analysis do not adapt well to domain changes.",
"title": ""
},
{
"docid": "22654d2ed4c921c7bceb22ce9f9dc892",
"text": "xv",
"title": ""
},
{
"docid": "8ddfa95b1300959ab5e84a0b66dac593",
"text": "Do you need the book of Network Science and Cybersecurity pdf with ISBN of 9781461475965? You will be glad to know that right now Network Science and Cybersecurity pdf is available on our book collections. This Network Science and Cybersecurity comes PDF and EPUB document format. If you want to get Network Science and Cybersecurity pdf eBook copy, you can download the book copy here. The Network Science and Cybersecurity we think have quite excellent writing style that make it easy to comprehend.",
"title": ""
},
{
"docid": "181a3d68fd5b5afc3527393fc3b276f9",
"text": "Updating inference in response to new evidence is a fundamental challenge in artificial intelligence. Many real problems require large probabilistic graphical models, containing possibly millions of interdependent variables. For such large models, jointly updating the most likely (i.e., MAP) configuration of the variables each time new evidence is encountered can be infeasible, even if inference is tractable. In this paper, we introduce budgeted online collective inference, in which the MAP configuration of a graphical model is updated efficiently by revising the assignments to a subset of the variables while holding others fixed. The goal is to selectively update certain variables without sacrificing quality with respect to full inference. To formalize the consequences of partially updating inference, we introduce the concept of inference regret. We derive inference regret bounds for a class of graphical models with strongly-convex free energies. These theoretical insights, combined with a thorough analysis of the optimization solver, motivate new approximate methods for efficiently updating the variable assignments under a budget constraint. In experiments, we demonstrate that our algorithms can reduce inference time by 65% with accuracy comparable to full inference.",
"title": ""
},
{
"docid": "0683dbfa548d90b1fcbd3d793d194e6c",
"text": "Ayurvedic medicine is an ancient Indian form of healing. It is gaining popularity as part of the growing interest in New Age spirituality and in complementary and alternative medicine (CAM). There is no cure for Asthma as per the Conventional Medical Science. Ayurvedic medicines can be a potential and effective alternative for the treatment against the bronchial asthma. Ayurvedic medicines are used for the treatment of diseases globally. The present study was a review on the management of Tamaka-Shwasa based on Ayurvedic drugs including the respiratory tonics and naturally occurring bronchodilator and immune-modulators. This study result concluded that a systematic combination of herbal and allopathic medicines is required for management of asthma.",
"title": ""
}
] |
scidocsrr
|
2f8fce164cb4453cf5498b7b0275792f
|
Accelerating Convolutional Neural Networks for Mobile Applications
|
[
{
"docid": "28c03f6fb14ed3b7d023d0983cb1e12b",
"text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.",
"title": ""
},
{
"docid": "26dac00bc328dc9c8065ff105d1f8233",
"text": "Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4 ~ 6× speed-up and 15 ~ 20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.",
"title": ""
}
] |
[
{
"docid": "69f597aac301a492892354dd593a4355",
"text": "The influence of user generated content on e-commerce websites and social media has been addressed in both practical and theoretical fields. Since most previous studies focus on either electronic word of mouth (eWOM) from e-commerce websites (EC-eWOM) or social media (SM-eWOM), little is known about the adoption process when consumers are presented EC-eWOM and SM-eWOM simultaneously. We focus on this problem by considering their adoption as an interactive process. It clarifies the mechanism of consumer’s adoption for those from the perspective of cognitive cost theory. A conceptual model is proposed about the relationship between the adoptions of the two types of eWOM. The empirical analysis shows that EC-eWOM’s usefulness and credibility positively influence the adoption of EC-eWOM, but negatively influence that of SM-eWOM. EC-eWOM adoption negatively impacts SM-eWOM adoption, and mediates the relationship between usefulness, credibility and SM-eWOM adoption. The moderating effects of consumers’ cognitive level and degree of involvement are also discussed. This paper further explains the adoption of the two types of eWOM based on the cognitive cost theory and enriches the theoretical research about eWOM in the context of social commerce. Implications for practice, as well as suggestions for future research, are also discussed. 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d1a8e3a67181cd43429a98dc38affd35",
"text": "Deep belief nets (DBNs) with multiple artificial neural networks (ANNs) have attracted many researchers recently. In this paper, we propose to compose restricted Boltzmann machine (RBM) and multi-layer perceptron (MLP) as a DBN to predict chaotic time series data, such as the Lorenz chaos and the Henon map. Experiment results showed that in the sense of prediction precision, the novel DBN performed better than the conventional DBN with RBMs.",
"title": ""
},
{
"docid": "a36944b193ca1b2423010017b08d5d2c",
"text": "Hand washing is a critical activity in preventing the spread of infection in health-care environments and food preparation areas. Several guidelines recommended a hand washing protocol consisting of six steps that ensure that all areas of the hands are thoroughly cleaned. In this paper, we describe a novel approach that uses a computer vision system to measure the user’s hands motions to ensure that the hand washing guidelines are followed. A hand washing quality assessment system needs to know if the hands are joined or separated and it has to be robust to different lighting conditions, occlusions, reflections and changes in the color of the sink surface. This work presents three main contributions: a description of a system which delivers robust hands segmentation using a combination of color and motion analysis, a single multi-modal particle filter (PF) in combination with a k-means-based clustering technique to track both hands/arms, and the implementation of a multi-class classification of hand gestures using a support vector machine ensemble. PF performance is discussed and compared with a standard Kalman filter estimator. Finally, the global performance of the system is analyzed and compared with human performance, showing an accuracy close to that of human experts.",
"title": ""
},
{
"docid": "d10c17324f8f6d4523964f10bc689d8e",
"text": "This article studied a novel Log-Periodic Dipole Antenna (LPDA) with distributed inductive load for size reduction. By adding a short circuit stub at top of the each element, the dimensions of the LPDA are reduced by nearly 50% compared to the conventional one. The impedance bandwidth of the presented antenna is nearly 122% (54~223MHz) (S11<;10dB), and this antenna is very suited for BROADCAST and TV applications.",
"title": ""
},
{
"docid": "cf1720877ddc4400bdce2a149b5ec8b4",
"text": "How do we find patterns in author-keyword associations, evolving over time? Or in data cubes (tensors), with product-branchcustomer sales information? And more generally, how to summarize high-order data cubes (tensors)? How to incrementally update these patterns over time? Matrix decompositions, like principal component analysis (PCA) and variants, are invaluable tools for mining, dimensionality reduction, feature selection, rule identification in numerous settings like streaming data, text, graphs, social networks, and many more settings. However, they have only two orders (i.e., matrices, like author and keyword in the previous example).\n We propose to envision such higher-order data as tensors, and tap the vast literature on the topic. However, these methods do not necessarily scale up, let alone operate on semi-infinite streams. Thus, we introduce a general framework, incremental tensor analysis (ITA), which efficiently computes a compact summary for high-order and high-dimensional data, and also reveals the hidden correlations. Three variants of ITA are presented: (1) dynamic tensor analysis (DTA); (2) streaming tensor analysis (STA); and (3) window-based tensor analysis (WTA). In paricular, we explore several fundamental design trade-offs such as space efficiency, computational cost, approximation accuracy, time dependency, and model complexity.\n We implement all our methods and apply them in several real settings, such as network anomaly detection, multiway latent semantic indexing on citation networks, and correlation study on sensor measurements. Our empirical studies show that the proposed methods are fast and accurate and that they find interesting patterns and outliers on the real datasets.",
"title": ""
},
{
"docid": "173d791e05859ec4cc28b9649c414c62",
"text": "Breast cancer is the most common invasive cancer in females worldwide. It usually presents with a lump in the breast with or without other manifestations. Diagnosis of breast cancer depends on physical examination, mammographic findings and biopsy results. Treatment of breast cancer depends on the stage of the disease. Lines of treatment include mainly surgical removal of the tumor followed by radiotherapy or chemotherapy. Other lines including immunotherapy, thermochemotherapy and alternative medicine may represent a hope for breast cancer",
"title": ""
},
{
"docid": "8069999c95b31e8c847091f72b694af7",
"text": "Software defined radio (SDR) is a rapidly evolving technology which implements some functional modules of a radio system in software executing on a programmable processor. SDR provides a flexible mechanism to reconfigure the radio, enabling networked devices to easily adapt to user preferences and the operating environment. However, the very mechanisms that provide the ability to reconfigure the radio through software also give rise to serious security concerns such as unauthorized modification of the software, leading to radio malfunction and interference with other users' communications. Both the SDR device and the network need to be protected from such malicious radio reconfiguration.\n In this paper, we propose a new architecture to protect SDR devices from malicious reconfiguration. The proposed architecture is based on robust separation of the radio operation environment and user application environment through the use of virtualization. A secure radio middleware layer is used to intercept all attempts to reconfigure the radio, and a security policy monitor checks the target configuration against security policies that represent the interests of various parties. Therefore, secure reconfiguration can be ensured in the radio operation environment even if the operating system in the user application environment is compromised. We have prototyped the proposed secure SDR architecture using VMware and the GNU Radio toolkit, and demonstrate that the overheads incurred by the architecture are small and tolerable. Therefore, we believe that the proposed solution could be applied to address SDR security concerns in a wide range of both general-purpose and embedded computing systems.",
"title": ""
},
{
"docid": "380fdee23bebf16b05ce7caebd6edac4",
"text": "Automatic detection of emotions has been evaluated using standard Mel-frequency Cepstral Coefficients, MFCCs, and a variant, MFCC-low, calculated between 20 and 300 Hz, in order to model pitch. Also plain pitch features have been used. These acoustic features have all been modeled by Gaussian mixture models, GMMs, on the frame level. The method has been tested on two different corpora and languages; Swedish voice controlled telephone services and English meetings. The results indicate that using GMMs on the frame level is a feasible technique for emotion classification. The two MFCC methods have similar performance, and MFCC-low outperforms the pitch features. Combining the three classifiers significantly improves performance.",
"title": ""
},
{
"docid": "9e0f3f1ec7b54c5475a0448da45e4463",
"text": "Significant effort has been devoted to designing clustering algorithms that are responsive to user feedback or that incorporate prior domain knowledge in the form of constraints. However, users desire more expressive forms of interaction to influence clustering outcomes. In our experiences working with diverse application scientists, we have identified an interaction style scatter/gather clustering that helps users iteratively restructure clustering results to meet their expectations. As the names indicate, scatter and gather are dual primitives that describe whether clusters in a current segmentation should be broken up further or, alternatively, brought back together. By combining scatter and gather operations in a single step, we support very expressive dynamic restructurings of data. Scatter/gather clustering is implemented using a nonlinear optimization framework that achieves both locality of clusters and satisfaction of user-supplied constraints. We illustrate the use of our scatter/gather clustering approach in a visual analytic application to study baffle shapes in the bat biosonar (ears and nose) system. We demonstrate how domain experts are adept at supplying scatter/gather constraints, and how our framework incorporates these constraints effectively without requiring numerous instance-level constraints.",
"title": ""
},
{
"docid": "2eba831751ae88cfb69b7c4463df438a",
"text": "ÐSoftware engineers use a number of different types of software development technical review (SDTR) for the purpose of detecting defects in software products. This paper applies the behavioral theory of group performance to explain the outcomes of software reviews. A program of empirical research is developed, including propositions to both explain review performance and identify ways of improving review performance based on the specific strengths of individuals and groups. Its contributions are to clarify our understanding of what drives defect detection performance in SDTRs and to set an agenda for future research. In identifying individuals' task expertise as the primary driver of review performance, the research program suggests specific points of leverage for substantially improving review performance. It points to the importance of understanding software reading expertise and implies the need for a reconsideration of existing approaches to managing reviews. Index TermsÐInspections, walkthroughs, technical reviews, defects, defect detection, groups, group process, group size, expertise, reading, training, behavioral research, theory, research program.",
"title": ""
},
{
"docid": "d71ac31768bf1adb80a8011360225443",
"text": "Person re-identification has recently attracted a lot of attention in the computer vision community. This is in part due to the challenging nature of matching people across cameras with different viewpoints and lighting conditions, as well as across human pose variations. The literature has since devised several approaches to tackle these challenges, but the vast majority of the work has been concerned with appearance-based methods. We propose an approach that goes beyond appearance by integrating a semantic aspect into the model. We jointly learn a discriminative projection to a joint appearance-attribute subspace, effectively leveraging the interaction between attributes and appearance for matching. Our experimental results support our model and demonstrate the performance gain yielded by coupling both tasks. Our results outperform several state-of-the-art methods on VIPeR, a standard re-identification dataset. Finally, we report similar results on a new large-scale dataset we collected and labeled for our task.",
"title": ""
},
{
"docid": "15cd1e8dba20cbcfd10a1f1b926a5f63",
"text": "Decision analysis can be defined as a set of systematic procedures for analysing complex decision problems. Differences between the desired and the actual state of real world geographical system is a spatial decision problem, which can be approached systematically by means of multi-criteria decision making. Many real-world spatially related problems give rise to geographical information system based multi-criteria decision making. Geographical information systems and multi-criteria decision making have developed largely independently, but a trend towards the exploration of their synergies is now emerging. This paper discusses the synergistic role of multi-criteria decisions in geographical information systems and the use of geographical information systems in multi-attribute decision analysis. An example is provided of analysis of land use suitability by use of either weighted linear combination methods or ordered weighting averages.",
"title": ""
},
{
"docid": "793435bef5fd93d7f58b52269fcbb839",
"text": "Learning automatically the structure of object categories remains an important open problem in computer vision. In this paper, we propose a novel unsupervised approach that can discover and learn landmarks in object categories, thus characterizing their structure. Our approach is based on factorizing image deformations, as induced by a viewpoint change or an object deformation, by learning a deep neural network that detects landmarks consistently with such visual effects. Furthermore, we show that the learned landmarks establish meaningful correspondences between different object instances in a category without having to impose this requirement explicitly. We assess the method qualitatively on a variety of object types, natural and man-made. We also show that our unsupervised landmarks are highly predictive of manually-annotated landmarks in face benchmark datasets, and can be used to regress these with a high degree of accuracy.",
"title": ""
},
{
"docid": "a2247241882074e5d27a3c3bbbde5936",
"text": "As scientific computation continues to scale, it is crucial to use floating-point arithmetic processors as efficiently as possible. Lower precision allows streaming architectures to perform more operations per second and can reduce memory bandwidth pressure on all architectures. However, using a precision that is too low for a given algorithm and data set will result in inaccurate results. Thus, developers must balance speed and accuracy when choosing the floating-point precision of their subroutines and data structures. I am investigating techniques to help developers learn about the runtime floating-point behavior of their programs, and to help them make decisions concerning the choice of precision in implementation. I propose to develop methods that will generate floating-point precision configurations, automatically testing and validating them using binary instrumentation. The goal is ultimately to make a recommendation to the developer regarding which parts of the program can be reduced to single-precision. The central thesis is that automated analysis techniques can make recommendations regarding the precision levels that each part of a computer program must use to maintain overall accuracy, with the goal of improving performance on scientific codes.",
"title": ""
},
{
"docid": "0059c0b90c2ab8729ca98569be74a3dc",
"text": "This paper describes the STAC resource, a corpus of multi-party chats annotated for discourse structure in the style of SDRT (Asher and Lascarides, 2003; Lascarides and Asher, 2009). The main goal of the STAC project is to study the discourse structure of multi-party dialogues in order to understand the linguistic strategies adopted by interlocutors to achieve their conversational goals, especially when these goals are opposed. The STAC corpus is not only a rich source of data on strategic conversation, but also the first corpus that we are aware of that provides full discourse structures for multi-party dialogues. It has other remarkable features that make it an interesting resource for other topics: interleaved threads, creative language, and interactions between linguistic and extra-linguistic contexts.",
"title": ""
},
{
"docid": "699e0a10b29fad7d259cd781457462c4",
"text": "Understanding detailed changes done to source code is of great importance in software maintenance. We present Code Flows, a method to visualize the evolution of source code geared to the understanding of fine and mid-level scale changes across several file versions. We enhance an existing visual metaphor to depict software structure changes with techniques that emphasize both following unchanged code as well as detecting and highlighting important events such as code drift, splits, merges, insertions and deletions. The method is illustrated with the analysis of a real-world C++ code system.",
"title": ""
},
{
"docid": "4e8f7fdba06ae7973e3d25cf35399aaf",
"text": "Endometriosis is a benign and common disorder that is characterized by ectopic endometrium outside the uterus. Extrapelvic endometriosis, like of the vulva, is rarely seen. We report a case of a 47-year-old woman referred to our clinic due to complaints of a vulvar mass and periodic swelling of the mass at the time of menstruation. During surgery, the cyst ruptured and a chocolate-colored liquid escaped onto the surgical field. The cyst was extirpated totally. Hipstopathological examination showed findings compatible with endometriosis. She was asked to follow-up after three weeks. The patient had no complaints and the incision field was clear at the follow-up.",
"title": ""
},
{
"docid": "09a3836f9dd429b6820daf3d2c9b2944",
"text": "Students attendance in the classroom is very important task and if taken manually wastes a lot of time. There are many automatic methods available for this purpose i.e. biometric attendance. All these methods also waste time because students have to make a queue to touch their thumb on the scanning device. This work describes the efficient algorithm that automatically marks the attendance without human intervention. This attendance is recorded by using a camera attached in front of classroom that is continuously capturing images of students, detect the faces in images and compare the detected faces with the database and mark the attendance. The paper review the related work in the field of attendance system then describes the system architecture, software algorithm and results.",
"title": ""
},
{
"docid": "e0092f7964604f7adbe9f010bbac4871",
"text": "In the last decade, Web 2.0 services such as blogs, tweets, forums, chats, email etc. have been widely used as communication media, with very good results. Sharing knowledge is an important part of learning and enhancing skills. Furthermore, emotions may affect decisionmaking and individual behavior. Bitcoin, a decentralized electronic currency system, represents a radical change in financial systems, attracting a large number of users and a lot of media attention. In this work, we investigated if the spread of the Bitcoin’s price is related to the volumes of tweets or Web Search media results. We compared trends of price with Google Trends data, volume of tweets and particularly with those that express a positive sentiment. We found significant cross correlation values, especially between Bitcoin price and Google Trends data, arguing our initial idea based on studies about trends in stock and goods market.",
"title": ""
},
{
"docid": "c16ff028e77459867eed4c2b9c1f44c6",
"text": "Neuroimage analysis usually involves learning thousands or even millions of variables using only a limited number of samples. In this regard, sparse models, e.g. the lasso, are applied to select the optimal features and achieve high diagnosis accuracy. The lasso, however, usually results in independent unstable features. Stability, a manifest of reproducibility of statistical results subject to reasonable perturbations to data and the model (Yu 2013), is an important focus in statistics, especially in the analysis of high dimensional data. In this paper, we explore a nonnegative generalized fused lasso model for stable feature selection in the diagnosis of Alzheimer’s disease. In addition to sparsity, our model incorporates two important pathological priors: the spatial cohesion of lesion voxels and the positive correlation between the features and the disease labels. To optimize the model, we propose an efficient algorithm by proving a novel link between total variation and fast network flow algorithms via conic duality. Experiments show that the proposed nonnegative model performs much better in exploring the intrinsic structure of data via selecting stable features compared with other state-of-the-arts. Introduction Neuroimage analysis is challenging due to its high feature dimensionality and data scarcity. Sparse models such as the lasso (Tibshirani 1996) have gained great reputation in statistics and machine learning, and they have been applied to the analysis of such high dimensional data by exploiting the sparsity property in the absence of abundant data. As a major result, automatic selection of relevant variables/features by such sparse formulation achieves promising performance. For example, in (Liu, Zhang, and Shen 2012), the lasso model was applied to the diagnosis of Alzheimer’s disease (AD) and showed better performance than the support vector machine (SVM), which is one of the state-of-the-arts in brain image classification. However, in statistics, it is known that the lasso does not always provide interpretable results because of its instability (Yu 2013). “Stability” here means the reproducibility of statistical results subject to reasonable perturbations to data and Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. the model. (These perturbations include the often used Jacknife, bootstrap and cross-validation.) This unstable behavior of the lasso model is critical in high dimensional data analysis. The resulting irreproducibility of the feature selection are especially undesirable in neuroimage analysis/diagnosis. However, unlike the problems such as registration and classification, the stability issue of feature selection is much less studied in this field. In this paper we propose a model to induce more stable feature selection from high dimensional brain structural Magnetic Resonance Imaging (sMRI) images. Besides sparsity, the proposed model harnesses two important additional pathological priors in brain sMRI: (i) the spatial cohesion of lesion voxels (via inducing fusion terms) and (ii) the positive correlation between the features and the disease labels. The correlation prior is based on the observation that in many brain image analysis problems (such as AD, frontotemporal dementia, corticobasal degeneration, etc), there exist strong correlations between the features and the labels. For example, gray matter of AD is degenerated/atrophied. 
Therefore, the gray matter values (indicating the volume) are positively correlated with the cognitive scores or disease labels {-1,1}. That is, the less gray matter, the lower the cognitive score. Accordingly, we propose nonnegative constraints on the variables to enforce the prior and name the model as “non-negative Generalized Fused Lasso” (nGFL). It extends the popular generalized fused lasso and enables it to explore the intrinsic structure of data via selecting stable features. To measure feature stability, we introduce the “Estimation Stability” recently proposed in (Yu 2013) and the (multi-set) Dice coefficient (Dice 1945). Experiments demonstrate that compared with existing models, our model selects much more stable (and pathological-prior consistent) voxels. It is worth mentioning that the non-negativeness per se is a very important prior of many practical problems, e.g. (Lee and Seung 1999). Although nGFL is proposed to solve the diagnosis of AD in this work, the model can be applied to more general problems. Incorporating these priors makes the problem novel w.r.t. the lasso or generalized fused lasso from an optimization standpoint. Although off-the-shelf convex solvers such as CVX (Grant and Boyd 2013) can be applied to solve the optimization, it hardly scales to high-dimensional problems in feasible time. In this regard, we propose an efficient algorithm.",
"title": ""
}
] |
scidocsrr
|
eed761de29abe175298b5f6dfb097529
|
Deep Feature Learning for Graphs
|
[
{
"docid": "a917a0ed4f9082766aeef29cb82eeb27",
"text": "Roles represent node-level connectivity patterns such as star-center, star-edge nodes, near-cliques or nodes that act as bridges to different regions of the graph. Intuitively, two nodes belong to the same role if they are structurally similar. Roles have been mainly of interest to sociologists, but more recently, roles have become increasingly useful in other domains. Traditionally, the notion of roles were defined based on graph equivalences such as structural, regular, and stochastic equivalences. We briefly revisit these early notions and instead propose a more general formulation of roles based on the similarity of a feature representation (in contrast to the graph representation). This leads us to propose a taxonomy of three general classes of techniques for discovering roles that includes (i) graph-based roles, (ii) feature-based roles, and (iii) hybrid roles. We also propose a flexible framework for discovering roles using the notion of similarity on a feature-based representation. The framework consists of two fundamental components: (a) role feature construction and (b) role assignment using the learned feature representation. We discuss the different possibilities for discovering feature-based roles and the tradeoffs of the many techniques for computing them. Finally, we discuss potential applications and future directions and challenges.",
"title": ""
},
{
"docid": "a9bc9d9098fe852d13c3355ab6f81edb",
"text": "The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.",
"title": ""
}
] |
[
{
"docid": "16bd1ca1e6320e0875dede14e7a2cc7d",
"text": "Software process is viewed as an important factor to deliver high quality products. Although there have been several Software Process Models proposed, the software processes are still short of formal descriptions. This paper presents an ontology-based approach to express software processes at the conceptual level. An OWL-based ontology for software processes, called SPO (Software Process Ontology), is designed, and it is extended to generate ontologies for specific process models, such as CMMI and ISO/IEC 15504. A prototype of a web-based process assessment tool based on SPO is developed to illustrate the advantages of this approach. Finally, some further research in this direction is outlined.",
"title": ""
},
{
"docid": "02aed3ad7a5a4a70cfb3f9f4923e3a34",
"text": "Social media platforms such as Facebook are now a ubiquitous part of everyday life for many people. New media scholars posit that the participatory culture encouraged by social media gives rise to new forms of literacy skills that are vital to learning. However, there have been few attempts to use analytics to understand the new media literacy skills that may be embedded in an individual's participation in social media. In this paper, I collect raw activity data that was shared by an exploratory sample of Facebook users. I then utilize factor analysis and regression models to show how (a) Facebook members' online activity coalesce into distinct categories of social media behavior and (b) how these participatory behaviors correlate with and predict measures of new media literacy skills. The study demonstrates the use of analytics to understand the literacies embedded in people's social media activity. The implications speak to the potential of social learning analytics to identify and predict new media literacy skills from data streams in social media platforms.",
"title": ""
},
{
"docid": "be6ed89571fbd1b0720f00d0338d514b",
"text": "We perform sensitivity analyses to assess the impact of missing data on the structural properties of social networks. The social network is conceived of as being generated by a bipartite graph, in which actors are linked together via multiple interaction contexts or affiliations. We discuss three principal missing data mechanisms: network boundary specification (non-inclusion of actors or affiliations), survey non-response, and censoring by vertex degree (fixed choice design), examining their impact on the scientific collaboration network from the Los Alamos E-print Archive as well as random bipartite graphs. The simulation results show that network boundary specification and fixed choice designs can dramatically alter estimates of network-level statistics. The observed clustering and assortativity coefficients are overestimated via omission of affiliations or fixed choice thereof, and underestimated via actor non-response, which results in inflated measurement error. We also find that social networks with multiple interaction contexts may have certain interesting properties due to the presence of overlapping cliques. In particular, assortativity by degree does not necessarily improve network robustness to random omission of nodes as predicted by current theory.",
"title": ""
},
{
"docid": "d4c55e8e70392b7f7a9bcfe325b7a0da",
"text": "BACKGROUND\nFollicular mucinosis coexisting with lymphoproliferative disorders has been thoroughly debated. However, it has been rarely reported in association with inflammatory disorders.\n\n\nMETHODS\nThirteen cases have been retrieved, and those with cutaneous lymphoma or alopecia mucinosa were excluded.\n\n\nRESULTS\nFollicular mucinosis was found in the setting of squamous cell carcinoma, seborrheic keratosis, simple prurigo, acne vulgaris, dextrometorphan-induced phototoxicity, polymorphous light eruption (2 cases), insect bite (2 cases), tick bite, discoid lupus erythematosus, drug-related vasculitis, and demodecidosis. Unexpectedly, our observations revealed a preponderating accumulation of mucin related to photo-exposed areas, sun-associated dermatoses, and histopathologic solar elastosis. The amount of mucin filling the follicles apparently correlated with the intensity of perifollicular inflammatory infiltrate, which was present in all cases. The concurrence of dermal interstitial mucin was found in 7 cases (54%).\n\n\nCONCLUSIONS\nThe concurrence of interstitial dermal mucinosis or the potential role of both ultraviolet radiation and the perifollicular inflammatory infiltrates in its pathogenesis deserves further investigations. Precise recognition and understanding of this distinctive, reactive histological pattern may prevent our patients from unnecessary diagnostic and therapeutic strategies.",
"title": ""
},
{
"docid": "87a8009147398908c79c927654f2039d",
"text": "This paper presents a new adaptive binarization technique for degraded hand-held camera-captured document images. The state-of-the-art locally adaptive binarization methods are sensitive to the values of free parameter. This problem is more critical when binarizing degraded camera-captured document images because of distortions like non-uniform illumination, bad shading, blurring, smearing and low resolution. We demonstrate in this paper that local binarization methods are not only sensitive to the selection of free parameters values (either found manually or automatically), but also sensitive to the constant free parameters values for all pixels of a document image. Some range of values of free parameters are better for foreground regions and some other range of values are better for background regions. For overcoming this problem, we present an adaptation of a state-of-the-art local binarization method such that two different set of free parameters values are used for foreground and background regions respectively. We present the use of ridges detection for rough estimation of foreground regions in a document image. This information is then used to calculate appropriate threshold using different set of free parameters values for the foreground and background regions respectively. The evaluation of the method using an OCR-based measure and a pixel-based measure show that our method achieves better performance as compared to state-of-the-art global and local binarization methods.",
"title": ""
},
{
"docid": "ac7dd65b4f09aba635d399a2bd86ff99",
"text": "We study the role of the second language in bilingual word embeddings in monolingual semantic evaluation tasks. We find strongly and weakly positive correlations between down-stream task performance and second language similarity to the target language. Additionally, we show how bilingual word embeddings can be employed for the task of semantic language classification and that joint semantic spaces vary in meaningful ways across second languages. Our results support the hypothesis that semantic language similarity is influenced by both structural similarity as well as geography/contact.",
"title": ""
},
{
"docid": "7962440362fd5b955f83784a0068f8b5",
"text": "Data warehousing is one of the major research topics of appliedside database investigators. Most of the work to date has focused on building large centralized systems that are integrated repositories founded on pre-existing systems upon which all corporate-wide data are based. Unfortunately, this approach is very expensive and tends to ignore the advantages realized during the past decade in the area of distribution and support for data localization in a geographically dispersed corporate structure. This research investigates building distributed data warehouses with particular emphasis placed on distribution design for the data warehouse environment. The article provides an architectural model for a distributed data warehouse, the formal definition of the relational data model for data warehouse and a methodology for distributed data warehouse design along with a “horizontal” fragmentation algorithm for the fact relation.",
"title": ""
},
{
"docid": "c9748c67c2ab17cfead44fe3b486883d",
"text": "Entropy coding is an integral part of most data compression systems. Huffman coding (HC) and arithmetic coding (AC) are two of the most widely used coding methods. HC can process a large symbol alphabet at each step allowing for fast encoding and decoding. However, HC typically provides suboptimal data rates due to its inherent approximation of symbol probabilities to powers of 1 over 2. In contrast, AC uses nearly accurate symbol probabilities, hence generally providing better compression ratios. However, AC relies on relatively slow arithmetic operations making the implementation computationally demanding. In this paper we discuss asymmetric numeral systems (ANS) as a new approach to entropy coding. While maintaining theoretical connections with AC, the proposed ANS-based coding can be implemented with much less computational complexity. While AC operates on a state defined by two numbers specifying a range, an ANS-based coder operates on a state defined by a single natural number such that the x ∈ ℕ state contains ≈ log2(x) bits of information. This property allows to have the entire behavior for a large alphabet summarized in the form of a relatively small table (e.g. a few kilobytes for a 256 size alphabet). The proposed approach can be interpreted as an equivalent to adding fractional bits to a Huffman coder to combine the speed of HC and the accuracy offered by AC. Additionally, ANS can simultaneously encrypt a message encoded this way. Experimental results demonstrate effectiveness of the proposed entropy coder.",
"title": ""
},
{
"docid": "84bc3c35868aa02778eef4350153c092",
"text": "Google’s PageRank method was developed to evaluate the importance of web-pages via their link structure. The mathematics of PageRank, however, are entirely general and apply to any graph or network in any domain. Thus, PageRank is now regularly used in bibliometrics, social and information network analysis, and for link prediction and recommendation. It’s even used for systems analysis of road networks, as well as biology, chemistry, neuroscience, and physics. We’ll see the mathematics and ideas that unite these diverse applications.",
"title": ""
},
{
"docid": "53e9a5a6ce764ca0d3399d7097c3a71b",
"text": "Machine Learning is a field of research aimed at constructing intelligent machines that gain and improve their skills by learning and adaptation. As such, Machine Learning research addresses several classes of learning problems, including for instance, supervised and unsupervised learning. Arguably, the most ubiquitous and realistic class of learning problems, faced by both living creatures and artificial agents, is known as Reinforcement Learning. Reinforcement Learning problems are characterized by a long-term interaction between the learning agent and a dynamic, unfamiliar, uncertain, possibly even hostile environment. Mathematically, this interaction is modeled as a Markov Decision Process (MDP). Probably the most significant contribution of this thesis is in the introduction of a new class of Reinforcement Learning algorithms, which leverage the power of a statistical set of tools known as Gaussian Processes. This new approach to Reinforcement Learning offers viable solutions to some of the major limitations of current Reinforcement Learning methods, such as the lack of confidence intervals for performance predictions, and the difficulty of appropriately reconciling exploration with exploitation. Analysis of these algorithms and their relationship with existing methods also provides us with new insights into the assumptions underlying some of the most popular Reinforcement Learning algorithms to date.",
"title": ""
},
{
"docid": "008ad9d12f1a8451f46be59eeef5bf0b",
"text": "0957-4174/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.eswa.2011.05.070 ⇑ Corresponding author. Tel.: +34 953 212898; fax: E-mail address: msaleh@ujaen.es (M. Rushdi Saleh 1 http://www.amazon.com. 2 http://www.epinions.com. 3 http://www.imdb.com. Recently, opinion mining is receiving more attention due to the abundance of forums, blogs, e-commerce web sites, news reports and additional web sources where people tend to express their opinions. Opinion mining is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. In this paper we explore this new research area applying Support Vector Machines (SVM) for testing different domains of data sets and using several weighting schemes. We have accomplished experiments with different features on three corpora. Two of them have already been used in several works. The last one has been built from Amazon.com specifically for this paper in order to prove the feasibility of the SVM for different domains. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4cc4c8fd07f30b5546be2376c1767c19",
"text": "We apply new bilevel and trilevel optimization models to make critical infrastructure more resilient against terrorist attacks. Each model features an intelligent attacker (terrorists) and a defender (us), information transparency, and sequential actions by attacker and defender. We illustrate with examples of the US Strategic Petroleum Reserve, the US Border Patrol at Yuma, Arizona, and an electrical transmission system. We conclude by reporting insights gained from the modeling experience and many “red-team” exercises. Each exercise gathers open-source data on a real-world infrastructure system, develops an appropriate bilevel or trilevel model, and uses these to identify vulnerabilities in the system or to plan an optimal defense.",
"title": ""
},
{
"docid": "2804384964bc8996e6574bdf67ed9cb5",
"text": "In the past 2 decades, correlational and experimental studies have found a positive association between violent video game play and aggression. There is less evidence, however, to support a long-term relation between these behaviors. This study examined sustained violent video game play and adolescent aggressive behavior across the high school years and directly assessed the socialization (violent video game play predicts aggression over time) versus selection hypotheses (aggression predicts violent video game play over time). Adolescents (N = 1,492, 50.8% female) were surveyed annually from Grade 9 to Grade 12 about their video game play and aggressive behaviors. Nonviolent video game play, frequency of overall video game play, and a comprehensive set of potential 3rd variables were included as covariates in each analysis. Sustained violent video game play was significantly related to steeper increases in adolescents' trajectory of aggressive behavior over time. Moreover, greater violent video game play predicted higher levels of aggression over time, after controlling for previous levels of aggression, supporting the socialization hypothesis. In contrast, no support was found for the selection hypothesis. Nonviolent video game play also did not predict higher levels of aggressive behavior over time. Our findings, and the fact that many adolescents play video games for several hours every day, underscore the need for a greater understanding of the long-term relation between violent video games and aggression, as well as the specific game characteristics (e.g., violent content, competition, pace of action) that may be responsible for this association.",
"title": ""
},
{
"docid": "5010761051983f5de1f18a11d477f185",
"text": "Financial forecasting has been challenging problem due to its high non-linearity and high volatility. An Artificial Neural Network (ANN) can model flexible linear or non-linear relationship among variables. ANN can be configured to produce desired set of output based on set of given input. In this paper we attempt at analyzing the usefulness of artificial neural network for forecasting financial data series with use of different algorithms such as backpropagation, radial basis function etc. With their ability of adapting non-linear and chaotic patterns, ANN is the current technique being used which offers the ability of predicting financial data more accurately. \"A x-y-1 network topology is adopted because of x input variables in which variable y was determined by the number of hidden neurons during network selection with single output.\" Both x and y were changed.",
"title": ""
},
{
"docid": "e92831c27bc5a65ca3b45a4f3671016c",
"text": "A library of 600 taxonomically diverse Panamanian plant extracts was screened for DPPH scavenging and UV-B protective activities, and the methanolic extracts of Mosquitoxylum jamaicense, Combretum cacoucia, and Casearia commersionia were submitted to HPLC-based activity profiling. The compounds located in the active time windows were isolated and identified as gallic acid derivatives and flavonoids. Gallic acid methyl ester (3) and digallic acid derivatives (2, 6) showed the highest DPPH scavenging activity (<10 μg/mL), while protocatechuic acid (7) and isoquercitrin (10) exhibited the highest UV-B protective properties.",
"title": ""
},
{
"docid": "5213aa65c5a291f0839046607dcf5f6c",
"text": "The distribution and mobility of chromium in the soils and sludge surrounding a tannery waste dumping area was investigated to evaluate its vertical and lateral movement of operational speciation which was determined in six steps to fractionate the material in the soil and sludge into (i) water soluble, (ii) exchangeable, (iii) carbonate bound, (iv) reducible, (v) oxidizable, and (vi) residual phases. The present study shows that about 63.7% of total chromium is mobilisable, and 36.3% of total chromium is nonbioavailable in soil, whereas about 30.2% of total chromium is mobilisable, and 69.8% of total chromium is non-bioavailable in sludge. In contaminated sites the concentration of chromium was found to be higher in the reducible phase in soils (31.3%) and oxidisable phases in sludge (56.3%) which act as the scavenger of chromium in polluted soils. These results also indicate that iron and manganese rich soil can hold chromium that will be bioavailable to plants and biota. Thus, results of this study can indicate the status of bioavailable of chromium in this area, using sequential extraction technique. So a suitable and proper management of handling tannery sludge in the said area will be urgently needed to the surrounding environment as well as ecosystems.",
"title": ""
},
{
"docid": "93a2d7072ab88ad77c23f7c1dc5a129c",
"text": "In recent decades, the need for efficient and effective image search from large databases has increased. In this paper, we present a novel shape matching framework based on structures common to similar shapes. After representing shapes as medial axis graphs, in which nodes show skeleton points and edges connect nearby points, we determine the critical nodes connecting or representing a shape’s different parts. By using the shortest path distance from each skeleton (node) to each of the critical nodes, we effectively retrieve shapes similar to a given query through a transportation-based distance function. To improve the effectiveness of the proposed approach, we employ a unified framework that takes advantage of the feature representation of the proposed algorithm and the classification capability of a supervised machine learning algorithm. A set of shape retrieval experiments including a comparison with several well-known approaches demonstrate the proposed algorithm’s efficacy and perturbation experiments show its robustness.",
"title": ""
},
{
"docid": "1a41bd991241ed1751beda2362465a0d",
"text": "Over the last decade, Convolutional Neural Networks (CNN) saw a tremendous surge in performance. However, understanding what a network has learned still proves to be a challenging task. To remedy this unsatisfactory situation, a number of groups have recently proposed different methods to visualize the learned models. In this work we suggest a general taxonomy to classify and compare these methods, subdividing the literature into three main categories and providing researchers with a terminology to base their works on. Furthermore, we introduce the FeatureVis library for MatConvNet: an extendable, easy to use open source library for visualizing CNNs. It contains implementations from each of the three main classes of visualization methods and serves as a useful tool for an enhanced understanding of the features learned by intermediate layers, as well as for the analysis of why a network might fail for certain examples.",
"title": ""
},
{
"docid": "7098df58dc9f86c9b462610f03bd97a6",
"text": "The advent of the computer and computer science, and in particular virtual reality, offers new experiment possibilities with numerical simulations and introduces a new type of investigation for the complex systems study : the in virtuo experiment. This work lies on the framework of multi-agent systems. We propose a generic model for systems biology based on reification of the interactions, on a concept of organization and on a multi-model approach. By ``reification'' we understand that interactions are considered as autonomous agents. The aim has been to combine the systemic paradigm and the virtual reality to provide an application able to collect, simulate, experiment and understand the knowledge owned by different biologists working around an interdisciplinary subject. In that case, we have been focused on the urticaria disease understanding. The method permits to integrate different natures of model. We have modeled biochemical reactions, molecular diffusion, cell organisations and mechanical interactions. It also permits to embed different expert system modeling methods like fuzzy cognitive maps.",
"title": ""
},
{
"docid": "33cf6c26de09c7772a529905d9fa6b5c",
"text": "Phase Change Memory (PCM) is a promising technology for building future main memory systems. A prominent characteristic of PCM is that it has write latency much higher than read latency. Servicing such slow writes causes significant contention for read requests. For our baseline PCM system, the slow writes increase the effective read latency by almost 2X, causing significant performance degradation.\n This paper alleviates the problem of slow writes by exploiting the fundamental property of PCM devices that writes are slow only in one direction (SET operation) and are almost as fast as reads in the other direction (RESET operation). Therefore, a write operation to a line in which all memory cells have been SET prior to the write, will incur much lower latency. We propose PreSET, an architectural technique that leverages this property to pro-actively SET all the bits in a given memory line well in advance of the anticipated write to that memory line. Our proposed design initiates a PreSET request for a memory line as soon as that line becomes dirty in the cache, thereby allowing a large window of time for the PreSET operation to complete. Our evaluations show that PreSET is more effective and incurs lower storage overhead than previously proposed write cancellation techniques. We also describe static and dynamic throttling schemes to limit the rate of PreSET operations. Our proposal reduces effective read latency from 982 cycles to 594 cycles and increases system performance by 34%, while improving the energy-delay-product by 25%.",
"title": ""
}
] |
scidocsrr
|
a55072ca1513b4d10a0da94bb461ce10
|
Brain Tumor Detection Using Image Processing
|
[
{
"docid": "4bbb2191088155c823bc152fce0dec89",
"text": "Image Segmentation is an important and challenging factor in the field of medical sciences. It is widely used for the detection of tumours. This paper deals with detection of brain tumour from MR images of the brain. The brain is the anterior most part of the nervous system. Tumour is a rapid uncontrolled growth of cells. Magnetic Resonance Imaging (MRI) is the device required to diagnose brain tumour. The normal MR images are not that suitable for fine analysis, so segmentation is an important process required for efficiently analyzing the tumour images. Clustering is suitable for biomedical image segmentation as it uses unsupervised learning. This paper work uses K-Means clustering where the detected tumour shows some abnormality which is then rectified by the use of morphological operators along with basic image processing techniques to meet the goal of separating the tumour cells from the normal cells.",
"title": ""
},
{
"docid": "1bdfcf7f162bfc8c8c51a153fd4ea437",
"text": "In this paper, modified image segmentation techniques were applied on MRI scan images in order to detect brain tumors. Also in this paper, a modified Probabilistic Neural Network (PNN) model that is based on learning vector quantization (LVQ) with image and data analysis and manipulation techniques is proposed to carry out an automated brain tumor classification using MRI-scans. The assessment of the modified PNN classifier performance is measured in terms of the training performance, classification accuracies and computational time. The simulation results showed that the modified PNN gives rapid and accurate classification compared with the image processing and published conventional PNN techniques. Simulation results also showed that the proposed system out performs the corresponding PNN system presented in [30], and successfully handle the process of brain tumor classification in MRI image with 100% accuracy when the spread value is equal to 1. These results also claim that the proposed LVQ-based PNN system decreases the processing time to approximately 79% compared with the conventional PNN which makes it very promising in the field of in-vivo brain tumor detection and identification. Keywords— Probabilistic Neural Network, Edge detection, image segmentation, brain tumor detection and identification",
"title": ""
}
] |
[
{
"docid": "504776b83a292b320aaf0d0b02947d02",
"text": "The combination of unique single nucleotide polymorphisms in the CCR5 regulatory and in the CCR2 and CCR5 coding regions, defined nine CCR5 human haplogroups (HH): HHA-HHE, HHF*1, HHF*2, HHG*1, and HHG*2. Here we examined the distribution of CCR5 HH and their association with HIV infection and disease progression in 36 HIV-seronegative and 76 HIV-seropositive whites from North America and Spain [28 rapid progressors (RP) and 48 slow progressors (SP)]. Although analyses revealed that HHE frequencies were similar between HIV-seronegative and HIV-seropositive groups (25.0% vs. 32.2%, p > 0.05), HHE frequency in RP was significantly higher than that in SP (48.2% vs. 22.9%, p = 0.002). Survival analysis also showed that HHE heterozygous and homozygous were associated with an accelerated CD4 cell count decline to less than 200 cells/microL (adjusted RH 2.44, p = 0.045; adjusted RH = 3.12, p = 0.037, respectively). These data provide further evidence that CCR5 human haplogroups influence HIV-1 disease progression in HIV-infected persons.",
"title": ""
},
{
"docid": "3296ab591724b59a808ce2f43d9320ef",
"text": "We present a novel method for removing rain streaks from a single input image by decomposing it into a rain-free background layer B and a rain-streak layer R. A joint optimization process is used that alternates between removing rain-streak details from B and removing non-streak details from R. The process is assisted by three novel image priors. Observing that rain streaks typically span a narrow range of directions, we first analyze the local gradient statistics in the rain image to identify image regions that are dominated by rain streaks. From these regions, we estimate the dominant rain streak direction and extract a collection of rain-dominated patches. Next, we define two priors on the background layer B, one based on a centralized sparse representation and another based on the estimated rain direction. A third prior is defined on the rain-streak layer R, based on similarity of patches to the extracted rain patches. Both visual and quantitative comparisons demonstrate that our method outperforms the state-of-the-art.",
"title": ""
},
{
"docid": "c337226d663e69ecde67ff6f35ba7654",
"text": "In this paper, we presented a new model for cyber crime investigation procedure which is as follows: readiness phase, consulting with profiler, cyber crime classification and investigation priority decision, damaged cyber crime scene investigation, analysis by crime profiler, suspects tracking, injurer cyber crime scene investigation, suspect summon, cyber crime logical reconstruction, writing report.",
"title": ""
},
{
"docid": "4aa0f3a526c1ca44ab84ebd2e8fc4dc6",
"text": "Blockchain is so far well-known for its potential applications in financial and banking sectors. However, blockchain as a decentralized and distributed technology can be utilized as a powerful tool for immense daily life applications. Healthcare is one of the prominent applications area among others where blockchain is supposed to make a strong impact. It is generating wide range of opportunities and possibilities in current healthcare systems. Therefore, this paper is all about exploring the potential applications of blockchain technology in current healthcare systems and highlights the most important requirements to fulfill the need of such systems such as trustless and transparent healthcare systems. In addition, this work also presents the challenges and obstacles needed to resolve before the successful adoption of blockchain technology in healthcare systems. Furthermore, we introduce the smart contract for blockchain based healthcare systems which is key for defining the pre-defined agreements among various involved stakeholders.",
"title": ""
},
{
"docid": "bd5b8680feac7b5ff806a6a40b9f73ae",
"text": "Human variation in content selection in summarization has given rise to some fundamental research questions: How can one incorporate the observed variation in suitable evaluation measures? How can such measures reflect the fact that summaries conveying different content can be equally good and informative? In this article, we address these very questions by proposing a method for analysis of multiple human abstracts into semantic content units. Such analysis allows us not only to quantify human variation in content selection, but also to assign empirical importance weight to different content units. It serves as the basis for an evaluation method, the Pyramid Method, that incorporates the observed variation and is predictive of different equally informative summaries. We discuss the reliability of content unit annotation, the properties of Pyramid scores, and their correlation with other evaluation methods.",
"title": ""
},
{
"docid": "b6e15d3931080de9a8f92d5b6e4c19e0",
"text": "A low-profile, electrically small antenna with omnidirectional vertically polarized radiation similar to a short monopole antenna is presented. The antenna features less than lambda/40 dimension in height and lambda/10 or smaller in lateral dimension. The antenna is matched to a 50 Omega coaxial line without the need for external matching. The geometry of the antenna is derived from a quarter-wave transmission line resonator fed at an appropriate location to maximize current through the short-circuited end. To improve radiation from the vertical short-circuited pin, the geometry is further modified through superposition of additional resonators placed in a parallel arrangement. The lateral dimension of the antenna is miniaturized by meandering and turning the microstrip lines into form of a multi-arm spiral. The meandering between the short-circuited end and the feed point also facilitates the impedance matching. Through this technique, spurious horizontally polarized radiation is also minimized and a radiation pattern similar to a short dipole is achieved. The antenna is designed, fabricated and measured. Parametric studies are performed to explore further size reduction and performance improvements. Based on the studies, a dual-band antenna with enhanced gain is realized. The measurements verify that the proposed fabricated antennas feature excellent impedance match, omnidirectional radiation in the horizontal plane and low levels of cross-polarization.",
"title": ""
},
{
"docid": "741f73818da4399924daac8e96ded51c",
"text": "Purpose – The purpose of this paper is to look at how knowledge management (KM) has entered into a new phase where consolidation and harmonisation of concepts is required. Some first standards have been published in Europe and Australia in order to foster a common understanding of terms and concepts. The aim of this study was to analyse KM frameworks from research and practice regarding their model elements and try to discover differences and correspondences. Design/methodology/approach – A total of 160 KM frameworks from science, practice, associations and standardization bodies have been collected worldwide. These frameworks have been analysed regarding the use and understanding of the term knowledge, the terms used to describe the knowledge process activities and the factors influencing the success of knowledge management. Quantitative and qualitative content analysis methods have been applied. Findings – The result shows that despite the wide range of terms used in the KM frameworks an underlying consensus was detected regarding the basic categories used to describe the knowledge management activities and the critical success factors of KM. Nevertheless regarding the core term knowledge there is still a need to develop an improved understanding in research and practice. Originality/value – The first quantitative and qualitative analysis of 160 KM frameworks from different origin worldwide.",
"title": ""
},
{
"docid": "78ce06926ea3b2012277755f0916fbb7",
"text": "We present a review of the historical evolution of software engineering, intertwining it with the history of knowledge engineering because \"those who cannot remember the past are condemned to repeat it.\" This retrospective represents a further step forward to understanding the current state of both types of engineerings; history has also positive experiences; some of them we would like to remember and to repeat. Two types of engineerings had parallel and divergent evolutions but following a similar pattern. We also define a set of milestones that represent a convergence or divergence of the software development methodologies. These milestones do not appear at the same time in software engineering and knowledge engineering, so lessons learned in one discipline can help in the evolution of the other one.",
"title": ""
},
{
"docid": "a85496dc96f87ba4f0018ef8bb2c8695",
"text": "The negative capacitance (NC) of ferroelectric materials has paved the way for achieving sub-60-mV/decade switching feature in complementary metal-oxide-semiconductor (CMOS) field-effect transistors, by simply inserting a ferroelectric thin layer in the gate stack. However, in order to utilize the ferroelectric capacitor (as a breakthrough technique to overcome the Boltzmann limit of the device using thermionic emission process), the thickness of the ferroelectric layer should be scaled down to sub-10-nm for ease of integration with conventional CMOS logic devices. In this paper, we demonstrate an NC fin-shaped field-effect transistor (FinFET) with a 6-nm-thick HfZrO ferroelectric capacitor. The performance parameters of NC FinFET such as on-/off-state currents and subthreshold slope are compared with those of the conventional FinFET. Furthermore, a repetitive and reliable steep switching feature of the NC FinFET at various drain voltages is demonstrated.",
"title": ""
},
{
"docid": "413c4d1115e8042cce44308583649279",
"text": "With the growing popularity of microblogging services such as Twitter in recent years, an increasing number of users are using these services in their daily lives. The huge volume of information generated by users raises new opportunities in various applications and areas. Inferring user interests plays a significant role in providing personalized recommendations on microblogging services, and also on third-party applications providing social logins via these services, especially in cold-start situations. In this survey, we review user modeling strategies with respect to inferring user interests from previous studies. To this end, we focus on four dimensions of inferring user interest profiles: (1) data collection, (2) representation of user interest profiles, (3) construction and enhancement of user interest profiles, and (4) the evaluation of the constructed profiles. Through this survey, we aim to provide an overview of state-of-the-art user modeling strategies for inferring user interest profiles on microblogging social networks with respect to the four dimensions. For each dimension, we review and summarize previous studies based on specified criteria. Finally, we discuss some challenges and opportunities for future work in this research domain.",
"title": ""
},
{
"docid": "f30e54728a10e416d61996c082197f5b",
"text": "This paper describes an efficient and straightforward methodology for OCR-ing and post-correcting Arabic text material on Islamic embryology collected for the COBHUNI project. As the target texts of the project include diverse diachronic stages of the Arabic language, the team of annotators for performing the OCR post-correction requires well-trained experts on language skills. While technical skills are also desirable, highly trained language experts typically lack enough technical knowledge. Furthermore, a relatively small portion of the target texts needed to be OCR-ed, as most of the material was already on some digital form. Thus, the OCR task could only require a small amount of resources in terms of time and work complexity. Both the low technical skills of the annotators and the resource constraints made it necessary for us to find an easy-to-develop and suitable workflow for performing the OCR and post-correction tasks. For the OCR phase, we chose Tesseract Open Source OCR Engine, because it achieves state-of-the-art levels of accuracy. For the post-correction phase, we decided to use the Proofread Page extension of the MediaWiki software, as it strikes a perfect balance between usability and efficiency. The post-correction task was additionally supported by the implementation of an error checker based on simple heuristics. The application of this methodology resulted in the successful and fast OCR-ing and post-correction of a corpus of 36,132 tokens.",
"title": ""
},
{
"docid": "b0c62e2049ea4f8ada0d506e06adb4bb",
"text": "In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.",
"title": ""
},
{
"docid": "548fb90bf9d665e57ced0547db1477b7",
"text": "In the application of face recognition, eyeglasses could significantly degrade the recognition accuracy. A feasible method is to collect large-scale face images with eyeglasses for training deep learning methods. However, it is difficult to collect the images with and without glasses of the same identity, so that it is difficult to optimize the intra-variations caused by eyeglasses. In this paper, we propose to address this problem in a virtual synthesis manner. The high-fidelity face images with eyeglasses are synthesized based on 3D face model and 3D eyeglasses. Models based on deep learning methods are then trained on the synthesized eyeglass face dataset, achieving better performance than previous ones. Experiments on the real face database validate the effectiveness of our synthesized data for improving eyeglass face recognition performance.",
"title": ""
},
{
"docid": "a7fe7068ce05260603ca697a8e5e8410",
"text": "In this paper, we will introduce our newly developed 3D simulation system for miniature unmanned aerial vehicles (UAVs) navigation and control in GPS-denied environments. As we know, simulation technologies can verify the algorithms and identify potential problems before the actual flight test and to make the physical implementation smoothly and successfully. To enhance the capability of state-of-the-art of research-oriented UAV simulation system, we develop a 3D simulator based on robot operation system (ROS) and a game engine, Unity3D. Unity3D has powerful graphics and can support high-fidelity 3D environments and sensor modeling which is important when we simulate sensing technologies in cluttered and harsh environments. On the other hand, ROS can provide clear software structure and simultaneous operation between hardware devices for actual UAVs. By developing data transmitting interface and necessary sensor modeling techniques, we have successfully glued ROS and Unity together. The integrated simulator can handle real-time multi-UAV navigation and control algorithms, including online processing of a large number of sensor data.",
"title": ""
},
{
"docid": "0165273958cc8385d371024e89f87d15",
"text": "Traditional, persistent data-oriented approaches in computer forensics face some limitations regarding a number of technological developments, e.g., rapidly increasing storage capabilities of hard drives, memory-resident malicious software applications, or the growing use of encryption routines, that make an in-time investigation more and more difficult. In order to cope with these issues, security professionals have started to examine alternative data sources and emphasize the value of volatile system information in RAM more recently. In this paper, we give an overview of the prevailing techniques and methods to collect and analyze a computer's memory. We describe the characteristics, benefits, and drawbacks of the individual solutions and outline opportunities for future research in this evolving field of IT security. Highlights Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
},
{
"docid": "1859b356a614bdffbc009c365173ab1d",
"text": "Anxiety disorders are among the most common psychiatric illnesses, and acupuncture treatment is widely accepted in the clinic without the side effects seen from various medications. We designed a scalp acupuncture treatment protocol by locating two new stimulation areas. The area one is between Yintang (M-HN-3) and Shangxing (DU-23) and Shenting (DU-24), and the area two is between Taiyang (M-HN-9) and Tianchong (GB-9) and Shuaigu (GB-8). By stimulating these two areas with high-frequency continuous electric waves, remarkable immediate and long-term effects for anxiety disorders have been observed in our practice. The first case was a 70-year-old male with general anxiety disorder (GAD) and panic attacks at night. The scalp acupuncture treatment protocol was applied with electric stimulation for 45 minutes once every week. After four sessions of acupuncture treatments, the patient reported that he did not have panic attacks at night and he had no feelings of anxiety during the day. Follow-up 4 weeks later confirmed that he did not have any episodes of panic attacks and he had no anxiety during the day since his last acupuncture treatment. The second case was a 35-year-old male who was diagnosed with posttraumatic stress disorder (PTSD) with a history of providing frontline trauma care as a Combat Medics from the Iraq combat field. He also had 21 broken bones and multiple concussions from his time in the battlefield. He had symptoms of severe anxiety, insomnia, nightmares with flashbacks, irritability, and bad temper. He also had chest pain, back pain, and joint pain due to injuries. The above treatment protocol was performed with 30 minutes of electric stimulation each time in combination with body acupuncture for pain management. After weekly acupuncture treatment for the first two visits, the patient reported that he felt less anxious and that his sleep was getting better with fewer nightmares. After six sessions of acupuncture treatments, the patient completely recovered from PTSD, went back to work, and now lives a healthy and happy family life. The above cases and clinical observation show that the scalp acupuncture treatment protocol with electric stimulation has a significant clinic outcome for GAD, panic disorder and PTSD. The possible mechanism of action of scalp acupuncture on anxiety disorder may be related to overlapping modulatory effects on the cortical structures (orbitofrontal cortex [OFC]) and medial prefrontal cortex [mPFC]) and subcortical/limbic regions (amygdala and hippocampus), and biochemical effect of acupuncture through immunohistochemistry (norepinephrine, serotonin) performed directly to the brain tissue for anxiety disorders.",
"title": ""
},
{
"docid": "3655319a1d2ff7f4bc43235ba02566bd",
"text": "In high-performance systems, stencil computations play a crucial role as they appear in a variety of different fields of application, ranging from partial differential equation solving, to computer simulation of particles’ interaction, to image processing and computer vision. The computationally intensive nature of those algorithms created the need for solutions to efficiently implement them in order to save both execution time and energy. This, in combination with their regular structure, has justified their widespread study and the proposal of largely different approaches to their optimization.\n However, most of these works are focused on aggressive compile time optimization, cache locality optimization, and parallelism extraction for the multicore/multiprocessor domain, while fewer works are focused on the exploitation of custom architectures to further exploit the regular structure of Iterative Stencil Loops (ISLs), specifically with the goal of improving power efficiency.\n This work introduces a methodology to systematically design power-efficient hardware accelerators for the optimal execution of ISL algorithms on Field-programmable Gate Arrays (FPGAs). As part of the methodology, we introduce the notion of Streaming Stencil Time-step (SST), a streaming-based architecture capable of achieving both low resource usage and efficient data reuse thanks to an optimal data buffering strategy, and we introduce a technique called SSTs queuing that is capable of delivering a pseudolinear execution time speedup with constant bandwidth.\n The methodology has been validated on significant benchmarks on a Virtex-7 FPGA using the Xilinx Vivado suite. Results demonstrate how the efficient usage of the on-chip memory resources realized by an SST allows one to treat problem sizes whose implementation would otherwise not be possible via direct synthesis of the original, unmanipulated code via High-Level Synthesis (HLS). We also show how the SSTs queuing effectively ensures a pseudolinear throughput speedup while consuming constant off-chip bandwidth.",
"title": ""
},
{
"docid": "b66301704785cb8bc44ca6cb584b8806",
"text": "For many software projects, bug tracking systems play a central role in supporting collaboration between the developers and the users of the software. To better understand this collaboration and how tool support can be improved, we have quantitatively and qualitatively analysed the questions asked in a sample of 600 bug reports from the MOZILLA and ECLIPSE projects. We categorised the questions and analysed response rates and times by category and project. Our results show that the role of users goes beyond simply reporting bugs: their active and ongoing participation is important for making progress on the bugs they report. Based on the results, we suggest four ways in which bug tracking systems can be improved.",
"title": ""
},
{
"docid": "e3cce1cb8d46721da50560ffdf1a92c6",
"text": "BACKGROUND\nMinimalist shoes have gained popularity recently because it is speculated to strengthen the foot muscles and foot arches, which may help to resist injuries. However, previous studies provided limited evidence supporting the link between changes in muscle size and footwear transition. Therefore, this study sought to examine the effects of minimalist shoes on the intrinsic and extrinsic foot muscle volume in habitual shod runners. The relationship between participants' compliance with the minimalist shoes and changes in muscle õvolume was also evaluated.\n\n\nMETHODS\nTwenty habitual shod runners underwent a 6-month self-monitoring training program designed for minimalist shoe transition. Another 18 characteristics-matched shod runners were also introduced with the same program but they maintained running practice with standard shoes. Runners were monitored using an online surveillance platform during the program. We measured overall intrinsic and extrinsic foot muscle volume before and after the program using MRI scans.\n\n\nFINDINGS\nRunners in the experimental group exhibited significantly larger leg (P=0.01, Cohen's d=0.62) and foot (P<0.01, Cohen's d=0.54) muscle after transition. Foot muscle growth was mainly contributed by the forefoot (P<0.01, Cohen's d=0.64) but not the rearfoot muscle (P=0.10, Cohen's d=0.30). Leg and foot muscle volume of runners in the control group remained similar after the program (P=0.33-0.95). A significant positive correlation was found between participants' compliance with the minimalist shoes and changes in leg muscle volume (r=0.51; P=0.02).\n\n\nINTERPRETATION\nHabitual shod runners who transitioned to minimalist shoes demonstrated significant increase in leg and foot muscle volume. Additionally, the increase in leg muscle volume was significantly correlated associated with the compliance of minimalist shoe use.",
"title": ""
},
{
"docid": "7568cb435d0211248e431d865b6a477e",
"text": "We propose prosody embeddings for emotional and expressive speech synthesis networks. The proposed methods introduce temporal structures in the embedding networks, thus enabling fine-grained control of the speaking style of the synthesized speech. The temporal structures can be designed either on the speech side or the text side, leading to different control resolutions in time. The prosody embedding networks are plugged into end-to-end speech synthesis networks and trained without any other supervision except for the target speech for synthesizing. It is demonstrated that the prosody embedding networks learned to extract prosodic features. By adjusting the learned prosody features, we could change the pitch and amplitude of the synthesized speech both at the frame level and the phoneme level. We also introduce the temporal normalization of prosody embeddings, which shows better robustness against speaker perturbations during prosody transfer tasks.",
"title": ""
}
] |
scidocsrr
|
80c4f4c108fd6c075a1d8e50ee7b0fb8
|
Software-Defined and Virtualized Future Mobile and Wireless Networks: A Survey
|
[
{
"docid": "83355e7d2db67e42ec86f81909cfe8c1",
"text": "everal protocols for routing and forwarding in Wireless Mesh Networks (WMN) have been proposed, such as AODV, OLSR or B.A.T.M.A.N. However, providing support for e.g. flow-based routing where flows of one source take different paths through the network is hard to implement in a unified way using traditional routing protocols. OpenFlow is an emerging technology which makes network elements such as routers or switches programmable via a standardized interface. By using virtualization and flow-based routing, OpenFlow enables a rapid deployment of novel packet forwarding and routing algorithms, focusing on fixed networks. We propose an architecture that integrates OpenFlow with WMNs and provides such flow-based routing and forwarding capabilities. To demonstrate the feasibility of our OpenFlow based approach, we have implemented a simple solution to solve the problem of client mobility in a WMN which handles the fast migration of client addresses (e.g. IP addresses) between Mesh Access Points and the interaction with re-routing without the need for tunneling. Measurements from a real mesh testbed (KAUMesh) demonstrate the feasibility of our approach based on the evaluation of forwarding performance, control traffic and rule activation time.",
"title": ""
},
{
"docid": "4d66a85651a78bfd4f7aba290c21f9a7",
"text": "Mobile carrier networks follow an architecture where network elements and their interfaces are defined in detail through standardization, but provide limited ways to develop new network features once deployed. In recent years we have witnessed rapid growth in over-the-top mobile applications and a 10-fold increase in subscriber traffic while ground-breaking network innovation took a back seat. We argue that carrier networks can benefit from advances in computer science and pertinent technology trends by incorporating a new way of thinking in their current toolbox. This article introduces a blueprint for implementing current as well as future network architectures based on a software-defined networking approach. Our architecture enables operators to capitalize on a flow-based forwarding model and fosters a rich environment for innovation inside the mobile network. In this article, we validate this concept in our wireless network research laboratory, demonstrate the programmability and flexibility of the architecture, and provide implementation and experimentation details.",
"title": ""
}
] |
[
{
"docid": "5c3358aa3d9a931ba7c9186b1f5a2362",
"text": "Compared with word-level and sentence-level convolutional neural networks (ConvNets), the character-level ConvNets has a better applicability for misspellings and typos input. Due to this, recent researches for text classification mainly focus on character-level ConvNets. However, while the majority of these researches employ English corpus for the character-level text classification, few researches have been done using Chinese corpus. This research hopes to bridge this gap, exploring character-level ConvNets for Chinese corpus test classification. We have constructed a large-scale Chinese dataset, and the result shows that character-level ConvNets works better on Chinese character dataset than its corresponding pinyin format dataset, which is the general solution in previous researches. This is the first time that character-level ConvNets has been applied to Chinese character dataset for text classification problem.",
"title": ""
},
{
"docid": "8147143579de86a5eeb668037c2b8c5d",
"text": "In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the bias/variance tradeoff. The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any learning procedure adheres to its training data. At one end of the scale (high variance), models can entertain very complex hypotheses, allowing them to fit a wide variety of data very closely--but as a result can generalize poorly, a phenomenon called overfitting. At the other end of the scale (high bias), models make relatively simple and inflexible assumptions, and as a result may fit the data poorly, called underfitting. Exemplar and prototype models of category formation are at opposite ends of this scale: prototype models are highly biased, in that they assume a simple, standard conceptual form (the prototype), while exemplar models have very little bias but high variance, allowing them to fit virtually any combination of training data. We investigated human learners' position on this spectrum by confronting them with category structures at variable levels of intrinsic complexity, ranging from simple prototype-like categories to much more complex multimodal ones. The results show that human learners adopt an intermediate point on the bias/variance continuum, inconsistent with either of the poles occupied by most conventional approaches. We present a simple model that adjusts (regularizes) the complexity of its hypotheses in order to suit the training data, which fits the experimental data better than representative exemplar and prototype models.",
"title": ""
},
{
"docid": "f63da8e7659e711bcb7a148ea12a11f2",
"text": "We have presented two CCA-based approaches for data fusion and group analysis of biomedical imaging data and demonstrated their utility on fMRI, sMRI, and EEG data. The results show that CCA and M-CCA are powerful tools that naturally allow the analysis of multiple data sets. The data fusion and group analysis methods presented are completely data driven, and use simple linear mixing models to decompose the data into their latent components. Since CCA and M-CCA are based on second-order statistics they provide a relatively lessstrained solution as compared to methods based on higherorder statistics such as ICA. While this can be advantageous, the flexibility also tends to lead to solutions that are less sparse than those obtained using assumptions of non-Gaussianity-in particular superGaussianity-at times making the results more difficult to interpret. Thus, it is important to note that both approaches provide complementary perspectives, and hence it is beneficial to study the data using different analysis techniques.",
"title": ""
},
{
"docid": "1a9670cc170343073fba2a5820619120",
"text": "Occlusions present a great challenge for pedestrian detection in practical applications. In this paper, we propose a novel approach to simultaneous pedestrian detection and occlusion estimation by regressing two bounding boxes to localize the full body as well as the visible part of a pedestrian respectively. For this purpose, we learn a deep convolutional neural network (CNN) consisting of two branches, one for full body estimation and the other for visible part estimation. The two branches are treated differently during training such that they are learned to produce complementary outputs which can be further fused to improve detection performance. The full body estimation branch is trained to regress full body regions for positive pedestrian proposals, while the visible part estimation branch is trained to regress visible part regions for both positive and negative pedestrian proposals. The visible part region of a negative pedestrian proposal is forced to shrink to its center. In addition, we introduce a new criterion for selecting positive training examples, which contributes largely to heavily occluded pedestrian detection. We validate the effectiveness of the proposed bi-box regression approach on the Caltech and CityPersons datasets. Experimental results show that our approach achieves promising performance for detecting both non-occluded and occluded pedestrians, especially heavily occluded ones.",
"title": ""
},
{
"docid": "209203c297898a2251cfd62bdfc37296",
"text": "Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computerbased problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.",
"title": ""
},
{
"docid": "aecaa8c028c4d1098d44d755344ad2fc",
"text": "It is known that training deep neural networks, in particular, deep convolutional networks, with aggressively reduced numerical precision is challenging. The stochastic gradient descent algorithm becomes unstable in the presence of noisy gradient updates resulting from arithmetic with limited numeric precision. One of the wellaccepted solutions facilitating the training of low precision fixed point networks is stochastic rounding. However, to the best of our knowledge, the source of the instability in training neural networks with noisy gradient updates has not been well investigated. This work is an attempt to draw a theoretical connection between low numerical precision and training algorithm stability. In doing so, we will also propose and verify through experiments methods that are able to improve the training performance of deep convolutional networks in fixed point.",
"title": ""
},
{
"docid": "c45b962006b2bb13ab57fe5d643e2ca6",
"text": "Physical activity has a positive impact on people's well-being, and it may also decrease the occurrence of chronic diseases. Activity recognition with wearable sensors can provide feedback to the user about his/her lifestyle regarding physical activity and sports, and thus, promote a more active lifestyle. So far, activity recognition has mostly been studied in supervised laboratory settings. The aim of this study was to examine how well the daily activities and sports performed by the subjects in unsupervised settings can be recognized compared to supervised settings. The activities were recognized by using a hybrid classifier combining a tree structure containing a priori knowledge and artificial neural networks, and also by using three reference classifiers. Activity data were collected for 68 h from 12 subjects, out of which the activity was supervised for 21 h and unsupervised for 47 h. Activities were recognized based on signal features from 3-D accelerometers on hip and wrist and GPS information. The activities included lying down, sitting and standing, walking, running, cycling with an exercise bike, rowing with a rowing machine, playing football, Nordic walking, and cycling with a regular bike. The total accuracy of the activity recognition using both supervised and unsupervised data was 89% that was only 1% unit lower than the accuracy of activity recognition using only supervised data. However, the accuracy decreased by 17% unit when only supervised data were used for training and only unsupervised data for validation, which emphasizes the need for out-of-laboratory data in the development of activity-recognition systems. The results support a vision of recognizing a wider spectrum, and more complex activities in real life settings.",
"title": ""
},
{
"docid": "c330e97f4c7c3478670e55991ac2293c",
"text": "The MoveLab was an educational research intervention centering on a community of African American and Hispanic girls as they began to transform their self-concept in relation to computing and dance while creating technology enhanced dance performances. Students within underrepresented populations in computing often do not perceive the identity of a computer scientist as aligning with their interests or value system, leading to rejection of opportunities to participate within the discipline. To engage diverse populations in computing, we need to better understand how to support students in navigating conflicts between identities with computing and their personal interest and values. Using the construct of self-concept, we observed students in the workshop creating both congruence and dissension between their self-concept and computing. We found that creating multiple roles for participation, fostering a socially supportive community, and integrating student values within the curriculum led to students forming congruence between their self-concept and the disciplines of computing and dance.",
"title": ""
},
{
"docid": "f7792dbc29356711c2170d5140030142",
"text": "A C-Ku band GaN monolithic microwave integrated circuit (MMIC) transmitter/receiver (T/R) frontend module with a novel RF interface structure has been successfully developed by using multilayer ceramics technology. This interface improves the insertion loss with wideband characteristics operating up to 40 GHz. The module contains a GaN power amplifier (PA) with output power higher than 10 W over 6–18 GHz and a GaN low-noise amplifier (LNA) with a gain of 15.9 dB over 3.2–20.4 GHz and noise figure (NF) of 2.3–3.7 dB over 4–18 GHz. A fabricated T/R module occupying only 12 × 30 mm2 delivers an output power of 10 W up to the Ku-band. To our knowledge, this is the first demonstration of a C-Ku band T/R frontend module using GaN MMICs with wide bandwidth, 10W output power, and small size operating up to the Ku-band.",
"title": ""
},
{
"docid": "01c6476bfa806af6c35898199ad9c169",
"text": "This paper presents nonlinear tracking control systems for a quadrotor unmanned aerial vehicle under the influence of uncertainties. Assuming that there exist unstructured disturbances in the translational dynamics and the attitude dynamics, a geometric nonlinear adaptive controller is developed directly on the special Euclidean group. In particular, a new form of an adaptive control term is proposed to guarantee stability while compensating the effects of uncertainties in quadrotor dynamics. A rigorous mathematical stability proof is given. The desirable features are illustrated by numerical example and experimental results of aggressive maneuvers.",
"title": ""
},
{
"docid": "262c11ab9f78e5b3f43a31ad22cf23c5",
"text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.",
"title": ""
},
{
"docid": "2f8f1f2db01eeb9a47591e77bb1c835a",
"text": "We present an input method which enables complex hands-free interaction through 3d handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. Motion sensing is done wirelessly by accelerometers and gyroscopes which are attached to the back of the hand. We propose a two-stage approach for spotting and recognition of handwriting gestures. The spotting stage uses a Support Vector Machine to identify data segments which contain handwriting. The recognition stage uses Hidden Markov Models (HMM) to generate the text representation from the motion sensor data. Individual characters are modeled by HMMs and concatenated to word models. Our system can continuously recognize arbitrary sentences, based on a freely definable vocabulary with over 8000 words. A statistical language model is used to enhance recognition performance and restrict the search space. We report the results from a nine-user experiment on sentence recognition for person dependent and person independent setups on 3d-space handwriting data. For the person independent setup, a word error rate of 11% is achieved, for the person dependent setup 3% are achieved. We evaluate the spotting algorithm in a second experiment on a realistic dataset including everyday activities and achieve a sample based recall of 99% and a precision of 25%. We show that additional filtering in the recognition stage can detect up to 99% of the false positive segments.",
"title": ""
},
{
"docid": "ec0d1addabab76d9c2bd044f0bfe3153",
"text": "Much of scientific progress stems from previously published findings, but searching through the vast sea of scientific publications is difficult. We often rely on metrics of scholarly authority to find the prominent authors but these authority indices do not differentiate authority based on research topics. We present Latent Topical-Authority Indexing (LTAI) for jointly modeling the topics, citations, and topical authority in a corpus of academic papers. Compared to previous models, LTAI differs in two main aspects. First, it explicitly models the generative process of the citations, rather than treating the citations as given. Second, it models each author’s influence on citations of a paper based on the topics of the cited papers, as well as the citing papers. We fit LTAI into four academic corpora: CORA, Arxiv Physics, PNAS, and Citeseer. We compare the performance of LTAI against various baselines, starting with the latent Dirichlet allocation, to the more advanced models including author-link topic model and dynamic author citation topic model. The results show that LTAI achieves improved accuracy over other similar models when predicting words, citations and authors of publications.",
"title": ""
},
{
"docid": "76c7b343d2f03b64146a0d6ed2d60668",
"text": "Three important stages within automated 3D object reconstruction via multi-image convergent photogrammetry are image pre-processing, interest point detection for feature-based matching and triangular mesh generation. This paper investigates approaches to each of these. The Wallis filter is initially examined as a candidate image pre-processor to enhance the performance of the FAST interest point operator. The FAST algorithm is then evaluated as a potential means to enhance the speed, robustness and accuracy of interest point detection for subsequent feature-based matching. Finally, the Poisson Surface Reconstruction algorithm for wireframe mesh generation of objects with potentially complex 3D surface geometry is evaluated. The outcomes of the investigation indicate that the Wallis filter, FAST interest operator and Poisson Surface Reconstruction algorithms present distinct benefits in the context of automated image-based object reconstruction. The reported investigation has advanced the development of an automatic procedure for high-accuracy point cloud generation in multi-image networks, where robust orientation and 3D point determination has enabled surface measurement and visualization to be implemented within a single software system.",
"title": ""
},
{
"docid": "f8ba12d3fd6ebf65429a2ce5f5143dbd",
"text": "The contour-guided color palette (CCP) is proposed for robust image segmentation. It efficiently integrates contour and color cues of an image. To find representative colors of an image, color samples along long contours between regions, similar in spirit to machine learning methodology that focus on samples near decision boundaries, are collected followed by the mean-shift (MS) algorithm in the sampled color space to achieve an image-dependent color palette. This color palette provides a preliminary segmentation in the spatial domain, which is further fine-tuned by post-processing techniques such as leakage avoidance, fake boundary removal, and small region mergence. Segmentation performances of CCP and MS are compared and analyzed. While CCP offers an acceptable standalone segmentation result, it can be further integrated into the framework of layered spectral segmentation to produce a more robust segmentation. The superior performance of CCP-based segmentation algorithm is demonstrated by experiments on the Berkeley Segmentation Dataset.",
"title": ""
},
{
"docid": "a21f04b6c8af0b38b3b41f79f2661fa6",
"text": "While Enterprise Architecture Management is an established and widely discussed field of interest in the context of information systems research, we identify a lack of work regarding quality assessment of enterprise architecture models in general and frameworks or methods on that account in particular. By analyzing related work by dint of a literature review in a design science research setting, we provide twofold contributions. We (i) suggest an Enterprise Architecture Model Quality Framework (EAQF) and (ii) apply it to a real world scenario. Keywords—Enterprise Architecture, model quality, quality framework, EA modeling.",
"title": ""
},
{
"docid": "34ba1323c4975a566f53e2873231e6ad",
"text": "This paper describes the motivation, the realization, and the experience of incorporating simulation and hardware implementation into teaching computer organization and architecture to computer science students. It demonstrates that learning by doing has helped students to truly understand how a computer is constructed and how it really works in practice. Correlated with textbook material, a set of simulation and implementation projects were created on the basis of the work that students had done in previous homework and laboratory activities. Students can thus use these designs as building blocks for completing more complex projects at a later time. The projects cover a wide range of topics from simple adders up to ALU's and CPU's. These processors operate in a virtual manner on certain short assembly-language programs. Specifically, this paper shares the experience of using simulation tools (Alterareg Quartus II) and reconfigurable hardware prototyping platforms (Alterareg UP2 development boards)",
"title": ""
},
{
"docid": "8e1befc4318a2dd32d59acac49e2374c",
"text": "The use of Social Network Sites (SNS) is increasing nowadays especially by the younger generations. The availability of SNS allows users to express their interests, feelings and share daily routine. Many researchers prove that using user-generated content (UGC) in a correct way may help determine people's mental health levels. Mining the UGC could help to predict the mental health levels and depression. Depression is a serious medical illness, which interferes most with the ability to work, study, eat, sleep and having fun. However, from the user profile in SNS, we can collect all the information that relates to person's mood, and negativism. In this research, our aim is to investigate how SNS user's posts can help classify users according to mental health levels. We propose a system that uses SNS as a source of data and screening tool to classify the user using artificial intelligence according to the UGC on SNS. We created a model that classify the UGC using two different classifiers: Support Vector Machine (SVM), and Naïve Bayes.",
"title": ""
},
{
"docid": "601488a8e576d465a0bddd65a937c5c8",
"text": "Human activity recognition is an area of growing interest facilitated by the current revolution in body-worn sensors. Activity recognition allows applications to construct activity profiles for each subject which could be used effectively for healthcare and safety applications. Automated human activity recognition systems face several challenges such as number of sensors, sensor precision, gait style differences, and others. This work proposes a machine learning system to automatically recognise human activities based on a single body-worn accelerometer. The in-house collected dataset contains 3D acceleration of 50 subjects performing 10 different activities. The dataset was produced to ensure robustness and prevent subject-biased results. The feature vector is derived from simple statistical features. The proposed method benefits from RGB-to-YIQ colour space transform as kernel to transform the feature vector into more discriminable features. The classification technique is based on an adaptive boosting ensemble classifier. The proposed system shows consistent classification performance up to 95% accuracy among the 50 subjects.",
"title": ""
},
{
"docid": "6c3f320eda59626bedb2aad4e527c196",
"text": "Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user provides personal trust values for a small number of other users. We compose these trusts to compute the trust a user should place in any other user in the network. A user is not assigned a single trust rank. Instead, different users may have different trust values for the same user. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.",
"title": ""
}
] |
scidocsrr
|
4594d2f085929dd9ae7bbe4f815a8a93
|
Next Generation Cloud Computing: New Trends and Research Directions
|
[
{
"docid": "56a35139eefd215fe83811281e4e2279",
"text": "Querying graph data is a fundamental problem that witnesses an increasing interest especially for massive graph databases which come as a promising alternative to relational databases for big data modeling. In this paper, we study the problem of subgraph isomorphism search which consists to enumerate the embedding of a query graph in a data graph. The most known solutions of this NPcomplete problem are backtracking-based and result in a high computational cost when we deal with massive graph databases. We address this problem and its challenges via graph compression with modular decomposition. In our approach, subgraph isomorphism search is performed on compressed graphs without decompressing them yielding substantial reduction of the search space and consequently a significant saving in processing time as well as in storage space for the graphs. We evaluated our algorithms on nine real-word datasets. The experimental results show that our approach is efficient and scalable. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0ef173f7c32074bfebeab524354de1ec",
"text": "Text classification is an important problem with many applications. Traditional approaches represent text as a bagof-words and build classifiers based on this representation. Rather than words, entity phrases, the relations between the entities, as well as the types of the entities and relations carry much more information to represent the texts. This paper presents a novel text as network classification framework, which introduces 1) a structured and typed heterogeneous information networks (HINs) representation of texts, and 2) a meta-path based approach to link texts. We show that with the new representation and links of texts, the structured and typed information of entities and relations can be incorporated into kernels. Particularly, we develop both simple linear kernel and indefinite kernel based on metapaths in the HIN representation of texts, where we call them HIN-kernels. Using Freebase, a well-known world knowledge base, to construct HIN for texts, our experiments on two benchmark datasets show that the indefinite HIN-kernel based on weighted meta-paths outperforms the state-of-theart methods and other HIN-kernels.",
"title": ""
},
{
"docid": "9b10757ca3ca84784033c20f064078b7",
"text": "Snafu, or Snake Functions, is a modular system to host, execute and manage language-level functions offered as stateless (micro-)services to diverse external triggers. The system interfaces resemble those of commercial FaaS providers but its implementation provides distinct features which make it overall useful to research on FaaS and prototyping of FaaSbased applications. This paper argues about the system motivation in the presence of already existing alternatives, its design and architecture, the open source implementation and collected metrics which characterise the system.",
"title": ""
}
] |
[
{
"docid": "db54705e3d975b6abba54a854e3e1158",
"text": "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as \"modularity\" over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets.",
"title": ""
},
{
"docid": "268e0e06a23f495cc36958dafaaa045a",
"text": "Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one’s experiences—a hallmark of human intelligence from infancy—remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between “hand-engineering” and “end-to-end” learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias—the graph network—which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have also released an open-source software library for building graph networks, with demonstrations of how to use them in practice.",
"title": ""
},
{
"docid": "2d20574f353950f7805e85c55e023d37",
"text": "Stress has effect on speech characteristics and can influence the quality of speech. In this paper, we study the effect of SleepDeprivation (SD) on speech characteristics and classify Normal Speech (NS) and Sleep Deprived Speech (SDS). One of the indicators of sleep deprivation is flattened voice. We examine pitch and harmonic locations to analyse flatness of voice. To investigate, we compute the spectral coefficients that can capture the variations of pitch and harmonic patterns. These are derived using Two-Layer Cascaded-Subband Filter spread according to the pitch and harmonic frequency scale. Hidden Markov Model (HMM) is employed for statistical modeling. We use DCIEM map task corpus to conduct experiments. The analysis results show that SDS has less variation of pitch and harmonic pattern than NS. In addition, we achieve the relatively high accuracy for classification of Normal Speech (NS) and Sleep Deprived Speech (SDS) using proposed spectral coefficients.",
"title": ""
},
{
"docid": "43efacf740f920fb621cf870cb9102ce",
"text": "Vehicular Ad hoc Network (VANETs) help improve efficiency of security applications and road safety. Using the information exchanged between vehicles, the latter can warn drivers about dangerous situations. Detection and warning about such situations require reliable communication between vehicles. In fact, the IEEE 802.11p (WAVE: Wireless Access in the Vehicular Environment) was proposed to support the rapid exchange of data between the vehicles. Several Medium Access Control (MAC) protocols were also introduced for safety application VANET. In this paper, we present the different MAC basic protocols in VANET. We used simulation to compare and analyze their performances.",
"title": ""
},
{
"docid": "f1a7d6f8ae1e6b9ef837be3835f5b750",
"text": "of the 5th and 6th DIMACS Implementation Challenges, Goldwasser Johnson, and McGeoch (eds), American Mathematical Society, 2002. A Theoretician's Guide to the Experimental Analysis of Algorithms David S. Johnson AT&T Labs { Research http://www.research.att.com/ dsj/ November 25, 2001 Abstract This paper presents an informal discussion of issues that arise when one attempts to analyze algorithms experimentally. It is based on lessons learned by the author over the course of more than a decade of experimentation, survey paper writing, refereeing, and lively discussions with other experimentalists. Although written from the perspective of a theoretical computer scientist, it is intended to be of use to researchers from all elds who want to study algorithms experimentally. It has two goals: rst, to provide a useful guide to new experimentalists about how such work can best be performed and written up, and second, to challenge current researchers to think about whether their own work might be improved from a scienti c point of view. With the latter purpose in mind, the author hopes that at least a few of his recommendations will be considered controversial.",
"title": ""
},
{
"docid": "cdc5655770d58139ee3fb548022be2d5",
"text": "We propose a data mining approach to predict human wine taste preferences that is based on easily available analytical tests at the certification step. A large dataset (when compared to other studies in this domain) is considered, with white and red vinho verde samples (from Portugal). Three regression techniques were applied, under a computationally efficient procedure that performs simultaneous variable and model selection. The support vector machine achieved promising results, outperforming the multiple regression and neural network methods. Such model is useful to support the oenologist wine tasting evaluations and improve wine production. Furthermore, similar techniques can help in target marketing by modeling consumer tastes from niche markets.",
"title": ""
},
{
"docid": "2d43992a8eb6e97be676c04fc9ebd8dd",
"text": "Social interactions and interpersonal communication has undergone significant changes in recent years. Increasing awareness of privacy issues and events such as the Snowden disclosures have led to the rapid growth of a new generation of anonymous social networks and messaging applications. By removing traditional concepts of strong identities and social links, these services encourage communication between strangers, and allow users to express themselves without fear of bullying or retaliation.\n Despite millions of users and billions of monthly page views, there is little empirical analysis of how services like Whisper have changed the shape and content of social interactions. In this paper, we present results of the first large-scale empirical study of an anonymous social network, using a complete 3-month trace of the Whisper network covering 24 million whispers written by more than 1 million unique users. We seek to understand how anonymity and the lack of social links affect user behavior. We analyze Whisper from a number of perspectives, including the structure of user interactions in the absence of persistent social links, user engagement and network stickiness over time, and content moderation in a network with minimal user accountability. Finally, we identify and test an attack that exposes Whisper users to detailed location tracking. We have notified Whisper and they have taken steps to address the problem.",
"title": ""
},
{
"docid": "d063f8a20e2b6522fe637794e27d7275",
"text": "Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words.\n The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method.",
"title": ""
},
{
"docid": "db7bc8bbfd7dd778b2900973f2cfc18d",
"text": "In this paper, the self-calibration of micromechanical acceleration sensors is considered, specifically, based solely on user-generated movement data without the support of laboratory equipment or external sources. The autocalibration algorithm itself uses the fact that under static conditions, the squared norm of the measured sensor signal should match the magnitude of the gravity vector. The resulting nonlinear optimization problem is solved using robust statistical linearization instead of the common analytical linearization for computing bias and scale factors of the accelerometer. To control the forgetting rate of the calibration algorithm, artificial process noise models are developed and compared with conventional ones. The calibration methodology is tested using arbitrarily captured acceleration profiles of the human daily routine and shows that the developed algorithm can significantly reject any misconfiguration of the acceleration sensor.",
"title": ""
},
{
"docid": "89865dbb80fcb2d9c5d4d4fe4fe10b83",
"text": "Elaborate efforts have been made to eliminate fake markings and refine <inline-formula> <tex-math notation=\"LaTeX\">${\\omega }$ </tex-math></inline-formula>-markings in the existing modified or improved Karp–Miller trees for various classes of unbounded Petri nets since the late 1980s. The main issues fundamentally are incurred due to the generation manners of the trees that prematurely introduce some potentially unbounded markings with <inline-formula> <tex-math notation=\"LaTeX\">${\\omega }$ </tex-math></inline-formula> symbols and keep their growth into new ones. Aiming at addressing them, this work presents a non-Karp–Miller tree called a lean reachability tree (LRT). First, a sufficient and necessary condition of the unbounded places and some reachability properties are established to reveal the features of unbounded nets. Then, we present an LRT generation algorithm with a sufficiently enabling condition (SEC). When generating a tree, SEC requires that the components of a covering node are not replaced by <inline-formula> <tex-math notation=\"LaTeX\">${\\omega }$ </tex-math></inline-formula> symbols, but continue to grow until any transition on an output path of an unbounded place has been branch-enabled at least once. In return, no fake marking is produced and no legal marking is lost during the tree generation. We prove that LRT can faithfully express by folding, instead of equivalently representing, the reachability set of an unbounded net. Also, some properties of LRT are examined and a sufficient condition of deadlock existence based on it is given. The case studies show that LRT outperforms the latest modified Karp–Miller trees in terms of size, expressiveness, and applicability. It can be applied to the analysis of the emerging discrete event systems with infinite states.",
"title": ""
},
{
"docid": "9b793826ceb4891f95c7e8b2ef7d72b4",
"text": "Machine-to-machine (M2M) communication, also referred to as Internet of Things (IoT), is a global network of devices such as sensors, actuators, and smart appliances which collect information, and can be controlled and managed in real time over the Internet. Due to their universal coverage, cellular networks and the Internet together offer the most promising foundation for the implementation of M2M communication. With the worldwide deployment of the fourth generation (4G) of cellular networks, the long-term evolution (LTE) and LTE-advanced standards have defined several quality-of-service classes to accommodate the M2M traffic. However, cellular networks are mainly optimized for human-to-human (H2H) communication. The characteristics of M2M traffic are different from the human-generated traffic and consequently create sever problems in both radio access and the core networks (CNs). This survey on M2M communication in LTE/LTE-A explores the issues, solutions, and the remaining challenges to enable and improve M2M communication over cellular networks. We first present an overview of the LTE networks and discuss the issues related to M2M applications on LTE. We investigate the traffic issues of M2M communications and the challenges they impose on both access channel and traffic channel of a radio access network and the congestion problems they create in the CN. We present a comprehensive review of the solutions for these problems which have been proposed in the literature in recent years and discuss the advantages and disadvantages of each method. The remaining challenges are also discussed in detail.",
"title": ""
},
{
"docid": "a64bf1840a6f7d82d5ca4dc10bf87453",
"text": "Cloud-based wireless networking system applies centralized resource pooling to improve operation efficiency. Fog-based wireless networking system reduces latency by placing processing units in the network edge. Confluence of fog and cloud design paradigms in 5G radio access network will better support diverse applications. In this article, we describe the recent advances in fog radio access network (F-RAN) research, hybrid fog-cloud architecture, and system design issues. Furthermore, the GPP platform facilitates the confluence of computational and communications processing. Through observations from GPP platform testbed experiments and simulations, we discuss the opportunities of integrating the GPP platform with F-RAN architecture.",
"title": ""
},
{
"docid": "06525bcc03586c8d319f5d6f1d95b852",
"text": "Many different automatic color correction approaches have been proposed by different research communities in the past decade. However, these approaches are seldom compared, so their relative performance and applicability are unclear. For multi-view image and video stitching applications, an ideal color correction approach should be effective at transferring the color palette of the source image to the target image, and meanwhile be able to extend the transferred color from the overlapped area to the full target image without creating visual artifacts. In this paper we evaluate the performance of color correction approaches for automatic multi-view image and video stitching. We consider nine color correction algorithms from the literature applied to 40 synthetic image pairs and 30 real mosaic image pairs selected from different applications. Experimental results show that both parametric and non-parametric approaches have members that are effective at transferring colors, while parametric approaches are generally better than non-parametric approaches in extendability.",
"title": ""
},
{
"docid": "d89ba95eb3bd7aca4a7acb17be973c06",
"text": "An UWB elliptical slot antenna embedded with open-end slit on the tuning stub or parasitic strip on the aperture for achieving the band-notch characteristics has been proposed in this conference. Experimental results have also confirmed band-rejection capability for the proposed antenna at the desired band, as well as nearly omni-direction radiation features is still preserved. Finally, how to shrink the geometry dimensions of the UWB antenna will be investigated in the future.",
"title": ""
},
{
"docid": "134e5a0da9a6aa9b3c5e10a69803c3a3",
"text": "The objectives of this study were to determine the prevalence of overweight and obesity in Turkey, and to investigate their association with age, gender, and blood pressure. A crosssectional population-based study was performed. A total of 20,119 inhabitants (4975 women and 15,144 men, age > 20 years) from 11 Anatolian cities in four geographic regions were screened for body weight, height, and systolic and diastolic blood pressure between the years 1999 and 2000. The overall prevalence rate of overweight was 25.0% and of obesity was 19.4%. The prevalence of overweight among women was 24.3% and obesity 24.6%; 25.9% of men were overweight, and 14.4% were obese. Mean body mass index (BMI) of the studied population was 27.59 +/- 4.61 kg/m(2). Mean systolic and diastolic blood pressure for women were 131.0 +/- 41.0 and 80.2 +/- 16.3 mm Hg, and for men 135.0 +/- 27.3 and 83.2 +/- 16.0 mm Hg. There was a positive linear correlation between BMI and blood pressure, and between age and blood pressure in men and women. Obesity and overweight are highly prevalant in Turkey, and they constitute independent risk factors for hypertension.",
"title": ""
},
{
"docid": "13c8d93a834e4a82f229239dc26d8775",
"text": "The popularity of Twitter for information discovery, coupled with the automatic shortening of URLs to save space, given the 140 character limit, provides cybercriminals with an opportunity to obfuscate the URL of a malicious Web page within a tweet. Once the URL is obfuscated, the cybercriminal can lure a user to click on it with enticing text and images before carrying out a cyber attack using a malicious Web server. This is known as a drive-by download. In a drive-by download a user's computer system is infected while interacting with the malicious endpoint, often without them being made aware the attack has taken place. An attacker can gain control of the system by exploiting unpatched system vulnerabilities and this form of attack currently represents one of the most common methods employed. In this paper we build a machine learning model using machine activity data and tweet metadata to move beyond post-execution classification of such URLs as malicious, to predict a URL will be malicious with 0.99 F-measure (using 10-fold cross-validation) and 0.833 (using an unseen test set) at 1 s into the interaction with the URL. Thus, providing a basis from which to kill the connection to the server before an attack has completed and proactively blocking and preventing an attack, rather than reacting and repairing at a later date.",
"title": ""
},
{
"docid": "46b5e1898dba479b7158ce5c9c0b94a8",
"text": "Finding a parking place in a busy city centre is often a frustrating task for many drivers; time and fuel are wasted in the quest for a vacant spot and traffic in the area increases due to the slow moving vehicles circling around. In this paper, we present the results of a survey on the needs of drivers from parking infrastructures from a smart services perspective. As smart parking systems are becoming a necessity in today's urban areas, we discuss the latest trends in parking availability monitoring, parking reservation and dynamic pricing schemes. We also examine how these schemes can be integrated forming technologically advanced parking infrastructures whose aim is to benefit both the drivers and the parking operators alike.",
"title": ""
},
{
"docid": "3e974f6838a652cf19e4dac68b119286",
"text": "Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time. It is increasingly being used to evaluate the effectiveness of interventions ranging from clinical therapy to national public health legislation. Whereas the design shares many properties of regression-based approaches in other epidemiological studies, there are a range of unique features of time series data that require additional methodological considerations. In this tutorial we use a worked example to demonstrate a robust approach to ITS analysis using segmented regression. We begin by describing the design and considering when ITS is an appropriate design choice. We then discuss the essential, yet often omitted, step of proposing the impact model a priori. Subsequently, we demonstrate the approach to statistical analysis including the main segmented regression model. Finally we describe the main methodological issues associated with ITS analysis: over-dispersion of time series data, autocorrelation, adjusting for seasonal trends and controlling for time-varying confounders, and we also outline some of the more complex design adaptations that can be used to strengthen the basic ITS design.",
"title": ""
},
{
"docid": "5dfc0ec364055f79d19ee8cf0b0cfeff",
"text": "Cancer cachexia is a common problem among advanced cancer patients. A mixture of β-hydroxyl β-methyl butyrate, glutamine, and arginine (HMB/Arg/Gln) previously showed activity for increasing lean body mass (LBM) among patients with cancer cachexia. Therefore a phase III trial was implemented to confirm this activity. Four hundred seventy-two advanced cancer patients with between 2% and 10% weight loss were randomized to a mixture of β-hydroxyl β-methyl butyrate, glutamine, and arginine or an isonitrogenous, isocaloric control mixture taken twice a day for 8 weeks. Lean body mass was estimated by bioimpedance and skin-fold measurements. Body plethysmography was used when available. Weight, the Schwartz Fatigue Scale, and the Spitzer Quality of Life Scale were also measured. Only 37% of the patients completed protocol treatment. The majority of the patient loss was because of patient preference (45% of enrolled patients). However, loss of power was not an issue because of the planned large target sample size. Based on an intention to treat analysis, there was no statistically significant difference in the 8-week lean body mass between the two arms. The secondary endpoints were also not significantly different between the arms. Based on the results of the area under the curve (AUC) analysis, patients receiving HMB/Arg/Gln had a strong trend higher LBM throughout the study as measured by both bioimpedance (p = 0.08) and skin-fold measurements (p = 0.08). Among the subset of patients receiving concurrent chemotherapy, there were again no significant differences in the endpoints. The secondary endpoints were also not significantly different between the arms. This trial was unable to adequately test the ability of β-hydroxy β-methylbutyrate, glutamine, and arginine to reverse or prevent lean body mass wasting among cancer patients. Possible contributing factors beyond the efficacy of the intervention were the inability of patients to complete an 8-week course of treatment and return in a timely fashion for follow-up assessment, and because the patients may have only had weight loss possible not related to cachexia, but other causes of weight loss, such as decreased appetite. However, there was a strong trend towards an increased body mass among patients taking the Juven® compound using the secondary endpoint of AUC.",
"title": ""
},
{
"docid": "80ca2b3737895e9222346109ac092637",
"text": "The common ground between figurative language and humour (in the form of jokes) is what Koestler (1964) termed the bisociation of ideas. In both jokes and metaphors, two disparate concepts are brought together, but the nature and the purpose of this conjunction is different in each case. This paper focuses on this notion of boundaries and attempts to go further by asking the question “when does a metaphor become a joke?”. More specifically, the main research questions of the paper are: (a) How do speakers use metaphor in discourse for humorous purposes? (b) What are the (metaphoric) cognitive processes that relate to the creation of humour in discourse? (c) What does the study of humour in discourse reveal about the nature of metaphoricity? This paper answers these questions by examining examples taken from a three-hour conversation, and considers how linguistic theories of humour (Raskin, 1985; Attardo and Raskin, 1991; Attardo, 1994; 2001) and cognitive theories of metaphor and blending (Lakoff and Johnson, 1980; Fauconnier and Turner, 2002) can benefit from each other. Boundaries in Humour and Metaphor The goal of this paper is to explore the relationship between metaphor (and, more generally, blending) and humour, in order to attain a better understanding of the cognitive processes that are involved or even contribute to laughter in discourse. This section will present briefly research in both areas and will identify possible common ground between the two. More specifically, the notion of boundaries will be explored in both areas. The following section explores how metaphor can be used for humorous purposes in discourse by applying relevant theories of humour and metaphor to conversational data. Linguistic theories of humour highlight the importance of duality and tension in humorous texts. Koestler (1964: 51) in discussing comic creativity notes that: The sudden bisociation of an idea or event with two habitually incompatible matrices will produce a comic effect, provided that the narrative, the semantic pipeline, carries the right kind of emotional tension. When the pipe is punctured, and our expectations are fooled, the now redundant tension gushes out in laughter, or is spilled in the gentler form of the sou-rire [my emphasis]. This oft-quoted passage introduces the basic themes and mechanisms that later were explored extensively within contemporary theories of humour: a humorous text must relate to two different and opposing in some way scenarios; this duality is not",
"title": ""
}
] |
scidocsrr
|
a8f3360c7be5cfacb5d0ef790526247a
|
Formalizing a Systematic Review Updating Process
|
[
{
"docid": "e79777797fa3cc1ef4650480a7344c40",
"text": "Synopsis A framework is presented which assists requirements engineers to choose methods for requirements acquisition. Practitioners are often unaware of the range of methods available. Even when practitioners are aware, most do not foresee the need to use several methods to acquire complete and accurate requirements. One reason for this is the lack of guidelines for method selection. The ACRE framework sets out to overcome these limitations. Method selection is achieved using questions driven from a set of facets which define the strengths and weaknesses of each method. The framework is presented as guidelines for requirements engineering practitioners. It has undergone some evaluation through its presentation to highly-experienced requirements engineers. Some results from this evaluation have been incorporated into the version of ACRE presented in the paper.",
"title": ""
}
] |
[
{
"docid": "ba7fe17912c942690c44bc81ce772c22",
"text": "[1] We present here a new InSAR persistent scatterer (PS) method for analyzing episodic crustal deformation in non-urban environments, with application to volcanic settings. Our method for identifying PS pixels in a series of interferograms is based primarily on phase characteristics and finds low-amplitude pixels with phase stability that are not identified by the existing amplitude-based algorithm. Our method also uses the spatial correlation of the phases rather than a well-defined phase history so that we can observe temporally-variable processes, e.g., volcanic deformation. The algorithm involves removing the residual topographic component of flattened interferogram phase for each PS, then unwrapping the PS phases both spatially and temporally. Our method finds scatterers with stable phase characteristics independent of amplitudes associated with man-made objects, and is applicable to areas where conventional InSAR fails due to complete decorrelation of the majority of scatterers, yet a few stable scatterers are present.",
"title": ""
},
{
"docid": "2536596ecba0498e7dbcb097695171b0",
"text": "How can we effectively encode evolving information over dynamic graphs into low-dimensional representations? In this paper, we propose DyRep – an inductive deep representation learning framework that learns a set of functions to efficiently produce low-dimensional node embeddings that evolves over time. The learned embeddings drive the dynamics of two key processes namely, communication and association between nodes in dynamic graphs. These processes exhibit complex nonlinear dynamics that evolve at different time scales and subsequently contribute to the update of node embeddings. We employ a time-scale dependent multivariate point process model to capture these dynamics. We devise an efficient unsupervised learning procedure and demonstrate that our approach significantly outperforms representative baselines on two real-world datasets for the problem of dynamic link prediction and event time prediction.",
"title": ""
},
{
"docid": "4b03aeb6c56cc25ce57282279756d1ff",
"text": "Weighted signed networks (WSNs) are networks in which edges are labeled with positive and negative weights. WSNs can capture like/dislike, trust/distrust, and other social relationships between people. In this paper, we consider the problem of predicting the weights of edges in such networks. We propose two novel measures of node behavior: the goodness of a node intuitively captures how much this node is liked/trusted by other nodes, while the fairness of a node captures how fair the node is in rating other nodes' likeability or trust level. We provide axioms that these two notions need to satisfy and show that past work does not meet these requirements for WSNs. We provide a mutually recursive definition of these two concepts and prove that they converge to a unique solution in linear time. We use the two measures to predict the edge weight in WSNs. Furthermore, we show that when compared against several individual algorithms from both the signed and unsigned social network literature, our fairness and goodness metrics almost always have the best predictive power. We then use these as features in different multiple regression models and show that we can predict edge weights on 2 Bitcoin WSNs, an Epinions WSN, 2 WSNs derived from Wikipedia, and a WSN derived from Twitter with more accurate results than past work. Moreover, fairness and goodness metrics form the most significant feature for prediction in most (but not all) cases.",
"title": ""
},
{
"docid": "cf7b17b690258dc50ec12bfbd9de232d",
"text": "In this paper, we propose a novel method for visual object tracking called HMMTxD. The method fuses observations from complementary out-of-the box trackers and a detector by utilizing a hidden Markov model whose latent states correspond to a binary vector expressing the failure of individual trackers. The Markov model is trained in an unsupervised way, relying on an online learned detector to provide a source of tracker-independent information for a modified BaumWelch algorithm that updates the model w.r.t. the partially annotated data. We show the effectiveness of the proposed method on combination of two and three tracking algorithms. The performance of HMMTxD is evaluated on two standard benchmarks (CVPR2013 and VOT) and on a rich collection of 77 publicly available sequences. The HMMTxD outperforms the state-of-the-art, often significantly, on all datasets in almost all criteria.",
"title": ""
},
{
"docid": "bdb41d1633c603f4b68dfe0191eb822b",
"text": "Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? A common philosophical position is that all concepts-even concepts about action and perception-are symbolic and abstract, and therefore must be implemented outside the brain's sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. Central to this picture are the neural theory of language and the theory of cogs, according to which, brain structures in the sensory-motor regions are exploited to characterise the so-called \"abstract\" concepts that constitute the meanings of grammatical constructions and general inference patterns.",
"title": ""
},
{
"docid": "07817eb2722fb434b1b8565d936197cf",
"text": "We recently have witnessed many ground-breaking results in machine learning and computer vision, generated by using deep convolutional neural networks (CNN). While the success mainly stems from the large volume of training data and the deep network architectures, the vector processing hardware (e.g. GPU) undisputedly plays a vital role in modern CNN implementations to support massive computation. Though much attention was paid in the extent literature to understand the algorithmic side of deep CNN, little research was dedicated to the vectorization for scaling up CNNs. In this paper, we studied the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. We developed and compared six implementations with various degrees of vectorization with which we illustrated the impact of vectorization on the speed of model training and testing. Besides, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.",
"title": ""
},
{
"docid": "ba314edceb1b8ac00f94ad0037bd5b8e",
"text": "AMS subject classifications: primary 62G10 secondary 62H20 Keywords: dCor dCov Multivariate independence Distance covariance Distance correlation High dimension a b s t r a c t Distance correlation is extended to the problem of testing the independence of random vectors in high dimension. Distance correlation characterizes independence and determines a test of multivariate independence for random vectors in arbitrary dimension. In this work, a modified distance correlation statistic is proposed, such that under independence the distribution of a transformation of the statistic converges to Student t, as dimension tends to infinity. Thus we obtain a distance correlation t-test for independence of random vectors in arbitrarily high dimension, applicable under standard conditions on the coordinates that ensure the validity of certain limit theorems. This new test is based on an unbiased es-timator of distance covariance, and the resulting t-test is unbiased for every sample size greater than three and all significance levels. The transformed statistic is approximately normal under independence for sample size greater than nine, providing an informative sample coefficient that is easily interpretable for high dimensional data. 1. Introduction Many applications in genomics, medicine, engineering, etc. require analysis of high dimensional data. Time series data can also be viewed as high dimensional data. Objects can be represented by their characteristics or features as vectors p. In this work, we consider the extension of distance correlation to the problem of testing independence of random vectors in arbitrarily high, not necessarily equal dimensions, so the dimension p of the feature space of a random vector is typically large. measure all types of dependence between random vectors in arbitrary, not necessarily equal dimensions. (See Section 2 for definitions.) Distance correlation takes values in [0, 1] and is equal to zero if and only if independence holds. It is more general than the classical Pearson product moment correlation, providing a scalar measure of multivariate independence that characterizes independence of random vectors. The distance covariance test of independence is consistent against all dependent alternatives with finite second moments. In practice, however, researchers are often interested in interpreting the numerical value of distance correlation, without a formal test. For example, given an array of distance correlation statistics, what can one learn about the strength of dependence relations from the dCor statistics without a formal test? This is in fact, a difficult question, but a solution is finally available for a large class of problems. The …",
"title": ""
},
{
"docid": "4eeb20c4a5cc259be1355b04813223f6",
"text": "Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout’s training and inference phases, introduced due to tractability considerations, has largely remained under-appreciated. In this work, we first formulate dropout as a tractable approximation of some latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an explicit control of the gap. Our method is as simple and efficient as standard dropout. We further prove the upper bounds on the loss in accuracy due to expectation-linearization, describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve the performance consistently.",
"title": ""
},
{
"docid": "b1d348e2095bd7054cc11bd84eb8ccdc",
"text": "Epidermolysis bullosa (EB) is a group of inherited, mechanobullous disorders caused by mutations in various structural proteins in the skin. There have been several advances in the classification of EB since it was first introduced in the late 19th century. We now recognize four major types of EB, depending on the location of the target proteins and level of the blisters: EB simplex (epidermolytic), junctional EB (lucidolytic), dystrophic EB (dermolytic), and Kindler syndrome (mixed levels of blistering). This contribution will summarize the most recent classification and discuss the molecular basis, target genes, and proteins involved. We have also included new subtypes, such as autosomal dominant junctional EB and autosomal recessive EB due to mutations in the dystonin (DST) gene, which encodes the epithelial isoform of bullouspemphigoid antigen 1. The main laboratory diagnostic techniques-immunofluorescence mapping, transmission electron microscopy, and mutation analysis-will also be discussed. Finally, the clinical characteristics of the different major EB types and subtypes will be reviewed.",
"title": ""
},
{
"docid": "cf374e1d1fa165edaf0b29749f32789c",
"text": "Photovoltaic (PV) system performance extremely depends on local insolation and temperature conditions. Under partial shading, P-I characteristics of PV systems are complicated and may have multiple local maxima. Conventional Maximum Power Point Tracking (MPPT) techniques can easily fail to track global maxima and may be trapped in local maxima under partial shading; this can be one of main causes for reduced energy yield for many PV systems. In order to solve this problem, this paper proposes a novel Maximum Power Point tracking algorithm based on Differential Evolution (DE) that is capable of tracking global MPP under partial shaded conditions. The ability of proposed algorithm and its excellent performances are evaluated with conventional and popular algorithm by means of simulation. The proposed algorithm works in conjunction with a Boost (step up) DC-DC converter to track the global peak. Moreover, this paper includes a MATLAB-based modeling and simulation scheme suitable for photovoltaic characteristics under partial shading.",
"title": ""
},
{
"docid": "7bbffa53f71207f0f218a09f18586541",
"text": "Myelotoxicity induced by chemotherapy may become life-threatening. Neutropenia may be prevented by granulocyte colony-stimulating factors (GCSF), and epoetin may prevent anemia, but both cause substantial side effects and increased costs. According to non-established data, wheat grass juice (WGJ) may prevent myelotoxicity when applied with chemotherapy. In this prospective matched control study, 60 patients with breast carcinoma on chemotherapy were enrolled and assigned to an intervention or control arm. Those in the intervention arm (A) were given 60 cc of WGJ orally daily during the first three cycles of chemotherapy, while those in the control arm (B) received only regular supportive therapy. Premature termination of treatment, dose reduction, and starting GCSF or epoetin were considered as \"censoring events.\" Response rate to chemotherapy was calculated in patients with evaluable disease. Analysis of the results showed that five censoring events occurred in Arm A and 15 in Arm B (P = 0.01). Of the 15 events in Arm B, 11 were related to hematological events. No reduction in response rate was observed in patients who could be assessed for response. Side effects related to WGJ were minimal, including worsening of nausea in six patients, causing cessation of WGJ intake. In conclusion, it was found that WGJ taken during FAC chemotherapy may reduce myelotoxicity, dose reductions, and need for GCSF support, without diminishing efficacy of chemotherapy. These preliminary results need confirmation in a phase III study.",
"title": ""
},
{
"docid": "b9e7fedbc42f815b35351ec9a0c31b33",
"text": "Proponents have marketed e-learning by focusing on its adoption as the right thing to do while disregarding, among other things, the concerns of the potential users, the adverse effects on users and the existing research on the use of e-learning or related innovations. In this paper, the e-learning-adoption proponents are referred to as the technopositivists. It is argued that most of the technopositivists in the higher education context are driven by a personal agenda, with the aim of propagating a technopositivist ideology to stakeholders. The technopositivist ideology is defined as a ‘compulsive enthusiasm’ about e-learning in higher education that is being created, propagated and channelled repeatedly by the people who are set to gain without giving the educators the time and opportunity to explore the dangers and rewards of e-learning on teaching and learning. Ten myths on e-learning that the technopositivists have used are presented with the aim of initiating effective and constructive dialogue, rather than merely criticising the efforts being made. Introduction The use of technology, and in particular e-learning, in higher education is becoming increasingly popular. However, Guri-Rosenblit (2005) and Robertson (2003) propose that educational institutions should step back and reflect on critical questions regarding the use of technology in teaching and learning. The focus of Guri-Rosenblit’s article is on diverse issues of e-learning implementation in higher education, while Robertson focuses on the teacher. Both papers show that there is a change in the ‘euphoria towards eLearning’ and that a dose of techno-negativity or techno-scepticism is required so that the gap between rhetoric in the literature (with all the promises) and actual implementation can be bridged for an informed stance towards e-learning adoption. British Journal of Educational Technology Vol 41 No 2 2010 199–212 doi:10.1111/j.1467-8535.2008.00910.x © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Technology in teaching and learning has been marketed or presented to its intended market with a lot of promises, benefits and opportunities. This technopositivist ideology has denied educators and educational researchers the much needed opportunities to explore the motives, power, rewards and sanctions of information and communication technologies (ICTs), as well as time to study the impacts of the new technologies on learning and teaching. Educational research cannot cope with the speed at which technology is advancing (Guri-Rosenblit, 2005; Robertson, 2003; Van Dusen, 1998; Watson, 2001). Indeed there has been no clear distinction between teaching with and teaching about technology and therefore the relevance of such studies has not been brought to the fore. Much of the focus is on the actual educational technology as it advances, rather than its educational functions or the effects it has on the functions of teaching and learning. The teaching profession has been affected by the implementation and use of ICT through these optimistic views, and the ever-changing teaching and learning culture (Kompf, 2005; Robertson, 2003). It is therefore necessary to pause and ask the question to the technopositivist ideologists: whether in e-learning the focus is on the ‘e’ or on the learning. 
The opportunities and dangers brought about by the ‘e’ in e-learning should be soberly examined. As Gandolfo (1998, p. 24) suggests: [U]ndoubtedly, there is opportunity; the effective use of technology has the potential to improve and enhance learning. Just as assuredly there is the danger that the wrong headed adoption of various technologies apart from a sound grounding in educational research and practice will result, and indeed in some instances has already resulted, in costly additions to an already expensive enterprise without any value added. That is, technology applications must be consonant with what is known about the nature of learning and must be assessed to ensure that they are indeed enhancing learners’ experiences. Technopositivist ideology is a ‘compulsory enthusiasm’ about technology that is being created, propagated and channelled repeatedly by the people who stand to gain either economically, socially, politically or otherwise in due disregard of the trade-offs associated with the technology to the target audience (Kompf, 2005; Robertson, 2003). In e-learning, the beneficiaries of the technopositivist market are doing so by presenting it with promises that would dismiss the judgement of many. This is aptly illustrated by Robertson (2003, pp. 284–285): Information technology promises to deliver more (and more important) learning for every student accomplished in less time; to ensure ‘individualization’ no matter how large and diverse the class; to obliterate the differences and disadvantages associated with race, gender, and class; to vary and yet standardize the curriculum; to remove subjectivity from student evaluation; to make reporting and record keeping a snap; to keep discipline problems to a minimum; to enhance professional learning and discourse; and to transform the discredited teacher-centered classroom into that paean of pedagogy: the constructivist, student-centered classroom. On her part, Guri-Rosenblit (2005, p. 14) argues that the proponents and marketers of e-learning present it as offering multiple uses that do not have a clear relationship with a current or future problem. She asks two ironic, vital and relevant questions: ‘If it ain’t broken, why fix it?’ and ‘Technology is the answer—but what are the questions?’ The enthusiasm to use technology for endless possibilities has led to the belief that providing information automatically leads to meaningful knowledge creation; hence blurring and confusing the distinction between information and knowledge. This is one of the many misconceptions that emerged with e-learning. There has been a great deal of confusion both in the marketing of and language used in the advocating of the ICTs in teaching and learning. As an example, Guri-Rosenblit (2005, p. 6) identified a list of 15 words used to describe the environment for teaching and learning with technology from various studies: ‘web-based learning, computer-mediated instruction, virtual classrooms, online education, e-learning, e-education, computer-driven interactive communication, open and distance learning, I-Campus, borderless education, cyberspace learning environments, distributed learning, flexible learning, blended learning, mobile-learning’. The list could easily be extended with many more words. Presented with this array of words, most educators are not sure of what e-learning is. 
Could it be synonymous to distance education? Is it just the use of online tools to enhance or enrich the learning experiences? Is it stashing the whole courseware or parts of it online for students to access? Or is it a new form of collaborative or cooperative learning? Clearly, any of these questions could be used to describe an aspect of e-learning and quite often confuse the uninformed educator. These varied words, with as many definitions, show the degree to which e-learning is being used in different cultures and in different organisations. Unfortunately, many of these uses are based on popular assumptions and myths. While the myths that will be discussed in this paper are generic, and hence applicable to e-learning use in most cultures and organisations, the paper’s focus is on higher education, because it forms part of a larger e-learning research project among higher education institutions (HEIs) and also because of the popularity of e-learning use in HEIs. Although there is considerable confusion around the term e-learning, for the purpose of this paper it will be considered as referring to the use of electronic technology and content in teaching and learning. It includes, but is not limited to, the use of the Internet; television; streaming video and video conferencing; online text and multimedia; and mobile technologies. From the nomenclature, also comes the crafting of the language for selling the technologies to the educators. Robertson (2003, p. 280) shows the meticulous choice of words by the marketers where ‘research’ is transformed into a ‘belief system’ and the past tense (used to communicate research findings) is substituted for the present and future tense, for example “Technology ‘can and will’ rather than ‘has and does’ ” in a quote from Apple’s comment: ‘At Apple, we believe the effective integration of technology into classroom instruction can and will result in higher levels of student achievement’. Similar quotes are available in the market and vendors of technology products for teaching and learning. This, however, is not limited to the market; some researchers have used similar quotes: ‘It is now conventional wisdom that those countries which fail to move from the industrial to the Information Society will not be able to compete in the globalised market system made possible by the new technologies’ (Mac Keogh, 2001, p. 223). The role of research should be to question the conventional wisdom or common sense and offer plausible answers, rather than dancing to the fine tunes of popular or mass wisdom. It is also interesting to note that Mac Keogh (2001, p. 233) concludes that ‘[w]hen issues other than costs and performance outcomes are considered, the rationale for introducing ICTs in education is more powerful’. Does this mean that irrespective of whether ICTs ",
"title": ""
},
{
"docid": "af8fbdfbc4c4958f69b3936ff2590767",
"text": "Analysis of sedimentary diatom assemblages (10 to 144 ka) form the basis for a detailed reconstruction of the paleohydrography and diatom paleoecology of Lake Malawi. Lake-level fluctuations on the order of hundreds of meters were inferred from dramatic changes in the fossil and sedimentary archives. Many of the fossil diatom assemblages we observed have no analog in modern Lake Malawi. Cyclotelloid diatom species are a major component of fossil assemblages prior to 35 ka, but are not found in significant abundances in the modern diatom communities in Lake Malawi. Salinityand alkalinity-tolerant plankton has not been reported in the modern lake system, but frequently dominant fossil diatom assemblages prior to 85 ka. Large stephanodiscoid species that often dominate the plankton today are rarely present in the fossil record prior to 31 ka. Similarly, prior to 31 ka, common central-basin aulacoseiroid species are replaced by species found in the shallow, well-mixed southern basin. Surprisingly, tychoplankton and periphyton were not common throughout prolonged lowstands, but tended to increase in relative abundance during periods of inferred deeper-lake environments. A high-resolution lake level reconstruction was generated by a principle component analysis of fossil diatom and wetsieved fossil and mineralogical residue records. Prior to 70 ka, fossil assemblages suggest that the central basin was periodically a much shallower, more saline and/or alkaline, well-mixed environment. The most significant reconstructed lowstands are ~ 600 m below the modern lake level and span thousands of years. These conditions contrast starkly with the deep, dilute, dysaerobic environments of the modern central basin. After 70 ka, our reconstruction indicates sustained deeper-water environments were common, marked by a few brief, but significant, lowstands. High amplitude lake-level fluctuations appear related to changes in insolation. Seismic reflection data and additional sediment cores recovered from the northern basin of Lake Malawi provide evidence that supports our reconstruction.",
"title": ""
},
{
"docid": "3d490d7d30dcddc3f1c0833794a0f2df",
"text": "Purpose-This study attempts to investigate (1) the effect of meditation experience on employees’ self-directed learning (SDL) readiness and organizational innovative (OI) ability as well as organizational performance (OP), and (2) the relationships among SDL, OI, and OP. Design/methodology/approach-This study conducts an empirical study of 15 technological companies (n = 412) in Taiwan, utilizing the collected survey data to test the relationships among the three dimensions. Findings-Results show that: (1) The employees’ meditation experience significantly and positively influenced employees’ SDL readiness, companies’ OI capability and OP; (2) The study found that SDL has a direct and significant impact on OI; and OI has direct and significant influences on OP. Research limitation/implications-The generalization of the present study is constrained by (1) the existence of possible biases of the participants, (2) the variations of length, type and form of meditation demonstrated by the employees in these high tech companies, and (3) the fact that local data collection in Taiwan may present different cultural characteristics which may be quite different from those in other areas or countries. Managerial implications are presented at the end of the work. Practical implications-The findings indicate that SDL can only impact organizational innovation through employees “openness to a challenge”, “inquisitive nature”, self-understanding and acceptance of responsibility for learning. Such finding implies better organizational innovative capability under such conditions, thus organizations may encourage employees to take risks or accept new opportunities through various incentives, such as monetary rewards or public recognitions. More specifically, the present study discovers that while administration innovation is the most important element influencing an organization’s financial performance, market innovation is the key component in an organization’s market performance. Social implications-The present study discovers that meditation experience positively",
"title": ""
},
{
"docid": "c7eb67093a6f00bec0d96607e6384378",
"text": "Two primary simulations have been developed and are being updated for the Mars Smart Lander Entry, Descent, and Landing (EDL). The high fidelity engineering end-to-end EDL simulation that is based on NASA Langley’s Program to Optimize Simulated Trajectories (POST) and the end-to-end real-time, hardware-in-the-loop simulation test bed, which is based on NASA JPL’s Dynamics Simulator for Entry, Descent and Surface landing (DSENDS). This paper presents the status of these Mars Smart Lander EDL end-to-end simulations at this time. Various models, capabilities, as well as validation and verification for these simulations are discussed.",
"title": ""
},
{
"docid": "046148901452aefdc5a14357ed89cbd3",
"text": "Of late, there has been an avalanche of cross-layer design proposals for wireless networks. A number of researchers have looked at specific aspects of network performance and, approaching cross-layer design via their interpretation of what it implies, have presented several cross-layer design proposals. These proposals involve different layers of the protocol stack, and address both cellular and ad hoc networks. There has also been work relating to the implementation of cross-layer interactions. It is high time that these various individual efforts be put into perspective and a more holistic view be taken. In this article, we take a step in that direction by presenting a survey of the literature in the area of cross-layer design, and by taking stock of the ongoing work. We suggest a definition for cross-layer design, discuss the basic types of cross-layer design with examples drawn from the literature, and categorize the initial proposals on how cross-layer interactions may be implemented. We then highlight some open challenges and new opportunities for cross-layer design. Designers presenting cross-layer design proposals can start addressing these as they move ahead.",
"title": ""
},
{
"docid": "5462d51955d2eaaa25fd6ff4d71b3f40",
"text": "2 \"Generations of scientists may yet have to come and go before the question of the origin of life is finally solved. That it will be solved eventually is as certain as anything can ever be amid the uncertainties that surround us.\" 1. Introduction How, where and when did life appear on Earth? Although Charles Darwin was reluctant to address these issues in his books, in a letter sent on February 1st, 1871 to his friend Joseph Dalton Hooker he wrote in a now famous paragraph that \"it is often said that all the conditions for the first production of a living being are now present, which could ever have been present. But if (and oh what a big if) we could conceive in some warm little pond with all sort of ammonia and phosphoric salts,-light, heat, electricity present, that a protein compound was chemically formed, ready to undergo still more complex changes, at the present such matter would be instantly devoured, or absorbed, which would not have been the case before living creatures were formed...\" (Darwin, 1871). Darwin's letter summarizes in a nutshell not only his ideas on the emergence of life, but also provides considerable insights on the views on the chemical nature of the basic biological processes that were prevalent at the time in many scientific circles. Although Friedrich Miescher had discovered nucleic acids (he called them nuclein) in 1869 (Dahm, 2005), the deciphering of their central role in genetic processes would remain unknown for almost another a century. In contrast, the roles played by proteins in manifold biological processes had been established. Equally significant, by the time Darwin wrote his letter major advances had been made in the understanding of the material basis of life, which for a long time had been considered to be fundamentally different from inorganic compounds. The experiments of Friedrich Wöhler, Adolph Strecker and Aleksandr Butlerov, who had demonstrated independently the feasibility of the laboratory synthesis of urea, alanine, and sugars, respectively, from simple 3 starting materials were recognized as a demonstration that the chemical gap separating organisms from the non-living was not insurmountable. But how had this gap first been bridged? The idea that life was an emergent feature of nature has been widespread since the nineteenth century. The major breakthrough that transformed the origin of life from pure speculation into workable and testable research models were proposals, suggested independently, in …",
"title": ""
},
{
"docid": "c273620e05cc5131e8c6d58b700a0aab",
"text": "Differential evolution has been shown to be an effective methodology for solving optimization problems over continuous space. In this paper, we propose an eigenvector-based crossover operator. The proposed operator utilizes eigenvectors of covariance matrix of individual solutions, which makes the crossover rotationally invariant. More specifically, the donor vectors during crossover are modified, by projecting each donor vector onto the eigenvector basis that provides an alternative coordinate system. The proposed operator can be applied to any crossover strategy with minimal changes. The experimental results show that the proposed operator significantly improves DE performance on a set of 54 test functions in CEC 2011, BBOB 2012, and CEC 2013 benchmark sets.",
"title": ""
},
{
"docid": "7a1f409eea5e0ff89b51fe0a26d6db8d",
"text": "A multi-agent system consisting of <inline-formula><tex-math notation=\"LaTeX\">$N$</tex-math></inline-formula> agents is considered. The problem of steering each agent from its initial position to a desired goal while avoiding collisions with obstacles and other agents is studied. This problem, referred to as the <italic>multi-agent collision avoidance problem</italic>, is formulated as a differential game. Dynamic feedback strategies that approximate the feedback Nash equilibrium solutions of the differential game are constructed and it is shown that, provided certain assumptions are satisfied, these guarantee that the agents reach their targets while avoiding collisions.",
"title": ""
},
{
"docid": "c68196f826f2afb61c13a0399d921421",
"text": "BACKGROUND\nIndividuals with mild cognitive impairment (MCI) have a substantially increased risk of developing dementia due to Alzheimer's disease (AD). In this study, we developed a multivariate prognostic model for predicting MCI-to-dementia progression at the individual patient level.\n\n\nMETHODS\nUsing baseline data from 259 MCI patients and a probabilistic, kernel-based pattern classification approach, we trained a classifier to distinguish between patients who progressed to AD-type dementia (n = 139) and those who did not (n = 120) during a three-year follow-up period. More than 750 variables across four data sources were considered as potential predictors of progression. These data sources included risk factors, cognitive and functional assessments, structural magnetic resonance imaging (MRI) data, and plasma proteomic data. Predictive utility was assessed using a rigorous cross-validation framework.\n\n\nRESULTS\nCognitive and functional markers were most predictive of progression, while plasma proteomic markers had limited predictive utility. The best performing model incorporated a combination of cognitive/functional markers and morphometric MRI measures and predicted progression with 80% accuracy (83% sensitivity, 76% specificity, AUC = 0.87). Predictors of progression included scores on the Alzheimer's Disease Assessment Scale, Rey Auditory Verbal Learning Test, and Functional Activities Questionnaire, as well as volume/cortical thickness of three brain regions (left hippocampus, middle temporal gyrus, and inferior parietal cortex). Calibration analysis revealed that the model is capable of generating probabilistic predictions that reliably reflect the actual risk of progression. Finally, we found that the predictive accuracy of the model varied with patient demographic, genetic, and clinical characteristics and could be further improved by taking into account the confidence of the predictions.\n\n\nCONCLUSIONS\nWe developed an accurate prognostic model for predicting MCI-to-dementia progression over a three-year period. The model utilizes widely available, cost-effective, non-invasive markers and can be used to improve patient selection in clinical trials and identify high-risk MCI patients for early treatment.",
"title": ""
}
] |
scidocsrr
|
b5f96f56c07a9fde786dd82b27bb45cb
|
Solidus: An Incentive-compatible Cryptocurrency Based on Permissionless Byzantine Consensus
|
[
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
},
{
"docid": "a172cd697bfcb1f3d2a824bb6a5bb6d1",
"text": "Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain.\n We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a \"wealthy\" block to \"steal\" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest.\n We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.",
"title": ""
},
{
"docid": "9db9902c0e9d5fc24714554625a04c7a",
"text": "Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these “Sybil attacks” is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.",
"title": ""
}
] |
[
{
"docid": "02f28b1237b88471b0d96e5ff3871dc4",
"text": "Data mining is becoming increasingly important since the size of databases grows even larger and the need to explore hidden rules from the databases becomes widely recognized. Currently database systems are dominated by relational database and the ability to perform data mining using standard SQL queries will definitely ease implementation of data mining. However the performance of SQL based data mining is known to fall behind specialized implementation and expensive mining tools being on sale. In this paper we present an evaluation of SQL based data mining on commercial RDBMS (IBM DB2 UDB EEE). We examine some techniques to reduce I/O cost by using View and Subquery. Those queries can be more than 6 times faster than SETM SQL query reported previously. In addition, we have made performance evaluation on parallel database environment and compared the performance result with commercial data mining tool (IBM Intelligent Miner). We prove that SQL based data mining can achieve sufficient performance by the utilization of SQL query customization and database tuning.",
"title": ""
},
{
"docid": "185dd20e40c5ed4784ab5e92dd85f639",
"text": "Bayesian methods have become widespread in marketing literature. We review the essence of the Bayesian approach and explain why it is particularly useful for marketing problems. While the appeal of the Bayesian approach has long been noted by researchers, recent developments in computational methods and expanded availability of detailed marketplace data has fueled the growth in application of Bayesian methods in marketing. We emphasize the modularity and flexibility of modern Bayesian approaches. The usefulness of Bayesian methods in situations in which there is limited information about a large number of units or where the information comes from different sources is noted. We include an extensive discussion of open issues and directions for future research. (Bayesian Statistics; Decision Theory; Marketing Models; Critical Review)",
"title": ""
},
{
"docid": "a0b147e6baae3ea7622446da0b8d8e26",
"text": "The Web has come a long way since its invention by Berners-Lee, when it focused essentially on visualization and presentation of content for human consumption (Syntactic Web), to a Web providing meaningful content, facilitating the integration between people and machines (Semantic Web). This paper presents a survey of different tools that provide the enrichment of the Web with understandable annotation, in order to make its content available and interoperable between systems. We can group Semantic Annotation tools into the diverse dimensions: dynamicity, storage, information extraction process, scalability and customization. The analysis of the different annotation tools shows that (semi-)automatic and automatic systems aren't as efficient as needed without human intervention and will continue to evolve to solve the challenge. Microdata, RDFa and the new HTML5 standard will certainly bring new contributions to this issue.",
"title": ""
},
{
"docid": "dcacbed90f45b76e9d40c427e16e89d6",
"text": "High torque density and low torque ripple are crucial for traction applications, which allow electrified powertrains to perform properly during start-up, acceleration, and cruising. High-quality anisotropic magnetic materials such as cold-rolled grain-oriented electrical steels can be used for achieving higher efficiency, torque density, and compactness in synchronous reluctance motors equipped with transverse laminated rotors. However, the rotor cylindrical geometry makes utilization of these materials with pole numbers higher than two more difficult. From a reduced torque ripple viewpoint, particular attention to the rotor slot pitch angle design can lead to improvements. This paper presents an innovative rotor lamination design and assembly using cold-rolled grain-oriented electrical steel to achieve higher torque density along with an algorithm for rotor slot pitch angle design for reduced torque ripple. The design methods and prototyping process are discussed, finite-element analyses and experimental examinations are carried out, and the results are compared to verify and validate the proposed methods.",
"title": ""
},
{
"docid": "53e6fe645eb83bcc0f86638ee7ce5578",
"text": "Multi-hop reading comprehension focuses on one type of factoid question, where a system needs to properly integrate multiple pieces of evidence to correctly answer a question. Previous work approximates global evidence with local coreference information, encoding coreference chains with DAG-styled GRU layers within a gated-attention reader. However, coreference is limited in providing information for rich inference. We introduce a new method for better connecting global evidence, which forms more complex graphs compared to DAGs. To perform evidence integration on our graphs, we investigate two recent graph neural networks, namely graph convolutional network (GCN) and graph recurrent network (GRN). Experiments on two standard datasets show that richer global information leads to better answers. Our method performs better than all published results on these datasets.",
"title": ""
},
{
"docid": "f70bd0a47eac274a1bb3b964f34e0a63",
"text": "Although deep neural network (DNN) has achieved many state-of-the-art results, estimating the uncertainty presented in the DNN model and the data is a challenging task. Problems related to uncertainty such as classifying unknown classes (class which does not appear in the training data) data as known class with high confidence, is critically concerned in the safety domain area (e.g, autonomous driving, medical diagnosis). In this paper, we show that applying current Bayesian Neural Network (BNN) techniques alone does not effectively capture the uncertainty. To tackle this problem, we introduce a simple way to improve the BNN by using one class classification (in this paper, we use the term ”set classification” instead). We empirically show the result of our method on an experiment which involves three datasets: MNIST, notMNIST and FMNIST.",
"title": ""
},
{
"docid": "6610f89ba1776501d6c0d789703deb4e",
"text": "REVIEW QUESTION/OBJECTIVE\nThe objective of this review is to identify the effectiveness of mindfulness based programs in reducing stress experienced by nurses in adult hospitalized patient care settings.\n\n\nBACKGROUND\nNursing professionals face extraordinary stressors in the medical environment. Many of these stressors have always been inherent to the profession: long work hours, dealing with pain, loss and emotional suffering, caring for dying patients and providing support to families. Recently nurses have been experiencing increased stress related to other factors such as staffing shortages, increasingly complex patients, corporate financial constraints and the increased need for knowledge of ever-changing technology. Stress affects high-level cognitive functions, specifically attention and memory, and this increases the already high stakes for nurses. Nurses are required to cope with very difficult situations that require accurate, timely decisions that affect human lives on a daily basis.Lapses in attention increase the risk of serious consequences such as medication errors, failure to recognize life-threatening signs and symptoms, and other essential patient safety issues. Research has also shown that the stress inherent to health care occupations can lead to depression, reduced job satisfaction, psychological distress and disruptions to personal relationships. These outcomes of stress are factors that create scenarios for risk of patient harm.There are three main effects of stress on nurses: burnout, depression and lateral violence. Burnout has been defined as a syndrome of depersonalization, emotional exhaustion, and a sense of low personal accomplishment, and the occurrence of burnout has been closely linked to perceived stress. Shimizu, Mizoue, Mishima and Nagata state that nurses experience considerable job stress which has been a major factor in the high rates of burnout that has been recorded among nurses. Zangaro and Soeken share this opinion and state that work related stress is largely contributing to the current nursing shortage. They report that work stress leads to a much higher turnover, especially during the first year after graduation, lowering retention rates in general.In a study conducted in Pennsylvania, researchers found that while 43% of the nurses who reported high levels of burnout indicated their intent to leave their current position, only 11% of nurses who were not burned out intended to leave in the following 12 months. In the same study patient-to-nurse ratios were significantly associated with emotional exhaustion and burnout. An increase of one patient per nurse assignment to a hospital's staffing level increased burnout by 23%.Depression can be defined as a mood disorder that causes a persistent feeling of sadness and loss of interest. Wang found that high levels of work stress were associated with higher risk of mood and anxiety disorders. In Canada one out of every 10 nurses have shown depressive symptoms; compared to the average of 5.1% of the nurses' counterparts who do not work in healthcare. High incidences of depression and depressive symptoms were also reported in studies among Chinese nurses (38%) and Taiwanese nurses (27.7%). In the Taiwanese study the occurrence of depression was significantly and positively correlated to job stress experienced by the nurses (p<0.001).In a multivariate logistic regression, Ohler, Kerr and Forbes also found that job stress was significantly correlated to depression in nurses. 
The researchers reported that nurses who experienced a higher degree of job stress were 80% more likely to have suffered a major depressive episode in the previous year. A further finding in this study revealed that 75% of the participants also suffered from at least one chronic disease revealing a strong association between depression and other major health issues.A stressful working environment, such as a hospital, could potentially lead to lateral violence among nurses. Lateral violence is a serious occupational health concern among nurses as evidenced by extensive research and literature available on the topic. The impact of lateral violence has been well studied and documented over the past three decades. Griffin and Clark state that lateral violence is a form of bullying grounded in the theoretical framework of the oppression theory. The bullying behaviors occur among members of an oppressed group as a result of feeling powerless and having a perceived lack of control in their workplace. Griffin identified the ten most common forms of lateral violence among nurses as \"non-verbal innuendo, verbal affront, undermining activities, withholding information, sabotage, infighting, scape-goating, backstabbing, failure to respect privacy, and broken confidences\". Nurse-to-nurse lateral violence leads to negative workplace relationships and disrupts team performance, creating an environment where poor patient outcomes, burnout and high staff turnover rates are prevalent.Work-related stressors have been indicated as a potential cause of lateral violence. According to the Effort Reward Imbalance model (ERI) developed by Siegrist, work stress develops when an imbalance exists between the effort individuals put into their jobs and the rewards they receive in return. The ERI model has been widely used in occupational health settings based on its predictive power for adverse health and well-being outcomes. The model claims that both high efforts with low rewards could lead to negative emotions in the exposed employees. Vegchel, van Jonge, de Bosma & Schaufeli state that, according to the ERI model, occupational rewards mostly consist of money, esteem and job security or career opportunities. A survey conducted by Reineck & Furino indicated that registered nurses had a very high regard for the intrinsic rewards of their profession but that they identified workplace relationships and stress issues as some of the most important contributors to their frustration and exhaustion. Hauge, Skogstad & Einarsen state that work-related stress further increases the potential for lateral violence as it creates a negative environment for both the target and the perpetrator.Mindfulness based programs have proven to be a promising intervention in reducing stress experienced by nurses. Mindfulness was originally defined by Jon Kabat-Zinn in 1979 as \"paying attention on purpose, in the present moment, and nonjudgmentally, to the unfolding of experience moment to moment\". The Mindfulness Based Stress Reduction (MBSR) program is an educationally based program that focuses on training in the contemplative practice of mindfulness. It is an eight-week program where participants meet weekly for two-and-a-half hours and join a one-day long retreat for six hours. The program incorporates a combination of mindfulness meditation, body awareness and yoga to help increase mindfulness in participants. The practice is meant to facilitate relaxation in the body and calming of the mind by focusing on present-moment awareness. 
The program has proven to be effective in reducing stress, improving quality of life and increasing self-compassion in healthcare professionals.Researchers have demonstrated that mindfulness interventions can effectively reduce stress, anxiety and depression in both clinical and non-clinical populations. In a meta-analysis of seven studies conducted with healthy participants from the general public, the reviewers reported a significant reduction in stress when the treatment and control groups were compared. However, there have been limited studies to date that focused specifically on the effectiveness of mindfulness programs to reduce stress experienced by nurses.In addition to stress reduction, mindfulness based interventions can also enhance nurses' capacity for focused attention and concentration by increasing present moment awareness. Mindfulness techniques can be applied in everyday situations as well as stressful situations. According to Kabat-Zinn, work-related stress influences people differently based on their viewpoint and their interpretation of the situation. He states that individuals need to be able to see the whole picture, have perspective on the connectivity of all things and not operate on automatic pilot to effectively cope with stress. The goal of mindfulness meditation is to empower individuals to respond to situations consciously rather than automatically.Prior to the commencement of this systematic review, the Cochrane Library and JBI Database of Systematic Reviews and Implementation Reports were searched. No previous systematic reviews on the topic of reducing stress experienced by nurses through mindfulness programs were identified. Hence, the objective of this systematic review is to evaluate the best research evidence available pertaining to mindfulness-based programs and their effectiveness in reducing perceived stress among nurses.",
"title": ""
},
{
"docid": "cd3c56e7e13a23e62986d40630f5a207",
"text": "The prediction of cellular function from a genotype is a fundamental goal in biology. For metabolism, constraint-based modelling methods systematize biochemical, genetic and genomic knowledge into a mathematical framework that enables a mechanistic description of metabolic physiology. The use of constraint-based approaches has evolved over ~30 years, and an increasing number of studies have recently combined models with high-throughput data sets for prospective experimentation. These studies have led to validation of increasingly important and relevant biological predictions. As reviewed here, these recent successes have tangible implications in the fields of microbial evolution, interaction networks, genetic engineering and drug discovery.",
"title": ""
},
{
"docid": "e2a605f5c22592bd5ca828d4893984be",
"text": "Deep neural networks are complex and opaque. As they enter application in a variety of important and safety critical domains, users seek methods to explain their output predictions. We develop an approach to explaining deep neural networks by constructing causal models on salient concepts contained in a CNN. We develop methods to extract salient concepts throughout a target network by using autoencoders trained to extract humanunderstandable representations of network activations. We then build a bayesian causal model using these extracted concepts as variables in order to explain image classification. Finally, we use this causal model to identify and visualize features with significant causal influence on final classification.",
"title": ""
},
{
"docid": "d880535f198a1f0a26b18572f674b829",
"text": "Human Activity Recognition (HAR) aims to identify the actions performed by humans using signals collected from various sensors embedded in mobile devices. In recent years, deep learning techniques have further improved HAR performance on several benchmark datasets. In this paper, we propose one-dimensional Convolutional Neural Network (1D CNN) for HAR that employs a divide and conquer-based classifier learning coupled with test data sharpening. Our approach leverages a two-stage learning of multiple 1D CNN models; we first build a binary classifier for recognizing abstract activities, and then build two multi-class 1D CNN models for recognizing individual activities. We then introduce test data sharpening during prediction phase to further improve the activity recognition accuracy. While there have been numerous researches exploring the benefits of activity signal denoising for HAR, few researches have examined the effect of test data sharpening for HAR. We evaluate the effectiveness of our approach on two popular HAR benchmark datasets, and show that our approach outperforms both the two-stage 1D CNN-only method and other state of the art approaches.",
"title": ""
},
{
"docid": "3e7e40f82ebb83b4314c974334c8ce0c",
"text": "Three-dimensional shape reconstruction of 2D landmark points on a single image is a hallmark of human vision, but is a task that has been proven difficult for computer vision algorithms. We define a feed-forward deep neural network algorithm that can reconstruct 3D shapes from 2D landmark points almost perfectly (i.e., with extremely small reconstruction errors), even when these 2D landmarks are from a single image. Our experimental results show an improvement of up to two-fold over state-of-the-art computer vision algorithms; 3D shape reconstruction error (measured as the Procrustes distance between the reconstructed shape and the ground-truth) of human faces is <inline-formula><tex-math notation=\"LaTeX\">$<.004$</tex-math><alternatives> <inline-graphic xlink:href=\"martinez-ieq1-2772922.gif\"/></alternatives></inline-formula>, cars is .0022, human bodies is .022, and highly-deformable flags is .0004. Our algorithm was also a top performer at the 2016 3D Face Alignment in the Wild Challenge competition (done in conjunction with the European Conference on Computer Vision, ECCV) that required the reconstruction of 3D face shape from a single image. The derived algorithm can be trained in a couple hours and testing runs at more than 1,000 frames/s on an i7 desktop. We also present an innovative data augmentation approach that allows us to train the system efficiently with small number of samples. And the system is robust to noise (e.g., imprecise landmark points) and missing data (e.g., occluded or undetected landmark points).",
"title": ""
},
{
"docid": "d5abd8f68a9f77ed84ec1381584357a4",
"text": "In this paper, we study how to test the intelligence of an autonomous vehicle. Comprehensive testing is crucial to both vehicle manufactories and customers. Existing testing approaches can be categorized into two kinds: scenario-based testing and functionality-based testing. We first discuss the shortcomings of these two kinds of approaches, and then propose a new testing framework to combine the benefits of them. Based on the new semantic diagram definition for the intelligence of autonomous vehicles, we explain how to design a task for autonomous vehicle testing and how to evaluate test results. Experiments show that this new approach provides a quantitative way to test the intelligence of an autonomous vehicle.",
"title": ""
},
{
"docid": "1ff9bf5a5a511a159cc1cc3623ad7f0a",
"text": "This paper illustrates the rectifier stress issue of the active clamped dual switch forward converters operating on discontinuous current mode (DCM), and analyzes the additional reverse voltage on the rectifier diode of active clamped dual switch forward converter at DCM operation, which does not appear in continuous current mode (CCM). The additional reverse voltage stress, plus its spikes, definitely causes many difficulties in designing high performance power supplies. In order to suppress this voltage spike to an acceptable level and improve the working conditions for the rectifier diode, this paper carefully explains and presents the working principles of active clamped dual switch forward converter in DCM operation, and theoretically analyzes the causes of the additional reverse voltage and its spikes. For conquering these difficulties, this paper also innovate active clamped snubber (ACS) cell to solve this issue. Furthermore, experiments on a 270W active clamped dual switch forward converter prototype were designed to validate the innovation. Finally, based on the similarities of the rectifier network in forward-topology based converters, this paper also extents the utility of this idea into even wider dc-dc converters.",
"title": ""
},
{
"docid": "267ee2186781941c1f9964afd07a956c",
"text": "Considerations in applying circuit breaker protection to DC systems are capacitive discharge, circuit breaker coordination and impacts of double ground faults. Test and analysis results show the potential for equipment damage. Solutions are proposed at the cost of increased integration between power conversion and protection systems.",
"title": ""
},
{
"docid": "84dee4781f7bc13711317d0594e97294",
"text": "We present an iterative method for solving linear systems, which has the property of minimizing at every step the norm of the residual vector over a Krylov subspace. The algorithm is derived from the Arnoldi process for constructing an /2-orthogonal basis of Krylov subspaces. It can be considered as a generalization of Paige and Saunders' MINRES algorithm and is theoretically equivalent to the Generalized Conjugate Residual (GCR) method and to ORTHODIR. The new algorithm presents several advantages over GCR and ORTHODIR.",
"title": ""
},
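The abstract above describes the GMRES idea: build an l2-orthonormal Krylov basis with the Arnoldi process, then minimize the residual norm over that subspace. Below is a compact, textbook-style NumPy sketch of a single unrestarted cycle; the test matrix and tolerances are arbitrary and the code is not taken from the paper.

```python
import numpy as np

def gmres_cycle(A, b, m=30, tol=1e-12):
    """One unrestarted GMRES cycle: Arnoldi builds an orthonormal Krylov basis,
    then a small least-squares problem minimizes the residual norm."""
    n = b.size
    m = min(m, n)
    x0 = np.zeros(n)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):                     # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < tol:                      # happy breakdown: solution lies in the subspace
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)   # min ||beta*e1 - H y||
    return x0 + Q[:, :m] @ y

A = np.diag(np.arange(1.0, 11.0)) + 0.1 * np.random.default_rng(0).random((10, 10))
b = np.ones(10)
x = gmres_cycle(A, b)
print(np.linalg.norm(A @ x - b))                   # residual norm of the computed solution
```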
{
"docid": "f0532446a19fb2fa28a7a01cddca7e37",
"text": "The use of rumble strips on roads can provide drivers lane departure warning (LDW). However, rumble strips require an infrastructure and do not exist on a majority of roadways. Therefore, it is very desirable to have an effective in-vehicle LDW system to detect when the driver is in danger of departing the road and then triggers an alarm to warn the driver early enough to take corrective action. This paper presents the development of an image-based LDW system using the Lucas-Kanade (L-K) optical flow and the Hough transform methods. Our approach integrates both techniques to establish an operation algorithm to determine whether a warning signal should be issued based on the status of the vehicle deviating from its heading lane. The L-K optical flow tracking is used when the lane boundaries cannot be detected, while the lane detection technique is used when they become available. Even though both techniques are used in the system, only one method is activated at any given time because each technique has its own advantages and also disadvantages. The developed LDW system was road tested on several rural highways and also one section of the interstate I35 freeway. Overall, the system operates correctly as expected with a false alarm occurred only roughly about 1.18% of the operation time. This paper presents the system implementation together with our findings. Key-Words: Lane departure warning, Lucas-Kanade optical flow, Hough transform.",
"title": ""
},
{
"docid": "49f35f840566645f5b86e90ce0a932af",
"text": "Over the past decade, a number of tools and systems have been developed to manage various aspects of the software development lifecycle. Until now, tool supported code review, an important aspect of software development, has been largely ignored. With the advent of open source code review tools such as Gerrit along with projects that use them, code review data is now available for collection, analysis, and triangulation with other software development data. In this paper, we extract Android peer review data from Gerrit. We describe the Android peer review process, the reverse engineering of the Gerrit JSON API, our data mining and cleaning methodology, database schema, and provide an example of how the data can be used to answer an empirical software engineering question. The database is available for use by the research community.",
"title": ""
},
{
"docid": "9a4bdfe80a949ec1371a917585518ae4",
"text": "This article presents the event calculus, a logic-based formalism for representing actions and their effects. A circumscriptive solution to the frame problem is deployed which reduces to monotonic predicate completion. Using a number of benchmark examples from the literature, the formalism is shown to apply to a variety of domains, including those featuring actions with indirect effects, actions with non-deterministic effects, concurrent actions, and continuous change.",
"title": ""
},
{
"docid": "d5eb643385b573706c48cbb2cb3262df",
"text": "This article identifies problems and conditions that contribute to nipple pain during lactation and that may lead to early cessation or noninitiation of breastfeeding. Signs and symptoms of poor latch-on and positioning, oral anomalies, and suckling disorders are reviewed. Diagnosis and treatment of infectious agents that may cause nipple pain are presented. Comfort measures for sore nipples and current treatment recommendations for nipple wound healing are discussed. Suggestions are made for incorporating in-depth breastfeeding content into midwifery education programs.",
"title": ""
},
{
"docid": "55158927c639ed62b53904b97a0f7a97",
"text": "Speech comprehension and production are governed by control processes. We explore their nature and dynamics in bilingual speakers with a focus on speech production. Prior research indicates that individuals increase cognitive control in order to achieve a desired goal. In the adaptive control hypothesis we propose a stronger hypothesis: Language control processes themselves adapt to the recurrent demands placed on them by the interactional context. Adapting a control process means changing a parameter or parameters about the way it works (its neural capacity or efficiency) or the way it works in concert, or in cascade, with other control processes (e.g., its connectedness). We distinguish eight control processes (goal maintenance, conflict monitoring, interference suppression, salient cue detection, selective response inhibition, task disengagement, task engagement, opportunistic planning). We consider the demands on these processes imposed by three interactional contexts (single language, dual language, and dense code-switching). We predict adaptive changes in the neural regions and circuits associated with specific control processes. A dual-language context, for example, is predicted to lead to the adaptation of a circuit mediating a cascade of control processes that circumvents a control dilemma. Effective test of the adaptive control hypothesis requires behavioural and neuroimaging work that assesses language control in a range of tasks within the same individual.",
"title": ""
}
] |
scidocsrr
|
6a85ae55305bb0c330a82457f5994f53
|
Control parameter optimization for a microgrid system using particle swarm optimization
|
[
{
"docid": "6af7f70f0c9b752d3dbbe701cb9ede2a",
"text": "This paper addresses real and reactive power management strategies of electronically interfaced distributed generation (DG) units in the context of a multiple-DG microgrid system. The emphasis is primarily on electronically interfaced DG (EI-DG) units. DG controls and power management strategies are based on locally measured signals without communications. Based on the reactive power controls adopted, three power management strategies are identified and investigated. These strategies are based on 1) voltage-droop characteristic, 2) voltage regulation, and 3) load reactive power compensation. The real power of each DG unit is controlled based on a frequency-droop characteristic and a complimentary frequency restoration strategy. A systematic approach to develop a small-signal dynamic model of a multiple-DG microgrid, including real and reactive power management strategies, is also presented. The microgrid eigen structure, based on the developed model, is used to 1) investigate the microgrid dynamic behavior, 2) select control parameters of DG units, and 3) incorporate power management strategies in the DG controllers. The model is also used to investigate sensitivity of the design to changes of parameters and operating point and to optimize performance of the microgrid system. The results are used to discuss applications of the proposed power management strategies under various microgrid operating conditions",
"title": ""
},
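For reference, the frequency- and voltage-droop characteristics mentioned in the abstract above are commonly written as a linear fall of frequency with real power output and of voltage magnitude with reactive power output. The sketch below uses placeholder per-unit gains; the actual gains, restoration strategy, and compensation logic studied in the paper differ.

```python
def droop_setpoints(P, Q, f_nom=60.0, V_nom=1.0, P_rated=1.0, Q_rated=1.0, mp=0.01, nq=0.05):
    """Standard droop characteristics: frequency droops with real power,
    voltage droops with reactive power (per-unit quantities, gains assumed)."""
    f = f_nom - mp * f_nom * (P / P_rated)     # frequency-droop characteristic
    V = V_nom - nq * V_nom * (Q / Q_rated)     # voltage-droop characteristic
    return f, V

# Example: a DG unit loaded at 0.8 pu real and 0.3 pu reactive power.
print(droop_setpoints(P=0.8, Q=0.3))
```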
{
"docid": "56b58efbeab10fa95e0f16ad5924b9e5",
"text": "This paper investigates (i) preplanned switching events and (ii) fault events that lead to islanding of a distribution subsystem and formation of a micro-grid. The micro-grid includes two distributed generation (DG) units. One unit is a conventional rotating synchronous machine and the other is interfaced through a power electronic converter. The interface converter of the latter unit is equipped with independent real and reactive power control to minimize islanding transients and maintain both angle stability and voltage quality within the micro-grid. The studies are performed based on a digital computer simulation approach using the PSCAD/EMTDC software package. The studies show that an appropriate control strategy for the power electronically interfaced DG unit can ensure stability of the micro-grid and maintain voltage quality at designated buses, even during islanding transients. This paper concludes that presence of an electronically-interfaced DG unit makes the concept of micro-grid a technically viable option for further investigations.",
"title": ""
},
{
"docid": "a5911891697a1b2a407f231cf0ad6c28",
"text": "In this paper, a new control method for the parallel operation of inverters operating in an island grid or connected to an infinite bus is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between inverters. Each inverter supplies a current that is the result of the voltage difference between a reference ac voltage source and the grid voltage across a virtual complex impedance. The reference ac voltage source is synchronized with the grid, with a phase shift, depending on the difference between rated and actual grid frequency. A detailed analysis shows that this approach has a superior behavior compared to existing methods, regarding the mitigation of voltage harmonics, short-circuit behavior and the effectiveness of the frequency and voltage control, as it takes the R to X line impedance ratio into account. Experiments show the behavior of the method for an inverter feeding a highly nonlinear load and during the connection of two parallel inverters in operation.",
"title": ""
}
] |
[
{
"docid": "ce7fdc16d6d909a4e0c3294ed55af51d",
"text": "In this work, we perform an empirical comparison among the CTC, RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech recognition. We show that, without any language model, Seq2Seq and RNN-Transducer models both outperform the best reported CTC models with a language model, on the popular Hub5'00 benchmark. On our internal diverse dataset, these trends continue — RNN-Transducer models rescored with a language model after beam search outperform our best CTC models. These results simplify the speech recognition pipeline so that decoding can now be expressed purely as neural network operations. We also study how the choice of encoder architecture affects the performance of the three models — when all encoder layers are forward only, and when encoders downsample the input representation aggressively.",
"title": ""
},
{
"docid": "ba19b5bc7aabecf8b8947cfa07b47237",
"text": "We consider the problem of sparse coding, where each sample consists of a sparse linear combination of a set of dictionary atoms, and the task is to learn both the dictionary elements and the mixing coefficients. Alternating minimization is a popular heuristic for sparse coding, where the dictionary and the coefficients are estimated in alternate steps, keeping the other fixed. Typically, the coefficients are estimated via l1 minimization, keeping the dictionary fixed, and the dictionary is estimated through least squares, keeping the coefficients fixed. In this paper, we establish local linear convergence for this variant of alternating minimization and establish that the basin of attraction for the global optimum (corresponding to the true dictionary and the coefficients) is O ( 1/s ) , where s is the sparsity level in each sample and the dictionary satisfies RIP. Combined with the recent results of approximate dictionary estimation, this yields provable guarantees for exact recovery of both the dictionary elements and the coefficients, when the dictionary elements are incoherent.",
"title": ""
},
{
"docid": "6db439b2753b9b6b8a298292410ca6f6",
"text": "MOTIVATION\nMost existing methods for predicting causal disease genes rely on specific type of evidence, and are therefore limited in terms of applicability. More often than not, the type of evidence available for diseases varies-for example, we may know linked genes, keywords associated with the disease obtained by mining text, or co-occurrence of disease symptoms in patients. Similarly, the type of evidence available for genes varies-for example, specific microarray probes convey information only for certain sets of genes. In this article, we apply a novel matrix-completion method called Inductive Matrix Completion to the problem of predicting gene-disease associations; it combines multiple types of evidence (features) for diseases and genes to learn latent factors that explain the observed gene-disease associations. We construct features from different biological sources such as microarray expression data and disease-related textual data. A crucial advantage of the method is that it is inductive; it can be applied to diseases not seen at training time, unlike traditional matrix-completion approaches and network-based inference methods that are transductive.\n\n\nRESULTS\nComparison with state-of-the-art methods on diseases from the Online Mendelian Inheritance in Man (OMIM) database shows that the proposed approach is substantially better-it has close to one-in-four chance of recovering a true association in the top 100 predictions, compared to the recently proposed Catapult method (second best) that has <15% chance. We demonstrate that the inductive method is particularly effective for a query disease with no previously known gene associations, and for predicting novel genes, i.e. genes that are previously not linked to diseases. Thus the method is capable of predicting novel genes even for well-characterized diseases. We also validate the novelty of predictions by evaluating the method on recently reported OMIM associations and on associations recently reported in the literature.\n\n\nAVAILABILITY\nSource code and datasets can be downloaded from http://bigdata.ices.utexas.edu/project/gene-disease.",
"title": ""
},
{
"docid": "b3f90386a9ef3bffeb618ab9304ee482",
"text": "The diagnosis process is often challenging, it involves the correlation of various pieces of information followed by several possible conclusions and iterations of diseases that may overload physicians when facing urgent cases that may lead to bad consequences threatening people's lives. The physician is asked to search for all symptoms related to a specific disease. To make this kind of search possible, there is a strong need for an effective way to store and retrieve medical knowledge from various datasets in order to find links between human disease and symptoms. For this purpose, we propose in this work a new Disease-Symptom Ontology (DS-Ontology). Utilizing existing biomedical ontologies, we integrate all available disease-symptom relationships to create a DS-Ontology that will be used latter in an ontology-based Clinical Decision Support System to determine a highly effective medical diagnosis.",
"title": ""
},
{
"docid": "d9fb3ab87d8050ec5957f9747dc1980d",
"text": "Maximally Stable Extremal Regions (MSERs) have achieved great success in scene text detection. However, this low-level pixel operation inherently limits its capability for handling complex text information efficiently (e. g. connections between text or background components), leading to the difficulty in distinguishing texts from background components. In this paper, we propose a novel framework to tackle this problem by leveraging the high capability of convolutional neural network (CNN). In contrast to recent methods using a set of low-level heuristic features, the CNN network is capable of learning high-level features to robustly identify text components from text-like outliers (e.g. bikes, windows, or leaves). Our approach takes advantages of both MSERs and slidingwindow based methods. The MSERs operator dramatically reduces the number of windows scanned and enhances detection of the low-quality texts. While the sliding-window with CNN is applied to correctly separate the connections of multiple characters in components. The proposed system achieved strong robustness against a number of extreme text variations and serious real-world problems. It was evaluated on the ICDAR 2011 benchmark dataset, and achieved over 78% in F-measure, which is significantly higher than previous methods.",
"title": ""
},
{
"docid": "e9474d646b9da5e611475f4cdfdfc30e",
"text": "Wearable medical sensors (WMSs) are garnering ever-increasing attention from both the scientific community and the industry. Driven by technological advances in sensing, wireless communication, and machine learning, WMS-based systems have begun transforming our daily lives. Although WMSs were initially developed to enable low-cost solutions for continuous health monitoring, the applications of WMS-based systems now range far beyond health care. Several research efforts have proposed the use of such systems in diverse application domains, e.g., education, human-computer interaction, and security. Even though the number of such research studies has grown drastically in the last few years, the potential challenges associated with their design, development, and implementation are neither well-studied nor well-recognized. This article discusses various services, applications, and systems that have been developed based on WMSs and sheds light on their design goals and challenges. We first provide a brief history of WMSs and discuss how their market is growing. We then discuss the scope of applications of WMS-based systems. Next, we describe the architecture of a typical WMS-based system and the components that constitute such a system, and their limitations. Thereafter, we suggest a list of desirable design goals that WMS-based systems should satisfy. Finally, we discuss various research directions related to WMSs and how previous research studies have attempted to address the limitations of the components used in WMS-based systems and satisfy the desirable design goals.",
"title": ""
},
{
"docid": "3ba87a9a84f317ef3fd97c79f86340c1",
"text": "Programmers often need to reason about how a program evolved between two or more program versions. Reasoning about program changes is challenging as there is a significant gap between how programmers think about changes and how existing program differencing tools represent such changes. For example, even though modification of a locking protocol is conceptually simple and systematic at a code level, diff extracts scattered text additions and deletions per file. To enable programmers to reason about program differences at a high level, this paper proposes a rule-based program differencing approach that automatically discovers and represents systematic changes as logic rules. To demonstrate the viability of this approach, we instantiated this approach at two different abstraction levels in Java: first at the level of application programming interface (API) names and signatures, and second at the level of code elements (e.g., types, methods, and fields) and structural dependences (e.g., method-calls, field-accesses, and subtyping relationships). The benefit of this approach is demonstrated through its application to several open source projects as well as a focus group study with professional software engineers from a large e-commerce company.",
"title": ""
},
{
"docid": "9409882dd0cf21ef9eddd7681811bd9f",
"text": "Recently, the Particle Swarm Optimization (PSO) technique has gained much attention in the field of time series forecasting. Although PSO trained Artificial Neural Networks (ANNs) performed reasonably well in stationary time series forecasting, their effectiveness in tracking the structure of non-stationary data (especially those which contain trends or seasonal patterns) is yet to be justified. In this paper, we have trained neural networks with two types of PSO (Trelea1 and Trelea2) for forecasting seasonal time series data. To assess their performances, experiments are conducted on three well-known real world seasonal time series. Obtained forecast errors in terms of three common performance measures, viz. MSE, MAE and MAPE for each dataset are compared with those obtained by the Seasonal ANN (SANN) model, trained with a standard backpropagation algorithm. Comparisons demonstrate that training with PSO-Trelea1 and PSO-Trelea2 produced significantly better results than the standard backpropagation rule.",
"title": ""
},
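As a companion to the abstract above, a baseline global-best particle swarm optimizer is sketched below in NumPy. It minimizes an arbitrary objective (here a toy quadratic standing in for a network's training loss); the Trelea1/Trelea2 parameter settings evaluated in the paper are not reproduced, and all constants are assumptions.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Global-best PSO: each particle is pulled toward its own best position
    (pbest) and the swarm's best position (gbest)."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    gbest_val = pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        if vals.min() < gbest_val:
            gbest, gbest_val = x[vals.argmin()].copy(), vals.min()
    return gbest, gbest_val

# Toy usage: minimize a shifted sphere function (stand-in for a forecasting network's loss).
best, best_val = pso(lambda p: np.sum((p - 1.0) ** 2), dim=5)
print(best, best_val)
```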
{
"docid": "1fb13cda340d685289f1863bb2bfd62b",
"text": "1 Assistant Professor, Department of Prosthodontics, Ibn-e-Siena Hospital and Research Institute, Multan Medical and Dental College, Multan, Pakistan 2 Assistant Professor, Department of Prosthodontics, College of Dentistry, King Saud University, Riyadh, Saudi Arabia 3 Head Department of Prosthodontics, Armed Forces Institute of Dentistry, Rawalpindi, Pakistan For Correspondence: Dr Salman Ahmad, House No 10, Street No 2, Gulshan Sakhi Sultan Colony, Surej Miani Road, Multan, Pakistan. Email: drsalman21@gmail.com. Cell: 0300–8732017 INTRODUCTION",
"title": ""
},
{
"docid": "2a811ac141a9c5fb0cea4b644b406234",
"text": "Leadership is a process influence between leaders and subordinates where a leader attempts to influence the behaviour of subordinates to achieve the organizational goals. Organizational success in achieving its goals and objectives depends on the leaders of the organization and their leadership styles. By adopting the appropriate leadership styles, leaders can affect employee job satisfaction, commitment and productivity. Two hundred Malaysian executives working in public sectors voluntarily participated in this study. Two types of leadership styles, namely, transactional and transformational were found to have direct relationships with employees’ job satisfaction. The results showed that transformational leadership style has a stronger relationship with job satisfaction. This implies that transformational leadership is deemed suitable for managing government organizations. Implications of the findings were discussed further.",
"title": ""
},
{
"docid": "2d43992a8eb6e97be676c04fc9ebd8dd",
"text": "Social interactions and interpersonal communication has undergone significant changes in recent years. Increasing awareness of privacy issues and events such as the Snowden disclosures have led to the rapid growth of a new generation of anonymous social networks and messaging applications. By removing traditional concepts of strong identities and social links, these services encourage communication between strangers, and allow users to express themselves without fear of bullying or retaliation.\n Despite millions of users and billions of monthly page views, there is little empirical analysis of how services like Whisper have changed the shape and content of social interactions. In this paper, we present results of the first large-scale empirical study of an anonymous social network, using a complete 3-month trace of the Whisper network covering 24 million whispers written by more than 1 million unique users. We seek to understand how anonymity and the lack of social links affect user behavior. We analyze Whisper from a number of perspectives, including the structure of user interactions in the absence of persistent social links, user engagement and network stickiness over time, and content moderation in a network with minimal user accountability. Finally, we identify and test an attack that exposes Whisper users to detailed location tracking. We have notified Whisper and they have taken steps to address the problem.",
"title": ""
},
{
"docid": "066d3a381ffdb2492230bee14be56710",
"text": "The third generation partnership project released its first 5G security specifications in March 2018. This paper reviews the proposed security architecture and its main requirements and procedures and evaluates them in the context of known and new protocol exploits. Although security has been improved from previous generations, our analysis identifies potentially unrealistic 5G system assumptions and protocol edge cases that can render 5G communication systems vulnerable to adversarial attacks. For example, null encryption and null authentication are still supported and can be used in valid system configurations. With no clear proposal to tackle pre-authentication message-based exploits, mobile devices continue to implicitly trust any serving network, which may or may not enforce a number of optional security features, or which may not be legitimate. Moreover, several critical security and key management functions are considered beyond the scope of the specifications. The comparison with known 4G long-term evolution protocol exploits reveals that the 5G security specifications, as of Release 15, Version 1.0.0, do not fully address the user privacy and network availability challenges.",
"title": ""
},
{
"docid": "900a33dd42a9e55e1c00216a621daa33",
"text": "There is a current trend to support pet health through the addition of natural supplements to their diet, taking into account the high incidence of medical conditions related to their immune system and gastrointestinal tract. This study investigates effects of the plant Eleutherococcus senticosus as a dietary additive on faecal microbiota, faecal characteristics, blood serum biochemistry and selected parameters of cellular immunity in healthy dogs. A combination of the plant with the canine-derived probiotic strain Lactobacillus fermentum CCM 7421 was also evaluated. Thirty-two dogs were devided into 4 treatment groups; receiving no additive (control), dry root extract of E. senticosus (8 mg/kg of body weight), probiotic strain (108 CFU/mL, 0.1 mL/kg bw) and the combination of both additives. The trial lasted 49 days with 14 days supplementation period. Results confirm no antimicrobial effect of the plant on the probiotic abundance either in vitro (cultivation test) or in vivo. The numbers of clostridia, lactic acid bacteria and Gram-negative bacteria as well as the concentration of serum total protein, triglyceride, glucose and aspartate aminotransferase were significantly altered according to the treatment group. Leukocyte phagocytosis was significantly stimulated by the addition of probiotic while application of plant alone led to a significant decrease.",
"title": ""
},
{
"docid": "19ee4367e4047f45b60968e3374cae7a",
"text": "BACKGROUND\nFusion zones between superficial fascia and deep fascia have been recognized by surgical anatomists since 1938. Anatomical dissection performed by the author suggested that additional superficial fascia fusion zones exist.\n\n\nOBJECTIVES\nA study was performed to evaluate and define fusion zones between the superficial and the deep fascia.\n\n\nMETHODS\nDissection of fresh and minimally preserved cadavers was performed using the accepted technique for defining anatomic spaces: dye injection combined with cross-sectional anatomical dissection.\n\n\nRESULTS\nThis study identified bilaminar membranes traveling from deep to superficial fascia at consistent locations in all specimens. These membranes exist as fusion zones between superficial and deep fascia, and are referred to as SMAS fusion zones.\n\n\nCONCLUSIONS\nNerves, blood vessels and lymphatics transition between the deep and superficial fascia of the face by traveling along and within these membranes, a construct that provides stability and minimizes shear. Bilaminar subfascial membranes continue into the subcutaneous tissues as unilaminar septa on their way to skin. This three-dimensional lattice of interlocking horizontal, vertical, and oblique membranes defines the anatomic boundaries of the fascial spaces as well as the deep and superficial fat compartments of the face. This information facilitates accurate volume augmentation; helps to avoid facial nerve injury; and provides the conceptual basis for understanding jowls as a manifestation of enlargement of the buccal space that occurs with age.",
"title": ""
},
{
"docid": "9343521f74c244255ee6340b33947427",
"text": "Using a community sample of 192 adult women who had been sexually abused during childhood, the present study tested the hypothesis that perceived stigma, betrayal, powerlessness, and self-blame mediate the long-term effects of child sexual abuse. A path analysis indicated that the level of psychological distress currently experienced by adult women who had been sexually abused in childhood was mediated by feelings of stigma and self-blame. This result provides partial support for Finkelhor and Browne's (1985) traumagenic dynamics model of child sexual abuse. The limitations of the study are discussed.",
"title": ""
},
{
"docid": "166e615188d168d89fcd091871727344",
"text": "Two methods are analyzed for inertially stabilizing the pointing vector defining the line of sight (LOS) of a two-axis gimbaled laser tracker. Mounting the angular rate and acceleration sensors directly on the LOS axes is often used for precision pointing applications. This configuration impacts gimbal size, and the sensors must be capable of withstanding high angular slew rates. With the other stabilization method, sensors are mounted on the gimbal base, which alleviates some issues with the direct approach but may be less efficient, since disturbances are not measured in the LOS coordinate frame. This paper investigates the impact of LOS disturbances and sensor noise on the performance of each stabilization control loop configuration. It provides a detailed analysis of the mechanisms by which disturbances are coupled to the LOS track vector for each approach, and describes the advantages and disadvantages of each. It concludes with a performance comparison based upon simulated sensor noise and three sets of platform disturbance inputs ranging from mild to harsh disturbance environments.",
"title": ""
},
{
"docid": "40cd4d0863ed757709530af59e928e3b",
"text": "Kynurenic acid (KYNA) is an endogenous antagonist of ionotropic glutamate receptors and the α7 nicotinic acetylcholine receptor, showing anticonvulsant and neuroprotective activity. In this study, the presence of KYNA in food and honeybee products was investigated. KYNA was found in all 37 tested samples of food and honeybee products. The highest concentration of KYNA was obtained from honeybee products’ samples, propolis (9.6 nmol/g), honey (1.0–4.8 nmol/g) and bee pollen (3.4 nmol/g). A high concentration was detected in fresh broccoli (2.2 nmol/g) and potato (0.7 nmol/g). Only traces of KYNA were found in some commercial baby products. KYNA administered intragastrically in rats was absorbed from the intestine into the blood stream and transported to the liver and to the kidney. In conclusion, we provide evidence that KYNA is a constituent of food and that it can be easily absorbed from the digestive system.",
"title": ""
},
{
"docid": "232891b57ea0ca1852fbe3e63157db26",
"text": "With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. IoT is expected to connect billions of devices and humans to bring promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms, such as multi-access edge computing (MEC) and cloudlet, are seen as promising solutions for handling the large volume of securitycritical and time-sensitive data that is being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey, we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing.",
"title": ""
},
{
"docid": "5fde7006ec6f7cf4f945b234157e5791",
"text": "In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to the state-of-the-art performance SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use.",
"title": ""
}
] |
scidocsrr
|
6a59e3be455be852d3d04e46ad6f3d1d
|
IEEE 802.15.4 security sublayer for OMNET++
|
[
{
"docid": "beec3b6b4e5ecaa05d6436426a6d93b7",
"text": "This paper introduces a 6LoWPAN simulation model for OMNeT++. Providing a 6LoWPAN model is an important step to advance OMNeT++-based Internet of Things simulations. We integrated Contiki’s 6LoWPAN implementation into OMNeT++ in order to avoid problems of non-standard compliant, non-interoperable, or highly abstracted and thus unreliable simulation models. The paper covers the model’s structure as well as its integration and the generic interaction between OMNeT++ / INET and Contiki.",
"title": ""
}
] |
[
{
"docid": "3c08e42ad9e6a2f2e7a29a187d8a791e",
"text": "An integrated single-inductor dual-output boost converter is presented. This converter adopts time-multiplexing control in providing two independent supply voltages (3.0 and 3.6 V) using only one 1H off-chip inductor and a single control loop. This converter is analyzed and compared with existing counterparts in the aspects of integration, architecture, control scheme, and system stability. Implementation of the power stage, the controller, and the peripheral functional blocks is discussed. The design was fabricated with a standard 0.5m CMOS n-well process. At an oscillator frequency of 1 MHz, the power conversion efficiency reaches 88.4% at a total output power of 350 mW. This topology can be extended to have multiple outputs and can be applied to buck, flyback, and other kinds of converters.",
"title": ""
},
{
"docid": "a9612aacde205be2d753c5119b9d95d3",
"text": "We propose a multi-object multi-camera framework for tracking large numbers of tightly-spaced objects that rapidly move in three dimensions. We formulate the problem of finding correspondences across multiple views as a multidimensional assignment problem and use a greedy randomized adaptive search procedure to solve this NP-hard problem efficiently. To account for occlusions, we relax the one-to-one constraint that one measurement corresponds to one object and iteratively solve the relaxed assignment problem. After correspondences are established, object trajectories are estimated by stereoscopic reconstruction using an epipolar-neighborhood search. We embedded our method into a tracker-to-tracker multi-view fusion system that not only obtains the three-dimensional trajectories of closely-moving objects but also accurately settles track uncertainties that could not be resolved from single views due to occlusion. We conducted experiments to validate our greedy assignment procedure and our technique to recover from occlusions. We successfully track hundreds of flying bats and provide an analysis of their group behavior based on 150 reconstructed 3D trajectories.",
"title": ""
},
{
"docid": "ccce778a661b2f4a1689da1ac190b2a6",
"text": "Neural Networks sequentially build high-level features through their successive layers. We propose here a new neural network model where each layer is associated with a set of candidate mappings. When an input is processed, at each layer, one mapping among these candidates is selected according to a sequential decision process. The resulting model is structured according to a DAG like architecture, so that a path from the root to a leaf node defines a sequence of transformations. Instead of considering global transformations, like in classical multilayer networks, this model allows us for learning a set of local transformations. It is thus able to process data with different characteristics through specific sequences of such local transformations, increasing the expression power of this model w.r.t a classical multilayered network. The learning algorithm is inspired from policy gradient techniques coming from the reinforcement learning domain and is used here instead of the classical back-propagation based gradient descent techniques. Experiments on different datasets show the relevance of this approach.",
"title": ""
},
{
"docid": "f702a8c28184a6d49cd2f29a1e4e7ea4",
"text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.",
"title": ""
},
{
"docid": "4d75db0597f4ca4d4a3abba398e99cb4",
"text": "Coverage path planning determines a path that guides an autonomous vehicle to pass every part of a workspace completely and efficiently. Since turns are often costly for autonomous vehicles, minimizing the number of turns usually produces more working efficiency. This paper presents an optimization approach to minimize the number of turns of autonomous vehicles in coverage path planning. For complex polygonal fields, the problem is reduced to finding the optimal decomposition of the original field into simple subfields. The optimization criterion is minimization of the sum of widths of these decomposed subfields. Here, a new algorithm is designed based on a multiple sweep line decomposition. The time complexity of the proposed algorithm is O(n2 log n). Experiments show that the proposed algorithm can provide nearly optimal solutions very efficiently when compared against recent state-of-the-art. The proposed algorithm can be applied for both convex and non-convex fields.",
"title": ""
},
{
"docid": "83f3c9f161b9871b59376ba4d415ebcc",
"text": "Much work has been done in understanding human creativity and defining measures to evaluate creativity. This is necessary mainly for the reason of having an objective and automatic way of quantifying creative artifacts. In this work, we propose a regression-based learning framework which takes into account quantitatively the essential criteria for creativity like novelty, influence, value and unexpectedness. As it is often the case with most creative domains, there is no clear ground truth available for creativity. Our proposed learning framework is applicable to all creative domains; yet we evaluate it on a dataset of movies created from IMDb and Rotten Tomatoes due to availability of audience and critic scores, which can be used as proxy ground truth labels for creativity. We report promising results and observations from our experiments in the following ways : 1) Correlation of creative criteria with critic scores, 2) Improvement in movie rating prediction with inclusion of various creative criteria, and 3) Identification of creative movies.",
"title": ""
},
{
"docid": "89432b112f153319d3a2a816c59782e3",
"text": "The Eyelink Toolbox software supports the measurement of eye movements. The toolbox provides an interface between a high-level interpreted language (MATLAB), a visual display programming toolbox (Psychophysics Toolbox), and a video-based eyetracker (Eyelink). The Eyelink Toolbox enables experimenters to measure eye movements while simultaneously executing the stimulus presentation routines provided by the Psychophysics Toolbox. Example programs are included with the toolbox distribution. Information on the Eyelink Toolbox can be found at http://psychtoolbox.org/.",
"title": ""
},
{
"docid": "fe3afe69ec27189400e65e8bdfc5bf0b",
"text": "speech learning changes over the life span and to explain why \"earlier is better\" as far as learning to pronounce a second language (L2) is concerned. An assumption we make is that the phonetic systems used in the production and perception of vowels and consonants remain adaptive over the life span, and that phonetic systems reorganize in response to sounds encountered in an L2 through the addition of new phonetic categories, or through the modification of old ones. The chapter is organized in the following way. Several general hypotheses concerning the cause of foreign accent in L2 speech production are summarized in the introductory section. In the next section, a model of L2 speech learning that aims to account for age-related changes in L2 pronunciation is presented. The next three sections present summaries of empirical research dealing with the production and perception of L2 vowels, word-initial consonants, and word-final consonants. The final section discusses questions of general theoretical interest, with special attention to a featural (as opposed to a segmental) level of analysis. Although nonsegmental (Le., prosodic) dimensions are an important source of foreign accent, the present chapter focuses on phoneme-sized units of speech. Although many different languages are learned as an L2, the focus is on the acquisition of English.",
"title": ""
},
{
"docid": "9fe198a6184a549ff63364e9782593d8",
"text": "Node embedding techniques have gained prominence since they produce continuous and low-dimensional features, which are effective for various tasks. Most existing approaches learn node embeddings by exploring the structure of networks and are mainly focused on static non-attributed graphs. However, many real-world applications, such as stock markets and public review websites, involve bipartite graphs with dynamic and attributed edges, called attributed interaction graphs. Different from conventional graph data, attributed interaction graphs involve two kinds of entities (e.g. investors/stocks and users/businesses) and edges of temporal interactions with attributes (e.g. transactions and reviews). In this paper, we study the problem of node embedding in attributed interaction graphs. Learning embeddings in interaction graphs is highly challenging due to the dynamics and heterogeneous attributes of edges. Different from conventional static graphs, in attributed interaction graphs, each edge can have totally different meanings when the interaction is at different times or associated with different attributes. We propose a deep node embedding method called IGE (Interaction Graph Embedding). IGE is composed of three neural networks: an encoding network is proposed to transform attributes into a fixed-length vector to deal with the heterogeneity of attributes; then encoded attribute vectors interact with nodes multiplicatively in two coupled prediction networks that investigate the temporal dependency by treating incident edges of a node as the analogy of a sentence in word embedding methods. The encoding network can be specifically designed for different datasets as long as it is differentiable, in which case it can be trained together with prediction networks by back-propagation. We evaluate our proposed method and various comparing methods on four real-world datasets. The experimental results prove the effectiveness of the learned embeddings by IGE on both node clustering and classification tasks.",
"title": ""
},
{
"docid": "aadc952471ecd67d0c0731fa5a375872",
"text": "As the aircraft industry is moving towards the all electric and More Electric Aircraft (MEA), there is increase demand for electrical power in the aircraft. The trend in the aircraft industry is to replace hydraulic and pneumatic systems with electrical systems achieving more comfort and monitoring features. Moreover, the structure of MEA distribution system improves aircraft maintainability, reliability, flight safety and efficiency. Detailed descriptions of the modern MEA generation and distribution systems as well as the power converters and load types are explained and outlined. MEA electrical distribution systems are mainly in the form of multi-converter power electronic system.",
"title": ""
},
{
"docid": "189709296668a8dd6f7be8e1b2f2e40f",
"text": "Uncertain data management, querying and mining have become important because the majority of real world data is accompanied with uncertainty these days. Uncertainty in data is often caused by the deficiency in underlying data collecting equipments or sometimes manually introduced to preserve data privacy. This work discusses the problem of distance-based outlier detection on uncertain datasets of Gaussian distribution. The Naive approach of distance-based outlier on uncertain data is usually infeasible due to expensive distance function. Therefore a cell-based approach is proposed in this work to quickly identify the outliers. The infinite nature of Gaussian distribution prevents to devise effective pruning techniques. Therefore an approximate approach using bounded Gaussian distribution is also proposed. Approximating Gaussian distribution by bounded Gaussian distribution enables an approximate but more efficient cell-based outlier detection approach. An extensive empirical study on synthetic and real datasets show that our proposed approaches are effective, efficient and scalable.",
"title": ""
},
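For context on the abstract above, the classical certain-data formulation it builds on can be sketched as follows: a point is a distance-based outlier if fewer than k other points fall within radius r of it. This is only the naive O(n^2) baseline; the cell-based pruning and the Gaussian-uncertainty handling proposed in the paper are not shown.

```python
import numpy as np

def distance_based_outliers(points, r, k):
    """Naive detector: flag points with fewer than k neighbours inside radius r
    (the brute-force baseline that cell-based pruning is designed to accelerate)."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    neighbours = (dists <= r).sum(axis=1) - 1       # exclude the point itself
    return np.flatnonzero(neighbours < k)

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, size=(200, 2)),  # dense cluster
                  np.array([[8.0, 8.0]])])          # an obvious outlier far from the cluster
print(distance_based_outliers(data, r=1.0, k=5))    # indices of flagged points
```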
{
"docid": "c7e3fc9562a02818bba80d250241511d",
"text": "Convolutional networks trained on large supervised dataset produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weaklylabeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and captions, and show that these networks produce features that perform well in a range of vision problems. We also show that the networks appropriately capture word similarity, and learn correspondences between different languages.",
"title": ""
},
{
"docid": "4f7b6ad29f8a6cbe871ed5a6a9e75896",
"text": "Copyright: © 2017. The Author(s). Licensee: AOSIS. This work is licensed under the Creative Commons Attribution License. Introduction Glaucoma is an optic neuropathy that sometimes results in irreversible blindness.1 After cataracts, glaucoma is the second most prevalent cause of global blindness,2 and it is estimated that almost 80 million people worldwide will be affected by this optic neuropathy by the year 2020.3 Because of the high prevalence of this ocular disease, the economic and social implications of glaucoma have been outlined in recent studies.4,5 In Africa, primary open-angle glaucoma (POAG) is more prevalent than primary-angle closure glaucoma, and over the next 4 years, the prevalence of POAG in Africa is projected to increase by 23% corresponding to an increase from 6.2 million to 8.0 million affected individuals.3 Consequently, in Africa, there have been recommendations to incorporate glaucoma screening procedures into routine eye examinations as well as implement glaucoma blindness control programs.6,7",
"title": ""
},
{
"docid": "cd59460d293aa7ecbb9d7b96ed451b9a",
"text": "PURPOSE\nThe prevalence of work-related upper extremity musculoskeletal disorders and visual symptoms reported in the USA has increased dramatically during the past two decades. This study examined the factors of computer use, workspace design, psychosocial factors, and organizational ergonomics resources on musculoskeletal and visual discomfort and their impact on the safety and health of computer work employees.\n\n\nMETHODS\nA large-scale, cross-sectional survey was administered to a US manufacturing company to investigate these relationships (n = 1259). Associations between these study variables were tested along with moderating effects framed within a conceptual model.\n\n\nRESULTS\nSignificant relationships were found between computer use and psychosocial factors of co-worker support and supervisory relations with visual and musculoskeletal discomfort. Co-worker support was found to be significantly related to reports of eyestrain, headaches, and musculoskeletal discomfort. Supervisor relations partially moderated the relationship between workspace design satisfaction and visual and musculoskeletal discomfort.\n\n\nCONCLUSION\nThis study provides guidance for developing systematic, preventive measures and recommendations in designing office ergonomics interventions with the goal of reducing musculoskeletal and visual discomfort while enhancing office and computer workers' performance and safety.",
"title": ""
},
{
"docid": "f2395e705e84548186a57b2a199c1ddd",
"text": "Full-duplex technology is likely to be adopted in various legacy communications standards. The IEEE 802.11ax Working Group has been considering a simultaneous transmit and receive (STR) mode for the next generation WLANs. Enabling STR mode (FD communication mode) in 802.11 networks creates bidirectional FD (BFD) and unidirectional FD (UFD) links. The key challenge is to integrate STR mode with minimal protocol modifications, while considering the coexistence of FD and legacy half-duplex STAs and backward compatibility. This article proposes a simple and practical approach to enable STR mode in 802.11 networks with coexisting FD and HD STAs. The protocol explicitly accounts for the peculiarities of FD environments and backward compatibility. Key aspects of the proposed solution include FD capability discovery, a handshake mechanism for channel access, node selection for UFD transmission, adaptive ACK timeout for STAs engaged in BFD or UFD transmission, and mitigation of contention unfairness. Performance evaluation demonstrates the effectiveness of the proposed solution in realizing the gains of FD technology for next generation WLANs.",
"title": ""
},
{
"docid": "6cc203d16e715cbd71efdeca380f3661",
"text": "PURPOSE\nTo determine a population-based estimate of communication disorders (CDs) in children; the co-occurrence of intellectual disability (ID), autism, and emotional/behavioral disorders; and the impact of these conditions on the prevalence of CDs.\n\n\nMETHOD\nSurveillance targeted 8-year-olds born in 1994 residing in 2002 in the 3 most populous counties in Utah (n = 26,315). A multiple-source record review was conducted at all major health and educational facilities.\n\n\nRESULTS\nA total of 1,667 children met the criteria of CD. The prevalence of CD was estimated to be 63.4 per 1,000 8-year-olds (95% confidence interval = 60.4-66.2). The ratio of boys to girls was 1.8:1. Four percent of the CD cases were identified with an ID and 3.7% with autism spectrum disorders (ASD). Adjusting the CD prevalence to exclude ASD and/or ID cases significantly affected the CD prevalence rate. Other frequently co-occurring emotional/behavioral disorders with CD were attention deficit/hyperactivity disorder, anxiety, and conduct disorder.\n\n\nCONCLUSIONS\nFindings affirm that CDs and co-occurring mental health conditions are a major educational and public health concern.",
"title": ""
},
{
"docid": "951213cd4412570709fb34f437a05c72",
"text": "In this paper, we present directional skip-gram (DSG), a simple but effective enhancement of the skip-gram model by explicitly distinguishing left and right context in word prediction. In doing so, a direction vector is introduced for each word, whose embedding is thus learned by not only word co-occurrence patterns in its context, but also the directions of its contextual words. Theoretical and empirical studies on complexity illustrate that our model can be trained as efficient as the original skip-gram model, when compared to other extensions of the skip-gram model. Experimental results show that our model outperforms others on different datasets in semantic (word similarity measurement) and syntactic (partof-speech tagging) evaluations, respectively.",
"title": ""
},
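A toy rendering of the idea in the abstract above: augment the skip-gram score for a (target, context) pair with a per-word direction vector whose contribution is signed by whether the context lies to the left or right of the target. The scoring form, dimensions, and initialisation below are assumptions for illustration; the published DSG objective differs in detail.

```python
import numpy as np

def dsg_style_score(v_w, u_c, delta_c, direction):
    """Toy scoring: the plain skip-gram dot product plus a term coupling the
    target word with the context word's direction vector, signed by whether the
    context is to the left (-1) or right (+1) of the target word."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sigmoid(v_w @ u_c + direction * (v_w @ delta_c))

dim = 50
rng = np.random.default_rng(2)
v_w, u_c, delta_c = (rng.normal(scale=0.1, size=dim) for _ in range(3))
# Same word pair scored with the context on the right vs. on the left.
print(dsg_style_score(v_w, u_c, delta_c, +1), dsg_style_score(v_w, u_c, delta_c, -1))
```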
{
"docid": "d88e4d9bba66581be16c9bd59d852a66",
"text": "After five decades characterized by empiricism and several pitfalls, some of the basic mechanisms of action of ozone in pulmonary toxicology and in medicine have been clarified. The present knowledge allows to understand the prolonged inhalation of ozone can be very deleterious first for the lungs and successively for the whole organism. On the other hand, a small ozone dose well calibrated against the potent antioxidant capacity of blood can trigger several useful biochemical mechanisms and reactivate the antioxidant system. In detail, firstly ex vivo and second during the infusion of ozonated blood into the donor, the ozone therapy approach involves blood cells and the endothelium, which by transferring the ozone messengers to billions of cells will generate a therapeutic effect. Thus, in spite of a common prejudice, single ozone doses can be therapeutically used in selected human diseases without any toxicity or side effects. Moreover, the versatility and amplitude of beneficial effect of ozone applications have become evident in orthopedics, cutaneous, and mucosal infections as well as in dentistry.",
"title": ""
},
{
"docid": "ee4416a05b955cdbd83b1819f0152665",
"text": "relative densities of pharmaceutical solids play an important role in determining their performance (e.g., flow and compaction properties) in both tablet and capsule dosage forms. In this article, the authors report the densities of a wide variety of solid pharmaceutical formulations and intermediates. The variance of density with chemical structure, processing history, and dosage-form type is significant. This study shows that density can be used as an equipment-independent scaling parameter for several common drug-product manufacturing operations. any physical responses of powders, granules, and compacts such as powder flow and tensile strength are determined largely by their absolute and relative densities (1–8). Although measuring these properties is a simple task, a review of the literature reveals that a combined source of density data that formulation scientists can refer to does not exist. The purpose of this article is to provide such a reference source and to give insight about how these critical properties can be measured for common pharmaceutical solids and how they can be used for monitoring common drugproduct manufacturing operations.",
"title": ""
},
{
"docid": "2a0b81bbe867a5936dafc323d8563970",
"text": "Social network analysis has gained significant attention in recent years, largely due to the success of online social networking and media-sharing sites, and the consequent availability of a wealth of social network data. In spite of the growing interest, however, there is little understanding of the potential business applications of mining social networks. While there is a large body of research on different problems and methods for social network mining, there is a gap between the techniques developed by the research community and their deployment in real-world applications. Therefore the potential business impact of these techniques is still largely unexplored.\n In this article we use a business process classification framework to put the research topics in a business context and provide an overview of what we consider key problems and techniques in social network analysis and mining from the perspective of business applications. In particular, we discuss data acquisition and preparation, trust, expertise, community structure, network dynamics, and information propagation. In each case we present a brief overview of the problem, describe state-of-the art approaches, discuss business application examples, and map each of the topics to a business process classification framework. In addition, we provide insights on prospective business applications, challenges, and future research directions. The main contribution of this article is to provide a state-of-the-art overview of current techniques while providing a critical perspective on business applications of social network analysis and mining.",
"title": ""
}
] |
scidocsrr
|
0b4c67f00e1c7b55abfc05e06205b37a
|
Universal Transformers
|
[
{
"docid": "2a6aa350dd7ddc663aaaafe4d745845e",
"text": "Neural networks augmented with external memory have the ability to learn algorithmic solutions to complex tasks. These models appear promising for applications such as language modeling and machine translation. However, they scale poorly in both space and time as the amount of memory grows — limiting their applicability to real-world domains. Here, we present an end-to-end differentiable memory access scheme, which we call Sparse Access Memory (SAM), that retains the representational power of the original approaches whilst training efficiently with very large memories. We show that SAM achieves asymptotic lower bounds in space and time complexity, and find that an implementation runs 1,000⇥ faster and with 3,000⇥ less physical memory than non-sparse models. SAM learns with comparable data efficiency to existing models on a range of synthetic tasks and one-shot Omniglot character recognition, and can scale to tasks requiring 100,000s of time steps and memories. As well, we show how our approach can be adapted for models that maintain temporal associations between memories, as with the recently introduced Differentiable Neural Computer.",
"title": ""
},
{
"docid": "b4ab51818d868b2f9796540c71a7bd17",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
},
{
"docid": "87a11f6097cb853b7c98e17cdf97801e",
"text": "Recent work has shown that recurrent neural networks (RNNs) can implicitly capture and exploit hierarchical information when trained to solve common natural language processing tasks (Blevins et al., 2018) such as language modeling (Linzen et al., 2016; Gulordava et al., 2018) and neural machine translation (Shi et al., 2016). In contrast, the ability to model structured data with non-recurrent neural networks has received little attention despite their success in many NLP tasks (Gehring et al., 2017; Vaswani et al., 2017). In this work, we compare the two architectures—recurrent versus non-recurrent—with respect to their ability to model hierarchical structure and find that recurrency is indeed important for this purpose. The code and data used in our experiments is available at https://github.com/",
"title": ""
},
{
"docid": "98be2f8b10c618f9d2fc8183f289c739",
"text": "We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network [23] but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch [2] to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering [22] and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.",
"title": ""
}
] |
[
{
"docid": "9242d2e212cc20a6e921228bf090c130",
"text": "This paper includes two contributions. First, it proves that the series and shunt radiation components, corresponding to longitudinal and transversal electric fields, respectively, are always in phase quadrature in axially asymmetric periodic leaky-wave antennas (LWAs), so that these antennas are inherently elliptically polarized. This fact is theoretically proven and experimentally illustrated by two case-study examples, a composite right/left-handed (CRLH) LWA and a series-fed patch (SFP) LWA. Second, it shows (for the case of the SFP LWA) that the axial ratio is controlled and minimized by the degree of axial asymmetry.",
"title": ""
},
{
"docid": "afaed9813ab63d0f5a23648a1e0efadb",
"text": "We proposed novel airway segmentation methods in volumetric chest computed tomography (CT) using 2.5D convolutional neural net (CNN) and 3D CNN. A method with 2.5D CNN segments airways by voxel-by-voxel classification based on patches which are from three adjacent slices in each of the orthogonal directions including axial, sagittal, and coronal slices around each voxel, while 3D CNN segments by 3D patch-based semantic segmentation using modified 3D U-Net. The extra-validation of our proposed method was demonstrated in 20 test datasets of the EXACT’09 challenge. The detected tree length and the false positive rate was 60.1%, 4.56% for 2.5D CNN and 61.6%, 3.15% for 3D CNN. Our fully automated (end-to-end) segmentation method could be applied in radiological practice.",
"title": ""
},
{
"docid": "a066ff1b4dfa65a67b79200366021542",
"text": "OBJECTIVES\nWe sought to assess the shave biopsy technique, which is a new surgical procedure for complete removal of longitudinal melanonychia. We evaluated the quality of the specimen submitted for pathological examination, assessed the postoperative outcome, and ascertained its indication between the other types of matrix biopsies.\n\n\nDESIGN\nThis was a retrospective study performed at the dermatologic departments of the Universities of Liège and Brussels, Belgium, of 30 patients with longitudinal or total melanonychia.\n\n\nRESULTS\nPathological diagnosis was made in all cases; 23 patients were followed up during a period of 6 to 40 months. Seventeen patients had no postoperative nail plate dystrophy (74%) but 16 patients had recurrence of pigmentation (70%).\n\n\nLIMITATIONS\nThis was a retrospective study.\n\n\nCONCLUSIONS\nShave biopsy is an effective technique for dealing with nail matrix lesions that cause longitudinal melanonychia over 4 mm wide. Recurrence of pigmentation is the main drawback of the procedure.",
"title": ""
},
{
"docid": "4a518f4cdb34f7cff1d75975b207afe4",
"text": "In this paper, the design and measurement results of a highly efficient 1-Watt broadband class J SiGe power amplifier (PA) at 700 MHz are reported. Comparisons between a class J PA and a traditional class AB/B PA have been made, first through theoretical analysis in terms of load network, efficiency and bandwidth behavior, and secondly by bench measurement data. A single-ended power cell is designed and fabricated in the 0.35 μm IBM 5PAe SiGe BiCMOS technology with through-wafer-vias (TWVs). Watt-level output power with greater than 50% efficiency is achieved on bench across a wide bandwidth of 500 MHz to 900 MHz for the class J PA (i.e., >;57% bandwidth at the center frequency of 700 MHz). Psat of 30.9 dBm with 62% collector efficiency (CE) at 700 MHz is measured while the highest efficiency of 68.9% occurs at 650 MHz using a 4.2 V supply. Load network of this class J PA is realized with lumped passive components on a FR4 printed circuit board (PCB). A narrow-band class AB PA counterpart is also designed and fabricated for comparison. The data suggests that the broadband class J SiGe PA can be promising for future multi-band wireless applications.",
"title": ""
},
{
"docid": "892661d87138d49aab2a54b7557a7021",
"text": "Semantic part localization can facilitate fine-grained categorization by explicitly isolating subtle appearance differences associated with specific object parts. Methods for pose-normalized representations have been proposed, but generally presume bounding box annotations at test time due to the difficulty of object detection. We propose a model for fine-grained categorization that overcomes these limitations by leveraging deep convolutional features computed on bottom-up region proposals. Our method learns whole-object and part detectors, enforces learned geometric constraints between them, and predicts a fine-grained category from a pose-normalized representation. Experiments on the CaltechUCSD bird dataset confirm that our method outperforms state-of-the-art fine-grained categorization methods in an end-to-end evaluation without requiring a bounding box at test time.",
"title": ""
},
{
"docid": "a45b4d0237fdcfedf973ec639b1a1a36",
"text": "We investigated the brain systems engaged during propositional speech (PrSp) and two forms of non- propositional speech (NPrSp): counting and reciting overlearned nursery rhymes. Bilateral cerebral and cerebellar regions were involved in the motor act of articulation, irrespective of the type of speech. Three additional, left-lateralized regions, adjacent to the Sylvian sulcus, were activated in common: the most posterior part of the supratemporal plane, the lateral part of the pars opercularis in the posterior inferior frontal gyrus and the anterior insula. Therefore, both NPrSp and PrSp were dependent on the same discrete subregions of the anatomically ill-defined areas of Wernicke and Broca. PrSp was also dependent on a predominantly left-lateralized neural system distributed between multi-modal and amodal regions in posterior inferior parietal, anterolateral and medial temporal and medial prefrontal cortex. The lateral prefrontal and paracingulate cortical activity observed in previous studies of cued word retrieval was not seen with either NPrSp or PrSp, demonstrating that normal brain- language representations cannot be inferred from explicit metalinguistic tasks. The evidence from this study indicates that normal communicative speech is dependent on a number of left hemisphere regions remote from the classic language areas of Wernicke and Broca. Destruction or disconnection of discrete left extrasylvian and perisylvian cortical regions, rather than the total extent of damage to perisylvian cortex, will account for the qualitative and quantitative differences in the impaired speech production observed in aphasic stroke patients.",
"title": ""
},
{
"docid": "f3b4a9b49a34d56c32589cee14e6b900",
"text": "The paper reports on mobile robot motion estimation based on matching points from successive two-dimensional (2D) laser scans. This ego-motion approach is well suited to unstructured and dynamic environments because it directly uses raw laser points rather than extracted features. We have analyzed the application of two methods that are very different in essence: (i) A 2D version of iterative closest point (ICP), which is widely used for surface registration; (ii) a genetic algorithm (GA), which is a novel approach for this kind of problem. Their performance in terms of real-time applicability and accuracy has been compared in outdoor experiments with nonstop motion under diverse realistic navigation conditions. Based on this analysis, we propose a hybrid GA-ICP algorithm that combines the best characteristics of these pure methods. The experiments have been carried out with the tracked mobile robot Auriga-alpha and an on-board 2D laser scanner. _____________________________________________________________________________________ This document is a PREPRINT. The published version of the article is available in: Journal of Field Robotics, 23: 21–34. doi: 10.1002/rob.20104; http://dx.doi.org/10.1002/rob.20104.",
"title": ""
},
{
"docid": "b0087e2afdf5a1abc5046782279529a5",
"text": "The rapid development of Community Question Answering (CQA) satisfies users’ quest for professional and personal knowledge about anything. In CQA, one central issue is to find users with expertise and willingness to answer the given questions. Expert finding in CQA often exhibits very different challenges compared to traditional methods. The new features of CQA (such as huge volume, sparse data and crowdsourcing) violate fundamental assumptions of traditional recommendation systems. This paper focuses on reviewing and categorizing the current progress on expert finding in CQA. We classify the recent solutions into four different categories: matrix factorization based models (MF-based models), gradient boosting tree based models (GBT-based models), deep learning based models (DL-based models) and ranking based models (R-based models). We find that MF-based models outperform other categories of models in the crowdsourcing situation. Moreover, we use innovative diagrams to clarify several important concepts of ensemble learning, and find that ensemble models with several specific single models can further boost the performance. Further, we compare the performance of different models on different types of matching tasks, including text vs. text, graph vs. text, audio vs. text and video vs. text. The results will help the model selection of expert finding in practice. Finally, we explore some potential future issues in expert finding research in CQA.",
"title": ""
},
{
"docid": "3133829dd980cc1b428d80890cded347",
"text": "Finger vein images are rich in orientation and edge features. Inspired by the edge histogram descriptor proposed in MPEG-7, this paper presents an efficient orientation-based local descriptor, named histogram of salient edge orientation map (HSEOM). HSEOM is based on the fact that human vision is sensitive to edge features for image perception. For a given image, HSEOM first finds oriented edge maps according to predefined orientations using a well-known edge operator and obtains a salient edge orientation map by choosing an orientation with the maximum edge magnitude for each pixel. Then, subhistograms of the salient edge orientation map are generated from the nonoverlapping submaps and concatenated to build the final HSEOM. In the experiment of this paper, eight oriented edge maps were used to generate a salient edge orientation map for HSEOM construction. Experimental results on our available finger vein image database, MMCBNU_6000, show that the performance of HSEOM outperforms that of state-of-the-art orientation-based methods (e.g., Gabor filter, histogram of oriented gradients, and local directional code). Furthermore, the proposed HSEOM has advantages of low feature dimensionality and fast implementation for a real-time finger vein recognition system.",
"title": ""
},
{
"docid": "5e1f035df9a6f943c5632078831f5040",
"text": "Animacy is a necessary property for a referent to be an agent, and thus animacy detection is useful for a variety of natural language processing tasks, including word sense disambiguation, co-reference resolution, semantic role labeling, and others. Prior work treated animacy as a word-level property, and has developed statistical classifiers to classify words as either animate or inanimate. We discuss why this approach to the problem is ill-posed, and present a new approach based on classifying the animacy of co-reference chains. We show that simple voting approaches to inferring the animacy of a chain from its constituent words perform relatively poorly, and then present a hybrid system merging supervised machine learning (ML) and a small number of handbuilt rules to compute the animacy of referring expressions and co-reference chains. This method achieves state of the art performance. The supervised ML component leverages features such as word embeddings over referring expressions, parts of speech, and grammatical and semantic roles. The rules take into consideration parts of speech and the hypernymy structure encoded in WordNet. The system achieves an F1 of 0.88 for classifying the animacy of referring expressions, which is comparable to state of the art results for classifying the animacy of words, and achieves an F1 of 0.75 for classifying the animacy of coreference chains themselves. We release our training and test dataset, which includes 142 texts (all narratives) comprising 156,154 words, 34,698 referring expressions, and 10,941 co-reference chains. We test the method on a subset of the OntoNotes dataset, showing using manual sampling that animacy classification is 90%±2% accurate for coreference chains, and 92%±1% for referring expressions. The data also contains 46 folktales, which present an interesting challenge because they often involve characters who are members of traditionally inanimate classes (e.g., stoves that walk, trees that talk). We show that our system is able to detect the animacy of these unusual referents with an F1 of 0.95.",
"title": ""
},
{
"docid": "ddaa9d109273684f694c698f5261db9e",
"text": "Multiprocessor architectures and platforms have been introduced to extend the applicability of Moore’s law. They depend on concurrency and synchronization in both software and hardware to enhance the design productivity and system performance. These platforms will also have to incorporate highly scalable, reusable, predictable, costand energy-efficient architectures. With the rapidly approaching billion transistors era, some of the main problems in deep sub-micron technologies which are characterized by gate lengths in the range of 60-90 nm, will arise from non-scalable wire delays, errors in signal integrity and unsynchronized communications. These problems may be overcome by the use of Network on Chip (NOC) architecture. In this paper, we have summarized over sixty research papers and contributions in NOC area.",
"title": ""
},
{
"docid": "fcca051539729b005271e4f96563538d",
"text": "!is paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. !is approach is inspired by non-directive play therapy. !e experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under speci\"c conditions in order to guide the child or ask her questions about reasoning or a#ect related to the robot. !is approach has been tested in a longterm study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. !e children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and A#ect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. !ey also expressed some interest in the robot, including, on occasion, a#ect.",
"title": ""
},
{
"docid": "9584d194e05359ef5123c6b3d71e1c75",
"text": "A bloom filter is a randomized data structure for performing approximate membership queries. It is being increasingly used in networking applications ranging from security to routing in peer to peer networks. In order to meet a given false positive rate, the amount of memory required by a bloom filter is a function of the number of elements in the set. We consider the problem of minimizing the memory requirements in cases where the number of elements in the set is not known in advance but the distribution or moment information of the number of elements is known. We show how to exploit such information to minimize the expected amount of memory required for the filter. We also show how this approach can significantly reduce memory requirement when bloom filters are constructed for multiple sets in parallel. We show analytically as well as experiments on synthetic and trace data that our approach leads to one to three orders of magnitude reduction in memory compared to a standard bloom filter.",
"title": ""
},
{
"docid": "e9326cb2e3b79a71d9e99105f0259c5a",
"text": "Although drugs are intended to be selective, at least some bind to several physiological targets, explaining side effects and efficacy. Because many drug–target combinations exist, it would be useful to explore possible interactions computationally. Here we compared 3,665 US Food and Drug Administration (FDA)-approved and investigational drugs against hundreds of targets, defining each target by its ligands. Chemical similarities between drugs and ligand sets predicted thousands of unanticipated associations. Thirty were tested experimentally, including the antagonism of the β1 receptor by the transporter inhibitor Prozac, the inhibition of the 5-hydroxytryptamine (5-HT) transporter by the ion channel drug Vadilex, and antagonism of the histamine H4 receptor by the enzyme inhibitor Rescriptor. Overall, 23 new drug–target associations were confirmed, five of which were potent (<100 nM). The physiological relevance of one, the drug N,N-dimethyltryptamine (DMT) on serotonergic receptors, was confirmed in a knockout mouse. The chemical similarity approach is systematic and comprehensive, and may suggest side-effects and new indications for many drugs.",
"title": ""
},
{
"docid": "93ea7c59bad8181b0379f39e00f4d2e8",
"text": "Breadth-First Search (BFS) is a key graph algorithm with many important applications. In this work, we focus on a special class of graph traversal algorithm - concurrent BFS - where multiple breadth-first traversals are performed simultaneously on the same graph. We have designed and developed a new approach called iBFS that is able to run i concurrent BFSes from i distinct source vertices, very efficiently on Graphics Processing Units (GPUs). iBFS consists of three novel designs. First, iBFS develops a single GPU kernel for joint traversal of concurrent BFS to take advantage of shared frontiers across different instances. Second, outdegree-based GroupBy rules enables iBFS to selectively run a group of BFS instances which further maximizes the frontier sharing within such a group. Third, iBFS brings additional performance benefit by utilizing highly optimized bitwise operations on GPUs, which allows a single GPU thread to inspect a vertex for concurrent BFS instances. The evaluation on a wide spectrum of graph benchmarks shows that iBFS on one GPU runs up to 30x faster than executing BFS instances sequentially, and on 112 GPUs achieves near linear speedup with the maximum performance of 57,267 billion traversed edges per second (TEPS).",
"title": ""
},
{
"docid": "7f848facaa535d53e7a6fe7aa2435473",
"text": "The data structure used to represent image information can be critical to the successful completion of an image processing task. One structure that has attracted considerable attention is the image pyramid This consists of a set of lowpass or bandpass copies of an image, each representing pattern information of a different scale. Here we describe a variety of pyramid methods that we have developed for image data compression, enhancement, analysis and graphics. ©1984 RCA Corporation Final manuscript received November 12, 1984 Reprint Re-29-6-5 that can perform most of the routine visual tasks that humans do effortlessly. It is becoming increasingly clear that the format used to represent image data can be as critical in image processing as the algorithms applied to the data. A digital image is initially encoded as an array of pixel intensities, but this raw format is not suited to most asks. Alternatively, an image may be represented by its Fourier transform, with operations applied to the transform coefficients rather than to the original pixel values. This is appropriate for some data compression and image enhancement tasks, but inappropriate for others. The transform representation is particularly unsuited for machine vision and computer graphics, where the spatial location of pattem elements is critical. Recently there has been a great deal of interest in representations that retain spatial localization as well as localization in the spatial—frequency domain. This is achieved by decomposing the image into a set of spatial frequency bandpass component images. Individual samples of a component image represent image pattern information that is appropriately localized, while the bandpassed image as a whole represents information about a particular fineness of detail or scale. There is evidence that the human visual system uses such a representation, 1 and multiresolution schemes are becoming increasingly popular in machine vision and in image processing in general. The importance of analyzing images at many scales arises from the nature of images themselves. Scenes in the world contain objects of many sizes, and these objects contain features of many sizes. Moreover, objects can be at various distances from the viewer. As a result, any analysis procedure that is applied only at a single scale may miss information at other scales. The solution is to carry out analyses at all scales simultaneously. Convolution is the basic operation of most image analysis systems, and convolution with large weighting functions is a notoriously expensive computation. In a multiresolution system one wishes to perform convolutions with kernels of many sizes, ranging from very small to very large. and the computational problems appear forbidding. Therefore one of the main problems in working with multiresolution representations is to develop fast and efficient techniques. Members of the Advanced Image Processing Research Group have been actively involved in the development of multiresolution techniques for some time. Most of the work revolves around a representation known as a \"pyramid,\" which is versatile, convenient, and efficient to use. We have applied pyramid-based methods to some fundamental problems in image analysis, data compression, and image manipulation.",
"title": ""
},
{
"docid": "0b191398f6458d8516ff65c74550bd68",
"text": "It is now recognized that gut microbiota contributes indispensable roles in safeguarding host health. Shrimp is being threatened by newly emerging diseases globally; thus, understanding the driving factors that govern its gut microbiota would facilitate an initial step to reestablish and maintain a “healthy” gut microbiota. This review summarizes the factors that assemble the shrimp gut microbiota, which focuses on the current progresses of knowledge linking the gut microbiota and shrimp health status. In particular, I propose the exploration of shrimp disease pathogenesis and incidence based on the interplay between dysbiosis in the gut microbiota and disease severity. An updated research on shrimp disease toward an ecological perspective is discussed, including host–bacterial colonization, identification of polymicrobial pathogens and diagnosing disease incidence. Further, a simple conceptual model is offered to summarize the interplay among the gut microbiota, external factors, and shrimp disease. Finally, based on the review, current limitations are raised and future studies directed at solving these concerns are proposed. This review is timely given the increased interest in the role of gut microbiota in disease pathogenesis and the advent of novel diagnosis strategies.",
"title": ""
},
{
"docid": "199079ff97d1a48819f8185c2ef23472",
"text": "Identifying domain-dependent opinion words is a key problem in opinion mining and has been studied by several researchers. However, existing work has been focused on adjectives and to some extent verbs. Limited work has been done on nouns and noun phrases. In our work, we used the feature-based opinion mining model, and we found that in some domains nouns and noun phrases that indicate product features may also imply opinions. In many such cases, these nouns are not subjective but objective. Their involved sentences are also objective sentences and imply positive or negative opinions. Identifying such nouns and noun phrases and their polarities is very challenging but critical for effective opinion mining in these domains. To the best of our knowledge, this problem has not been studied in the literature. This paper proposes a method to deal with the problem. Experimental results based on real-life datasets show promising results.",
"title": ""
},
{
"docid": "8e03f4410676fb4285596960880263e9",
"text": "Fuzzy computing (FC) has made a great impact in capturing human domain knowledge and modeling non-linear mapping of input-output space. In this paper, we describe the design and implementation of FC systems for detection of money laundering behaviors in financial transactions and monitoring of distributed storage system load. Our objective is to demonstrate the power of FC for real-world applications which are characterized by imprecise, uncertain data, and incomplete domain knowledge. For both applications, we designed fuzzy rules based on experts’ domain knowledge, depending on money laundering scenarios in transactions or the “health” of a distributed storage system. In addition, we developped a generic fuzzy inference engine and contributed to the open source community.",
"title": ""
}
] |
scidocsrr
|
5004442e422d51a134d3efc6492c3189
|
Security in Automotive Networks: Lightweight Authentication and Authorization
|
[
{
"docid": "3f8e4ddfe56737508ec2222d110291fc",
"text": "We present a new verification algorithm for security protocols that allows for unbounded verification, falsification, and complete characterization. The algorithm provides a number of novel features, including: (1) Guaranteed termination, after which the result is either unbounded correctness, falsification, or bounded correctness. (2) Efficient generation of a finite representation of an infinite set of traces in terms of patterns, also known as a complete characterization. (3) State-of-the-art performance, which has made new types of protocol analysis feasible, such as multi-protocol analysis.",
"title": ""
}
] |
[
{
"docid": "e28feb56ebc33a54d13452a2ea3a49f7",
"text": "Ping Yan, Hsinchun Chen, and Daniel Zeng Department of Management Information Systems University of Arizona, Tucson, Arizona pyan@email.arizona.edu; {hchen, zeng}@eller.arizona.edu",
"title": ""
},
{
"docid": "470ecc2bc4299d913125d307c20dd48d",
"text": "The task of end-to-end relation extraction consists of two sub-tasks: i) identifying entity mentions along with their types and ii) recognizing semantic relations among the entity mention pairs. It has been shown that for better performance, it is necessary to address these two sub-tasks jointly [22,13]. We propose an approach for simultaneous extraction of entity mentions and relations in a sentence, by using inference in Markov Logic Networks (MLN) [21]. We learn three different classifiers : i) local entity classifier, ii) local relation classifier and iii) “pipeline” relation classifier which uses predictions of the local entity classifier. Predictions of these classifiers may be inconsistent with each other. We represent these predictions along with some domain knowledge using weighted first-order logic rules in an MLN and perform joint inference over the MLN to obtain a global output with minimum inconsistencies. Experiments on the ACE (Automatic Content Extraction) 2004 dataset demonstrate that our approach of joint extraction using MLNs outperforms the baselines of individual classifiers. Our end-to-end relation extraction performance is better than 2 out of 3 previous results reported on the ACE 2004 dataset.",
"title": ""
},
{
"docid": "0f4d91623a7b9893d24c9dc9354f3dce",
"text": "We derive experimentally based estimates of the energy used by neural mechanisms to code known quantities of information. Biophysical measurements from cells in the blowfly retina yield estimates of the ATP required to generate graded (analog) electrical signals that transmit known amounts of information. Energy consumption is several orders of magnitude greater than the thermodynamic minimum. It costs 104 ATP molecules to transmit a bit at a chemical synapse, and 106 - 107 ATP for graded signals in an interneuron or a photoreceptor, or for spike coding. Therefore, in noise-limited signaling systems, a weak pathway of low capacity transmits information more economically, which promotes the distribution of information among multiple pathways.",
"title": ""
},
{
"docid": "1597874bef5c515e038584b3bf72f148",
"text": "This paper presents an overview of Text Summarization. Text Summarization is a challenging problem these days. Due to the great amount of information we are provided with and thanks to the development of Internet technologies, needs of producing summaries have become more and more widespread. Summarization is a very interesting and useful task that gives support to many other tasks as well as it takes advantage of the techniques developed for related Natural Language Processing tasks. The paper we present here may help us to have an idea of what Text Summarization is and how it can be useful for.",
"title": ""
},
{
"docid": "9237b82f1d127ab59a1a5e8f9fa7f86c",
"text": "Purpose: Enterprise social media platforms provide new ways of sharing knowledge and communicating within organizations to benefit from the social capital and valuable knowledge that employees have. Drawing on social dilemma and self‐determination theory, the aim of the study is to understand what factors drive employees’ participation and what factors hamper their participation in enterprise social media. Methodology: Based on a literature review, a unified research model is derived integrating demographic, individual, organizational and technological factors that influence the motivation of employees to share knowledge. The model is tested using statistical methods on a sample of 114 respondents in Denmark. Qualitative data is used to elaborate and explain quantitative results‘ findings. Practical implications: The proposed knowledge sharing framework helps to understand what factors impact engagement on social media. Furthermore the article suggests different types of interventions to overcome the social dilemma of knowledge sharing. Findings: Our findings pinpoint towards the general drivers and barriers to knowledge sharing within organizations. The significant drivers are: enjoy helping others, monetary rewards, management support, change of knowledge sharing behavior and recognition. The significant identified barriers to knowledge sharing are: change of behavior, lack of trust and lack of time. Originality: The study contributes to an understanding of factors leading to the success or failure of enterprise social media drawing on self‐determination and social dilemma theory.",
"title": ""
},
{
"docid": "e27575b8d7a7455f1a8f941adb306a04",
"text": "Seung-Joon Yi GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yiseung@seas.upenn.edu Stephen G. McGill GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: smcgill3@seas.upenn.edu Larry Vadakedathu GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: vlarry@seas.upenn.edu Qin He GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: heqin@seas.upenn.edu Inyong Ha Robotis, Seoul, Korea e-mail: dudung@robotis.com Jeakweon Han Robotis, Seoul, Korea e-mail: jkhan@robotis.com Hyunjong Song Robotis, Seoul, Korea e-mail: hjsong@robotis.com Michael Rouleau RoMeLa, Virginia Tech, Blacksburg, Virginia 24061 e-mail: mrouleau@vt.edu Byoung-Tak Zhang BI Lab, Seoul National University, Seoul, Korea e-mail: btzhang@bi.snu.ac.kr Dennis Hong RoMeLa, University of California, Los Angeles, Los Angeles, California 90095 e-mail: dennishong@ucla.edu Mark Yim GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yim@seas.upenn.edu Daniel D. Lee GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: ddlee@seas.upenn.edu",
"title": ""
},
{
"docid": "2e99e535f2605e88571407142e4927ee",
"text": "Stability is a common tool to verify the validity of sample based algorithms. In clustering it is widely used to tune the parameters of the algorithm, such as the number k of clusters. In spite of the popularity of stability in practical applications, there has been very little theoretical analysis of this notion. In this paper we provide a formal definition of stability and analyze some of its basic properties. Quite surprisingly, the conclusion of our analysis is that for large sample size, stability is fully determined by the behavior of the objective function which the clustering algorithm is aiming to minimize. If the objective function has a unique global minimizer, the algorithm is stable, otherwise it is unstable. In particular we conclude that stability is not a well-suited tool to determine the number of clusters it is determined by the symmetries of the data which may be unrelated to clustering parameters. We prove our results for center-based clusterings and for spectral clustering, and support our conclusions by many examples in which the behavior of stability is counter-intuitive.",
"title": ""
},
{
"docid": "717ea3390ffe3f3132d4e2230e645ee5",
"text": "Much of what is known about physiological systems has been learned using linear system theory. However, many biomedical signals are apparently random or aperiodic in time. Traditionally, the randomness in biological signals has been ascribed to noise or interactions between very large numbers of constituent components. One of the most important mathematical discoveries of the past few decades is that random behavior can arise in deterministic nonlinear systems with just a few degrees of freedom. This discovery gives new hope to providing simple mathematical models for analyzing, and ultimately controlling, physiological systems. The purpose of this chapter is to provide a brief pedagogic survey of the main techniques used in nonlinear time series analysis and to provide a MATLAB tool box for their implementation. Mathematical reviews of techniques in nonlinear modeling and forecasting can be found in Refs. 1-5. Biomedical signals that have been analyzed using these techniques include heart rate [6-8], nerve activity [9], renal flow [10], arterial pressure [11], electroencephalogram [12], and respiratory waveforms [13]. Section 2 provides a brief overview of dynamical systems theory including phase space portraits, Poincare surfaces of section, attractors, chaos, Lyapunov exponents, and fractal dimensions. The forced Duffing-Van der Pol oscillator (a ubiquitous model in engineering problems) is investigated as an illustrative example. Section 3 outlines the theoretical tools for time series analysis using dynamical systems theory. Reliability checks based on forecasting and surrogate data are also described. The time series methods are illustrated using data from the time evolution of one of the dynamical variables of the forced Duffing-Van der Pol oscillator. Section 4 concludes with a discussion of possible future directions for applications of nonlinear time series analysis in biomedical processes.",
"title": ""
},
{
"docid": "f554af0d260de70f6efbc8fe8d64a357",
"text": "Hypocretin deficiency causes narcolepsy and may affect neuroendocrine systems and body composition. Additionally, growth hormone (GH) alterations my influence weight in narcolepsy. Symptoms can be treated effectively with sodium oxybate (SXB; γ-hydroxybutyrate) in many patients. This study compared growth hormone secretion in patients and matched controls and established the effect of SXB administration on GH and sleep in both groups. Eight male hypocretin-deficient patients with narcolepsy and cataplexy and eight controls matched for sex, age, BMI, waist-to-hip ratio, and fat percentage were enrolled. Blood was sampled before and on the 5th day of SXB administration. SXB was taken two times 3 g/night for 5 consecutive nights. Both groups underwent 24-h blood sampling at 10-min intervals for measurement of GH concentrations. The GH concentration time series were analyzed with AutoDecon and approximate entropy (ApEn). Basal and pulsatile GH secretion, pulse regularity, and frequency, as well as ApEn values, were similar in patients and controls. Administration of SXB caused a significant increase in total 24-h GH secretion rate in narcolepsy patients, but not in controls. After SXB, slow-wave sleep (SWS) and, importantly, the cross-correlation between GH levels and SWS more than doubled in both groups. In conclusion, SXB leads to a consistent increase in nocturnal GH secretion and strengthens the temporal relation between GH secretion and SWS. These data suggest that SXB may alter somatotropic tone in addition to its consolidating effect on nighttime sleep in narcolepsy. This could explain the suggested nonsleep effects of SXB, including body weight reduction.",
"title": ""
},
{
"docid": "690a2b067af8810d5da7d3389b7b4d78",
"text": "Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NPcomplete problem. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound of the minimum distortion is possible. Current available methods of computing such a bound are either time-consuming or deliver low quality bounds that are too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms (Fast-Lin,Fast-Lip) that are able to certify non-trivial lower bounds of minimum adversarial distortions. Experiments show that (1) our methods deliver bounds close to (the gap is 2-3X) exact minimum distortions found by Reluplex in small networks while our algorithms are more than 10,000 times faster; (2) our methods deliver similar quality of bounds (the gap is within 35% and usually around 10%; sometimes our bounds are even better) for larger networks compared to the methods based on solving linear programming problems but our algorithms are 3314,000 times faster; (3) our method is capable of solving large MNIST and CIFAR networks up to 7 layers with more than 10,000 neurons within tens of seconds on a single CPU core. In addition, we show that there is no polynomial time algorithm that can approximately find the minimum `1 adversarial distortion of a ReLU network with a 0.99 lnn approximation ratio unless NP=P, where n is the number of neurons in the network. Equal contribution Massachusetts Institute of Technology, Cambridge, MA UC Davis, Davis, CA Harvard University, Cambridge, MA UT Austin, Austin, TX. Source code is available at https://github.com/huanzhang12/CertifiedReLURobustness. Correspondence to: Tsui-Wei Weng <twweng@mit.edu>, Huan Zhang <huan@huan-zhang.com>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).",
"title": ""
},
{
"docid": "4e23bf1c89373abaf5dc096f76c893f3",
"text": "Clock and data recovery (CDR) circuit plays a vital role for wired serial link communication in multi mode based system on chip (SOC). In wire linked communication systems, when data flows without any accompanying clock over a single wire, the receiver of the system is required to recover this data synchronously without losing the information. Therefore there exists a need for CDR circuits in the receiver of the system for recovering the clock or timing information from these data. The existing Octa-rate CDR circuit is not compatible to real time data, such a data is unpredictable, non periodic and has different arrival times and phase widths. Thus the proposed PRN based Octa-rate Clock and Data Recovery circuit is made compatible to real time data by introducing a Random Sequence Generator. The proposed PRN based Octa-rate Clock and Data Recovery circuit consists of PRN Sequence Generator, 16-Phase Generator, Early Late Phase Detector and Delay Line Controller. The FSM based Delay Line Controller controls the delay length and introduces the required delay in the input data. The PRN based Octa-rate CDR circuit has been realized using Xilinx ISE 13.2 and implemented on Vertex-5 FPGA target device for real time verification. The delay between the input and the generation of output is measured and analyzed using Logic Analyzer AGILENT 1962 A.",
"title": ""
},
{
"docid": "feeeb7bd9ed07917048cfd6bf0c3c6c7",
"text": "Deep image translation methods have recently shown excellent results, outputting high-quality images covering multiple modes of the data distribution. There has also been increased interest in disentangling the internal representations learned by deep methods to further improve their performance and achieve a finer control. In this paper, we bridge these two objectives and introduce the concept of crossdomain disentanglement. We aim to separate the internal representation into three parts. The shared part contains information for both domains. The exclusive parts, on the other hand, contain only factors of variation that are particular to each domain. We achieve this through bidirectional image translation based on Generative Adversarial Networks and cross-domain autoencoders, a novel network component. Our model offers multiple advantages. We can output diverse samples covering multiple modes of the distributions of both domains, perform domainspecific image transfer and interpolation, and cross-domain retrieval without the need of labeled data, only paired images. We compare our model to the state-ofthe-art in multi-modal image translation and achieve better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets.",
"title": ""
},
{
"docid": "b04ae75e4f444b97976962a397ac413c",
"text": "In this paper the new topology DC/DC Boost power converter-inverter-DC motor that allows bidirectional rotation of the motor shaft is presented. In this direction, the system mathematical model is developed considering its different operation modes. Afterwards, the model validation is performed via numerical simulations by using Matlab-Simulink.",
"title": ""
},
{
"docid": "0b1310ac9630fa4a1c90dcf90d4ae327",
"text": "The Mirai Distributed Denial-of-Service (DDoS) attack exploited security vulnerabilities of Internet-of-Things (IoT) devices and thereby clearly signaled that attackers have IoT on their radar. Securing IoT is therefore imperative, but in order to do so it is crucial to understand the strategies of such attackers. For that purpose, in this paper, a novel IoT honeypot called ThingPot is proposed and deployed. Honeypot technology mimics devices that might be exploited by attackers and logs their behavior to detect and analyze the used attack vectors. ThingPot is the first of its kind, since it focuses not only on the IoT application protocols themselves, but on the whole IoT platform. A Proof-of-Concept is implemented with XMPP and a REST API, to mimic a Philips Hue smart lighting system. ThingPot has been deployed for 1.5 months and through the captured data we have found five types of attacks and attack vectors against smart devices. The ThingPot source code is made available as open source.",
"title": ""
},
{
"docid": "a01965406575363328f4dae4241a05b7",
"text": "IT governance is one of these concepts that suddenly emerged and became an important issue in the information technology area. Some organisations started with the implementation of IT governance in order to achieve a better alignment between business and IT. This paper interprets important existing theories, models and practices in the IT governance domain and derives research questions from it. Next, multiple research strategies are triangulated in order to understand how organisations are implementing IT governance in practice and to analyse the relationship between these implementations and business/IT alignment. Major finding is that organisations with more mature IT governance practices likely obtain a higher degree of business/IT alignment maturity.",
"title": ""
},
{
"docid": "322d23354a9bf45146e4cb7c733bf2ec",
"text": "In this chapter we consider the problem of automatic facial expression analysis. Our take on this is that the field has reached a point where it needs to move away from considering experiments and applications under in-the-lab conditions, and move towards so-called in-the-wild scenarios. We assume throughout this chapter that the aim is to develop technology that can be deployed in practical applications under unconstrained conditions. While some first efforts in this direction have been reported very recently, it is still unclear what the right path to achieving accurate, informative, robust, and real-time facial expression analysis will be. To illuminate the journey ahead, we first provide in Sec. 1 an overview of the existing theories and specific problem formulations considered within the computer vision community. Then we describe in Sec. 2 the standard algorithmic pipeline which is common to most facial expression analysis algorithms. We include suggestions as to which of the current algorithms and approaches are most suited to the scenario considered. In section 3 we describe our view of the remaining challenges, and the current opportunities within the field. This chapter is thus not intended as a review of different approaches, but rather a selection of what we believe are the most suitable state-of-the-art algorithms, and a selection of exemplars chosen to characterise a specific approach. We review in section 4 some of the exciting opportunities for the application of automatic facial expression analysis to everyday practical problems and current commercial applications being exploited. Section 5 ends the chapter by summarising the major conclusions drawn. Brais Martinez School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: brais.martinez@nottingham.ac.uk Michel F. Valstar School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: michel.valstar@nottingham.ac.uk",
"title": ""
},
{
"docid": "a3308e4df796a74112b70c3244bd4d34",
"text": "Creative insight occurs with an “Aha!” experience when solving a difficult problem. Here, we investigated large-scale networks associated with insight problem solving. We recruited 232 healthy participants aged 21–69 years old. Participants completed a magnetic resonance imaging study (MRI; structural imaging and a 10 min resting-state functional MRI) and an insight test battery (ITB) consisting of written questionnaires (matchstick arithmetic task, remote associates test, and insight problem solving task). To identify the resting-state functional connectivity (RSFC) associated with individual creative insight, we conducted an exploratory voxel-based morphometry (VBM)-constrained RSFC analysis. We identified positive correlations between ITB score and grey matter volume (GMV) in the right insula and middle cingulate cortex/precuneus, and a negative correlation between ITB score and GMV in the left cerebellum crus 1 and right supplementary motor area. We applied seed-based RSFC analysis to whole brain voxels using the seeds obtained from the VBM and identified insight-positive/negative connections, i.e. a positive/negative correlation between the ITB score and individual RSFCs between two brain regions. Insight-specific connections included motor-related regions whereas creative-common connections included a default mode network. Our results indicate that creative insight requires a coupling of multiple networks, such as the default mode, semantic and cerebral-cerebellum networks.",
"title": ""
},
{
"docid": "a496f2683f49573132e5b57f7e3accf0",
"text": "Automatically generated databases of English paraphrases have the drawback that they return a single list of paraphrases for an input word or phrase. This means that all senses of polysemous words are grouped together, unlike WordNet which partitions different senses into separate synsets. We present a new method for clustering paraphrases by word sense, and apply it to the Paraphrase Database (PPDB). We investigate the performance of hierarchical and spectral clustering algorithms, and systematically explore different ways of defining the similarity matrix that they use as input. Our method produces sense clusters that are qualitatively and quantitatively good, and that represent a substantial improvement to the PPDB resource.",
"title": ""
},
{
"docid": "2b8296f8760e826046cd039c58026f83",
"text": "This study provided a descriptive and quantitative comparative analysis of data from an assessment protocol for adolescents referred clinically for gender identity disorder (n = 192; 105 boys, 87 girls) or transvestic fetishism (n = 137, all boys). The protocol included information on demographics, behavior problems, and psychosexual measures. Gender identity disorder and transvestic fetishism youth had high rates of general behavior problems and poor peer relations. On the psychosexual measures, gender identity disorder patients had considerably greater cross-gender behavior and gender dysphoria than did transvestic fetishism youth and other control youth. Male gender identity disorder patients classified as having a nonhomosexual sexual orientation (in relation to birth sex) reported more indicators of transvestic fetishism than did male gender identity disorder patients classified as having a homosexual sexual orientation (in relation to birth sex). The percentage of transvestic fetishism youth and male gender identity disorder patients with a nonhomosexual sexual orientation self-reported similar degrees of behaviors pertaining to transvestic fetishism. Last, male and female gender identity disorder patients with a homosexual sexual orientation had more recalled cross-gender behavior during childhood and more concurrent cross-gender behavior and gender dysphoria than did patients with a nonhomosexual sexual orientation. The authors discuss the clinical utility of their assessment protocol.",
"title": ""
}
] |
scidocsrr
|
fee603c991c0c156680cebf16071485b
|
Classifiers as a model-free group comparison test.
|
[
{
"docid": "410a76670a57db5be2cc5a7a3d10918c",
"text": "Machine learning and pattern recognition algorithms have in the past years developed to become a working horse in brain imaging and the computational neurosciences, as they are instrumental for mining vast amounts of neural data of ever increasing measurement precision and detecting minuscule signals from an overwhelming noise floor. They provide the means to decode and characterize task relevant brain states and to distinguish them from non-informative brain signals. While undoubtedly this machinery has helped to gain novel biological insights, it also holds the danger of potential unintentional abuse. Ideally machine learning techniques should be usable for any non-expert, however, unfortunately they are typically not. Overfitting and other pitfalls may occur and lead to spurious and nonsensical interpretation. The goal of this review is therefore to provide an accessible and clear introduction to the strengths and also the inherent dangers of machine learning usage in the neurosciences.",
"title": ""
}
] |
[
{
"docid": "d87edfb603b5d69bcd0e0dc972d26991",
"text": "The adult nervous system is not static, but instead can change, can be reshaped by experience. Such plasticity has been demonstrated from the most reductive to the most integrated levels, and understanding the bases of this plasticity is a major challenge. It is apparent that stress can alter plasticity in the nervous system, particularly in the limbic system. This paper reviews that subject, concentrating on: a) the ability of severe and/or prolonged stress to impair hippocampal-dependent explicit learning and the plasticity that underlies it; b) the ability of mild and transient stress to facilitate such plasticity; c) the ability of a range of stressors to enhance implicit fear conditioning, and to enhance the amygdaloid plasticity that underlies it.",
"title": ""
},
{
"docid": "ee01fcf12aab8e06c1924d1bb073b16d",
"text": "In this paper, a resampling ensemble algorithm is developed focused on the classification problems for imbalanced datasets. In this method, the small classes are oversampled and large classes are undersampled. The resampling scale is determined by the ratio of the minimum number of class and maximum number of class. Oversampling for “small” classes is done by MWMOTE technique and undersampling for “large” classes is performed according to SSO technique. Our aim is to reduce the time complexity as well as the enhancement of accuracy rate of classification result. Keywords—Imbalanced classification, Resampling algorithm, SMOTE, MWMOTE, SSO. _________________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "1b5b6c4a82436b6dcbf984a199c68b5d",
"text": "Online fashion sales present a challenging use case for personalized recommendation: Stores offer a huge variety of items in multiple sizes. Small stocks, high return rates, seasonality, and changing trends cause continuous turnover of articles for sale on all time scales. Customers tend to shop rarely, but often buy multiple items at once. We report on backtest experiments with sales data of 100k frequent shoppers at Zalando, Europe’s leading online fashion platform. To model changing customer and store environments, our recommendation method employs a pair of neural networks: To overcome the cold start problem, a feedforward network generates article embeddings in “fashion space,” which serve as input to a recurrent neural network that predicts a style vector in this space for each client, based on their past purchase sequence. We compare our results with a static collaborative filtering approach, and a popularity ranking baseline.",
"title": ""
},
{
"docid": "9dd66d538b0195b216c10cc47d3f7005",
"text": "This study presents a stochastic demand multi-product supplier selection model with service level and budget constraints using Genetic Algorithm. Recently, much attention has been given to stochastic demand due to uncertainty in the real world. Conflicting objectives also exist between profit, service level and resource utilization. In this study, the relationship between the expected profit and the number of trials as well as between the expected profit and the combination of mutation and crossover rates are investigated to identify better parameter values to efficiently run the Genetic Algorithm. Pareto optimal solutions and return on investment are analyzed to provide decision makers with the alternative options of achieving the proper budget and service level. The results show that the optimal value for the return on investment and the expected profit are obtained with a certain budget and service level constraint. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "be3fa2fbaaa362aace36d112ff09f94d",
"text": "One of the key objectives in accident data analysis to identify the main factors associated with a road and traffic accident. However, heterogeneous nature of road accident data makes the analysis task difficult. Data segmentation has been used widely to overcome this heterogeneity of the accident data. In this paper, we proposed a framework that used K-modes clustering technique as a preliminary task for segmentation of 11,574 road accidents on road network of Dehradun (India) between 2009 and 2014 (both included). Next, association rule mining are used to identify the various circumstances that are associated with the occurrence of an accident for both the entire data set (EDS) and the clusters identified by K-modes clustering algorithm. The findings of cluster based analysis and entire data set analysis are then compared. The results reveal that the combination of k mode clustering and association rule mining is very inspiring as it produces important information that would remain hidden if no segmentation has been performed prior to generate association rules. Further a trend analysis have also been performed for each clusters and EDS accidents which finds different trends in different cluster whereas a positive trend is shown by EDS. Trend analysis also shows that prior segmentation of accident data is very important before analysis.",
"title": ""
},
{
"docid": "61e7b3c7de15f87ed86ffb355d1b126c",
"text": "Temporal action detection is a very important yet challenging problem, since videos in real applications are usually long, untrimmed and contain multiple action instances. This problem requires not only recognizing action categories but also detecting start time and end time of each action instance. Many state-of-the-art methods adopt the \"detection by classification\" framework: first do proposal, and then classify proposals. The main drawback of this framework is that the boundaries of action instance proposals have been fixed during the classification step. To address this issue, we propose a novel Single Shot Action Detector (SSAD) network based on 1D temporal convolutional layers to skip the proposal generation step via directly detecting action instances in untrimmed video. On pursuit of designing a particular SSAD network that can work effectively for temporal action detection, we empirically search for the best network architecture of SSAD due to lacking existing models that can be directly adopted. Moreover, we investigate into input feature types and fusion strategies to further improve detection accuracy. We conduct extensive experiments on two challenging datasets: THUMOS 2014 and MEXaction2. When setting Intersection-over-Union threshold to 0.5 during evaluation, SSAD significantly outperforms other state-of-the-art systems by increasing mAP from $19.0%$ to $24.6%$ on THUMOS 2014 and from 7.4% to $11.0%$ on MEXaction2.",
"title": ""
},
{
"docid": "36347412c7d30ae6fde3742bbc4f21b9",
"text": "iii",
"title": ""
},
{
"docid": "aa2e16e6ed5d2610a567e358807834d4",
"text": "As the most prevailing two-factor authentication mechanism, smart-card-based password authentication has been a subject of intensive research in the past two decades, and hundreds of this type of schemes have wave upon wave been proposed. In most of these studies, there is no comprehensive and systematical metric available for schemes to be assessed objectively, and the authors present new schemes with assertions of the superior aspects over previous ones, while overlooking dimensions on which their schemes fare poorly. Unsurprisingly, most of them are far from satisfactory—either are found short of important security goals or lack of critical properties, especially being stuck with the security-usability tension. To overcome this issue, in this work we first explicitly define a security model that can accurately capture the practical capabilities of an adversary and then suggest a broad set of twelve properties framed as a systematic methodology for comparative evaluation, allowing schemes to be rated across a common spectrum. As our main contribution, a new scheme is advanced to resolve the various issues arising from user corruption and server compromise, and it is formally proved secure under the harshest adversary model so far. In particular, by integrating “honeywords”, traditionally the purview of system security, with a “fuzzy-verifier”, our scheme hits “two birds”: it not only eliminates the long-standing security-usability conflict that is considered intractable in the literature, but also achieves security guarantees beyond the conventional optimal security bound.",
"title": ""
},
{
"docid": "60a3538ec6a64af6f8fd447ed0fb79f5",
"text": "Several Pinned Photodiode (PPD) CMOS Image Sensors (CIS) are designed, manufactured, characterized and exposed biased to ionizing radiation up to 10 kGy(SiO2 ). In addition to the usually reported dark current increase and quantum efficiency drop at short wavelengths, several original radiation effects are shown: an increase of the pinning voltage, a decrease of the buried photodiode full well capacity, a large change in charge transfer efficiency, the creation of a large number of Total Ionizing Dose (TID) induced Dark Current Random Telegraph Signal (DC-RTS) centers active in the photodiode (even when the Transfer Gate (TG) is accumulated) and the complete depletion of the Pre-Metal Dielectric (PMD) interface at the highest TID leading to a large dark current and the loss of control of the TG on the dark current. The proposed mechanisms at the origin of these degradations are discussed. It is also demonstrated that biasing (i.e., operating) the PPD CIS during irradiation does not enhance the degradations compared to sensors grounded during irradiation.",
"title": ""
},
{
"docid": "a0ebe19188abab323122a5effc3c4173",
"text": "In this paper, we present LOADED, an algorithm for outlier detection in evolving data sets containing both continuous and categorical attributes. LOADED is a tunable algorithm, wherein one can trade off computation for accuracy so that domain-specific response times are achieved. Experimental results show that LOADED provides very good detection and false positive rates, which are several times better than those of existing distance-based schemes.",
"title": ""
},
{
"docid": "bc11f3de3037b0098a6c313d879ae696",
"text": "The study of polygon meshes is a large sub-field of computer graphics and geometric modeling. Different representations of polygon meshes are used for different applications and goals. The variety of operations performed on meshes may include boolean logic, smoothing, simplification, and many others. 2.3.1 What is a mesh? A mesh is a collection of polygonal facets targeting to constitute an appropriate approximation of a real 3D object. It possesses three different combinatorial elements: vertices, edges and facets. From another viewpoint, a mesh can also be completely described by two kinds of information. The geometry information gives essentially the positions (coordinates) of all its vertices, while the connectivity information provides the adjacency relations between the different elements. 2.3.2 An example of 3D meshes As we can see in the Fig. 2.3, the facets usually consist of triangles, quadrilaterals or other simple convex polygons, since this simplifies rendering, but may also be composed of more general concave polygons, or polygons with holes. The degree of a facet is the number of its component edges, and the valence of a vertex is defined as the number of its incident edges. 2.3.3 Classification of structures Polygon meshes may be represented in a variety of structures, using different methods to store the vertex, edge and face data. In general they include/",
"title": ""
},
{
"docid": "ed0be5db315ef63c4f96fd21c2ed7110",
"text": "In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skills acquisition in an immersive bimanual simulation towards precision metrology education. In a 2 × 2 experiment design, we investigated a large-screen immersive display (LSID) with a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include interacting with the simulation in a more natural manner as compared to using a large-screen immersive display due to the similarities between the interactions afforded in the virtual compared to the real-world task. Suspending the laws of physics may have an effect on usability and in turn could affect learning outcomes. Our dependent variables consisted of a pre and post cognition questionnaire, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment to measure to what extent transfer of learning took place from the virtual to the real world. Results indicate that the HMD condition was preferable to the immersive display in several metrics while the no-gravity condition resulted in users adopting strategies that were not advantageous for task performance.",
"title": ""
},
{
"docid": "a96f27e15c3bbc60810b73a5de21a06c",
"text": "Illumination always affects image quality seriously in practice. To weaken illumination effect on image quality, this paper proposes an adaptive gamma correction method. First, a mapping between pixel and gamma values is built. The gamma values are then revised using two non-linear functions to prevent image distortion. Experimental results demonstrate that the proposed method performs better in readjusting image illumination condition and improving image quality.",
"title": ""
},
{
"docid": "bad6560c8c769484a9ce213d0933923e",
"text": "Online support groups have drawn considerable attention from scholars in the past decades. While prior research has explored the interactions and motivations of users, we know relatively little about how culture shapes the way people use and understand online support groups. Drawing on ethnographic research in a Chinese online depression community, we examine how online support groups function in the context of Chinese culture for people with depression. Through online observations and interviews, we uncover the unique interactions among users in this online support group, such as peer diagnosis, peer therapy, and public journaling. These activities were intertwined with Chinese cultural values and the scarcity of mental health resources in China. We also show that online support groups play an important role in fostering individual empowerment and improving public understanding of depression in China. This paper provides insights into the interweaving of culture and online health community use and contributes to a context-rich understanding of online support groups.",
"title": ""
},
{
"docid": "ce2ef27f032d30ce2bc6aa5509a58e49",
"text": "Bibliometric measures are commonly used to estimate the popularity and the impact of published research. Existing bibliometric measures provide “quantitative” indicators of how good a published paper is. This does not necessarily reflect the “quality” of the work presented in the paper. For example, when hindex is computed for a researcher, all incoming citations are treated equally, ignoring the fact that some of these citations might be negative. In this paper, we propose using NLP to add a “qualitative” aspect to biblometrics. We analyze the text that accompanies citations in scientific articles (which we term citation context). We propose supervised methods for identifying citation text and analyzing it to determine the purpose (i.e. author intention) and the polarity (i.e. author sentiment) of citation.",
"title": ""
},
{
"docid": "addad4069782620549e7a357e2c73436",
"text": "Drivable region detection is challenging since various types of road, occlusion or poor illumination condition have to be considered in a outdoor environment, particularly at night. In the past decade, Many efforts have been made to solve these problems, however, most of the already existing methods are designed for visible light cameras, which are inherently inefficient under low light conditions. In this paper, we present a drivable region detection algorithm designed for thermal-infrared cameras in order to overcome the aforementioned problems. The novelty of the proposed method lies in the utilization of on-line road initialization with a highly scene-adaptive sampling mask. Furthermore, our prior road information extraction is tailored to enforce temporal consistency among a series of images. In this paper, we also propose a large number of experiments in various scenarios (on-road, off-road and cluttered road). A total of about 6000 manually annotated images are made available in our website for the research community. Using this dataset, we compared our method against multiple state-of-the-art approaches including convolutional neural network (CNN) based methods to emphasize the robustness of our approach under challenging situations.",
"title": ""
},
{
"docid": "e38cbee5c03319d15086e9c39f7f8520",
"text": "In this paper we describe COLIN, a forward-chaining heuristic search planner, capable of reasoning with COntinuous LINear numeric change, in addition to the full temporal semantics of PDDL2.1. Through this work we make two advances to the state-of-the-art in terms of expressive reasoning capabilities of planners: the handling of continuous linear change, and the handling of duration-dependent effects in combination with duration inequalities, both of which require tightly coupled temporal and numeric reasoning during planning. COLIN combines FF-style forward chaining search, with the use of a Linear Program (LP) to check the consistency of the interacting temporal and numeric constraints at each state. The LP is used to compute bounds on the values of variables in each state, reducing the range of actions that need to be considered for application. In addition, we develop an extension of the Temporal Relaxed Planning Graph heuristic of CRIKEY3, to support reasoning directly with continuous change. We extend the range of task variables considered to be suitable candidates for specifying the gradient of the continuous numeric change effected by an action. Finally, we explore the potential for employing mixed integer programming as a tool for optimising the timestamps of the actions in the plan, once a solution has been found. To support this, we further contribute a selection of extended benchmark domains that include continuous numeric effects. We present results for COLIN that demonstrate its scalability on a range of benchmarks, and compare to existing state-of-the-art planners.",
"title": ""
},
{
"docid": "45881ab3fc9b2d09f211808e8c9b0a3c",
"text": "Nowadays a large number of user-adaptive systems has been developed. Commonly, the effort to build user models is repeated across applications and domains, due to the lack of interoperability and synchronization among user-adaptive systems. There is a strong need for the next generation of user models to be interoperable, i.e. to be able to exchange user model portions and to use the information that has been exchanged to enrich the user experience. This paper presents an overview of the well-established literature dealing with user model interoperability, discussing the most representative work which has provided valuable solutions to face interoperability issues. Based on a detailed decomposition and a deep analysis of the selected work, we have isolated a set of dimensions characterizing the user model interoperability process along which the work has been classified. Starting from this analysis, the paper presents some open issues and possible future deployments in the area.",
"title": ""
},
{
"docid": "24e3f865244cd3227db784b0e509edd0",
"text": "The present journal recently stated in the call for a special issue on social sustainability, ―[t]hough sustainable development is said to rest on ̳three pillars‘, one of these—social sustainability—has received significantly less attention than its bio-physical environmental and economic counterparts‖. The current issue promises to engage the concepts of ―development sustainability‖, ―bridge sustainability‖ and ―maintenance sustainability‖ and the tensions between these different aspects of social sustainability. The aim of the present study is to identify the visibility of disabled people in the academic social sustainability literature, to ascertain the impact and promises of social sustainability indicators put forward in the same literature and to engage especially with the concepts of ―development sustainability‖, ―bridge sustainability‖ and ―maintenance sustainability‖ through disability studies and ability studies lenses. We report that disabled people are barely covered in the academic social sustainability literature; of the 5165 academic articles investigated only 26 had content related to disabled people and social sustainability. We also conclude that social sustainability indicators evident in the 1909 academic articles with the phrase ―social sustainability‖ in the abstract mostly focused on products and did not reflect yet the goals outlined in the ―development sustainability‖ aspect of social sustainability proposed by Vallance such as basic needs, building social capital, justice and so on. We posit that if the focus within the social sustainability discourse shifts more toward the social that an active presence of disabled people in this OPEN ACCESS Sustainability 2013, 5 4890 discourse is essential to disabled people. We showcase the utility of an ability studies lens to further the development and application of the ―development sustainability‖, ―bridge sustainability‖ and ―maintenance sustainability‖ concepts. We outline how different ability expectations intrinsic to certain schools of thought of how to deal with human-nature relationships (for example anthropocentric versus bio/ecocentric) impact this relationship and ―bridge sustainability‖. As to ―maintenance development‖, we posit that no engagement has happened yet with the ability expectation conflicts between able-bodied and disabled people, or for that matter with the ability expectation differences between different able-bodied groups within social sustainability discourses; an analysis essential for the maintenance of development. In general, we argue that there is a need to generate ability expectation conflict maps and ability expectations conflict resolution mechanisms for all sustainable development discourses individually and for ability conflicts between sustainable development discourses.",
"title": ""
},
{
"docid": "060501be3e3335530a292a40427cf5cc",
"text": "The more electric aircraft (MEA) has motivated aircraft manufacturers since few decades. Indeed, their investigations lead to the increase of electric power in airplanes. The challenge is to decrease the weight of embedded systems and therefore, the fuel consumption. This is possible thanks to new efficient power electronic converters made of new components. As magnetic components represent a great proportion of their weight, planar components are an interesting solution to increase the power density of some switching mode power supplies. This paper presents the benefits and drawbacks of high-frequency planar transformers in dc/dc converters, different models developed for their design and different issues in MEA context related to planar’s specific geometry and technology.",
"title": ""
}
] |
scidocsrr
|
36648619b1256c6851371e465190c068
|
An inquiry into the nature and causes of the wealth of internet miscreants
|
[
{
"docid": "c698f7d6b487cc7c87d7ff215d7f12b2",
"text": "This paper reports a controlled study with statistical signi cance tests on ve text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classi er, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a Naive Bayes (NB) classier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF signi cantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are su ciently common (over 300 instances).",
"title": ""
},
{
"docid": "9db9902c0e9d5fc24714554625a04c7a",
"text": "Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these “Sybil attacks” is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.",
"title": ""
}
] |
[
{
"docid": "7e848e98909c69378f624ce7db31dbfa",
"text": "Phenotypically identical cells can dramatically vary with respect to behavior during their lifespan and this variation is reflected in their molecular composition such as the transcriptomic landscape. Single-cell transcriptomics using next-generation transcript sequencing (RNA-seq) is now emerging as a powerful tool to profile cell-to-cell variability on a genomic scale. Its application has already greatly impacted our conceptual understanding of diverse biological processes with broad implications for both basic and clinical research. Different single-cell RNA-seq protocols have been introduced and are reviewed here-each one with its own strengths and current limitations. We further provide an overview of the biological questions single-cell RNA-seq has been used to address, the major findings obtained from such studies, and current challenges and expected future developments in this booming field.",
"title": ""
},
{
"docid": "7154894c0acda12246877c8f3ab8ab57",
"text": "ABSTRACT Characteristics of variable threshold voltage CMOS (VTCMOS) in the series connected circuits are investigated by means of device simulation. It is newly found that the performance degradation due to the body effect in series connected circuit is suppressed by utilizing VTCMOS. Lowering the threshold voltage (Vth) enhances the drive current and alleviates the degradation due to the series connected configuration. Therefore, larger body effect factor (γ) results in lower Vth and higher oncurrent even in the series connected circuits. These characteristics are attributed to the velocity saturation phenomenon which reduces the drain saturation voltage (Vdsat).",
"title": ""
},
{
"docid": "03b48d35417f4bdae67d46c761f2ce0b",
"text": "We present a unified statistical theory for assessing the significance of apparent signal observed in noisy difference images. The results are usable in a wide range of applications, including fMRI, but are discussed with particular reference to PET images which represent changes in cerebral blood flow elicited by a specific cognitive or sensorimotor task. Our main result is an estimate of the P-value for local maxima of Gaussian, t, chi(2) and F fields over search regions of any shape or size in any number of dimensions. This unifies the P-values for large search areas in 2-D (Friston et al. [1991]: J Cereb Blood Flow Metab 11:690-699) large search regions in 3-D (Worsley et al. [1992]: J Cereb Blood Flow Metab 12:900-918) and the usual uncorrected P-value at a single pixel or voxel.",
"title": ""
},
{
"docid": "c6d84be944630cec1b19d84db2ace2ee",
"text": "This paper describes an effort to model a student’s changing knowledge state during skill acquisition. Dynamic Bayes Nets (DBNs) provide a powerful way to represent and reason about uncertainty in time series data, and are therefore well-suited to model student knowledge. Many general-purpose Bayes net packages have been implemented and distributed; however, constructing DBNs often involves complicated coding effort. To address this problem, we introduce a tool called BNTSM. BNT-SM inputs a data set and a compact XML specification of a Bayes net model hypothesized by a researcher to describe causal relationships among student knowledge and observed behavior. BNT-SM generates and executes the code to train and test the model using the Bayes Net Toolbox [1]. Compared to the BNT code it outputs, BNT-SM reduces the number of lines of code required to use a DBN by a factor of 5. In addition to supporting more flexible models, we illustrate how to use BNT-SM to simulate Knowledge Tracing (KT) [2], an established technique for student modeling. The trained DBN does a better job of modeling and predicting student performance than the original KT code (Area Under Curve = 0.610 > 0.568), due to differences in how it estimates parameters.",
"title": ""
},
{
"docid": "b42b2496b55c67c284b0399be71e8873",
"text": "We present a method for the online calibration of a compact series elastic actuator installed in a modular snake robot. Calibration is achieved by using the measured motor current of the actuator's highly geared motor and a simple linear model for the spring's estimated torque. A heuristic is developed to identify operating conditions where motor current is an accurate estimator of output torque, even when the motor is heavily geared. This heuristic is incorporated into an unscented Kalman filter that estimates a spring constant in real-time. Using this method on a prototype module of a series elastic snake robot, we are able accurately estimate the module's output torque, even with a poor initial calibration.",
"title": ""
},
{
"docid": "f4c2a00b8a602203c86eaebc6f111f46",
"text": "Tamara Kulesa: Hello. This is Tamara Kulesa, Worldwide Marketing Manager for IBM Global Business Services for the Global Government Industry. I am here today with Susanne Dirks, Manager of the IBM Institute for Business Values Global Center for Economic Development in Ireland. Susanne is responsible for the research and writing of the newly published report, \"A Vision of Smarter Cities: How Cities Can Lead the Way into a Prosperous and Sustainable Future.\" Susanne, thank you for joining me today.",
"title": ""
},
{
"docid": "c2ed6ac38a6014db73ba81dd898edb97",
"text": "The ability of personality traits to predict important life outcomes has traditionally been questioned because of the putative small effects of personality. In this article, we compare the predictive validity of personality traits with that of socioeconomic status (SES) and cognitive ability to test the relative contribution of personality traits to predictions of three critical outcomes: mortality, divorce, and occupational attainment. Only evidence from prospective longitudinal studies was considered. In addition, an attempt was made to limit the review to studies that controlled for important background factors. Results showed that the magnitude of the effects of personality traits on mortality, divorce, and occupational attainment was indistinguishable from the effects of SES and cognitive ability on these outcomes. These results demonstrate the influence of personality traits on important life outcomes, highlight the need to more routinely incorporate measures of personality into quality of life surveys, and encourage further research about the developmental origins of personality traits and the processes by which these traits influence diverse life outcomes.",
"title": ""
},
{
"docid": "f5ad4e1901dc96de45cb191bf1869828",
"text": "The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixedlength vector with neural networks and the quality of the representation is tested with a natural language inference task. This paper describes our system (alpha) that is ranked among the top in the Shared Task, on both the in-domain test set (obtaining a 74.9% accuracy) and on the crossdomain test set (also attaining a 74.9% accuracy), demonstrating that the model generalizes well to the cross-domain data. Our model is equipped with intra-sentence gated-attention composition which helps achieve a better performance. In addition to submitting our model to the Shared Task, we have also tested it on the Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy of 85.5%, which is the best reported result on SNLI when cross-sentence attention is not allowed, the same condition enforced in RepEval 2017.",
"title": ""
},
{
"docid": "206263868f70a1ce6aa734019d215a03",
"text": "This paper examines microblogging information diffusion activity during the 2011 Egyptian political uprisings. Specifically, we examine the use of the retweet mechanism on Twitter, using empirical evidence of information propagation to reveal aspects of work that the crowd conducts. Analysis of the widespread contagion of a popular meme reveals interaction between those who were \"on the ground\" in Cairo and those who were not. However, differences between information that appeals to the larger crowd and those who were doing on-the-ground work reveal important interplay between the two realms. Through both qualitative and statistical description, we show how the crowd expresses solidarity and does the work of information processing through recommendation and filtering. We discuss how these aspects of work mutually sustain crowd interaction in a politically sensitive context. In addition, we show how features of this retweet-recommendation behavior could be used in combination with other indicators to identify information that is new and likely coming from the ground.",
"title": ""
},
{
"docid": "4ea07335d42a859768565c8d88cd5280",
"text": "This paper brings together research from two different fields – user modelling and web ontologies – in attempt to demonstrate how recent semantic trends in web development can be combined with the modern technologies of user modelling. Over the last several years, a number of user-adaptive systems have been exploiting ontologies for the purposes of semantics representation, automatic knowledge acquisition, domain and user model visualisation and creation of interoperable and reusable architectural solutions. Before discussing these projects, we first overview the underlying user modelling and ontological technologies. As an example of the project employing ontology-based user modelling, we present an experiment design for translation of overlay student models for relative domains by means of ontology mapping.",
"title": ""
},
{
"docid": "ebb0828b532e8896e87ed4f365f8744a",
"text": "While much attention is given to young people’s online privacy practices on sites like Facebook, current theories of privacy fail to account for the ways in which social media alter practices of information-sharing and visibility. Traditional models of privacy are individualistic, but the realities of privacy reflect the location of individuals in contexts and networks. The affordances of social technologies, which enable people to share information about others, further preclude individual control over privacy. Despite this, social media technologies primarily follow technical models of privacy that presume individual information control. We argue that the dynamics of sites like Facebook have forced teens to alter their conceptions of privacy to account for the networked nature of social media. Drawing on their practices and experiences, we offer a model of networked privacy to explain how privacy is achieved in networked publics.",
"title": ""
},
{
"docid": "fb6d89e2faee942a0a92ded6ead0d8c7",
"text": "Each relationship has its own personality. Almost immediately after a social interaction begins, verbal and nonverbal behaviors become synchronized. Even in asocial contexts, individuals tend to produce utterances that match the grammatical structure of sentences they have recently heard or read. Three projects explore language style matching (LSM) in everyday writing tasks and professional writing. LSM is the relative use of 9 function word categories (e.g., articles, personal pronouns) between any 2 texts. In the first project, 2 samples totaling 1,744 college students answered 4 essay questions written in very different styles. Students automatically matched the language style of the target questions. Overall, the LSM metric was internally consistent and reliable across writing tasks. Women, participants of higher socioeconomic status, and students who earned higher test grades matched with targets more than others did. In the second project, 74 participants completed cliffhanger excerpts from popular fiction. Judges' ratings of excerpt-response similarity were related to content matching but not function word matching, as indexed by LSM. Further, participants were not able to intentionally increase style or content matching. In the final project, an archival study tracked the professional writing and personal correspondence of 3 pairs of famous writers across their relationships. Language matching in poetry and letters reflected fluctuations in the relationships of 3 couples: Sigmund Freud and Carl Jung, Elizabeth Barrett and Robert Browning, and Sylvia Plath and Ted Hughes. Implications for using LSM as an implicit marker of social engagement and influence are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved).",
"title": ""
},
{
"docid": "eb8f0a30d222b89e5fda3ea1d83ea525",
"text": "We present a method which exploits automatically generated scientific discourse annotations to create a content model for the summarisation of scientific articles. Full papers are first automatically annotated using the CoreSC scheme, which captures 11 contentbased concepts such as Hypothesis, Result, Conclusion etc at the sentence level. A content model which follows the sequence of CoreSC categories observed in abstracts is used to provide the skeleton of the summary, making a distinction between dependent and independent categories. Summary creation is also guided by the distribution of CoreSC categories found in the full articles, in order to adequately represent the article content. Finally, we demonstrate the usefulness of the summaries by evaluating them in a complex question answering task. Results are very encouraging as summaries of papers from automatically obtained CoreSCs enable experts to answer 66% of complex content-related questions designed on the basis of paper abstracts. The questions were answered with a precision of 75%, where the upper bound for human summaries (abstracts) was 95%.",
"title": ""
},
{
"docid": "03eb1360ba9e3e38f082099ed08469ed",
"text": "In this paper some concept of fuzzy set have discussed and one fuzzy model have applied on agricultural farm for optimal allocation of different crops by considering maximization of net benefit, production and utilization of labour . Crisp values of the objective functions obtained from selected nondominated solutions are converted into triangular fuzzy numbers and ranking of those fuzzy numbers are done to make a decision. .",
"title": ""
},
{
"docid": "404a32f89d6273a63b7ae945514655d2",
"text": "Miniaturized minimally-invasive implants with wireless power and communication links have the potential to enable closed-loop treatments and precise diagnostics. As with wireless power transfer, robust wireless communication between implants and external transceivers presents challenges and tradeoffs with miniaturization and increasing depth. Both link efficiency and available bandwidth need to be considered for communication capacity. This paper analyzes and reviews active electromagnetic and ultrasonic communication links for implants. Example transmitter designs are presented for both types of links. Electromagnetic links for mm-sized implants have demonstrated high data rates sufficient for most applications up to Mbps range; nonetheless, they have so far been limited to depths under 5 cm. Ultrasonic links, on the other hand, have shown much deeper transmission depths, but with limited data rate due to their low operating frequency. Spatial multiplexing techniques are proposed to increase ultrasonic data rates without additional power or bandwidth.",
"title": ""
},
{
"docid": "5054ad32c33dc2650c1dcee640961cd5",
"text": "Benchmarks have played a vital role in the advancement of visual object recognition and other fields of computer vision (LeCun et al., 1998; Deng et al., 2009; ). The challenges posed by these standard datasets have helped identify and overcome the shortcomings of existing approaches, and have led to great advances of the state of the art. Even the recent massive increase of interest in deep learning methods can be attributed to their success in difficult benchmarks such as ImageNet (Krizhevsky et al., 2012; LeCun et al., 2015). Neuromorphic vision uses silicon retina sensors such as the dynamic vision sensor (DVS; Lichtsteiner et al., 2008). These sensors and their DAVIS (Dynamic and Activepixel Vision Sensor) and ATIS (Asynchronous Time-based Image Sensor) derivatives (Brandli et al., 2014; Posch et al., 2014) are inspired by biological vision by generating streams of asynchronous events indicating local log-intensity brightness changes. They thereby greatly reduce the amount of data to be processed, and their dynamic nature makes them a good fit for domains such as optical flow, object tracking, action recognition, or dynamic scene understanding. Compared to classical computer vision, neuromorphic vision is a younger and much smaller field of research, and lacks benchmarks, which impedes the progress of the field. To address this we introduce the largest event-based vision benchmark dataset published to date, hoping to satisfy a growing demand and stimulate challenges for the community. In particular, the availability of such benchmarks should help the development of algorithms processing event-based vision input, allowing a direct fair comparison of different approaches. We have explicitly chosen mostly dynamic vision tasks such as action recognition or tracking, which could benefit from the strengths of neuromorphic vision sensors, although algorithms that exploit these features are largely missing. A major reason for the lack of benchmarks is that currently neuromorphic vision sensors are only available as R&D prototypes. Nonetheless, there are several datasets already available; see Tan et al. (2015) for an informative review. Unlabeled DVS data was made available around 2007 in the jAER project1 and was used for development of spike timing-based unsupervised feature learning e.g., in Bichler et al. (2012). The first labeled and published event-based neuromorphic vision sensor benchmarks were created from the MNIST digit recognition dataset by jiggling the image on the screen (see Serrano-Gotarredona and Linares-Barranco, 2015 for an informative history) and later to reduce frame artifacts by jiggling the camera view with a pan-tilt unit (Orchard et al., 2015). These datasets automated the scene movement necessary to generate DVS output from the static images, and will be an important step forward for evaluating neuromorphic object recognition systems such as spiking deep networks (Pérez-Carrasco et al., 2013; O’Connor et al., 2013; Cao et al., 2014; Diehl et al., 2015), which so far have been tested mostly on static image datasets converted",
"title": ""
},
{
"docid": "49d1d7c47a52fdaf8d09053f63d225e6",
"text": "Theory of language, communicative competence, functional account of language use, discourse analysis and social-linguistic considerations have mainly made up the theoretical foundations of communicative approach to language teaching. The principles contain taking communication as the center, reflecting Real Communicating Process, avoiding Constant Error-correcting, and putting grammar at a right place.",
"title": ""
},
{
"docid": "2e0262fce0a7ba51bd5ccf9e1397b0ca",
"text": "We present a topology detection method combining smart meter sensor information and sparse line measurements. The problem is formulated as a spanning tree identification problem over a graph given partial nodal and edge power flow information. In the deterministic case of known nodal power consumption and edge power flow we provide sensor placement criterion which guarantees correct identification of all spanning trees. We then present a detection method which is polynomial in complexity to the size of the graph. In the stochastic case where loads are given by forecasts derived from delayed smart meter data, we provide a combinatorial complexity MAP detector and a polynomial complexity approximate MAP detector which is shown to work near optimum in all numerical cases.",
"title": ""
},
{
"docid": "e3ef98c0dae25c39e4000e62a348479e",
"text": "A New Framework For Hybrid Models By Coupling Latent Variables 1 User specifies p with a generative and a discriminative component and latent z p(x, y, z) = p(y|x, z) · p(x, z). The p(y|x, z), p(x, z) can be very general; they only share latent z, not parameters! 2We train both components using a multi-conditional objective α · Eq(x,y)Eq(z|x) ` (y, p(y|x, z)) } {{ } discriminative loss (`2, log) +β ·Df [q(x, z)||p(x, z)] } {{ } f-divergence (KL, JS) where q(x, y) is data distribution and α, β > 0 are hyper-parameters.",
"title": ""
}
] |
scidocsrr
|
7ec7b9d74b2aa147339e866503787244
|
Wireless Sensor Networks for Early Detection of Forest Fires
|
[
{
"docid": "8e0e77e78c33225922b5a45fee9b4242",
"text": "In this paper, we address the issues of maintaining sensing coverage and connectivity by keeping a minimum number of sensor nodes in the active mode in wireless sensor networks. We investigate the relationship between coverage and connectivity by solving the following two sub-problems. First, we prove that if the radio range is at least twice the sensing range, complete coverage of a convex area implies connectivity among the working set of nodes. Second, we derive, under the ideal case in which node density is sufficiently high, a set of optimality conditions under which a subset of working sensor nodes can be chosen for complete coverage. Based on the optimality conditions, we then devise a decentralized density control algorithm, Optimal Geographical Density Control (OGDC), for density control in large scale sensor networks. The OGDC algorithm is fully localized and can maintain coverage as well as connectivity, regardless of the relationship between the radio range and the sensing range. Ns-2 simulations show that OGDC outperforms existing density control algorithms [25, 26, 29] with respect to the number of working nodes needed and network lifetime (with up to 50% improvement), and achieves almost the same coverage as the algorithm with the best result.",
"title": ""
}
] |
[
{
"docid": "dbe62d1ffe794e26ac7c8418f3908f70",
"text": "Numerical differentiation in noisy environment is revised through an algebraic approach. For each given order, an explicit formula yielding a pointwise derivative estimation is derived, using elementary differential algebraic operations. These expressions are composed of iterated integrals of the noisy observation signal. We show in particular that the introduction of delayed estimates affords significant improvement. An implementation in terms of a classical finite impulse response (FIR) digital filter is given. Several simulation results are presented.",
"title": ""
},
{
"docid": "9853f157525548a35bcbe118fdefaf33",
"text": "We address the task of 6D pose estimation of known rigid objects from single input images in scenarios where the objects are partly occluded. Recent RGB-D-based methods are robust to moderate degrees of occlusion. For RGB inputs, no previous method works well for partly occluded objects. Our main contribution is to present the first deep learning-based system that estimates accurate poses for partly occluded objects from RGB-D and RGB input. We achieve this with a new instance-aware pipeline that decomposes 6D object pose estimation into a sequence of simpler steps, where each step removes specific aspects of the problem. The first step localizes all known objects in the image using an instance segmentation network, and hence eliminates surrounding clutter and occluders. The second step densely maps pixels to 3D object surface positions, so called object coordinates, using an encoder-decoder network, and hence eliminates object appearance. The third, and final, step predicts the 6D pose using geometric optimization. We demonstrate that we significantly outperform the state-of-the-art for pose estimation of partly occluded objects for both RGB and RGB-D input.",
"title": ""
},
{
"docid": "c077231164a8a58f339f80b83e5b4025",
"text": "It is widely believed that refactoring improves software quality and developer productivity. However, few empirical studies quantitatively assess refactoring benefits or investigate developers' perception towards these benefits. This paper presents a field study of refactoring benefits and challenges at Microsoft through three complementary study methods: a survey, semi-structured interviews with professional software engineers, and quantitative analysis of version history data. Our survey finds that the refactoring definition in practice is not confined to a rigorous definition of semantics-preserving code transformations and that developers perceive that refactoring involves substantial cost and risks. We also report on interviews with a designated refactoring team that has led a multi-year, centralized effort on refactoring Windows. The quantitative analysis of Windows 7 version history finds that the binary modules refactored by this team experienced significant reduction in the number of inter-module dependencies and post-release defects, indicating a visible benefit of refactoring.",
"title": ""
},
{
"docid": "3bf954a23ea3e7d5326a7b89635f966a",
"text": "The particle swarm optimizer (PSO) is a stochastic, population-based optimization technique that can be applied to a wide range of problems, including neural network training. This paper presents a variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm. This is achieved by using multiple swarms to optimize different components of the solution vector cooperatively. Application of the new PSO algorithm on several benchmark optimization problems shows a marked improvement in performance over the traditional PSO.",
"title": ""
},
{
"docid": "57bd8c0c2742027de4b599b129506154",
"text": "Software instrumentation is a powerful and flexible technique for analyzing the dynamic behavior of programs. By inserting extra code in an application, it is possible to study the performance and correctness of programs and systems. Pin is a software system that performs run-time binary instrumentation of unmodified applications. Pin provides an API for writing custom instrumentation, enabling its use in a wide variety of performance analysis tasks such as workload characterization, program tracing, cache modeling, and simulation. Most of the prior work on instrumentation systems has focused on executing Unix applications, despite the ubiquity and importance of Windows applications. This paper identifies the Windows-specific obstacles for implementing a process-level instrumentation system, describes a comprehensive, robust solution, and discusses some of the alternatives. The challenges lie in managing the kernel/application transitions, injecting the runtime agent into the process, and isolating the instrumentation from the application. We examine Pin's overhead on typical Windows applications being instrumented with simple tools up to commercial program analysis products. The biggest factor affecting performance is the type of analysis performed by the tool. While the proprietary nature of Windows makes measurement and analysis difficult, Pin opens the door to understanding program behavior.",
"title": ""
},
{
"docid": "8075cc962ce18cea46a8df4396512aa5",
"text": "In the last few years, neural representation learning approaches have achieved very good performance on many natural language processing tasks, such as language modelling and machine translation. This suggests that neural models will also achieve good performance on information retrieval (IR) tasks, such as relevance ranking, addressing the query-document vocabulary mismatch problem by using a semantic rather than lexical matching. Although initial iterations of neural models do not outperform traditional lexical-matching baselines, the level of interest and effort in this area is increasing, potentially leading to a breakthrough. The popularity of the recent SIGIR 2016 workshop on Neural Information Retrieval provides evidence to the growing interest in neural models for IR. While recent tutorials have covered some aspects of deep learning for retrieval tasks, there is a significant scope for organizing a tutorial that focuses on the fundamentals of representation learning for text retrieval. The goal of this tutorial will be to introduce state-of-the-art neural embedding models and bridge the gap between these neural models with early representation learning approaches in IR (e.g., LSA). We will discuss some of the key challenges and insights in making these models work in practice, and demonstrate one of the toolsets available to researchers interested in this area.",
"title": ""
},
{
"docid": "0110e37c5525520a4db4b1a775dacddd",
"text": "This paper presents a study of Linux API usage across all applications and libraries in the Ubuntu Linux 15.04 distribution. We propose metrics for reasoning about the importance of various system APIs, including system calls, pseudo-files, and libc functions. Our metrics are designed for evaluating the relative maturity of a prototype system or compatibility layer, and this paper focuses on compatibility with Linux applications. This study uses a combination of static analysis to understand API usage and survey data to weight the relative importance of applications to end users.\n This paper yields several insights for developers and researchers, which are useful for assessing the complexity and security of Linux APIs. For example, every Ubuntu installation requires 224 system calls, 208 ioctl, fcntl, and prctl codes and hundreds of pseudo files. For each API type, a significant number of APIs are rarely used, if ever. Moreover, several security-relevant API changes, such as replacing access with faccessat, have met with slow adoption. Finally, hundreds of libc interfaces are effectively unused, yielding opportunities to improve security and efficiency by restructuring libc.",
"title": ""
},
{
"docid": "ffd84e3418a6d1d793f36bfc2efed6be",
"text": "Anterior cingulate cortex (ACC) is a part of the brain's limbic system. Classically, this region has been related to affect, on the basis of lesion studies in humans and in animals. In the late 1980s, neuroimaging research indicated that ACC was active in many studies of cognition. The findings from EEG studies of a focal area of negativity in scalp electrodes following an error response led to the idea that ACC might be the brain's error detection and correction device. In this article, these various findings are reviewed in relation to the idea that ACC is a part of a circuit involved in a form of attention that serves to regulate both cognitive and emotional processing. Neuroimaging studies showing that separate areas of ACC are involved in cognition and emotion are discussed and related to results showing that the error negativity is influenced by affect and motivation. In addition, the development of the emotional and cognitive roles of ACC are discussed, and how the success of this regulation in controlling responses might be correlated with cingulate size. Finally, some theories are considered about how the different subdivisions of ACC might interact with other cortical structures as a part of the circuits involved in the regulation of mental and emotional activity.",
"title": ""
},
{
"docid": "c10829be320a9be6ecbc9ca751e8b56e",
"text": "This article analyzes two decades of research regarding the mass media's role in shaping, perpetuating, and reducing the stigma of mental illness. It concentrates on three broad areas common in media inquiry: production, representation, and audiences. The analysis reveals that descriptions of mental illness and the mentally ill are distorted due to inaccuracies, exaggerations, or misinformation. The ill are presented not only as peculiar and different, but also as dangerous. Thus, the media perpetuate misconceptions and stigma. Especially prominent is the absence of agreed-upon definitions of \"mental illness,\" as well as the lack of research on the inter-relationships in audience studies between portrayals in the media and social perceptions. The analysis concludes with suggestions for further research on mass media's inter-relationships with mental illness.",
"title": ""
},
{
"docid": "00c19e68020aff7fd86aa7e514cc0668",
"text": "Network forensic techniques help in tracking different types of cyber attack by monitoring and inspecting network traffic. However, with the high speed and large sizes of current networks, and the sophisticated philosophy of attackers, in particular mimicking normal behaviour and/or erasing traces to avoid detection, investigating such crimes demands intelligent network forensic techniques. This paper suggests a real-time collaborative network Forensic scheme (RCNF) that can monitor and investigate cyber intrusions. The scheme includes three components of capturing and storing network data, selecting important network features using chi-square method and investigating abnormal events using a new technique called correntropy-variation. We provide a case study using the UNSW-NB15 dataset for evaluating the scheme, showing its high performance in terms of accuracy and false alarm rate compared with three recent state-of-the-art mechanisms.",
"title": ""
},
{
"docid": "1b30c14536db1161b77258b1ce213fbb",
"text": "Click-through rate (CTR) prediction and relevance ranking are two fundamental problems in web advertising. In this study, we address the problem of modeling the relationship between CTR and relevance for sponsored search. We used normalized relevance scores comparable across all queries to represent relevance when modeling with CTR, instead of directly using human judgment labels or relevance scores valid only within same query. We classified clicks by identifying their relevance quality using dwell time and session information, and compared all clicks versus selective clicks effects when modeling relevance.\n Our results showed that the cleaned click signal outperforms raw click signal and others we explored, in terms of relevance score fitting. The cleaned clicks include clicks with dwell time greater than 5 seconds and last clicks in session. Besides traditional thoughts that there is no linear relation between click and relevance, we showed that the cleaned click based CTR can be fitted well with the normalized relevance scores using a quadratic regression model. This relevance-click model could help to train ranking models using processed click feedback to complement expensive human editorial relevance labels, or better leverage relevance signals in CTR prediction.",
"title": ""
},
{
"docid": "d1a94ed95234d9ea660b6e4779a6a694",
"text": "This study aims to analyse the scientific literature on sustainability and innovation in the automotive sector in the last 13 years. The research is classified as descriptive and exploratory. The process presented 31 articles in line with the research topic in the Scopus database. The bibliometric analysis identified the most relevant articles, authors, keywords, countries, research centers and journals for the subject from 2004 to 2016 in the Industrial Engineering domain. We concluded, through the systemic analysis, that the automotive sector is well structured on the issue of sustainability and process innovation. Innovations in the sector are of the incremental process type, due to the lower risk, lower costs and less complexity. However, the literature also points out that radical innovations are needed in order to fit the prevailing environmental standards. The selected studies show that environmental practices employed in the automotive sector are: the minimization of greenhouse gas emissions, life-cycle assessment, cleaner production, reverse logistics and eco-innovation. Thus, it displays the need for empirical studies in automotive companies on the environmental practices employed and how these practices impact innovation.",
"title": ""
},
{
"docid": "5bf0406864b500084480081d8cddcb82",
"text": "Polymer scaffolds have many different functions in the field of tissue engineering. They are applied as space filling agents, as delivery vehicles for bioactive molecules, and as three-dimensional structures that organize cells and present stimuli to direct the formation of a desired tissue. Much of the success of scaffolds in these roles hinges on finding an appropriate material to address the critical physical, mass transport, and biological design variables inherent to each application. Hydrogels are an appealing scaffold material because they are structurally similar to the extracellular matrix of many tissues, can often be processed under relatively mild conditions, and may be delivered in a minimally invasive manner. Consequently, hydrogels have been utilized as scaffold materials for drug and growth factor delivery, engineering tissue replacements, and a variety of other applications.",
"title": ""
},
{
"docid": "4a1db0cab3812817c3ebb149bd8b3021",
"text": "Structural information in web text provides natural annotations for NLP problems such as word segmentation and parsing. In this paper we propose a discriminative learning algorithm to take advantage of the linguistic knowledge in large amounts of natural annotations on the Internet. It utilizes the Internet as an external corpus with massive (although slight and sparse) natural annotations, and enables a classifier to evolve on the large-scaled and real-time updated web text. With Chinese word segmentation as a case study, experiments show that the segmenter enhanced with the Chinese wikipedia achieves significant improvement on a series of testing sets from different domains, even with a single classifier and local features.",
"title": ""
},
{
"docid": "7788cf06b7c9f09013bd15607e11cd79",
"text": "Separate Cox analyses of all cause-specific hazards are the standard technique of choice to study the effect of a covariate in competing risks, but a synopsis of these results in terms of cumulative event probabilities is challenging. This difficulty has led to the development of the proportional subdistribution hazards model. If the covariate is known at baseline, the model allows for a summarizing assessment in terms of the cumulative incidence function. black Mathematically, the model also allows for including random time-dependent covariates, but practical implementation has remained unclear due to a certain risk set peculiarity. We use the intimate relationship of discrete covariates and multistate models to naturally treat time-dependent covariates within the subdistribution hazards framework. The methodology then straightforwardly translates to real-valued time-dependent covariates. As with classical survival analysis, including time-dependent covariates does not result in a model for probability functions anymore. Nevertheless, the proposed methodology provides a useful synthesis of separate cause-specific hazards analyses. We illustrate this with hospital infection data, where time-dependent covariates and competing risks are essential to the subject research question.",
"title": ""
},
{
"docid": "f1a5a1683b6796aebb98afce2068ffff",
"text": "Printed text recognition is an important problem for industrial OCR systems. Printed text is constructed in a standard procedural fashion in most settings. We develop a mathematical model for this process that can be applied to the backward inference problem of text recognition from an image. Through ablation experiments we show that this model is realistic and that a multi-task objective setting can help to stabilize estimation of its free parameters, enabling use of conventional deep learning methods. Furthermore, by directly modeling the geometric perturbations of text synthesis we show that our model can help recover missing characters from incomplete text regions, the bane of multicomponent OCR systems, enabling recognition even when the detection returns incomplete in-",
"title": ""
},
{
"docid": "9b0114697dc6c260610d0badc1d7a2a4",
"text": "This review captures the synthesis, assembly, properties, and applications of copper chalcogenide NCs, which have achieved significant research interest in the last decade due to their compositional and structural versatility. The outstanding functional properties of these materials stems from the relationship between their band structure and defect concentration, including charge carrier concentration and electronic conductivity character, which consequently affects their optoelectronic, optical, and plasmonic properties. This, combined with several metastable crystal phases and stoichiometries and the low energy of formation of defects, makes the reproducible synthesis of these materials, with tunable parameters, remarkable. Further to this, the review captures the progress of the hierarchical assembly of these NCs, which bridges the link between their discrete and collective properties. Their ubiquitous application set has cross-cut energy conversion (photovoltaics, photocatalysis, thermoelectrics), energy storage (lithium-ion batteries, hydrogen generation), emissive materials (plasmonics, LEDs, biolabelling), sensors (electrochemical, biochemical), biomedical devices (magnetic resonance imaging, X-ray computer tomography), and medical therapies (photochemothermal therapies, immunotherapy, radiotherapy, and drug delivery). The confluence of advances in the synthesis, assembly, and application of these NCs in the past decade has the potential to significantly impact society, both economically and environmentally.",
"title": ""
},
{
"docid": "7025d357898c5997e225299f398c42f0",
"text": "UNLABELLED\nAnnotating genetic variants, especially non-coding variants, for the purpose of identifying pathogenic variants remains a challenge. Combined annotation-dependent depletion (CADD) is an algorithm designed to annotate both coding and non-coding variants, and has been shown to outperform other annotation algorithms. CADD trains a linear kernel support vector machine (SVM) to differentiate evolutionarily derived, likely benign, alleles from simulated, likely deleterious, variants. However, SVMs cannot capture non-linear relationships among the features, which can limit performance. To address this issue, we have developed DANN. DANN uses the same feature set and training data as CADD to train a deep neural network (DNN). DNNs can capture non-linear relationships among features and are better suited than SVMs for problems with a large number of samples and features. We exploit Compute Unified Device Architecture-compatible graphics processing units and deep learning techniques such as dropout and momentum training to accelerate the DNN training. DANN achieves about a 19% relative reduction in the error rate and about a 14% relative increase in the area under the curve (AUC) metric over CADD's SVM methodology.\n\n\nAVAILABILITY AND IMPLEMENTATION\nAll data and source code are available at https://cbcl.ics.uci.edu/public_data/DANN/.",
"title": ""
},
{
"docid": "7b6c039783091260cee03704ce9748d8",
"text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F , the sensitivity of query ∆, the threshold τ , and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compred with a noisy threshold at line 4 and outputs the result of comparison. Let ⊤ mean that fk(S) > τ . Algorithm 2 is terminated if outputs ⊤ s times.",
"title": ""
},
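The passage above describes the noisy-threshold comparison loop of Algorithm 2 only in prose. As a rough illustration, the Python sketch below shows one way such a loop could be organized; the use of Laplace noise, the noise scales, and all function names are assumptions made for this sketch and are not taken from the paper.

```python
import numpy as np

def noisy_threshold_loop(S, F, delta, tau, s, epsilon, rng=None):
    """Illustrative sketch of the loop described above: each noisy query output
    is compared with a noisy threshold, the result of every comparison is
    output, and the loop stops after s above-threshold results.
    Noise distribution and scales are assumptions, not taken from the paper."""
    rng = rng or np.random.default_rng()
    noisy_tau = tau + rng.laplace(scale=2.0 * delta / epsilon)  # assumed scale
    results, above_count = [], 0
    for f_k in F:  # F is the query sequence; each f_k maps a sample set to a number
        noisy_out = f_k(S) + rng.laplace(scale=4.0 * delta * s / epsilon)  # assumed scale
        is_above = noisy_out > noisy_tau
        results.append(is_above)  # output the result of each comparison
        if is_above:
            above_count += 1
            if above_count == s:  # terminate once ⊤ has been output s times
                break
    return results
```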
{
"docid": "7131f6062fcb4fd1d532516499105b02",
"text": "Markov influence diagrams (MIDs) are a new type of probabilistic graphical model that extends influence diagrams in the same way that Markov decision trees extend decision trees. They have been designed to build state-transition models, mainly in medicine, and perform cost-effectiveness analyses. Using a causal graph that may contain several variables per cycle, MIDs can model various patient characteristics without multiplying the number of states; in particular, they can represent the history of the patient without using tunnel states. OpenMarkov, an open-source tool, allows the decision analyst to build and evaluate MIDs-including cost-effectiveness analysis and several types of deterministic and probabilistic sensitivity analysis-with a graphical user interface, without writing any code. This way, MIDs can be used to easily build and evaluate complex models whose implementation as spreadsheets or decision trees would be cumbersome or unfeasible in practice. Furthermore, many problems that previously required discrete event simulation can be solved with MIDs; i.e., within the paradigm of state-transition models, in which many health economists feel more comfortable.",
"title": ""
}
] |
scidocsrr
|
3e7af8497d080d88c7873de1ca8a4027
|
Natural Language Semantics Using Probabilistic Logic
|
[
{
"docid": "41a0b9797c556368f84e2a05b80645f3",
"text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.",
"title": ""
},
{
"docid": "70fd543752f17237386b3f8e99954230",
"text": "Using Markov logic to integrate logical and distributional information in natural-language semantics results in complex inference problems involving long, complicated formulae. Current inference methods for Markov logic are ineffective on such problems. To address this problem, we propose a new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms. We also introduce a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible. Our approach is evaluated on the recognizing textual entailment task, and experiments demonstrate its dramatic impact on the efficiency",
"title": ""
}
] |
[
{
"docid": "11f2adab1fb7a93e0c9009a702389af1",
"text": "OBJECTIVE\nThe authors present clinical outcome data and satisfaction of patients who underwent minimally invasive vertebral body corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach and posterior short-segment instrumentation for lumbar burst fractures.\n\n\nMETHODS\nPatients with unstable lumbar burst fractures who underwent corpectomy and anterior column reconstruction via a mini-open, extreme lateral, transpsoas approach with short-segment posterior fixation were reviewed retrospectively. Demographic information, operative parameters, perioperative radiographic measurements, and complications were analyzed. Patient-reported outcome instruments (Oswestry Disability Index [ODI], 12-Item Short Form Health Survey [SF-12]) and an anterior scar-specific patient satisfaction questionnaire were recorded at the latest follow-up.\n\n\nRESULTS\nTwelve patients (7 men, 5 women, average age 42 years, range 22-68 years) met the inclusion criteria. Lumbar corpectomies with anterior column support were performed (L-1, n = 8; L-2, n = 2; L-3, n = 2) and supplemented with short-segment posterior instrumentation (4 open, 8 percutaneous). Four patients had preoperative neurological deficits, all of which improved after surgery. No new neurological complications were noted. The anterior incision on average was 6.4 cm (range 5-8 cm) in length, caused mild pain and disability, and was aesthetically acceptable to the large majority of patients. Three patients required chest tube placement for pleural violation, and 1 patient required reoperation for cage subsidence/hardware failure. Average clinical follow-up was 38 months (range 16-68 months), and average radiographic follow-up was 37 months (range 6-68 months). Preoperative lumbar lordosis and focal lordosis were significantly improved/maintained after surgery. Patients were satisfied with their outcomes, had minimal/moderate disability (average ODI score 20, range 0-52), and had good physical (SF-12 physical component score 41.7% ± 10.4%) and mental health outcomes (SF-12 mental component score 50.2% ± 11.6%) after surgery.\n\n\nCONCLUSIONS\nAnterior corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach supplemented by short-segment posterior instrumentation is a safe, effective alternative to conventional approaches in the treatment of single-level unstable burst fractures and is associated with excellent functional outcomes and patient satisfaction.",
"title": ""
},
{
"docid": "f5b372607a89ea6595683276e48d6dce",
"text": "In this paper, we present YAMAMA, a multi-dialect Arabic morphological analyzer and disambiguator. Our system is almost five times faster than the state-of-the-art MADAMIRA system with a slightly lower quality. In addition to speed, YAMAMA outputs a rich representation which allows for a wider spectrum of use. In this regard, YAMAMA transcends other systems, such as FARASA, which is faster but provides specific outputs catering to specific applications.",
"title": ""
},
{
"docid": "9228218e663951e54f31d697997c80f9",
"text": "In this paper, we describe a simple set of \"recipes\" for the analysis of high spatial density EEG. We focus on a linear integration of multiple channels for extracting individual components without making any spatial or anatomical modeling assumptions, instead requiring particular statistical properties such as maximum difference, maximum power, or statistical independence. We demonstrate how corresponding algorithms, for example, linear discriminant analysis, principal component analysis and independent component analysis, can be used to remove eye-motion artifacts, extract strong evoked responses, and decompose temporally overlapping components. The general approach is shown to be consistent with the underlying physics of EEG, which specifies a linear mixing model of the underlying neural and non-neural current sources.",
"title": ""
},
{
"docid": "de682d74b30e699d7185765f8b235e00",
"text": "A key goal of research in conversational systems is to train an interactive agent to help a user with a task. Human conversation, however, is notoriously incomplete, ambiguous, and full of extraneous detail. To operate effectively, the agent must not only understand what was explicitly conveyed but also be able to reason in the presence of missing or unclear information. When unable to resolve ambiguities on its own, the agent must be able to ask the user for the necessary clarifications and incorporate the response in its reasoning. Motivated by this problem we introduce QRAQ (Query, Reason, and Answer Questions), a new synthetic domain, in which a User gives an Agent a short story and asks a challenge question. These problems are designed to test the reasoning and interaction capabilities of a learningbased Agent in a setting that requires multiple conversational turns. A good Agent should ask only non-deducible, relevant questions until it has enough information to correctly answer the User’s question. We use standard and improved reinforcement learning based memory-network architectures to solve QRAQ problems in the difficult setting where the reward signal only tells the Agent if its final answer to the challenge question is correct or not. To provide an upper-bound to the RL results we also train the same architectures using supervised information that tells the Agent during training which variables to query and the answer to the challenge question. We evaluate our architectures on four QRAQ dataset types, and scale the complexity for each along multiple dimensions.",
"title": ""
},
{
"docid": "753dcf47f0d1d63d2b93a8f4b5d78a33",
"text": "BACKGROUND\nTrichostasis spinulosa (TS) is a common, underdiagnosed cosmetic skin condition.\n\n\nOBJECTIVES\nThe main objectives of this study were to determine the occurrence of TS relative to age and gender, to analyze its cutaneous distribution, and to investigate any possible familial basis for this condition, its impact on patients, and the types and efficacy of previous treatments.\n\n\nMETHODS\nAll patients presenting to the outpatient dermatology clinic at the study institution and their relatives were examined for the presence of TS and were questioned about family history and previous treatment. Photographs and biopsies of suspected cases of TS were obtained.\n\n\nRESULTS\nOf 2400 patients seen between August and December 2013, 286 patients were diagnosed with TS (135 males, 151 females; prevalence: 11.9%). Women presented more frequently than men with complaints of TS (6.3 vs. 4.2%), and more women had received prior treatment for TS (10.5 vs. 2.8%). The most commonly affected sites were the face (100%), interscapular area (10.5%), and arms (3.1%). Lesions involved the nasal alae in 96.2%, the nasal tip in 90.9%, the chin in 55.9%, and the cheeks in 52.4% of patients. Only 15.7% of patients had forehead lesions, and only 4.5% had perioral lesions. Among the 38 previously treated patients, 65.8% reported temporary improvement.\n\n\nCONCLUSIONS\nTrichostasis spinulosa is a common condition that predominantly affects the face in patients of all ages. Additional studies employing larger cohorts from multiple centers will be required to determine the prevalence of TS in the general population.",
"title": ""
},
{
"docid": "65b34f78e3b8d54ad75d32cdef487dac",
"text": "Recognizing polarity requires a list of polar words and phrases. For the purpose of building such lexicon automatically, a lot of studies have investigated (semi-) unsupervised method of learning polarity of words and phrases. In this paper, we explore to use structural clues that can extract polar sentences from Japanese HTML documents, and build lexicon from the extracted polar sentences. The key idea is to develop the structural clues so that it achieves extremely high precision at the cost of recall. In order to compensate for the low recall, we used massive collection of HTML documents. Thus, we could prepare enough polar sentence corpus.",
"title": ""
},
{
"docid": "8cd8fbbc3e20d29989deeb2fd2362c10",
"text": "Modern programming languages and software engineering principles are causing increasing problems for compiler systems. Traditional approaches, which use a simple compile-link-execute model, are unable to provide adequate application performance under the demands of the new conditions. Traditional approaches to interprocedural and profile-driven compilation can provide the application performance needed, but require infeasible amounts of compilation time to build the application. This thesis presents LLVM, a design and implementation of a compiler infrastructure which supports a unique multi-stage optimization system. This system is designed to support extensive interprocedural and profile-driven optimizations, while being efficient enough for use in commercial compiler systems. The LLVM virtual instruction set is the glue that holds the system together. It is a low-level representation, but with high-level type information. This provides the benefits of a low-level representation (compact representation, wide variety of available transformations, etc.) as well as providing high-level information to support aggressive interprocedural optimizations at link-and post-link time. In particular, this system is designed to support optimization in the field, both at run-time and during otherwise unused idle time on the machine. This thesis also describes an implementation of this compiler design, the LLVM compiler infrastructure , proving that the design is feasible. The LLVM compiler infrastructure is a maturing and efficient system, which we show is a good host for a variety of research. More information about LLVM can be found on its web site at: iii Acknowledgments This thesis would not be possible without the support of a large number of people who have helped me both in big ways and little. In particular, I would like to thank my advisor, Vikram Adve, for his support, patience, and especially his trust and respect. He has shown me how to communicate ideas more effectively and how to find important and meaningful topics for research. By being demanding, understanding, and allowing me the freedom to explore my interests, he has driven me to succeed. The inspiration for this work certainly stems from one person: Tanya. She has been a continuous source of support, ideas, encouragement, and understanding. Despite my many late nights, unimaginable amounts of stress, and a truly odd sense of humor, she has not just tolerated me, but loved me. Another person who made this possible, perhaps without truly understanding his contribution, has been Brian Ensink. Brian has been an invaluable sounding board for ideas, a welcoming ear to occasional frustrations, provider …",
"title": ""
},
{
"docid": "4cb0358724add5f51b598b7dd19c3640",
"text": "110 CSEG RECORDER 2006 Special Edition Continued on Page 111 Seismic attributes have come a long way since their intro d u ction in the early 1970s and have become an integral part of seismic interpretation projects. To d a y, they are being used widely for lithological and petrophysical prediction of re s e rvoirs and various methodologies have been developed for their application to broader hydrocarbon exploration and development decision making. Beginning with the digital re c o rding of seismic data in the early 1960s and the ensuing bright spot analysis, the 1970s saw the introduction of complex trace attributes and seismic inversion along with their color displays. This was followed by the development of response attributes, introduction of texture analysis, 2D attributes, horizon and interval attributes and the pervasive use of c o l o r. 3D seismic acquisition dominated the 1990s as the most successful exploration technology of several decades and along with that came the seismic sequence attributes. The c o h e rence technology introduced in the mid 1990s significantly changed the way geophysicists interpreted seismic data. This was followed by the introduction of spectral decomposition in the late 1990s and a host of methods for evaluation of a combination of attributes. These included pattern recognition techniques as well as neural network applications. These developments continued into the new millennium, with enhanced visualization and 3D computation and interpretation of texture and curvature attributes coming to the fore f ront. Of course all this was possible with the power of scientific computing making significant advances during the same period of time. A detailed re c o ns t ruction of these key historical events that lead to the modern seismic attribute analysis may be found in Chopra and Marfurt (2005). The proliferation of seismic attributes in the last two decades has led to attempts to their classification and to bring some order to their chaotic development.",
"title": ""
},
{
"docid": "843ea8a700adf545288175c1062107bb",
"text": "Stress is a natural reaction to various stress-inducing factors which can lead to physiological and behavioral changes. If persists for a longer period, stress can cause harmful effects on our body. The body sensors along with the concept of the Internet of Things can provide rich information about one's mental and physical health. The proposed work concentrates on developing an IoT system which can efficiently detect the stress level of a person and provide a feedback which can assist the person to cope with the stressors. The system consists of a smart band module and a chest strap module which can be worn around wrist and chest respectively. The system monitors the parameters such as Electro dermal activity and Heart rate in real time and sends the data to a cloud-based ThingSpeak server serving as an online IoT platform. The computation of the data is performed using a ‘MATLAB Visualization’ application and the stress report is displayed. The authorized person can log in, view the report and take actions such as consulting a medical person, perform some meditation or yoga exercises to cope with the condition.",
"title": ""
},
{
"docid": "96bd733f9168bed4e400f315c57a48e8",
"text": "New phase transition phenomena have recently been discovered for the stochastic block model, for the special case of two non-overlapping symmetric communities. This gives raise in particular to new algorithmic challenges driven by the thresholds. This paper investigates whether a general phenomenon takes place for multiple communities, without imposing symmetry. In the general stochastic block model SBM(n,p,W), n vertices are split into k communities of relative size {pi}i∈[k], and vertices in community i and j connect independently with probability {Wij}i,j∈[k]. This paper investigates the partial and exact recovery of communities in the general SBM (in the constant and logarithmic degree regimes), and uses the generality of the results to tackle overlapping communities. The contributions of the paper are: (i) an explicit characterization of the recovery threshold in the general SBM in terms of a new f-divergence function D+, which generalizes the Hellinger and Chernoff divergences, and which provides an operational meaning to a divergence function analog to the KL-divergence in the channel coding theorem, (ii) the development of an algorithm that recovers the communities all the way down to the optimal threshold and runs in quasi-linear time, showing that exact recovery has no information-theoretic to computational gap for multiple communities, (iii) the development of an efficient algorithm that detects communities in the constant degree regime with an explicit accuracy bound that can be made arbitrarily close to 1 when a prescribed signal-to-noise ratio [defined in terms of the spectrum of diag(p)W] tends to infinity.",
"title": ""
},
{
"docid": "1f4b3ad078c42404c6aa27d107026b18",
"text": "This paper presents circuit design methodologies to enhance the electromagnetic immunity of an output-capacitor-free low-dropout (LDO) regulator. To evaluate the noise performance of an LDO regulator in the small-signal domain, power-supply rejection (PSR) is used. We optimize a bandgap reference circuit for optimum dc PSR, and propose a capacitor cancelation technique circuit for bandwidth compensation, and a low-noise biasing circuit for immunity enhancement in the bias circuit. For large-signal, transient performance enhancement, we suggest using a unity-gain amplifier to minimize the voltage difference of the differential inputs of the error amplifier, and an auxiliary N-channel metal oxide semiconductor (NMOS) pass transistor was used to maintain a stable gate voltage in the pass transistor. The effectiveness of the design methodologies proposed in this paper is verified using circuit simulations using an LDO regulator designed by 0.18-$\\mu$m CMOS process. When sine and pulse signals are applied to the input, the worst dc offset variations were enhanced from 36% to 16% and from 31.7% to 9.7%, respectively, as compared with those of the conventional LDO. We evaluated the noise performance versus the conducted electromagnetic interference generated by the dc–dc converter; the noise reduction level was significantly improved.",
"title": ""
},
{
"docid": "d690cfa0fbb63e53e3d3f7a1c7a6a442",
"text": "Ambient intelligence has acquired great importance in recent years and requires the development of new innovative solutions. This paper presents a distributed telemonitoring system, aimed at improving healthcare and assistance to dependent people at their homes. The system implements a service-oriented architecture based platform, which allows heterogeneous wireless sensor networks to communicate in a distributed way independent of time and location restrictions. This approach provides the system with a higher ability to recover from errors and a better flexibility to change their behavior at execution time. Preliminary results are presented in this paper.",
"title": ""
},
{
"docid": "1a14570fa1d565aeb78165c72bdf8a4e",
"text": "We investigate the ride-sharing assignment problem from an algorithmic resource allocation point of view. Given a number of requests with source and destination locations, and a number of available car locations, the task is to assign cars to requests with two requests sharing one car. We formulate this as a combinatorial optimization problem, and show that it is NP-hard. We then design an approximation algorithm which guarantees to output a solution with at most 2.5 times the optimal cost. Experiments are conducted showing that our algorithm actually has a much better approximation ratio (around 1.2) on synthetically generated data. Introduction The sharing economy is estimated to grow from $14 billion in 2014 to $335 billion by 2025 (Yaraghi and Ravi 2017). As one of the largest components of sharing economy, ride-sharing provides socially efficient transport services that help to save energy and to reduce congestion. Uber has 40 million monthly active riders reported in October 2016 (Kokalitcheva 2016) and Didi Chuxing has more than 400 million users(Tec 2017). A large portion of the revenue of these companies comes from ride sharing with one car catering two passenger requests, which is the topic investigated in this paper. A typical scenario is as follows: There are a large number of requests with pickup and drop-off location information, and a large number of available cars with current location information. One of the tasks is to assign the requests to the cars, with two requests for one car. The assignment needs to be made socially efficient in the sense that the ride sharing does not incur much extra traveling distance for the drivers or and extra waiting time for the passengers. In this paper we investigate this ride-sharing assignment problem from an algorithmic resource allocation point of view. Formally, suppose that there are a set R of requests {(si, ti) ∈ R : i = 1, . . . ,m} where in request i, an agent is at location si and likes to go to location ti. There are also a set D of taxis {dk ∈ R : k = 1, . . . , n}, with taxi k currently at location dk. The task is to assign two agents i and j to one taxi k, so that the total driving distance is as small as possible. The distance measure d(x, y) here can be Copyright c © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Manhattan distance (i.e., 1-norm), Euclidean distance (i.e., 2-norm), or distance on graphs if a city map is available. Here for any fixed tuple (k, {i, j}), the driver of taxi k has four possible routes, from the combination of the following two choices: he can pick agent i first or agent j first, and he can drop agent i first or drop agent j first. We assume that the driver is experienced enough to take the best among these four choices. Thus we use the total distance of this best route as the driving cost of tuple (k, {i, j}), denoted by cost(k, {i, j}). We hope to find an assignment M = {(k, {i, j}) : 1 ≤ i, j ≤ m, 1 ≤ k ≤ n} that assigns the maximum number of requests, and in the meanwhile with the cost(M) = ∑ (k,{i,j})∈M cost(k, {i, j}), summation of the driving cost, as small as possible. Here an assignment is a matching in the graph in the sense that each element in R∪D appears at most once in M . In this paper, we formulate this ride-sharing assignment as a combinatorial optimization problem. 
We show that the problem is NP-hard, and then present an approximation algorithm which, on any input, runs in time O(n) and outputs a solution M with cost(M) at most 2.5 times the optimal value. Our algorithm does not assume specific distance measure; indeed it works for any distance1. We conducted experiments where inputs are generated from uniform distributions and Gaussian mixture distributions. The approximation ratio on these empirical data is about 1.1-1.2, which is much better than the worst case guarantee 2.5. In addition, the results indicate that the larger n and m are, the better the approximation ratio is. Considering that n and m are very large numbers in practice, the performance of our algorithm may be even more satisfactory for practical scenarios. Related Work Ridesharing has become a key feature to increase urban transportation sustainability and is an active field of research. Several pieces of work have looked at dynamic ridesharing (Caramia et al. 2002; Fabri and Recht 2006; Agatz et al. 2012; Santos and Xavier 2013; Alonso-Mora et al. 2017), and multi-hop ridesharing (Herbawi and Weber 2011; Drews and Luxen 2013; Teubner and Flath 2015). That is, the algorithm only needs that d is nonnegative, symmetric and satisfies the triangle inequality. The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)",
"title": ""
},
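The four-route cost cost(k, {i, j}) described in the passage above is simple to make concrete. The following Python sketch is one possible reading, assuming both agents are picked up before either is dropped off; the function and variable names are hypothetical, and the distance function d can be any metric (Manhattan, Euclidean, or shortest-path on a road graph).

```python
from itertools import permutations

def pair_cost(d, car, s_i, t_i, s_j, t_j):
    """Illustrative sketch of cost(k, {i, j}): the driver starts at the car
    location, picks up the two agents in either order, then drops them off in
    either order, and takes the cheapest of the four resulting routes."""
    best = float("inf")
    for (s_a, t_a), (s_b, t_b) in permutations([(s_i, t_i), (s_j, t_j)]):
        for drops in permutations([t_a, t_b]):
            route = [car, s_a, s_b, drops[0], drops[1]]
            length = sum(d(p, q) for p, q in zip(route, route[1:]))
            best = min(best, length)
    return best
```

The cost of an assignment M would then be the sum of pair_cost over its tuples, which is the quantity the approximation algorithm in the passage tries to keep within 2.5 times the optimum.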
{
"docid": "448d4704991a2bdc086df8f0d7920ec5",
"text": "Global progress in the industrial field, which has led to the definition of the Industry 4.0 concept, also affects other spheres of life. One of them is the education. The subject of the article is to summarize the emerging trends in education in relation to the requirements of Industry 4.0 and present possibilities of their use. One option is using augmented reality as part of a modular learning system. The main idea is to combine the elements of the CPS technology concept with modern IT features, with emphasis on simplicity of solution and hardware ease. The synthesis of these principles can combine in a single image on a conventional device a realistic view at the technological equipment, complemented with interactive virtual model of the equipment, the technical data and real-time process information.",
"title": ""
},
{
"docid": "218b2f7a8e088c1023202bd27164b780",
"text": "The explanation of crime has been preoccupied with individuals and communities as units of analysis. Recent work on offender decision making (Cornish and Clarke, 1986), situations (Clarke, 1983, 1992), environments (Brantingham and Brantingham 1981, 1993), routine activities (Cohen and Felson, 1979; Felson, 1994), and the spatial organization of drug dealing in the U.S. suggest a new unit of analysis: places. Crime is concentrated heavily in a Jew \"hot spots\" of crime (Sherman et aL 1989). The concentration of crime among repeat places is more intensive than it is among repeat offenders (Spelman and Eck, 1989). The components of this concentration are analogous to the components of the criminal careers of persons: onset, desistance, continuance, specialization, and desistance. The theoretical explanationfor variance in these components is also stronger at the level of places than it is for individuals. These facts suggest a need for rethinking theories of crime, as well as a new approach to theorizing about crime for",
"title": ""
},
{
"docid": "f4ee2fa60eb67b7081085ed222627115",
"text": "Recent advances in deep-learning-based applications have attracted a growing attention from the IoT community. These highly capable learning models have shown significant improvements in expected accuracy of various sensory inference tasks. One important and yet overlooked direction remains to provide uncertainty estimates in deep learning outputs. Since robustness and reliability of sensory inference results are critical to IoT systems, uncertainty estimates are indispensable for IoT applications. To address this challenge, we develop ApDeepSense, an effective and efficient deep learning uncertainty estimation method for resource-constrained IoT devices. ApDeepSense leverages an implicit Bayesian approximation that links neural networks to deep Gaussian processes, allowing output uncertainty to be quantified. Our approach is shown to significantly reduce the execution time and energy consumption of uncertainty estimation thanks to a novel layer-wise approximation that replaces the traditional computationally intensive sampling-based uncertainty estimation methods. ApDeepSense is designed for neural net-works trained using dropout; one of the most widely used regularization methods in deep learning. No additional training is needed for uncertainty estimation purposes. We evaluate ApDeepSense using four IoT applications on Intel Edison devices. Results show that ApDeepSense can reduce around 88.9% of the execution time and 90.0% of the energy consumption, while producing more accurate uncertainty estimates compared with state-of-the-art methods.",
"title": ""
},
{
"docid": "3e2e2aace1ddade88f3c8a6b7157af6b",
"text": "Verb learning is clearly a function of observation of real-world contingencies; however, it is argued that such observational information is insufficient to account fully for vocabulary acquisition. This paper provides an experimental validation of Landau & Gleitman's (1985) syntactic bootstrapping procedure; namely, that children may use syntactic information to learn new verbs. Pairs of actions were presented simultaneously with a nonsense verb in one of two syntactic structures. The actions were subsequently separated, and the children (MA = 2;1) were asked to select which action was the referent for the verb. The children's choice of referent was found to be a function of the syntactic structure in which the verb had appeared.",
"title": ""
},
{
"docid": "24006b9eb670c84904b53320fbedd32c",
"text": "Maturity Models have been introduced, over the last four decades, as guides and references for Information System management in organizations from different sectors of activity. In the healthcare field, Maturity Models have also been used to deal with the enormous complexity and demand of Hospital Information Systems. This article presents a research project that aimed to develop a new comprehensive model of maturity for a health area. HISMM (Hospital Information System Maturity Model) was developed to address a complexity of SIH and intends to offer a useful tool for the demanding role of its management. The HISMM has the peculiarity of congregating a set of key maturity Influence Factors and respective characteristics, enabling not only the assessment of the global maturity of a HIS but also the individual maturity of its different dimensions. In this article, we present the methodology for the development of Maturity Models adopted for the creation of HISMM and the underlying reasons for its choice.",
"title": ""
},
{
"docid": "c0d2fcd6daeb433a5729a412828372f8",
"text": "Most 3D reconstruction approaches passively optimise over all data, exhaustively matching pairs, rather than actively selecting data to process. This is costly both in terms of time and computer resources, and quickly becomes intractable for large datasets. This work proposes an approach to intelligently filter large amounts of data for 3D reconstructions of unknown scenes using monocular cameras. Our contributions are twofold: First, we present a novel approach to efficiently optimise the Next-Best View (NBV) in terms of accuracy and coverage using partial scene geometry. Second, we extend this to intelligently selecting stereo pairs by jointly optimising the baseline and vergence to find the NBV’s best stereo pair to perform reconstruction. Both contributions are extremely efficient, taking 0.8ms and 0.3ms per pose, respectively. Experimental evaluation shows that the proposed method allows efficient selection of stereo pairs for reconstruction, such that a dense model can be obtained with only a small number of images. Once a complete model has been obtained, the remaining computational budget is used to intelligently refine areas of uncertainty, achieving results comparable to state-of-the-art batch approaches on the Middlebury dataset, using as little as 3.8% of the views.",
"title": ""
},
{
"docid": "1de2d4e5b74461c142e054ffd2e62c2d",
"text": "Table : Comparisons of CNN, LSTM and SWEM architectures. Columns correspond to the number of compositional parameters, computational complexity and sequential operations, respectively. v Consider a text sequence represented as X, composed of a sequence of words. Let {v#, v$, ...., v%} denote the respective word embeddings for each token, where L is the sentence/document length; v The compositional function, X → z, aims to combine word embeddings into a fixed-length sentence/document representation z. Typically, LSTM or CNN are employed for this purpose;",
"title": ""
}
] |
scidocsrr
|
0e64848e074e909fa708e882acdc40ce
|
Weighted color and texture sample selection for image matting
|
[
{
"docid": "d4aaea0107cbebd7896f4cb57fa39c05",
"text": "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs",
"title": ""
},
{
"docid": "8076620d4905b087d10ee7fba14bd2ec",
"text": "Image matting aims at extracting foreground elements from an image by mean s of color and opacity (alpha) estimation. While a lot of progress has been made in recent years on improv ing the accuracy of matting techniques, one common problem persisted: the low speed of matte computation. We pre sent the first real-time matting technique for natural images and videos. Our technique is based on the obser vation that, for small neighborhoods, pixels tend to share similar attributes. Therefore, independently treating eac h pixel in the unknown regions of a trimap results in a lot of redundant work. We show how this computation can be significantly and safely reduced by means of a careful selection of pairs of background and foreground s amples. Our technique achieves speedups of up to two orders of magnitude compared to previous ones, while producin g high-quality alpha mattes. The quality of our results has been verified through an independent benchmark. The speed of our technique enables, for the first time, real-time alpha matting of videos, and has the potential to enable a n ew class of exciting applications.",
"title": ""
}
] |
[
{
"docid": "0b6ac11cb84a573e55cb75f0bc342d72",
"text": "This paper develops and tests algorithms for predicting the end-to-end route of a vehicle based on GPS observations of the vehicle’s past trips. We show that a large portion of a typical driver’s trips are repeated. Our algorithms exploit this fact for prediction by matching the first part of a driver’s current trip with one of the set of previously observed trips. Rather than predicting upcoming road segments, our focus is on making long term predictions of the route. We evaluate our algorithms using a large corpus of real world GPS driving data acquired from observing over 250 drivers for an average of 15.1 days per subject. Our results show how often and how accurately we can predict a driver’s route as a function of the distance already driven.",
"title": ""
},
{
"docid": "d58c81bf22cdad5c1a669dd9b9a77fbd",
"text": "The rapid increase in healthcare demand has seen novel developments in health monitoring technologies, such as the body area networks (BAN) paradigm. BAN technology envisions a network of continuously operating sensors, which measure critical physical and physiological parameters e.g., mobility, heart rate, and glucose levels. Wireless connectivity in BAN technology is key to its success as it grants portability and flexibility to the user. While radio frequency (RF) wireless technology has been successfully deployed in most BAN implementations, they consume a lot of battery power, are susceptible to electromagnetic interference and have security issues. Intrabody communication (IBC) is an alternative wireless communication technology which uses the human body as the signal propagation medium. IBC has characteristics that could naturally address the issues with RF for BAN technology. This survey examines the on-going research in this area and highlights IBC core fundamentals, current mathematical models of the human body, IBC transceiver designs, and the remaining research challenges to be addressed. IBC has exciting prospects for making BAN technologies more practical in the future.",
"title": ""
},
{
"docid": "8eb51537b051bbf78d87a0cd48e9d90c",
"text": "One of the important techniques of Data mining is Classification. Many real world problems in various fields such as business, science, industry and medicine can be solved by using classification approach. Neural Networks have emerged as an important tool for classification. The advantages of Neural Networks helps for efficient classification of given data. In this study a Heart diseases dataset is analyzed using Neural Network approach. To increase the efficiency of the classification process parallel approach is also adopted in the training phase.",
"title": ""
},
{
"docid": "afe4c8e46449bfa37a04e67595d4537b",
"text": "Gamification is the use of game design elements in non-game settings to engage participants and encourage desired behaviors. It has been identified as a promising technique to improve students' engagement which could have a positive impact on learning. This study evaluated the learning effectiveness and engagement appeal of a gamified learning activity targeted at the learning of C-programming language. Furthermore, the study inquired into which gamified learning activities were more appealing to students. The study was conducted using the mixed-method sequential explanatory protocol. The data collected and analysed included logs, questionnaires, and pre- and post-tests. The results of the evaluation show positive effects on the engagement of students toward the gamified learning activities and a moderate improvement in learning outcomes. Students reported different motivations for continuing and stopping activities once they completed the mandatory assignment. The preferences for different gamified activities were also conditioned by academic milestones.",
"title": ""
},
{
"docid": "9fc2d92c42400a45cb7bf6c998dc9236",
"text": "This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well-known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the TREC collection shows that the linguistically motivated weighting algorithm outperforms the popular BM25 weighting algorithm.",
"title": ""
},
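Since the passage above centers on tf×idf term weighting, a minimal sketch of one standard tf×idf variant may help make the notation concrete; note that the paper derives its own probabilistic justification rather than this particular formula, and the function and argument names here are hypothetical.

```python
import math
from collections import Counter

def tfidf_weights(doc_tokens, doc_freq, num_docs):
    """Minimal sketch of one common tf×idf variant: raw term frequency in the
    document multiplied by the log inverse document frequency of the term."""
    tf = Counter(doc_tokens)
    return {term: count * math.log(num_docs / doc_freq[term])
            for term, count in tf.items() if doc_freq.get(term, 0) > 0}
```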
{
"docid": "40f2565bd4b167954450c050ac3a9fd7",
"text": "No-limit Texas hold’em is the most popular form of poker. Despite artificial intelligence (AI) successes in perfect-information games, the private information and massive game tree have made no-limit poker difficult to tackle. We present Libratus, an AI that, in a 120,000-hand competition, defeated four top human specialist professionals in heads-up no-limit Texas hold’em, the leading benchmark and long-standing challenge problem in imperfect-information game solving. Our game-theoretic approach features application-independent techniques: an algorithm for computing a blueprint for the overall strategy, an algorithm that fleshes out the details of the strategy for subgames that are reached during play, and a self-improver algorithm that fixes potential weaknesses that opponents have identified in the blueprint strategy.",
"title": ""
},
{
"docid": "2d6c085f30847fe3745e0a8d7d93ea9c",
"text": "Deep gated convolutional networks have been proved to be very effective in single channel speech separation. However current state-of-the-art framework often considers training the gated convolutional networks in time-frequency (TF) domain. Such an approach will result in limited perceptual score, such as signal-to-distortion ratio (SDR) upper bound of separated utterances and also fail to exploit an end-to-end framework. In this paper we present an integrated simple and effective end-to-end approach to monaural speech separation, which consists of deep gated convolutional neural networks (GCNN) that takes the mixed utterance of two speakers and maps it to two separated utterances, where each utterance contains only one speaker’s voice. In addition long shortterm memory (LSTM) is employed for long term temporal modeling. For the objective, we propose to train the network by directly optimizing utterance level SDR in a permutation invariant training (PIT) style. Our experiments on the public WSJ0-2mix data corpus demonstrate that this new scheme can produce more discriminative separated utterances and leading to performance improvement on the speaker separation task.",
"title": ""
},
{
"docid": "9b06bfb67641fa009e51e1077b7a2434",
"text": "This paper presents the results of an exploratory study carried out to learn about the use and impact of Information and Communication Technologies (ICT) on Small and Medium Sized Enterprises (SMEs) in Oman. The study investigates ICT infrastructure, software used, driver for ICT investment, perceptions about business benefits of ICT and outsourcing trends of SMEs. The study provides an insight on the barriers for the adoption of ICT. Data on these aspects of ICT was collected from 51 SMEs through a survey instrument. The results of the study show that only a small number of SMEs in Oman are aware of the benefits of ICT adoption. The main driving forces for ICT investment are to provide better and faster customer service and to stay ahead of the competition. A majority of surveyed SMEs have reported a positive performance and other benefits by utilizing ICT in their businesses. Majority of SMEs outsource most of their ICT activities. Lack of internal capabilities, high cost of ICT and lack of information about suitable ICT solutions and implementation were some of the major barriers in adopting ICT. These findings are consistent with other studies e.g. (Harindranath et al 2008). There is a need for more focus and concerted efforts on increasing awareness among SMEs on the benefits of ICT adoption. The results of the study recognize the need for more training facilities in ICT for SMEs, measures to provide ICT products and services at an affordable cost, and availability of free professional advice and consulting at reasonable cost to SMEs. Our findings therefore have important implication for policy aimed at ICT adoption and use by SMEs. The findings of this research will provide a foundation for future research and will help policy makers in understanding the current state of affairs of the usage and impact of ICT on SMEs in Oman.",
"title": ""
},
{
"docid": "9faf87e51078bb92f146ba4d31f04c7f",
"text": "This paper first describes the problem of goals nonreachable with obstacles nearby when using potential field methods for mobile robot path planning. Then, new repulsive potential functions are presented by taking the relative distance between the robot and the goal into consideration, which ensures that the goal position is the global minimum of the total potential.",
"title": ""
},
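The modified repulsive potential sketched below is one common way the idea in the passage above (scaling the repulsion by the robot-goal distance so that the goal remains the global minimum of the total potential) is written down; the exact functional form, the exponent n, and all parameter names are assumptions for illustration and may differ from the paper's definition.

```python
import numpy as np

def attractive_potential(q, q_goal, zeta=1.0):
    # Standard quadratic attractive potential pulling the robot toward the goal.
    return 0.5 * zeta * np.linalg.norm(q - q_goal) ** 2

def repulsive_potential(q, q_obs, q_goal, eta=1.0, rho0=1.0, n=2):
    """Repulsive potential scaled by the robot-goal distance, so it vanishes at
    the goal and the goal stays the global minimum of the total potential.
    The exact form here is an assumption for illustration."""
    rho = np.linalg.norm(q - q_obs)      # distance from robot to obstacle
    if rho >= rho0:                      # outside the obstacle's influence radius
        return 0.0
    d_goal = np.linalg.norm(q - q_goal)
    return 0.5 * eta * (1.0 / rho - 1.0 / rho0) ** 2 * d_goal ** n
```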
{
"docid": "cfe31ce3a6a23d9148709de6032bd90b",
"text": "I argue that Non-Photorealistic Rendering (NPR) research will play a key role in the scientific understanding of visual art and illustration. NPR can contribute to scientific understanding of two kinds of problems: how do artists create imagery, and how do observers respond to artistic imagery? I sketch out some of the open problems, how NPR can help, and what some possible theories might look like. Additionally, I discuss the thorny problem of how to evaluate NPR research and theories.",
"title": ""
},
{
"docid": "ae937be677ca7c0714bde707816171ff",
"text": "The authors examined how time orientation and morningness-eveningness relate to 2 forms of procrastination: indecision and avoidant forms. Participants were 509 adults (M age = 49.78 years, SD = 6.14) who completed measures of time orientation, morningness-eveningness, decisional procrastination (i.e., indecision), and avoidant procrastination. Results showed that morningness was negatively related to avoidant procrastination but not decisional procrastination. Overall, the results indicated different temporal profiles for indecision and avoidant procrastinations. Avoidant procrastination related to low future time orientation and low morningness, whereas indecision related to both (a) high negative and high positive past orientations and (b) low present-hedonistic and low future time orientations. The authors inferred that distinct forms of procrastination seem different on the basis of dimensions of time.",
"title": ""
},
{
"docid": "d8d86da66ebeaae73e9aaa2a30f18bb5",
"text": "In this paper, a novel approach to the characterization of structural damage in civil structures is presented. Structural damage often results in subtle changes to structural stiffness and damping properties that are manifested by changes in the location of transfer function characteristic equation roots (poles) upon the complex plane. Using structural response time-history data collected from an instrumented structure, transfer function poles can be estimated using traditional system identification methods. Comparing the location of poles corresponding to the structure in an unknown structural state to those of the undamaged structure, damage can be accurately identified. The IASC-ASCE structural health monitoring benchmark structure is used in this study to illustrate the merits of the transfer function pole migration approach to damage detection in civil structures.",
"title": ""
},
{
"docid": "2f362f4c9b56a44af8e93dad107e3995",
"text": "Microstrip filters are widely used in microwave circuit, This paper briefly describes the design principle of microstrip bandstop filter (BSF). A compact wide band high rejection BSF is presented. This filter consists of two parts: defected ground structures filter (DGS) and spurline filter. Due to the inherently compact characteristics of the spurline and DGS, the proposed filter shows a better rejection performance than open stub BSF in the same circuit size. The results of simulation and optimization given by HFSS12 prove the correctness of the design.",
"title": ""
},
{
"docid": "45b1cb6c9393128c9a9dcf9dbeb50778",
"text": "Bitcoin, a distributed, cryptographic, digital currency, gained a lot of media attention for being an anonymous e-cash system. But as all transactions in the network are stored publicly in the blockchain, allowing anyone to inspect and analyze them, the system does not provide real anonymity but pseudonymity. There have already been studies showing the possibility to deanonymize bitcoin users based on the transaction graph and publicly available data. Furthermore, users could be tracked by bitcoin exchanges or shops, where they have to provide personal information that can then be linked to their bitcoin addresses. Special bitcoin mixing services claim to obfuscate the origin of transactions and thereby increase the anonymity of its users. In this paper we evaluate three of these services – Bitcoin Fog, BitLaundry, and the Send Shared functionality of Blockchain.info – by analyzing the transaction graph. While Bitcoin Fog and Blockchain.info successfully mix our transaction, we are able to find a direct relation between the input and output transactions in the graph of BitLaundry.",
"title": ""
},
{
"docid": "d46c44e5a4bc2e0dd1423394534409d3",
"text": "This paper describes a heterogeneous computer cluster called Axel. Axel contains a collection of nodes; each node can include multiple types of accelerators such as FPGAs (Field Programmable Gate Arrays) and GPUs (Graphics Processing Units). A Map-Reduce framework for the Axel cluster is presented which exploits spatial and temporal locality through different types of processing elements and communication channels. The Axel system enables the first demonstration of FPGAs, GPUs and CPUs running collaboratively for N-body simulation. Performance improvement from 4.4 times to 22.7 times has been achieved using our approach, which shows that the Axel system can combine the benefits of the specialization of FPGA, the parallelism of GPU, and the scalability of computer clusters.",
"title": ""
},
{
"docid": "28d7c171b05309d9a4ec4aa9ec4f66e1",
"text": "A cost and energy efficient method of wind power generation is to connect the output of the turbine to a doubly-fed induction generator (DFIG), allowing operation at a range of variable speeds. While for electrical engineers the electromagnetic components in such a system, like the electric machine, power electronic converter and magnetic filters are of most interest, a DFIG wind turbine is a complex design involving multiple physical domains strongly interacting with each other. The electrical system, for instance, is influenced by the converter’s cooling system and mechanical components, including the rotor blades, shaft and gearbox. This means that during component selection and design of control schemes, the influence of domains on one another must be considered in order to achieve an optimized overall system performance such that the design is dynamic, efficient and cost-effective. In addition to creating an accurate model of the entire system, it is also important to model the real-world operating and fault conditions. For fast prototyping and performance prediction, computer-based simulation has been widely adopted in the engineering development process. Modeling such complex systems while including switching power electronic converters requires a powerful and robust simulation tool. Furthermore, a rapid solver is critical to allow for developing multiple iterative enhancements based on insight gained through system simulation studies.",
"title": ""
},
{
"docid": "90b59d264de9bc4054f4905c47e22596",
"text": "Bronson (1974) reviewed evidence in support of the claim that the development of visually guided behavior in the human infant over the first few months of life represents a shift from subcortical to cortical visual processing. Recently, this view has been brought into question for two reasons; first, evidence revealing apparently sophisticated perceptual abilities in the newborn, and second, increasing evidence for multiple cortica streams of visual processing. The present paper presents a reanalysis of the relation between the maturation of cortical pathways and the development of visually guided behavior, focusing in particular on how the maturational state of the primary visual cortex may constrain the functioning of neural pathways subserving oculomotor control.",
"title": ""
},
{
"docid": "e8824408140898ac81fba94530f6e43e",
"text": "The Bag-of-Visual-Words model has emerged as an effective approach to represent local video features for human actions classification. However, one of the major challenges in this model is the generation of the visual vocabulary. In the case of human action recognition, losing spatial-temporal relationships is one of the important reasons that provokes the low descriptive power of classic visual words. In this work we propose a three-level approach to construct visual n-grams for human action classification. First, in order to reduce the number of non-descriptive words generated by K-means clustering of the spatio-temporal interest points, we propose to apply a variant of the classsical Leader-Follower clustering algorithm to create an optimal vocabulary from a pre-established number of visual words. Second, with the aim of incorporating spatial and temporal constraints to the Bag-of-Visual-Words model, we exploit the spatio-temporal relationships between interest points to build a graphbased representation of the video. Frequent subgraphs are extracted for each action class and a visual vocabulary of n-grams is constructed from the labels (descriptors) of selected subgraphs. Finally, we build a histogram by using the frequency of each n-gram in the graph representing a video of human action. The proposed approach combines the representational power of graphs with the efficiency of the Bag-of-Visual-Words model. Extensive validation on five challenging human actions datasets demonstrates the effectiveness of the proposed model compared to state-of-the-art methods.",
"title": ""
},
{
"docid": "3902afc560de6f0b028315977bc55976",
"text": "Traffic light congestion normally occurs in urban areas where the number of vehicles is too many on the road. This problem drives the need for innovation and provide efficient solutions regardless this problem. Smart system that will monitor the congestion level at the traffic light will be a new option to replace the old system which is not practical anymore. Implementing internet of thinking (IoT) technology will provide the full advantage for monitoring and creating a congestion model based on sensor readings. Multiple sensor placements for each lane will give a huge advantage in detecting vehicle and increasing the accuracy in collecting data. To gather data from each sensor, the LoRaWAN technology is utilized where it features low power wide area network, low cost of implementation and the communication is secure bi-directional for the internet of thinking. The radio frequency used between end nodes to gateways range is estimated around 15-kilometer radius. A series of test is carried out to estimate the range of signal and it gives a positive result. The level of congestion for each lane will be displayed on Grafana dashboard and the algorithm can be calculated. This provides huge advantages to the implementation of this project, especially the scope of the project will be focus in urban areas where the level of congestion is bad.",
"title": ""
},
{
"docid": "3b6cef052cd7a7acc765b44292af51cc",
"text": "Minimizing travel time is critical for the successful operation of emergency vehicles. Preemption can significantly help emergency vehicles reach the intended destination faster. Majority of the current studies focus on minimizing and/or eliminating delays for EVs and do not consider the negative impacts of preemption on urban traffic. One primary negative impact is extended delays for non-EV traffic due to preemption that is addressed in this paper. We propose an Adaptive Preemption of Traffic (APT) system for Emergency Vehicles in an Intelligent Transportation System. We utilize the knowledge of current traffic conditions in the transportation system to adaptively preempt traffic at signals along the path of EVs so as to minimize, if not eliminate stopped delays for EVs while simultaneously minimizing the delays for non-emergency vehicles in the system. Through extensive simulation results, we show substantial reduction in delays for both EVs.",
"title": ""
}
] |
scidocsrr
|
8660ab87ee327c21c41fe597b20ef4de
|
An Artificial Intelligence Approach to Financial Fraud Detection under IoT Environment: A Survey and Implementation
|
[
{
"docid": "007706ad8c73376db70af36a66cedf14",
"text": "— With the developments in the Information Technology and improvements in the communication channels, fraud is spreading all over the world, resulting in huge financial losses. Though fraud prevention mechanisms such as CHIP&PIN are developed, these mechanisms do not prevent the most common fraud types such as fraudulent credit card usages over virtual POS terminals or mail orders. As a result, fraud detection is the essential tool and probably the best way to stop such fraud types. In this study, classification models based on decision trees and support vector machines (SVM) are developed and applied on credit card fraud detection problem. This study is one of the firsts to compare the performance of SVM and decision tree methods in credit card fraud detection with a real data set.",
"title": ""
},
{
"docid": "e43c27b652de5c015450f542c1eb8dd2",
"text": "Financial fraud is increasing significantly with the development of modern technology and the global superhighways of communication, resulting in the loss of billions of dollars worldwide each year. The companies and financial institution loose huge amounts due to fraud and fraudsters continuously try to find new rules and tactics to commit illegal actions. Thus, fraud detection systems have become essential for all credit card issuing banks to minimize their losses. The most commonly used fraud detection methods are Neural Network (NN), rule-induction techniques, fuzzy system, decision trees, Support Vector Machines (SVM), Artificial Immune System (AIS), genetic algorithms, K-Nearest Neighbor algorithms. These techniques can be used alone or in collaboration using ensemble or meta-learning techniques to build classifiers. This paper presents a survey of various techniques used in credit card fraud detection and evaluates each methodology based on certain design criteria. And this survey enables us to build a hybrid approach for developing some effective algorithms which can perform well for the classification problem with variable misclassification costs and with higher accuracy.",
"title": ""
},
{
"docid": "f36348f2909a9642c18590fca6c9b046",
"text": "This study explores the use of data mining methods to detect fraud for on e-ledgers through financial statements. For this purpose, data set were produced by rule-based control application using 72 sample e-ledger and error percentages were calculated and labeled. The financial statements created from the labeled e-ledgers were trained by different data mining methods on 9 distinguishing features. In the training process, Linear Regression, Artificial Neural Networks, K-Nearest Neighbor algorithm, Support Vector Machine, Decision Stump, M5P Tree, J48 Tree, Random Forest and Decision Table were used. The results obtained are compared and interpreted.",
"title": ""
},
{
"docid": "66248db37a0dcf8cb17c075108b513b4",
"text": "Since past few years there is tremendous advancement in electronic commerce technology, and the use of credit cards has dramatically increased. As credit card becomes the most popular mode of payment for both online as well as regular purchase, cases of fraud associated with it are also rising. In this paper we present the necessary theory to detect fraud in credit card transaction processing using a Hidden Markov Model (HMM). An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected by using an enhancement to it(Hybrid model).In further sections we compare different methods for fraud detection and prove that why HMM is more preferred method than other methods.",
"title": ""
},
{
"docid": "5523695d47205129d0e5f6916d2d14f1",
"text": "A phenomenal growth in the number of credit card transactions, especially for online purchases, has recently led to a substantial rise in fraudulent activities. Implementation of efficient fraud detection systems has thus become imperative for all credit card issuing banks to minimize their losses. In real life, fraudulent transactions are interspersed with genuine transactions and simple pattern matching is not often sufficient to detect them accurately. Thus, there is a need for combining both anomaly detection as well as misuse detection techniques. In this paper, we propose to use two-stage sequence alignment in which a profile analyzer (PA) first determines the similarity of an incoming sequence of transactions on a given credit card with the genuine cardholder's past spending sequences. The unusual transactions traced by the profile analyzer are next passed on to a deviation analyzer (DA) for possible alignment with past fraudulent behavior. The final decision about the nature of a transaction is taken on the basis of the observations by these two analyzers. In order to achieve online response time for both PA and DA, we suggest a new approach for combining two sequence alignment algorithms BLAST and SSAHA.",
"title": ""
}
] |
[
{
"docid": "74235290789c24ce00d54541189a4617",
"text": "This article deals with an interesting application of Fractional Order (FO) Proportional Integral Derivative (PID) Controller for speed regulation in a DC Motor Drive. The design of five interdependent Fractional Order controller parameters has been formulated as an optimization problem based on minimization of set point error and controller output. The task of optimization was carried out using Artificial Bee Colony (ABC) algorithm. A comparative study has also been made to highlight the advantage of using a Fractional order PID controller over conventional PID control scheme for speed regulation of application considered. Extensive simulation results are provided to validate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "06ef397d13383ff09f2f6741c0626192",
"text": "A fully-integrated low-dropout regulator (LDO) with fast transient response and full spectrum power supply rejection (PSR) is proposed to provide a clean supply for noise-sensitive building blocks in wideband communication systems. With the proposed point-of-load LDO, chip-level high-frequency glitches are well attenuated, consequently the system performance is improved. A tri-loop LDO architecture is proposed and verified in a 65 nm CMOS process. In comparison to other fully-integrated designs, the output pole is set to be the dominant pole, and the internal poles are pushed to higher frequencies with only 50 μA of total quiescent current. For a 1.2 V input voltage and 1 V output voltage, the measured undershoot and overshoot is only 43 mV and 82 mV, respectively, for load transient of 0 μA to 10 mA within edge times of 200 ps. It achieves a transient response time of 1.15 ns and the figure-of-merit (FOM) of 5.74 ps. PSR is measured to be better than -12 dB over the whole spectrum (DC to 20 GHz tested). The prototype chip measures 260×90 μm2, including 140 pF of stacked on-chip capacitors.",
"title": ""
},
{
"docid": "4d0921d8dd1004f0eed02df0ff95a092",
"text": "The “open classroom” emerged as a reaction against the industrial-era enclosed and authoritarian classroom. Although contemporary school architecture continues to incorporate and express ideas of openness, more research is needed about how teachers adapt to new and different built contexts. Our purpose is to identify teacher reaction to the affordances of open space learning environments. We outline a case study of teacher perceptions of working in new open plan school buildings. The case study demonstrates that affordances of open space classrooms include flexibility, visibility and scrutiny, and a de-emphasis of authority; teacher reactions included collective practice, team orientation, and increased interactions and a democratisation of authority. We argue that teacher reaction to the new open classroom features adaptability, intensification of day-to-day practice, and intraand inter-personal knowledge and skills.",
"title": ""
},
{
"docid": "33b8417f25b56e5ea9944f9f33fc162c",
"text": "Researchers have attempted to model information diffusion and topic trends and lifecycle on online social networks. They have investigated the role of content, social connections and communities, familiarity and behavioral similarity in this context. The current article presents a survey of representative models that perform topic analysis, capture information diffusion, and explore the properties of social connections in the context of online social networks. The article concludes with a set of outlines of open problems and possible directions of future research interest. This article is intended for researchers to identify the current literature, and explore possibilities to improve the art.",
"title": ""
},
{
"docid": "1eee94436ff7c65b18908dab7fbfb1c6",
"text": "Many efforts have been made in recent years to tackle the unconstrained face recognition challenge. For the benchmark of this challenge, the Labeled Faces in theWild (LFW) database has been widely used. However, the standard LFW protocol is very limited, with only 3,000 genuine and 3,000 impostor matches for classification. Today a 97% accuracy can be achieved with this benchmark, remaining a very limited room for algorithm development. However, we argue that this accuracy may be too optimistic because the underlying false accept rate may still be high (e.g. 3%). Furthermore, performance evaluation at low FARs is not statistically sound by the standard protocol due to the limited number of impostor matches. Thereby we develop a new benchmark protocol to fully exploit all the 13,233 LFW face images for large-scale unconstrained face recognition evaluation under both verification and open-set identification scenarios, with a focus at low FARs. Based on the new benchmark, we evaluate 21 face recognition approaches by combining 3 kinds of features and 7 learning algorithms. The benchmark results show that the best algorithm achieves 41.66% verification rates at FAR=0.1%, and 18.07% open-set identification rates at rank 1 and FAR=1%. Accordingly we conclude that the large-scale unconstrained face recognition problem is still largely unresolved, thus further attention and effort is needed in developing effective feature representations and learning algorithms. We thereby release a benchmark tool to advance research in this field.",
"title": ""
},
{
"docid": "d735547a7b3a79f5935f15da3e51f361",
"text": "We propose a new approach for locating forged regions in a video using correlation of noise residue. In our method, block-level correlation values of noise residual are extracted as a feature for classification. We model the distribution of correlation of temporal noise residue in a forged video as a Gaussian mixture model (GMM). We propose a two-step scheme to estimate the model parameters. Consequently, a Bayesian classifier is used to find the optimal threshold value based on the estimated parameters. Two video inpainting schemes are used to simulate two different types of forgery processes for performance evaluation. Simulation results show that our method achieves promising accuracy in video forgery detection.",
"title": ""
},
{
"docid": "dc810b43c71ab591981454ad20e34b7a",
"text": "This paper proposes a real-time variable-Q non-stationary Gabor transform (VQ-NSGT) system for speech pitch shifting. The system allows for time-frequency representations of speech on variable-Q (VQ) with perfect reconstruction and computational efficiency. The proposed VQ-NSGT phase vocoder can be used for pitch shifting by simple frequency translation (transposing partials along the frequency axis) instead of spectral stretching in frequency domain by the Fourier transform. In order to retain natural sounding pitch shifted speech, a hybrid of smoothly varying Q scheme is used to retain the formant structure of the original signal at both low and high frequencies. Moreover, the preservation of transients of speech are improved due to the high time resolution of VQ-NSGT at high frequencies. A sliced VQ-NSGT is used to retain inter-partials phase coherence by synchronized overlap-add method. Therefore, the proposed system lends itself to real-time processing while retaining the formant structure of the original signal and inter-partial phase coherence. The simulation results showed that the proposed approach is suitable for pitch shifting of both speech and music signals.",
"title": ""
},
{
"docid": "ff67540fcba29de05415c77744d3a21d",
"text": "Using Youla Parametrization and Linear Matrix Inequalities (LMI) a Multiobjective Robust Control (MRC) design for continuous linear time invariant (LTI) systems with bounded uncertainties is described. The design objectives can be a combination of H∞-, H2-performances, constraints on the control signal, etc.. Based on an initial stabilizing controller all stabilizing controllers for the uncertain system can be described by the Youla parametrization. Given this representation, all objectives can be formulated by independent Lyapunov functions, increasing the degree of freedom for the control design.",
"title": ""
},
{
"docid": "67e2bbbbd0820bb47f04258eb4917cc1",
"text": "One of the major differences between markets that follow a \" sharing economy \" paradigm and traditional two-sided markets is that the supply side in the sharing economy often includes individual nonprofessional decision makers, in addition to firms and professional agents. Using a data set of prices and availability of listings on Airbnb, we find that there exist substantial differences in the operational and financial performance of professional and nonprofessional hosts. In particular, properties managed by professional hosts earn 16.9% more in daily revenue, have 15.5% higher occupancy rates, and are 13.6% less likely to exit the market compared with properties owned by nonprofessional hosts, while controlling for property and market characteristics. We demonstrate that these performance differences between professionals and nonprofessionals can be partly explained by pricing inefficiencies. Specifically, we provide empirical evidence that nonprofes-sional hosts are less likely to offer different rates across stay dates based on the underlying demand patterns, such as those created by major holidays and conventions. We develop a parsimonious model to analyze the implications of having two such different host groups for a profit-maximizing platform operator and for a social planner. While a profit-maximizing platform operator should charge lower prices to nonprofessional hosts, a social planner would charge the same prices to professionals and nonprofessionals.",
"title": ""
},
{
"docid": "3250454b6363a9bb49590636d9843a92",
"text": "A low precision deep neural network training technique for producing sparse, ternary neural networks is presented. The technique incorporates hardware implementation costs during training to achieve significant model compression for inference. Training involves three stages: network training using L2 regularization and a quantization threshold regularizer, quantization pruning, and finally retraining. Resulting networks achieve improved accuracy, reduced memory footprint and reduced computational complexity compared with conventional methods, on MNIST and CIFAR10 datasets. Our networks are up to 98% sparse and 5 & 11 times smaller than equivalent binary and ternary models, translating to significant resource and speed benefits for hardware implementations.",
"title": ""
},
{
"docid": "87e52d72533c26f59af13aaea0ea4b7f",
"text": "This study investigated the work role attachment and retirement intentions of public school teachers in Calabar, Nigeria. It was motivated by the observation that most public school workers lack plans for retirement and as such do not prepare for it until it suddenly dawns on them. Few empirical studies were reviewed. Questionnaire was the main instrument used for data collection from a sample of 200 teachers. Independent t-test was used to test the stated hypotheses at 0.05 level of significance. Results showed that the committed/attached/involved workers have retirement intention to take a part-time job after retirement. The uncommitted/unattached/uninvolved workers have intention to retire earlier than those attached to their work. It was recommended that pre-retirement counselling should be adopted to assist teachers to develop good retirement plans.",
"title": ""
},
{
"docid": "c828195cfc88abd598d1825f69932eb0",
"text": "The central purpose of passive signal intercept receivers is to perform automatic categorization of unknown radar signals. Currently, there is an urgent need to develop intelligent classification algorithms for these devices due to emerging complexity of radar waveforms. Especially multifunction radars (MFRs) capable of performing several simultaneous tasks by utilizing complex, dynamically varying scheduled waveforms are a major challenge for automatic pattern classification systems. To assist recognition of complex radar emissions in modern intercept receivers, we have developed a novel method to recognize dynamically varying pulse repetition interval (PRI) modulation patterns emitted by MFRs. We use robust feature extraction and classifier design techniques to assist recognition in unpredictable real-world signal environments. We classify received pulse trains hierarchically which allows unambiguous detection of the subpatterns using a sliding window. Accuracy, robustness and reliability of the technique are demonstrated with extensive simulations using both static and dynamically varying PRI modulation patterns.",
"title": ""
},
{
"docid": "b23d73e29fc205df97f073eb571a2b47",
"text": "In this paper, we study two different trajectory planning problems for robotmanipulators. In the first case, the end-effector of the robot is constrained to move along a prescribed path in the workspace, whereas in the second case, the trajectory of the end-effector has to be determined in the presence of obstacles. Constraints of this type are called holonomic constraints. Both problems have been solved as optimal control problems. Given the dynamicmodel of the robotmanipulator, the initial state of the system, some specifications about the final state and a set of holonomic constraints, one has to find the trajectory and the actuator torques that minimize the energy consumption during the motion. The presence of holonomic constraints makes the optimal control problem particularly difficult to solve. Our method involves a numerical resolution of a reformulation of the constrained optimal control problem into an unconstrained calculus of variations problem in which the state space constraints and the dynamic equations, also regarded as constraints, are treated by means of special derivative multipliers. We solve the resulting calculus of variations problem using a numerical approach based on the Euler–Lagrange necessary condition in the integral form in which time is discretized and admissible variations for each variable are approximated using a linear combination of piecewise continuous basis functions of time. The use of the Euler–Lagrange necessary condition in integral form avoids the need for numerical corner conditions and thenecessity of patching together solutions between corners. In thisway, a generalmethod for the solution of constrained optimal control problems is obtained inwhich holonomic constraints can be easily treated. Numerical results of the application of thismethod to trajectory planning of planar horizontal robot manipulators with two revolute joints are reported. © 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0bcec8496b655fffa3591d36fbd5c230",
"text": "We propose a novel approach to addressing the adaptation effectiveness issue in parameter adaptation for deep neural network (DNN) based acoustic models for automatic speech recognition by adding one or more small auxiliary output layers modeling broad acoustic units, such as mono-phones or tied-state (often called senone) clusters. In scenarios with a limited amount of available adaptation data, most senones are usually rarely seen or not observed, and consequently the ability to model them in a new condition is often not fully exploited. With the original senone classification task as the primary task, and adding auxiliary mono-phone/senone-cluster classification as the secondary tasks, multi-task learning (MTL) is employed to adapt the DNN parameters. With the proposed MTL adaptation framework, we improve the learning ability of the original DNN structure, then enlarge the coverage of the acoustic space to deal with the unseen senone problem, and thus enhance the discrimination power of the adapted DNN models. Experimental results on the 20,000-word open vocabulary WSJ task demonstrate that the proposed framework consistently outperforms the conventional linear hidden layer adaptation schemes without MTL by providing 3.2% relative word error rate reduction (WERR) with only 1 single adaptation utterance, and 10.7% WERR with 40 adaptation utterances against the un-adapted DNN models.",
"title": ""
},
{
"docid": "2afcc7c1fb9dadc3d46743c991e15bac",
"text": "This paper describes the design of a robot head, developed in the framework of the RobotCub project. This project goals consists on the design and construction of a humanoid robotic platform, the iCub, for studying human cognition. The final platform would be approximately 90 cm tall, with 23 kg and with a total number of 53 degrees of freedom. For its size, the iCub is the most complete humanoid robot currently being designed, in terms of kinematic complexity. The eyes can also move, as opposed to similarly sized humanoid platforms. Specifications are made based on biological anatomical and behavioral data, as well as tasks constraints. Different concepts for the neck design (flexible, parallel and serial solutions) are analyzed and compared with respect to the specifications. The eye structure and the proprioceptive sensors are presented, together with some discussion of preliminary work on the face design",
"title": ""
},
{
"docid": "a79c65e76da81044ee7e81fc40fe5f8e",
"text": "Most of the equipment required is readily available in most microwave labs: a vector network analyzer, a microwave signal generator, and, of course, a sampling oscilloscope. In this paper, the authors summarize many of the corrections discussed in \" Terminology for high-speed sampling-oscilloscope calibration\" [Williams et al., 2006] and \"Magnitude and phase calibrations for RF, microwave, and high-speed digital signal measurements\" [Remley and Hale, 2007] that are necessary for metrology-grade measurements and Illustrate the application of these oscilloscopes to the characterization of microwave signals.",
"title": ""
},
{
"docid": "25779dfc55dc29428b3939bb37c47d50",
"text": "Human daily activity recognition using mobile personal sensing technology plays a central role in the field of pervasive healthcare. One major challenge lies in the inherent complexity of human body movements and the variety of styles when people perform a certain activity. To tackle this problem, in this paper, we present a novel human activity recognition framework based on recently developed compressed sensing and sparse representation theory using wearable inertial sensors. Our approach represents human activity signals as a sparse linear combination of activity signals from all activity classes in the training set. The class membership of the activity signal is determined by solving a l1 minimization problem. We experimentally validate the effectiveness of our sparse representation-based approach by recognizing nine most common human daily activities performed by 14 subjects. Our approach achieves a maximum recognition rate of 96.1%, which beats conventional methods based on nearest neighbor, naive Bayes, and support vector machine by as much as 6.7%. Furthermore, we demonstrate that by using random projection, the task of looking for “optimal features” to achieve the best activity recognition performance is less important within our framework.",
"title": ""
},
{
"docid": "c4aafcc0a98882de931713359e55a04a",
"text": "We present a computer vision tool that analyses video from a CCTV system installed on fishing trawlers to monitor discarded fish catch. The system aims to support expert observers who review the footage and verify numbers, species and sizes of discarded fish. The operational environment presents a significant challenge for these tasks. Fish are processed below deck under fluorescent lights, they are randomly oriented and there are multiple occlusions. The scene is unstructured and complicated by the presence of fishermen processing the catch. We describe an approach to segmenting the scene and counting fish that exploits the N4-Fields algorithm. We performed extensive tests of the algorithm on a data set comprising 443 frames from 6 belts. Results indicate the relative count error (for individual fish) ranges from 2% to 16%. We believe this is the first system that is able to handle footage from operational trawlers.",
"title": ""
},
{
"docid": "5546cbb6fac77d2d9fffab8ba0a50ed8",
"text": "The next-generation electric power systems (smart grid) are studied intensively as a promising solution for energy crisis. One important feature of the smart grid is the integration of high-speed, reliable and secure data communication networks to manage the complex power systems effectively and intelligently. We provide in this paper a comprehensive survey on the communication architectures in the power systems, including the communication network compositions, technologies, functions, requirements, and research challenges. As these communication networks are responsible for delivering power system related messages, we discuss specifically the network implementation considerations and challenges in the power system settings. This survey attempts to summarize the current state of research efforts in the communication networks of smart grid, which may help us identify the research problems in the continued studies. 2011 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
c1e2a84ff4366325837e576dd0549e24
|
High gain 2.45 GHz 2×2 patch array stacked antenna
|
[
{
"docid": "3bb4d0f44ed5a2c14682026090053834",
"text": "A Meander Line Antenna (MLA) for 2.45 GHz is proposed. This research focuses on the optimum value of gain and reflection coefficient. Therefore, the MLA's parametric studies is discussed which involved the number of turn, width of feed (W1), length of feed (LI) and vertical length partial ground (L3). As a result, the studies have significantly achieved MLA's gain and reflection coefficient of 3.248dB and -45dB respectively. The MLA also resembles the monopole antenna behavior of Omni-directional radiation pattern. Measured and simulated results are presented. The proposed antenna has big potential to be implemented for WLAN device such as optical mouse application.",
"title": ""
}
] |
[
{
"docid": "322161b4a43b56e4770d239fe4d2c4c0",
"text": "Graph pattern matching has become a routine process in emerging applications such as social networks. In practice a data graph is typically large, and is frequently updated with small changes. It is often prohibitively expensive to recompute matches from scratch via batch algorithms when the graph is updated. With this comes the need for incremental algorithms that compute changes to the matches in response to updates, to minimize unnecessary recomputation. This paper investigates incremental algorithms for graph pattern matching defined in terms of graph simulation, bounded simulation and subgraph isomorphism. (1) For simulation, we provide incremental algorithms for unit updates and certain graph patterns. These algorithms are optimal: in linear time in the size of the changes in the input and output, which characterizes the cost that is inherent to the problem itself. For general patterns we show that the incremental matching problem is unbounded, i.e., its cost is not determined by the size of the changes alone. (2) For bounded simulation, we show that the problem is unbounded even for unit updates and path patterns. (3) For subgraph isomorphism, we show that the problem is intractable and unbounded for unit updates and path patterns. (4) For multiple updates, we develop an incremental algorithm for each of simulation, bounded simulation and subgraph isomorphism. We experimentally verify that these incremental algorithms significantly outperform their batch counterparts in response to small changes, using real-life data and synthetic data.",
"title": ""
},
{
"docid": "1561ef2d0c846e8faa765aae2a7ad922",
"text": "We propose a novel monocular visual inertial odometry algorithm that combines the advantages of EKF-based approaches with those of direct photometric error minimization methods. The method is based on sparse, very small patches and incorporates the minimization of photometric error directly into the EKF measurement model so that inertial data and vision-based surface measurements are used simultaneously during camera pose estimation. We fuse vision-based and inertial measurements almost at the raw-sensor level, allowing the estimated system state to constrain and guide image-space measurements. Our formulation allows for an efficient implementation that runs in real-time on a standard CPU and has several appealing and unique characteristics such as being robust to fast camera motion, in particular rotation, and not depending on the presence of corner-like features in the scene. We experimentally demonstrate robust and accurate performance compared to ground truth and show that our method works on scenes containing only non-intersecting lines.",
"title": ""
},
{
"docid": "be1ac1b39ed75cb2ae2739ea1a443821",
"text": "In this paper, we consider the problems of generating all maximal (bipartite) cliques in a given (bipartite) graph G = (V, E) with n vertices and m edges. We propose two algorithms for enumerating all maximal cliques. One runs with O(M(n)) time delay and in O(n) space and the other runs with O(∆) time delay and in O(n + m) space, where ∆ denotes the maximum degree of G, M(n) denotes the time needed to multiply two n×n matrices, and the latter one requires O(nm) time as a preprocessing. For a given bipartite graph G, we propose three algorithms for enumerating all maximal bipartite cliques. The first algorithm runs with O(M(n)) time delay and in O(n) space, which immediately follows from the algorithm for the nonbipartite case. The second one runs with O(∆) time delay and in O(n + m) space, and the last one runs with O(∆) time delay and in O(n + m + N∆) space, where N denotes the number of all maximal bipartite cliques in G and both algorithms require O(nm) time as a preprocessing. Our algorithms improve upon all the existing algorithms, when G is either dense or sparse. Furthermore, computational experiments show that our algorithms for sparse graphs have significantly good performance for graphs which are generated randomly and appear in real-world problems.",
"title": ""
},
{
"docid": "ad48ba2fa5ab113fbdf5d9c148f9596d",
"text": "BACKGROUND\nThe Prophylactic hypOthermia to Lessen trAumatic bRain injury-Randomised Controlled Trial (POLAR-RCT) will evaluate whether early and sustained prophylactic hypothermia delivered to patients with severe traumatic brain injury improves patient-centred outcomes.\n\n\nMETHODS\nThe POLAR-RCT is a multicentre, randomised, parallel group, phase III trial of early, prophylactic cooling in critically ill patients with severe traumatic brain injury, conducted in Australia, New Zealand, France, Switzerland, Saudi Arabia and Qatar. A total of 511 patients aged 18-60 years have been enrolled with severe acute traumatic brain injury. The trial intervention of early and sustained prophylactic hypothermia to 33 °C for 72 h will be compared to standard normothermia maintained at a core temperature of 37 °C. The primary outcome is the proportion of favourable neurological outcomes, comprising good recovery or moderate disability, observed at six months following randomisation utilising a midpoint dichotomisation of the Extended Glasgow Outcome Scale (GOSE). Secondary outcomes, also assessed at six months following randomisation, include the probability of an equal or greater GOSE level, mortality, the proportions of patients with haemorrhage or infection, as well as assessment of quality of life and health economic outcomes. The planned sample size will allow 80% power to detect a 30% relative risk increase from 50% to 65% (equivalent to a 15% absolute risk increase) in favourable neurological outcome at a two-sided alpha of 0.05.\n\n\nDISCUSSION\nConsistent with international guidelines, a detailed and prospective analysis plan has been developed for the POLAR-RCT. This plan specifies the statistical models for evaluation of primary and secondary outcomes, as well as defining covariates for adjusted analyses and methods for exploratory analyses. Application of this statistical analysis plan to the forthcoming POLAR-RCT trial will facilitate unbiased analyses of these important clinical data.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov, NCT00987688 (first posted 1 October 2009); Australian New Zealand Clinical Trials Registry, ACTRN12609000764235 . Registered on 3 September 2009.",
"title": ""
},
{
"docid": "4467f4fc7e9f1199ca6b57f7818ca42c",
"text": "Banking in several developing countries has transcended from a traditional brick-and mortar model of customers queuing for services in the banks to modern day banking where banks can be reached at any point for their services. This can be attributed to the tremendous growth in mobile penetration in many countries across the globe including Jordan. The current exploratory study is an attempt to identify the underlying factors that affects mobile banking adoption in Jordan. Data for this study have been collected using a questionnaire containing 22 questions. Out of 450 questionnaires that have been distributed, 301 are returned (66.0%). In the survey, factors that may affect Jordanian mobile phone users' to adopt mobile banking services were examined. The research findings suggested that all the six factors; self efficacy, trailability, compatibility, complexity, risk and relative advantage were statistically significant in influencing mobile banking adoption.",
"title": ""
},
{
"docid": "3f807cb7e753ebd70558a0ce74b416b7",
"text": "In this paper, we study the problem of recovering a tensor with missing data. We propose a new model combining the total variation regularization and low-rank matrix factorization. A block coordinate decent (BCD) algorithm is developed to efficiently solve the proposed optimization model. We theoretically show that under some mild conditions, the algorithm converges to the coordinatewise minimizers. Experimental results are reported to demonstrate the effectiveness of the proposed model and the efficiency of the numerical scheme. © 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a33e8a616955971014ceea9da1e8fcbe",
"text": "Highlights Auditory middle and late latency responses can be recorded reliably from ear-EEG.For sources close to the ear, ear-EEG has the same signal-to-noise-ratio as scalp.Ear-EEG is an excellent match for power spectrum-based analysis. A method for measuring electroencephalograms (EEG) from the outer ear, so-called ear-EEG, has recently been proposed. The method could potentially enable robust recording of EEG in natural environments. The objective of this study was to substantiate the ear-EEG method by using a larger population of subjects and several paradigms. For rigor, we considered simultaneous scalp and ear-EEG recordings with common reference. More precisely, 32 conventional scalp electrodes and 12 ear electrodes allowed a thorough comparison between conventional and ear electrodes, testing several different placements of references. The paradigms probed auditory onset response, mismatch negativity, auditory steady-state response and alpha power attenuation. By comparing event related potential (ERP) waveforms from the mismatch response paradigm, the signal measured from the ear electrodes was found to reflect the same cortical activity as that from nearby scalp electrodes. It was also found that referencing the ear-EEG electrodes to another within-ear electrode affects the time-domain recorded waveform (relative to scalp recordings), but not the timing of individual components. It was furthermore found that auditory steady-state responses and alpha-band modulation were measured reliably with the ear-EEG modality. Finally, our findings showed that the auditory mismatch response was difficult to monitor with the ear-EEG. We conclude that ear-EEG yields similar performance as conventional EEG for spectrogram-based analysis, similar timing of ERP components, and equal signal strength for sources close to the ear. Ear-EEG can reliably measure activity from regions of the cortex which are located close to the ears, especially in paradigms employing frequency-domain analyses.",
"title": ""
},
{
"docid": "ad7f49832562d27534f11b162e28f51b",
"text": "Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is \"hard-wired\" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.",
"title": ""
},
{
"docid": "cf341e272dcc4773829f09e36a0519b3",
"text": "Malicious Web sites are a cornerstone of Internet criminal activities. The dangers of these sites have created a demand for safeguards that protect end-users from visiting them. This article explores how to detect malicious Web sites from the lexical and host-based features of their URLs. We show that this problem lends itself naturally to modern algorithms for online learning. Online algorithms not only process large numbers of URLs more efficiently than batch algorithms, they also adapt more quickly to new features in the continuously evolving distribution of malicious URLs. We develop a real-time system for gathering URL features and pair it with a real-time feed of labeled URLs from a large Web mail provider. From these features and labels, we are able to train an online classifier that detects malicious Web sites with 99% accuracy over a balanced dataset.",
"title": ""
},
{
"docid": "8588a3317d4b594d8e19cb005c3d35c7",
"text": "Histograms of Oriented Gradients (HOG) is one of the wellknown features for object recognition. HOG features are calculated by taking orientation histograms of edge intensity in a local region. N.Dalal et al. proposed an object detection algorithm in which HOG features were extracted from all locations of a dense grid on a image region and the combined features are classified by using linear Support Vector Machine (SVM). In this paper, we employ HOG features extracted from all locations of a grid on the image as candidates of the feature vectors. Principal Component Analysis (PCA) is applied to these HOG feature vectors to obtain the score (PCA-HOG) vectors. Then a proper subset of PCA-HOG feature vectors is selected by using Stepwise Forward Selection (SFS) algorithm or Stepwise Backward Selection (SBS) algorithm to improve the generalization performance. The selected PCA-HOG feature vectors are used as an input of linear SVM to classify the given input into pedestrian/non-pedestrian. The improvement of the recognition rates are confirmed through experiments using MIT pedestrian dataset.",
"title": ""
},
{
"docid": "955201c5191774ca14ea38e473bd7d04",
"text": "We advocate a relation based approach to Argumentation Mining. Our focus lies on the extraction of argumentative relations instead of the identification of arguments, themselves. By classifying pairs of sentences according to the relation that holds between them we are able to identify sentences that may be factual when considered in isolation, but carry argumentative meaning when read in context. We describe scenarios in which this is useful, as well as a corpus of annotated sentence pairs we are developing to provide a testbed for this approach.",
"title": ""
},
{
"docid": "c0c30c3b9539511e9079ec7894ad754f",
"text": "Cardiovascular disease remains the world's leading cause of death. Yet, we have known for decades that the vast majority of atherosclerosis and its subsequent morbidity and mortality are influenced predominantly by diet. This paper will describe a health-promoting whole food, plant-based diet; delineate macro- and micro-nutrition, emphasizing specific geriatric concerns; and offer guidance to physicians and other healthcare practitioners to support patients in successfully utilizing nutrition to improve their health.",
"title": ""
},
{
"docid": "f05d7f391d6d805308801d23bc3234f0",
"text": "Identifying patterns in large high dimensional data sets is a challenge. As the number of dimensions increases, the patterns in the data sets tend to be more prominent in the subspaces than the original dimensional space. A system to facilitate presentation of such subspace oriented patterns in high dimensional data sets is required to understand the data.\n Heidi is a high dimensional data visualization system that captures and visualizes the closeness of points across various subspaces of the dimensions; thus, helping to understand the data. The core concept behind Heidi is based on prominence of patterns within the nearest neighbor relations between pairs of points across the subspaces.\n Given a d-dimensional data set as input, Heidi system generates a 2-D matrix represented as a color image. This representation gives insight into (i) how the clusters are placed with respect to each other, (ii) characteristics of placement of points within a cluster in all the subspaces and (iii) characteristics of overlapping clusters in various subspaces.\n A sample of results displayed and discussed in this paper illustrate how Heidi Visualization can be interpreted.",
"title": ""
},
{
"docid": "8ca55e6a146406634335ccc1914a09d2",
"text": "In this paper we present the results of a simulation study to explore the ability of Bayesian parametric and nonparametric models to provide an adequate fit to count data, of the type that would routinely be analyzed parametrically either through fixed-effects or random-effects Poisson models. The context of the study is a randomized controlled trial with two groups (treatment and control). Our nonparametric approach utilizes several modeling formulations based on Dirichlet process priors. We find that the nonparametric models are able to flexibly adapt to the data, to offer rich posterior inference, and to provide, in a variety of settings, more accurate predictive inference than parametric models.",
"title": ""
},
{
"docid": "3bf5eaa6400ae63000a1d100114fe8fd",
"text": "In Fig. 4e of this Article, the labels for ‘Control’ and ‘HFD’ were reversed (‘Control’ should have been labelled blue rather than purple, and ‘HFD’ should have been labelled purple rather than blue). Similarly, in Fig. 4f of this Article, the labels for ‘V’ and ‘GW’ were reversed (‘V’ should have been labelled blue rather than purple, and ‘GW’ should have been labelled purple instead of blue). The original figure has been corrected online.",
"title": ""
},
{
"docid": "f309d2f237f4451bea75767f53277143",
"text": "Most problems in computational geometry are algebraic. A general approach to address nonrobustness in such problems is Exact Geometric Computation (EGC). There are now general libraries that support EGC for the general programmer (e.g., Core Library, LEDA Real). Many applications require non-algebraic functions as well. In this paper, we describe how to provide non-algebraic functions in the context of other EGC capabilities. We implemented a multiprecision hypergeometric series package which can be used to evaluate common elementary math functions to an arbitrary precision. This can be achieved relatively easily using the Core Library which supports a guaranteed precision level of accuracy. We address several issues of efficiency in such a hypergeometric package: automatic error analysis, argument reduction, preprocessing of hypergeometric parameters, and precomputed constants. Some preliminary experimental results are reported.",
"title": ""
},
{
"docid": "cbad7caa1cc1362e8cd26034617c39f4",
"text": "Many state-machine Byzantine Fault Tolerant (BFT) protocols have been introduced so far. Each protocol addressed a different subset of conditions and use-cases. However, if the underlying conditions of a service span different subsets, choosing a single protocol will likely not be a best fit. This yields robustness and performance issues which may be even worse in services that exhibit fluctuating conditions and workloads. In this paper, we reconcile existing state-machine BFT protocols in a single adaptive BFT system, called ADAPT, aiming at covering a larger set of conditions and use-cases, probably the union of individual subsets of these protocols. At anytime, a launched protocol in ADAPT can be aborted and replaced by another protocol according to a potential change (an event) in the underlying system conditions. The launched protocol is chosen according to an \"evaluation process\" that takes into consideration both: protocol characteristics and its performance. This is achieved by applying some mathematical formulas that match the profiles of protocols to given user (e.g., service owner) preferences. ADAPT can assess the profiles of protocols (e.g., throughput) at run-time using Machine Learning prediction mechanisms to get accurate evaluations. We compare ADAPT with well known BFT protocols showing that it outperforms others as system conditions change and under dynamic workloads.",
"title": ""
},
{
"docid": "417ba025ea47d354b8e087d37ddb3655",
"text": "User satisfaction in computer games seems to be influenced by game balance, the level of challenge faced by the user. This work presents an evaluation, performed by human players, of dynamic game balancing approaches. The results indicate that adaptive approaches are more effective. This paper also enumerates some issues encountered in evaluating users’ satisfaction, in the context of games, and depicts some learned lessons.",
"title": ""
},
{
"docid": "b14a77c6e663af1445e466a3e90d4e5f",
"text": "This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences ( i.e., the golden target sentences); And the discriminator makes efforts to discriminate the machine-generated sentences from humantranslated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-ofthe-art Transformer on English-German and Chinese-English translation tasks.",
"title": ""
},
{
"docid": "231554e78d509e7bca2dfd4280b411bb",
"text": "Layered models provide a compelling approach for estimating image motion and segmenting moving scenes. Previous methods, however, have failed to capture the structure of complex scenes, provide precise object boundaries, effectively estimate the number of layers in a scene, or robustly determine the depth order of the layers. Furthermore, previous methods have focused on optical flow between pairs of frames rather than longer sequences. We show that image sequences with more frames are needed to resolve ambiguities in depth ordering at occlusion boundaries; temporal layer constancy makes this feasible. Our generative model of image sequences is rich but difficult to optimize with traditional gradient descent methods. We propose a novel discrete approximation of the continuous objective in terms of a sequence of depth-ordered MRFs and extend graph-cut optimization methods with new “moves” that make joint layer segmentation and motion estimation feasible. Our optimizer, which mixes discrete and continuous optimization, automatically determines the number of layers and reasons about their depth ordering. We demonstrate the value of layered models, our optimization strategy, and the use of more than two frames on both the Middlebury optical flow benchmark and the MIT layer segmentation benchmark.",
"title": ""
}
] |
scidocsrr
|
a82b5f0f33766489ce3850beaf3612e8
|
Meta Networks
|
[
{
"docid": "592eddc5ada1faf317571e8050d4d82e",
"text": "Connectionist models usually have a single weight on each connection. Some interesting new properties emerge if each connection has two weights: A slowly changing, plastic weight which stores long-term knowledge and a fast-changing, elastic weight which stores temporary knowledge and spontaneously decays towards zero. If a network learns a set of associations and then these associations are \"blurred\" by subsequent learning, all the original associations can be \"deblurred\" by rehearsing on just a few of them. The rehearsal allows the fast weights to take on values that temporarily cancel out the changes in the slow weights caused by the subsequent learning.",
"title": ""
},
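As one concrete reading of the two-weights-per-connection idea described above, the sketch below gives a linear associator whose effective weight is the sum of a slow, plastic component and a fast component that decays toward zero. The delta-rule update, the learning rates, and the decay factor are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

class TwoWeightLinearLayer:
    """Linear associator with a slow, plastic weight and a fast, decaying weight.

    The effective connection strength is the sum of the two; the fast weight
    spontaneously decays toward zero between updates (illustrative values only).
    """

    def __init__(self, n_in, n_out, lr_slow=0.01, lr_fast=0.5, decay=0.95, seed=0):
        rng = np.random.default_rng(seed)
        self.slow = rng.normal(scale=0.1, size=(n_out, n_in))
        self.fast = np.zeros((n_out, n_in))
        self.lr_slow, self.lr_fast, self.decay = lr_slow, lr_fast, decay

    def forward(self, x):
        return (self.slow + self.fast) @ x        # effective weight = slow + fast

    def learn(self, x, target):
        error = target - self.forward(x)          # simple delta rule
        update = np.outer(error, x)
        self.slow += self.lr_slow * update        # small, lasting change
        self.fast = self.decay * self.fast + self.lr_fast * update  # large, temporary change

layer = TwoWeightLinearLayer(n_in=4, n_out=2)
x, t = np.ones(4), np.array([1.0, -1.0])
for _ in range(5):
    layer.learn(x, t)                             # a few rehearsals
print(layer.forward(x))
```

In this toy setting the fast weights carry most of the short-term correction while the slow weights change only gradually, which mirrors the division of labor the abstract describes.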
{
"docid": "66e5c7802dc1f3427dc608696a925f6d",
"text": "Until recently, research on artificial neural networks was largely restricted to systems with only two types of variable: Neural activities that represent the current or recent input and weights that learn to capture regularities among inputs, outputs and payoffs. There is no good reason for this restriction. Synapses have dynamics at many different time-scales and this suggests that artificial neural networks might benefit from variables that change slower than activities but much faster than the standard weights. These “fast weights” can be used to store temporary memories of the recent past and they provide a neurally plausible way of implementing the type of attention to the past that has recently proved very helpful in sequence-to-sequence models. By using fast weights we can avoid the need to store copies of neural activity patterns.",
"title": ""
},
{
"docid": "a4bfad793a7dde2c8b7e0238b1ffc536",
"text": "Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm, which we believe to be the first method that can automatically discover a better algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the final objective value.",
"title": ""
}
] |
[
{
"docid": "f5b500c143fd584423ee8f0467071793",
"text": "Drug-Drug Interactions (DDIs) are major causes of morbidity and treatment inefficacy. The prediction of DDIs for avoiding the adverse effects is an important issue. There are many drug-drug interaction pairs, it is impossible to do in vitro or in vivo experiments for all the possible pairs. The limitation of DDIs research is the high costs. Many drug interactions are due to alterations in drug metabolism by enzymes. The most common among these enzymes are cytochrome P450 enzymes (CYP450). Drugs can be substrate, inhibitor or inducer of CYP450 which will affect metabolite of other drugs. This paper proposes enzyme action crossing attribute creation for DDIs prediction. Machine learning techniques, k-Nearest Neighbor (k-NN), Neural Networks (NNs), and Support Vector Machine (SVM) were used to find DDIs for simvastatin based on enzyme action crossing. SVM preformed the best providing the predictions at the accuracy of 70.40 % and of 81.85 % with balance and unbalance class label datasets respectively. Enzyme action crossing method provided the new attribute that can be used to predict drug-drug interactions.",
"title": ""
},
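The enzyme-action-crossing idea above can be made concrete with a small feature-construction sketch: each drug pair gets one binary feature per enzyme and role pair indicating whether the two drugs' CYP450 actions "cross", and an SVM is trained on those features. The drug names, annotations, labels, and SVM settings below are all hypothetical placeholders, not data or settings from the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical CYP450 annotations: role of each drug for each enzyme
# (drug names and roles are made up for illustration).
CYP_ROLES = {
    "simvastatin": {"CYP3A4": "substrate"},
    "drug_A":      {"CYP3A4": "inhibitor"},
    "drug_B":      {"CYP2D6": "inhibitor"},
    "drug_C":      {"CYP3A4": "inducer"},
}
ENZYMES = ["CYP3A4", "CYP2D6"]
ROLE_PAIRS = [("substrate", "inhibitor"), ("substrate", "inducer"), ("inhibitor", "inducer")]

def crossing_features(drug, partner="simvastatin"):
    """One binary feature per (enzyme, role pair): do the two drugs' actions cross there?"""
    feats = []
    for enz in ENZYMES:
        r1 = CYP_ROLES.get(partner, {}).get(enz)
        r2 = CYP_ROLES.get(drug, {}).get(enz)
        for pair in ROLE_PAIRS:
            feats.append(int({r1, r2} == set(pair)))
    return feats

X = np.array([crossing_features(d) for d in ["drug_A", "drug_B", "drug_C"]])
y = np.array([1, 0, 1])   # hypothetical interaction labels (1 = interacts with simvastatin)
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
print(clf.predict(X))
```

A real study would of course use curated CYP450 annotations and many more drug pairs, with held-out evaluation rather than predicting on the training set.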
{
"docid": "7730b770c0be4a86a926cbae902c1416",
"text": "In this paper, we propose an end-to-end trainable Convolutional Neural Network (CNN) architecture called the M-net, for segmenting deep (human) brain structures from Magnetic Resonance Images (MRI). A novel scheme is used to learn to combine and represent 3D context information of a given slice in a 2D slice. Consequently, the M-net utilizes only 2D convolution though it operates on 3D data, which makes M-net memory efficient. The segmentation method is evaluated on two publicly available datasets and is compared against publicly available model based segmentation algorithms as well as other classification based algorithms such as Random Forrest and 2D CNN based approaches. Experiment results show that the M-net outperforms all these methods in terms of dice coefficient and is at least 3 times faster than other methods in segmenting a new volume which is attractive for clinical use.",
"title": ""
},
{
"docid": "c26b4db8f52e4270f24c16b0e65c8b59",
"text": "An open stub feed planar patch antenna is proposed for UHF RFID tag mountable on metallic objects. Compared to conventional short stub feed patch antenna used for UHF RFID tag, the open stub feed patch antenna has planar structure which can decrease the manufacturing cost of the tags. Moreover, the open stub feed makes the impedance of the patch antenna be tuned in a large scale for conjugate impedance matching. Modeling and simulation results are presented which are in good agreement with the measurement data. Finally, differences between the open stub feed patch antenna and the short stub feed patch antenna for UHF RFID tag are discussed.",
"title": ""
},
{
"docid": "9f6fb1de80f4500384097978c3712c68",
"text": "Reflection is a language feature which allows to analyze and transform the behavior of classes at the runtime. Reflection is used for software debugging and testing. Malware authors can leverage reflection to subvert the malware detection by static analyzers. Reflection initializes the class, invokes any method of class, or accesses any field of class. But, instead of utilizing usual programming language syntax, reflection passes classes/methods etc. as parameters to reflective APIs. As a consequence, these parameters can be constructed dynamically or can be encrypted by malware. These cannot be detected by state-of-the-art static tools. We propose EspyDroid, a system that combines dynamic analysis with code instrumentation for a more precise and automated detection of malware employing reflection. We evaluate EspyDroid on 28 benchmark apps employing major reflection categories. Our technique show improved results over FlowDroid via detection of additional undetected flows. These flows have potential to leak sensitive and private information of the users, through various sinks.",
"title": ""
},
{
"docid": "71b6f880ae22e8032950379cd57b5003",
"text": "Our goal is to generate reading lists for students that help them optimally learn technical material. Existing retrieval algorithms return items directly relevant to a query but do not return results to help users read about the concepts supporting their query. This is because the dependency structure of concepts that must be understood before reading material pertaining to a given query is never considered. Here we formulate an information-theoretic view of concept dependency and present methods to construct a “concept graph” automatically from a text corpus. We perform the first human evaluation of concept dependency edges (to be published as open data), and the results verify the feasibility of automatic approaches for inferring concepts and their dependency relations. This result can support search capabilities that may be tuned to help users learn a subject rather than retrieve documents based on a single query.",
"title": ""
},
{
"docid": "f296b374b635de4f4c6fc9c6f415bf3e",
"text": "People increasingly use the Internet for obtaining information regarding diseases, diagnoses and available treatments. Currently, many online health portals already provide non-personalized health information in the form of articles. However, it can be challenging to find information relevant to one's condition, interpret this in context, and understand the medical terms and relationships. Recommender Systems (RS) already help these systems perform precise information filtering. In this short paper, we look one step ahead and show the progress made towards RS helping users find personalized, complex medical interventions or support them with preventive healthcare measures. We identify key challenges that need to be addressed for RS to offer the kind of decision support needed in high-risk domains like healthcare.",
"title": ""
},
{
"docid": "19361b2d5e096f26e650b25b745e5483",
"text": "Multispectral pedestrian detection has attracted increasing attention from the research community due to its crucial competence for many around-the-clock applications (e.g., video surveillance and autonomous driving), especially under insufficient illumination conditions. We create a human baseline over the KAIST dataset and reveal that there is still a large gap between current top detectors and human performance. To narrow this gap, we propose a network fusion architecture, which consists of a multispectral proposal network to generate pedestrian proposals, and a subsequent multispectral classification network to distinguish pedestrian instances from hard negatives. The unified network is learned by jointly optimizing pedestrian detection and semantic segmentation tasks. The final detections are obtained by integrating the outputs from different modalities as well as the two stages. The approach significantly outperforms state-of-the-art methods on the KAIST dataset while remain fast. Additionally, we contribute a sanitized version of training annotations for the KAIST dataset, and examine the effects caused by different kinds of annotation errors. Future research of this problem will benefit from the sanitized version which eliminates the interference of annotation errors.",
"title": ""
},
{
"docid": "04953f3a55a77b9a35e7cea663c6387e",
"text": "-This paper presents a calibration procedure for a fish-eye lens (a high-distortion lens) mounted on a CCD TV camera. The method is designed to account for the differences in images acquired via a distortion-free lens camera setup and the images obtained by a fish-eye lens camera. The calibration procedure essentially defines a mapping between points in the world coordinate system and their corresponding point locations in the image plane. This step is important for applications in computer vision which involve quantitative measurements. The objective of this mapping is to estimate the internal parameters of the camera, including the effective focal length, one-pixel width on the image plane, image distortion center, and distortion coefficients. The number of parameters to be calibrated is reduced by using a calibration pattern with equally spaced dots and assuming a pin-hole model camera behavior for the image center, thus assuming negligible distortion at the image distortion center. Our method employs a non-finear transformation between points in the world coordinate system and their corresponding location on the image plane. A Lagrangian minimization method is used to determine the coefficients of the transformation. The validity and effectiveness of our calibration and distortion correction procedure are confirmed by application of this procedure on real images. Copyright © 1996 Pattern Recognition Society. Published by Elsevier Science Ltd. Camera calibration Lens distortion Intrinsic camera parameters Fish-eye lens Optimization",
"title": ""
},
{
"docid": "e294a94b03a2bd958def360a7bce2a46",
"text": "The seismic loss estimation is greatly influenced by the identification of the failure mechanism and distribution of the structures. In case of infilled structures, the final failure mechanism greatly differs to that expected during the design and the analysis stages. This is mainly due to the resultant composite behaviour of the frame and the infill panel, which makes the failure assessment and consequently the loss estimation a challenge. In this study, a numerical investigation has been conducted on the influence of masonry infilled panels on physical structural damages and the associated economic losses, under seismic excitation. The selected index buildings have been simulated following real case typical mid-rise masonry infilled steel frame structures. A realistic simulation of construction details, such as variation of infill material properties, type of connections and built quality have been implemented in the models. The fragility functions have been derived for each model using the outcomes obtained from incremental dynamic analysis (IDA). Moreover, by considering different cases of building distribution, the losses have been estimated following an intensity-based assessment approach. The results indicate that the presence of infill panel have a noticeable influence on the vulnerability of the structure and should not be ignored in loss estimations.",
"title": ""
},
{
"docid": "19fe7a55a8ad6f206efc27ef7ff16324",
"text": "Vehicular adhoc networks (VANETs) are relegated as a subgroup of Mobile adhoc networks (MANETs), with the incorporation of its principles. In VANET the moving nodes are vehicles which are self-administrated, not bounded and are free to move and organize themselves in the network. VANET possess the potential of improving safety on roads by broadcasting information associated with the road conditions. This results in generation of the redundant information been disseminated by vehicles. Thus bandwidth issue becomes a major concern. In this paper, Location based data aggregation technique is been proposed for aggregating congestion related data from the road areas through which vehicles travelled. It also takes into account scheduling mechanism at the road side units (RSUs) for treating individual vehicles arriving in its range on the basis of first-cum-first order. The basic idea behind this work is to effectually disseminate the aggregation information related to congestion to RSUs as well as to the vehicles in the network. The Simulation results show that the proposed technique performs well with the network load evaluation parameters.",
"title": ""
},
{
"docid": "c0ef15616ba357cb522b828e03a5298c",
"text": "This paper introduces the compact genetic algorithm (cGA) which represents the population as a probability distribution over the set of solutions and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover. It processes each gene independently and requires less memory than the simple GA. The development of the compact GA is guided by a proper understanding of the role of the GA’s parameters and operators. The paper clearly illustrates the mapping of the simple GA’s parameters into those of an equivalent compact GA. Computer simulations compare both algorithms in terms of solution quality and speed. Finally, this work raises important questions about the use of information in a genetic algorithm, and its ramifications show us a direction that can lead to the design of more efficient GA’s.",
"title": ""
},
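Since the abstract spells out the compact GA's mechanics, a minimal sketch is straightforward: a probability vector replaces the explicit population, two individuals are sampled per step, and each gene's probability is shifted toward the winner by 1/(virtual population size). The OneMax objective, bit length, and population size below are arbitrary choices for the demo, not values from the paper.

```python
import numpy as np

def compact_ga(fitness, n_bits, pop_size=50, rng=None):
    """Compact GA: evolve a probability vector instead of an explicit population."""
    rng = np.random.default_rng(rng)
    p = np.full(n_bits, 0.5)                      # one probability per gene
    while not np.all((p <= 0.0) | (p >= 1.0)):    # run until the vector converges
        a = (rng.random(n_bits) < p).astype(int)  # sample two candidate solutions
        b = (rng.random(n_bits) < p).astype(int)
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        # Shift each differing gene toward the winner by 1/pop_size
        step = (winner - loser) / pop_size
        p = np.clip(p + step, 0.0, 1.0)
    return (p >= 0.5).astype(int)

def onemax(bits):
    return int(bits.sum())                        # toy objective: count the ones

print(compact_ga(onemax, n_bits=32, rng=1))
```

The memory footprint is a single probability per gene rather than a full population, which is the point the abstract makes about the method.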
{
"docid": "4f9b168efee2348f0f02f2480f9f449f",
"text": "Transcutaneous neuromuscular electrical stimulation applied in clinical settings is currently characterized by a wide heterogeneity of stimulation protocols and modalities. Practitioners usually refer to anatomic charts (often provided with the user manuals of commercially available stimulators) for electrode positioning, which may lead to inconsistent outcomes, poor tolerance by the patients, and adverse reactions. Recent evidence has highlighted the crucial importance of stimulating over the muscle motor points to improve the effectiveness of neuromuscular electrical stimulation. Nevertheless, the correct electrophysiological definition of muscle motor point and its practical significance are not always fully comprehended by therapists and researchers in the field. The commentary describes a straightforward and quick electrophysiological procedure for muscle motor point identification. It consists in muscle surface mapping by using a stimulation pen-electrode and it is aimed at identifying the skin area above the muscle where the motor threshold is the lowest for a given electrical input, that is the skin area most responsive to electrical stimulation. After the motor point mapping procedure, a proper placement of the stimulation electrode(s) allows neuromuscular electrical stimulation to maximize the evoked tension, while minimizing the dose of the injected current and the level of discomfort. If routinely applied, we expect this procedure to improve both stimulation effectiveness and patient adherence to the treatment. The aims of this clinical commentary are to present an optimized procedure for the application of neuromuscular electrical stimulation and to highlight the clinical implications related to its use.",
"title": ""
},
{
"docid": "3d62d442398bfa8c1ffb9dcf4e05c5ce",
"text": "With the explosion of Web 2.0 application such as blogs, social and professional networks, and various other types of social media, the rich online information and various new sources of knowledge flood users and hence pose a great challenge in terms of information overload. It is critical to use intelligent agent software systems to assist users in finding the right information from an abundance of Web data. Recommender systems can help users deal with information overload problem efficiently by suggesting items (e.g., information and products) that match users’ personal interests. The recommender technology has been successfully employed in many applications such as recommending films, music, books, etc. The purpose of this report is to give an overview of existing technologies for building personalized recommender systems in social networking environment, to propose a research direction for addressing user profiling and cold start problems by exploiting user-generated content newly available in Web 2.0.",
"title": ""
},
{
"docid": "c39295b4334a22547b2c4370ef329a7c",
"text": "In this paper, we propose a Mobile Edge Internet of Things (MEIoT) architecture by leveraging the fiber-wireless access technology, the cloudlet concept, and the software defined networking framework. The MEIoT architecture brings computing and storage resources close to Internet of Things (IoT) devices in order to speed up IoT data sharing and analytics. Specifically, the IoT devices (belonging to the same user) are associated to a specific proxy Virtual Machine (VM) in the nearby cloudlet. The proxy VM stores and analyzes the IoT data (generated by its IoT devices) in realtime. Moreover, we introduce the semantic and social IoT technology in the context of MEIoT to solve the interoperability and inefficient access control problem in the IoT system. In addition, we propose two dynamic proxy VM migration methods to minimize the end-to-end delay between proxy VMs and their IoT devices and to minimize the total on-grid energy consumption of the cloudlets, respectively. Performance of the proposed methods is validated via extensive simulations. key words: Internet of Things, mobile edge computing, cloudlet, semantics, social network, green energy.",
"title": ""
},
{
"docid": "e38de0af51d80544e4df84d36a40eb7b",
"text": "In the cerebral cortex, the activity levels of neuronal populations are continuously fluctuating. When neuronal activity, as measured using functional MRI (fMRI), is temporally coherent across 2 populations, those populations are said to be functionally connected. Functional connectivity has previously been shown to correlate with structural (anatomical) connectivity patterns at an aggregate level. In the present study we investigate, with the aid of computational modeling, whether systems-level properties of functional networks—including their spatial statistics and their persistence across time—can be accounted for by properties of the underlying anatomical network. We measured resting state functional connectivity (using fMRI) and structural connectivity (using diffusion spectrum imaging tractography) in the same individuals at high resolution. Structural connectivity then provided the couplings for a model of macroscopic cortical dynamics. In both model and data, we observed (i) that strong functional connections commonly exist between regions with no direct structural connection, rendering the inference of structural connectivity from functional connectivity impractical; (ii) that indirect connections and interregional distance accounted for some of the variance in functional connectivity that was unexplained by direct structural connectivity; and (iii) that resting-state functional connectivity exhibits variability within and across both scanning sessions and model runs. These empirical and modeling results demonstrate that although resting state functional connectivity is variable and is frequently present between regions without direct structural linkage, its strength, persistence, and spatial statistics are nevertheless constrained by the large-scale anatomical structure of the human cerebral cortex.",
"title": ""
},
{
"docid": "17d0da8dd05d5cfb79a5f4de4449fcdd",
"text": "PUBLISHING Thousands of scientists start year without journal access p.13 2017 SNEAK PEEK What the new year holds for science p.14 ECOLOGY What is causing the deaths of so many shorebirds? p.16 PHYSICS Quantum computers ready to leap out of the lab The race is on to turn scientific curiosities into working machines. A front runner in the pursuit of quantum computing uses single ions trapped in a vacuum. Q uantum computing has long seemed like one of those technologies that are 20 years away, and always will be. But 2017 could be the year that the field sheds its research-only image. Computing giants Google and Microsoft recently hired a host of leading lights, and have set challenging goals for this year. Their ambition reflects a broader transition taking place at start-ups and academic research labs alike: to move from pure science towards engineering. \" People are really building things, \" says Christopher Monroe, a physicist at the University of Maryland in College Park who co-founded the start-up IonQ in 2015. \" I've never seen anything like that. It's no longer just research. \" Google started working on a form of quantum computing that harnesses super-conductivity in 2014. It hopes this year, or shortly after, to perform a computation that is beyond even the most powerful 'classical' supercomputers — an elusive milestone known as quantum supremacy. Its rival, Microsoft, is betting on an intriguing but unproven concept, topological quantum computing, and hopes to perform a first demonstration of the technology. The quantum-computing start-up scene is also heating up. Monroe plans to begin hiring in earnest this year. Physicist Robert Schoelkopf at Yale University in New Haven, Connecticut, who co-founded the start-up Quantum Circuits, and former IBM applied physicist Chad Rigetti, who set up Rigetti in",
"title": ""
},
{
"docid": "1e50abe2821e6dad2e8ede1a163e8cc8",
"text": "In vitro dissolution/release tests are an important tool in the drug product development phase as well as in its quality control and the regulatory approval process. Mucosal drug delivery systems are aimed to provide both local and systemic drug action via mucosal surfaces of the body and exhibit significant differences in formulation design, as well as in their physicochemical and release characteristics. Therefore it is not possible to devise a single test system which would be suitable for release testing of such complex dosage forms. This article is aimed to provide a comprehensive review of both compendial and noncompendial methods used for in vitro dissolution/release testing of novel mucosal drug delivery systems aimed for ocular, nasal, oromucosal, vaginal and rectal administration.",
"title": ""
},
{
"docid": "30e798ef3668df14f1625d40c53011a0",
"text": "Classification with big data has become one of the latest trends when talking about learning from the available information. The data growth in the last years has rocketed the interest in effectively acquiring knowledge to analyze and predict trends. The variety and veracity that are related to big data introduce a degree of uncertainty that has to be handled in addition to the volume and velocity requirements. This data usually also presents what is known as the problem of classification with imbalanced datasets, a class distribution where the most important concepts to be learned are presented by a negligible number of examples in relation to the number of examples from the other classes. In order to adequately deal with imbalanced big data we propose the Chi-FRBCS-BigDataCS algorithm, a fuzzy rule based classification system that is able to deal with the uncertainly that is introduced in large volumes of data without disregarding the learning in the underrepresented class. The method uses the MapReduce framework to distribute the computational operations of the fuzzy model while it includes cost-sensitive learning techniques in its design to address the imbalance that is present in the data. The good performance of this approach is supported by the experimental analysis that is carried out over twenty-four imbalanced big data cases of study. The results obtained show that the proposal is able to handle these problems obtaining competitive results both in the classification performance of the model and the time needed for the computation. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "425d927136ad3fc0f967ea8e64d8f209",
"text": "UNLABELLED\nThere is a clear need for brief, but sensitive and specific, cognitive screening instruments as evidenced by the popularity of the Addenbrooke's Cognitive Examination (ACE).\n\n\nOBJECTIVES\nWe aimed to validate an improved revision (the ACE-R) which incorporates five sub-domain scores (orientation/attention, memory, verbal fluency, language and visuo-spatial).\n\n\nMETHODS\nStandard tests for evaluating dementia screening tests were applied. A total of 241 subjects participated in this study (Alzheimer's disease=67, frontotemporal dementia=55, dementia of Lewy Bodies=20; mild cognitive impairment-MCI=36; controls=63).\n\n\nRESULTS\nReliability of the ACE-R was very good (alpha coefficient=0.8). Correlation with the Clinical Dementia Scale was significant (r=-0.321, p<0.001). Two cut-offs were defined (88: sensitivity=0.94, specificity=0.89; 82: sensitivity=0.84, specificity=1.0). Likelihood ratios of dementia were generated for scores between 88 and 82: at a cut-off of 82 the likelihood of dementia is 100:1. A comparison of individual age and education matched groups of MCI, AD and controls placed the MCI group performance between controls and AD and revealed MCI patients to be impaired in areas other than memory (attention/orientation, verbal fluency and language).\n\n\nCONCLUSIONS\nThe ACE-R accomplishes standards of a valid dementia screening test, sensitive to early cognitive dysfunction.",
"title": ""
},
{
"docid": "422183692a08138189271d4d7af407c7",
"text": "Scene flow describes the motion of 3D objects in real world and potentially could be the basis of a good feature for 3D action recognition. However, its use for action recognition, especially in the context of convolutional neural networks (ConvNets), has not been previously studied. In this paper, we propose the extraction and use of scene flow for action recognition from RGB-D data. Previous works have considered the depth and RGB modalities as separate channels and extract features for later fusion. We take a different approach and consider the modalities as one entity, thus allowing feature extraction for action recognition at the beginning. Two key questions about the use of scene flow for action recognition are addressed: how to organize the scene flow vectors and how to represent the long term dynamics of videos based on scene flow. In order to calculate the scene flow correctly on the available datasets, we propose an effective self-calibration method to align the RGB and depth data spatially without knowledge of the camera parameters. Based on the scene flow vectors, we propose a new representation, namely, Scene Flow to Action Map (SFAM), that describes several long term spatio-temporal dynamics for action recognition. We adopt a channel transform kernel to transform the scene flow vectors to an optimal color space analogous to RGB. This transformation takes better advantage of the trained ConvNets models over ImageNet. Experimental results indicate that this new representation can surpass the performance of state-of-the-art methods on two large public datasets.",
"title": ""
}
] |
scidocsrr
|
c4ff1bae68e8e1d9cde109c65924ede6
|
Enhancing CNN Incremental Learning Capability with an Expanded Network
|
[
{
"docid": "7d112c344167add5749ab54de184e224",
"text": "Since Krizhevsky won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 competition with the brilliant deep convolutional neural networks (D-CNNs), researchers have designed lots of D-CNNs. However, almost all the existing very deep convolutional neural networks are trained on the giant ImageNet datasets. Small datasets like CIFAR-10 has rarely taken advantage of the power of depth since deep models are easy to overfit. In this paper, we proposed a modified VGG-16 network and used this model to fit CIFAR-10. By adding stronger regularizer and using Batch Normalization, we achieved 8.45% error rate on CIFAR-10 without severe overfitting. Our results show that the very deep CNN can be used to fit small datasets with simple and proper modifications and don't need to re-design specific small networks. We believe that if a model is strong enough to fit a large dataset, it can also fit a small one.",
"title": ""
},
{
"docid": "9b1874fb7e440ad806aa1da03f9feceb",
"text": "Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added domain, typically as many as the original network. We propose a method called Deep Adaptation Modules (DAM) that constrains newly learned filters to be linear combinations of existing ones. DAMs precisely preserve performance on the original domain, require a fraction (typically 13%, dependent on network architecture) of the number of parameters compared to standard fine-tuning procedures and converge in less cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior.",
"title": ""
},
{
"docid": "5092b52243788c4f4e0c53e7556ed9de",
"text": "This work attempts to address two fundamental questions about the structure of the convolutional neural networks (CNN): 1) why a nonlinear activation function is essential at the filter output of all intermediate layers? 2) what is the advantage of the two-layer cascade system over the one-layer system? A mathematical model called the “REctified-COrrelations on a Sphere” (RECOS) is proposed to answer these two questions. After the CNN training process, the converged filter weights define a set of anchor vectors in the RECOS model. Anchor vectors represent the frequently occurring patterns (or the spectral components). The necessity of rectification is explained using the RECOS model. Then, the behavior of a two-layer RECOS system is analyzed and compared with its one-layer counterpart. The LeNet-5 and the MNIST dataset are used to illustrate discussion points. Finally, the RECOS model is generalized to a multilayer system with the AlexNet as an example.",
"title": ""
}
] |
[
{
"docid": "6b1f584a5665bda68a5215de5aed2fc7",
"text": "Most semi-supervised learning models propagate the labels over the Laplacian graph, where the graph should be built beforehand. However, the computational cost of constructing the Laplacian graph matrix is very high. On the other hand, when we do classification, data points lying around the decision boundary (boundary points) are noisy for learning the correct classifier and deteriorate the classification performance. To address these two challenges, in this paper, we propose an adaptive semi-supervised learning model. Different from previous semi-supervised learning approaches, our new model needn't construct the graph Laplacian matrix. Thus, our method avoids the huge computational cost required by previous methods, and achieves a computational complexity linear to the number of data points. Therefore, our method is scalable to large-scale data. Moreover, the proposed model adaptively suppresses the weights of boundary points, such that our new model is robust to the boundary points. An efficient algorithm is derived to alternatively optimize the model parameter and class probability distribution of the unlabeled data, such that the induction of classifier and the transduction of labels are adaptively unified into one framework. Extensive experimental results on six real-world data sets show that the proposed semi-supervised learning model outperforms other related methods in most cases.",
"title": ""
},
{
"docid": "7a9b9633243d84978d9e975744642e18",
"text": "Our aim is to provide a pixel-level object instance labeling of a monocular image. We build on recent work [27] that trained a convolutional neural net to predict instance labeling in local image patches, extracted exhaustively in a stride from an image. A simple Markov random field model using several heuristics was then proposed in [27] to derive a globally consistent instance labeling of the image. In this paper, we formulate the global labeling problem with a novel densely connected Markov random field and show how to encode various intuitive potentials in a way that is amenable to efficient mean field inference [13]. Our potentials encode the compatibility between the global labeling and the patch-level predictions, contrast-sensitive smoothness as well as the fact that separate regions form different instances. Our experiments on the challenging KITTI benchmark [8] demonstrate that our method achieves a significant performance boost over the baseline [27].",
"title": ""
},
{
"docid": "e913d5a0d898df3db28b97b27757b889",
"text": "Speech-language pathologists tend to rely on the noninstrumental swallowing evaluation in making recommendations about a patient’s diet and management plan. The present study was designed to examine the sensitivity and specificity of the accuracy of using the chin-down posture during the clinical/bedside swallowing assessment. In 15 patients with acute stroke and clinically suspected oropharyngeal dysphagia, the correlation between clinical and videofluoroscopic findings was examined. Results identified that there is a difference in outcome prediction using the chin-down posture during the clinical/bedside assessment of swallowing compared to assessment by videofluoroscopy. Results are discussed relative to statistical and clinical perspectives, including site of lesion and factors to be considered in the design of an overall treatment plan for a patient with disordered swallowing.",
"title": ""
},
{
"docid": "523a1bc4ac20bd0bbabd85a8eea66c5b",
"text": "Crime is a major social problem in the United States, threatening public safety and disrupting the economy. Understanding patterns in criminal activity allows for the prediction of future high-risk crime “hot spots” and enables police precincts to more effectively allocate officers to prevent or respond to incidents. With the ever-increasing ability of states and organizations to collect and store detailed data tracking crime occurrence, a significant amount of data with spatial and temporal information has been collected. How to use the benefit of massive spatial-temporal information to precisely predict the regional crime rates becomes necessary. The recurrent neural network model has been widely proven effective for detecting the temporal patterns in a time series. In this study, we propose the Spatio-Temporal neural network (STNN) to precisely forecast crime hot spots with embedding spatial information. We evaluate the model using call-for-service data provided by the Portland, Oregon Police Bureau (PPB) for a 5-year period from March 2012 through the end of December 2016. We show that our STNN model outperforms a number of classical machine learning approaches and some alternative neural network architectures.",
"title": ""
},
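The STNN abstract does not give architectural details, so the sketch below is not the authors' model; it only illustrates the general pattern of combining a temporal recurrent encoder over past crime counts with a learned spatial (region) embedding, written in PyTorch. All layer sizes, the embedding scheme, and the prediction head are assumptions made for the example.

```python
import torch
import torch.nn as nn

class SpatioTemporalCrimeModel(nn.Module):
    """Toy spatio-temporal forecaster: an LSTM over past crime counts for one region,
    combined with a learned embedding of that region's identity."""

    def __init__(self, n_regions, hidden=32, embed_dim=8):
        super().__init__()
        self.region_embed = nn.Embedding(n_regions, embed_dim)
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden + embed_dim, 1)   # next-period crime intensity

    def forward(self, counts, region_ids):
        # counts: (batch, time, 1), region_ids: (batch,)
        _, (h, _) = self.lstm(counts)                  # h: (1, batch, hidden)
        spatial = self.region_embed(region_ids)        # (batch, embed_dim)
        return self.head(torch.cat([h[-1], spatial], dim=1))

model = SpatioTemporalCrimeModel(n_regions=100)
counts = torch.rand(4, 12, 1)                          # 4 regions, 12 past periods
regions = torch.tensor([0, 5, 17, 42])
pred = model(counts, regions)
print(pred.shape)                                      # torch.Size([4, 1])
```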
{
"docid": "aae743c3254352ff973dcb8fbff55299",
"text": "Software Defined Radar is the latest trend in radar development. To handle enhanced radar signal processing techniques, advanced radars need to be able of generating various types of waveforms, such as frequency modulated or phase coded, and to perform multiple functions. The adoption of a Software Defined Radio system makes easier all these abilities. In this work, the implementation of a Software Defined Radar system for target tracking using the Universal Software Radio Peripheral platform is discussed. For the first time, an experimental characterization in terms of radar application is performed on the latest Universal Software Radio Peripheral NI2920, demonstrating a strongly improved target resolution with respect to the first generation platform.",
"title": ""
},
{
"docid": "60ad412d0d6557d2a06e9914bbf3c680",
"text": "Helpfulness of online reviews is a multi-faceted concept that can be driven by several types of factors. This study was designed to extend existing research on online review helpfulness by looking at not just the quantitative factors (such as word count), but also qualitative aspects of reviewers (including reviewer experience, reviewer impact, reviewer cumulative helpfulness). This integrated view uncovers some insights that were not available before. Our findings suggest that word count has a threshold in its effects on review helpfulness. Beyond this threshold, its effect diminishes significantly or becomes near non-existent. Reviewer experience and their impact were not statistically significant predictors of helpfulness, but past helpfulness records tended to predict future helpfulness ratings. Review framing was also a strong predictor of helpfulness. As a result, characteristics of reviewers and review messages have a varying degree of impact on review helpfulness. Theoretical and practical implications are discussed. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d3e2efde80890e469684a41287833eb6",
"text": "Recent work has suggested reducing electricity generation cost by cutting the peak to average ratio (PAR) without reducing the total amount of the loads. However, most of these proposals rely on consumer's willingness to act. In this paper, we propose an approach to cut PAR explicitly from the supply side. The resulting cut loads are then distributed among consumers by the means of a multiunit auction which is done by an intelligent agent on behalf of the consumer. This approach is also in line with the future vision of the smart grid to have the demand side matched with the supply side. Experiments suggest that our approach reduces overall system cost and gives benefit to both consumers and the energy provider.",
"title": ""
},
{
"docid": "4a8448ab4c1c9e0a1df5e2d1c1d20417",
"text": "We present an empirical framework for testing game strategies in The Settlers of Catan, a complex win-lose game that lacks any analytic solution. This framework provides the means to change different components of an autonomous agent's strategy, and to test them in suitably controlled ways via performance metrics in game simulations and via comparisons of the agent's behaviours with those exhibited in a corpus of humans playing the game. We provide changes to the game strategy that not only improve the agent's strength, but corpus analysis shows that they also bring the agent closer to a model of human players.",
"title": ""
},
{
"docid": "065eb4ca2fbef1a8d0d4029b178a0c98",
"text": "Melanoma is the deadliest type of skin cancer with highest mortality rate. However, the annihilation in its early stage implies a high survival rate therefore, it demands early diagnosis. The accustomed diagnosis methods are costly and cumbersome due to the involvement of experienced experts as well as the requirements for the highly equipped environment. The recent advancements in computerized solutions for this diagnosis are highly promising with improved accuracy and efficiency. In this article, a method for the identification and classification of the lesion based on probabilistic distribution and best features selection is proposed. The probabilistic distribution such as normal distribution and uniform distribution are implemented for segmentation of lesion in the dermoscopic images. Then multi-level features are extracted and parallel strategy is performed for fusion. A novel entropy-based method with the combination of Bhattacharyya distance and variance are calculated for the selection of best features. Only selected features are classified using multi-class support vector machine, which is selected as a base classifier. The proposed method is validated on three publicly available datasets such as PH2, ISIC (i.e. ISIC MSK-2 and ISIC UDA), and Combined (ISBI 2016 and ISBI 2017), including multi-resolution RGB images and achieved accuracy of 97.5%, 97.75%, and 93.2%, respectively. The base classifier performs significantly better on proposed features fusion and selection method as compared to other methods in terms of sensitivity, specificity, and accuracy. Furthermore, the presented method achieved satisfactory segmentation results on selected datasets.",
"title": ""
},
{
"docid": "31dfedb06716502fcf33871248fd7e9e",
"text": "Multi-sensor precipitation datasets including two products from the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) and estimates from Climate Prediction Center Morphing Technique (CMORPH) product were quantitatively evaluated to study the monsoon variability over Pakistan. Several statistical and graphical techniques are applied to illustrate the nonconformity of the three satellite products from the gauge observations. During the monsoon season (JAS), the three satellite precipitation products captures the intense precipitation well, all showing high correlation for high rain rates (>30 mm/day). The spatial and temporal satellite rainfall error variability shows a significant geo-topography dependent distribution, as all the three products overestimate over mountain ranges in the north and coastal region in the south parts of Indus basin. The TMPA-RT product tends to overestimate light rain rates (approximately 100%) and the bias is low for high rain rates (about ±20%). In general, daily comparisons from 2005 to 2010 show the best agreement between the TMPA-V7 research product and gauge observations with correlation coefficient values ranging from moderate (0.4) to high (0.8) over the spatial domain of Pakistan. The seasonal variation of rainfall frequency has large biases (100–140%) over high latitudes (36N) with complex terrain for daily, monsoon, and pre-monsoon comparisons. Relatively low uncertainties and errors (Bias ±25% and MAE 1–10 mm) were associated with the TMPA-RT product during the monsoon-dominated region (32–35N), thus demonstrating their potential use for developing an operational hydrological application of the satellite-based near real-time products in Pakistan for flood monitoring. 2014 COSPAR. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "db9ff7ade6b863707bf595e2b866745b",
"text": "Pneumatic devices require tight tolerances to keep them leak-free. Specialized companies offer various off-the-shelf devices, while these work well for many applications, there are also situations where custom design and production of pneumatic parts are desired. Cost efficiency, design flexibility, rapid prototyping, and MRI compatibility requirements are reasons why we investigated a method to design and produce different pneumatic devices using a laser cutter from acrylic, acetal, and rubber-like materials. The properties of the developed valves, pneumatic cylinders, and stepper motors were investigated. At 4-bar working pressure, the 4/3-way valves are capable of 5-Hz switching frequency and provide at most 22-L/min airflow. The pneumatic cylinder delivers 48 N of force, the acrylic stepper motor 30 N. The maximum switching frequency over 6-m long transmission lines is 4.5 Hz, using 2-mm tubing. A MRI-compatible robotic biopsy system driven by the pneumatic stepper motors is also demonstrated. We have shown that it is possible to construct pneumatic devices using laser-cutting techniques. This way, plastic MRI-compatible cylinders, stepper motors, and valves can be developed. Provided that a laser-cutting machine is available, the described pneumatic devices can be fabricated within hours at relatively low cost, making it suitable for rapid prototyping applications.",
"title": ""
},
{
"docid": "d9366c0456eedecd396a9aa1dbc31e35",
"text": "A connectionist model is presented, the TraceLink model, that implements an autonomous \"off-line\" consolidation process. The model consists of three subsystems: (1) a trace system (neocortex), (2) a link system (hippocampus and adjacent regions), and (3) a modulatory system (basal forebrain and other areas). The model is able to account for many of the characteristics of anterograde and retrograde amnesia, including Ribot gradients, transient global amnesia, patterns of shrinkage of retrograde amnesia, and correlations between anterograde and retrograde amnesia or the absence thereof (e.g., in isolated retrograde amnesia). In addition, it produces normal forgetting curves and can exhibit permastore. It also offers an explanation for the advantages of learning under high arousal for long-term retention.",
"title": ""
},
{
"docid": "15ba6a0a5ce45fbecf33bff5d2194250",
"text": "Recently, pathological diagnosis plays a crucial role in many areas of medicine, and some researchers have proposed many models and algorithms for improving classification accuracy by extracting excellent feature or modifying the classifier. They have also achieved excellent results on pathological diagnosis using tongue images. However, pixel values can't express intuitive features of tongue images and different classifiers for training samples have different adaptability. Accordingly, this paper presents a robust approach to infer the pathological characteristics by observing tongue images. Our proposed method makes full use of the local information and similarity of tongue images. Firstly, tongue images in RGB color space are converted to Lab. Then, we compute tongue statistics information. In the calculation process, Lab space dictionary is created at first, through it, we compute statistic value for each dictionary value. After that, a method based on Doublets is taken for feature optimization. At last, we use XGBOOST classifier to predict the categories of tongue images. We achieve classification accuracy of 95.39% using statistics feature and the improved classifier, which is helpful for TCM (Traditional Chinese Medicine) diagnosis.",
"title": ""
},
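The tongue-statistics pipeline above leaves the construction of the Lab-space dictionary unspecified, so the sketch below is one plausible reading rather than the paper's method: pixels are converted to Lab with OpenCV, a dictionary is built with KMeans (an assumption on my part), each image is described by a normalized histogram over the dictionary, and an XGBoost classifier is trained on those histograms. The dictionary size, the random placeholder data, and the XGBoost hyperparameters are not from the paper.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans
from xgboost import XGBClassifier

def lab_histogram_features(rgb_image, kmeans):
    """Convert an RGB tongue image to Lab and describe it as a normalized
    histogram over a fixed Lab-colour dictionary (the KMeans centroids)."""
    lab = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2LAB).reshape(-1, 3).astype(np.float32)
    codes = kmeans.predict(lab)
    hist = np.bincount(codes, minlength=kmeans.n_clusters).astype(np.float64)
    return hist / hist.sum()

# Hypothetical data: RGB images (uint8 arrays) and integer class labels.
images = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(40)]
labels = np.random.randint(0, 3, size=40)

# Build the Lab-space dictionary from pixels pooled across training images.
pixels = np.concatenate(
    [cv2.cvtColor(im, cv2.COLOR_RGB2LAB).reshape(-1, 3) for im in images]
).astype(np.float32)
dictionary = KMeans(n_clusters=32, n_init=10, random_state=0).fit(pixels)

X = np.array([lab_histogram_features(im, dictionary) for im in images])
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X, labels)
print(clf.predict(X[:5]))
```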
{
"docid": "b5b8ae3b7b307810e1fe39630bc96937",
"text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called clusterspecific or time-varying covariates. The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.",
"title": ""
},
{
"docid": "70a970138428aeb06c139abb893a56a9",
"text": "Two sequentially rotated, four stage, wideband circularly polarized high gain microstrip patch array antennas at Ku-band are investigated and compared by incorporating both unequal and equal power division based feeding networks. Four stages of sequential rotation is used to create 16×16 patch array which provides wider common bandwidth between the impedance matching (S11 < −10dB), 3dB axial ratio and 3dB gain of 12.3% for the equal power divider based feed array and 13.2% for the unequal power divider based feed array in addition to high polarization purity. The high peak gain of 28.5dBic is obtained for the unequal power division feed based array antennas compared to 26.8dBic peak gain in the case of the equal power division based feed array antennas. The additional comparison between two feed networks based arrays reveals that the unequal power divider based array antennas provide better array characteristics than the equal power divider based feed array antennas.",
"title": ""
},
{
"docid": "ae43fc77cfe3e88f00a519744407eed7",
"text": "In this work we use the recent advances in representation learning to propose a neural architecture for the problem of natural language inference. Our approach is aligned to mimic how a human does the natural language inference process given two statements. The model uses variants of Long Short Term Memory (LSTM), attention mechanism and composable neural networks, to carry out the task. Each part of our model can be mapped to a clear functionality humans do for carrying out the overall task of natural language inference. The model is end-to-end differentiable enabling training by stochastic gradient descent. On Stanford Natural Language Inference(SNLI) dataset, the proposed model achieves better accuracy numbers than all published models in literature.",
"title": ""
},
{
"docid": "5de07054546347e150aeabe675234966",
"text": "Smart farming is seen to be the future of agriculture as it produces higher quality of crops by making farms more intelligent in sensing its controlling parameters. Analyzing massive amount of data can be done by accessing and connecting various devices with the help of Internet of Things (IoT). However, it is not enough to have an Internet support and self-updating readings from the sensors but also to have a self-sustainable agricultural production with the use of analytics for the data to be useful. This study developed a smart hydroponics system that is used in automating the growing process of the crops using exact inference in Bayesian Network (BN). Sensors and actuators are installed in order to monitor and control the physical events such as light intensity, pH, electrical conductivity, water temperature, and relative humidity. The sensor values gathered were used to build the Bayesian Network in order to infer the optimum value for each parameter. A web interface is developed wherein the user can monitor and control the farm remotely via the Internet. Results have shown that the fluctuations in terms of the sensor values were minimized in the automatic control using BN as compared to the manual control. The yielded crop on the automatic control was 66.67% higher than the manual control which implies that the use of exact inference in BN aids in producing high-quality crops. In the future, the system can use higher data analytics and longer data gathering to improve the accuracy of inference.",
"title": ""
},
{
"docid": "c2ac1c1f08e7e4ccba14ea203acba661",
"text": "This paper describes an approach to determine a layout for the order picking area in warehouses, such that the average travel distance for the order pickers is minimized. We give analytical formulas by which the average length of an order picking route can be calculated for two different routing policies. The optimal layout can be determined by using such formula as an objective function in a non-linear programming model. The optimal number of aisles in an order picking area appears to depend strongly on the required storage space and the pick list size.",
"title": ""
},
{
"docid": "4768b338044e38949f50c5856bc1a07c",
"text": "Radio-frequency identification (RFID) technology provides an effective tool for managing traceability along food supply chains. This is because it allows automatic digital registration of data, and therefore reduces errors and enables the availability of information on demand. A complete traceability system can be developed in the wine production sector by joining this technology with the use of wireless sensor networks for monitoring at the vineyards. A proposal of such a merged solution for a winery in Spain has been designed, deployed in an actual environment, and evaluated. It was shown that the system could provide a competitive advantage to the company by improving visibility of the processes performed and the associated control over product quality. Much emphasis has been placed on minimizing the impact of the new system in the current activities.",
"title": ""
}
] |
scidocsrr
|
b865905fd2e1ec70274a97c1f9722c99
|
On Efficiency and Scalability of Software-Defined Infrastructure for Adaptive Applications
|
[
{
"docid": "5cc26542d0f4602b2b257e19443839b3",
"text": "Accurate performance evaluation of cloud computing resources is a necessary prerequisite for ensuring that quality of service parameters remain within agreed limits. In this paper, we employ both the analytical and simulation modeling to addresses the complexity of cloud computing systems. Analytical model is comprised of distinct functional submodels, the results of which are combined in an iterative manner to obtain the solution with required accuracy. Our models incorporate the important features of cloud centers such as batch arrival of user requests, resource virtualization, and realistic servicing steps, to obtain important performance metrics such as task blocking probability and total waiting time incurred on user requests. Also, our results reveal important insights for capacity planning to control delay of servicing users requests.",
"title": ""
}
] |
[
{
"docid": "ef6adbe1c2a0863eb6447cebffaaf0fe",
"text": "How best to evaluate a saliency model's ability to predict where humans look in images is an open research question. The choice of evaluation metric depends on how saliency is defined and how the ground truth is represented. Metrics differ in how they rank saliency models, and this results from how false positives and false negatives are treated, whether viewing biases are accounted for, whether spatial deviations are factored in, and how the saliency maps are pre-processed. In this paper, we provide an analysis of 8 different evaluation metrics and their properties. With the help of systematic experiments and visualizations of metric computations, we add interpretability to saliency scores and more transparency to the evaluation of saliency models. Building off the differences in metric properties and behaviors, we make recommendations for metric selections under specific assumptions and for specific applications.",
"title": ""
},
{
"docid": "ad860674746dcf04156b3576174a9120",
"text": "Predicting the popularity dynamics of Twitter hashtags has a broad spectrum of applications. Existing works have primarily focused on modeling the popularity of individual tweets rather than the underlying hashtags. As a result, they fail to consider several realistic factors contributing to hashtag popularity. In this paper, we propose Large Margin Point Process (LMPP), a probabilistic framework that integrates hashtag-tweet influence and hashtaghashtag competitions, the two factors which play important roles in hashtag propagation. Furthermore, while considering the hashtag competitions, LMPP looks into the variations of popularity rankings of the competing hashtags across time. Extensive experiments on seven real datasets demonstrate that LMPP outperforms existing popularity prediction approaches by a significant margin. Additionally, LMPP can accurately predict the relative rankings of competing hashtags, offering additional advantage over the state-of-the-art baselines.",
"title": ""
},
{
"docid": "40c16b5db17fa31a1bdae7e66a297ea7",
"text": "Code smells, i.e., symptoms of poor design and implementation choices applied by programmers during the development of a software project [2], represent an important factor contributing to technical debt [3]. The research community spent a lot of effort studying the extent to which code smells tend to remain in a software project for long periods of time [9], as well as their negative impact on non-functional properties of source code [4, 7]. As a consequence, several tools and techniques have been proposed to help developers in detecting code smells and to suggest refactoring opportunities (e.g., [5, 6, 8]).\n So far, almost all detectors identify code smells using structural properties of source code. However, recent studies have indicated that code smells detected by existing tools are generally ignored (and thus not refactored) by the developers [1]. A possible reason is that developers do not perceive the code smells identified by the tool as actual design problems or, if they do, they are not able to practically work on such code smells. In other words, there is misalignment between what is considered smelly by the tool and what is actually refactorable by developers.\n In a previous paper [6], we introduced a tool named TACO that uses textual analysis to detect code smells. The results indicated that textual and structural techniques are complementary: while some code smell instances in a software system can be correctly identified by both TACO and the alternative structural approaches, other instances can be only detected by one of the two [6].\n In this paper, we investigate whether code smells detected using textual information are as difficult to identify and refactor as structural smells or if they follow a different pattern during software evolution. We firstly performed a repository mining study considering 301 releases and 183,514 commits from 20 open source projects (i) to verify whether textually and structurally detected code smells are treated differently, and (ii) to analyze their likelihood of being resolved with regards to different types of code changes, e.g., refactoring operations. Since our quantitative study cannot explain relation and causation between code smell types and maintenance activities, we perform a qualitative study with 19 industrial developers and 5 software quality experts in order to understand (i) how code smells identified using different sources of information are perceived, and (ii) whether textually or structurally detected code smells are easier to refactor. In both studies, we focused on five code smell types, i.e., Blob, Feature Envy, Long Method, Misplaced Class, and Promiscuous Package.\n The results of our studies indicate that textually detected code smells are perceived as harmful as the structural ones, even though they do not exceed any typical software metrics' value (e.g., lines of code in a method). Moreover, design problems in source code affected by textual-based code smells are easier to identify and refactor. As a consequence, developers' activities tend to decrease the intensity of textual code smells, positively impacting their likelihood of being resolved. Vice versa, structural code smells typically increase in intensity over time, indicating that maintenance operations are not aimed at removing or limiting them. 
Indeed, while developers perceive source code affected by structural-based code smells as harmful, they face more problems in correctly identifying the actual design problems affecting these code components and/or the right refactoring operation to apply to remove them.",
"title": ""
},
{
"docid": "0e1cc3ddf39c9fff13894cf1d924c8cc",
"text": "This paper introduces NSGA-Net, an evolutionary approach for neural architecture search (NAS). NSGA-Net is designed with three goals in mind: (1) a NAS procedure for multiple, possibly conflicting, objectives, (2) efficient exploration and exploitation of the space of potential neural network architectures, and (3) output of a diverse set of network architectures spanning a trade-off frontier of the objectives in a single run. NSGA-Net is a population-based search algorithm that explores a space of potential neural network architectures in three steps, namely, a population initialization step that is based on prior-knowledge from hand-crafted architectures, an exploration step comprising crossover and mutation of architectures and finally an exploitation step that applies the entire history of evaluated neural architectures in the form of a Bayesian Network prior. Experimental results suggest that combining the objectives of minimizing both an error metric and computational complexity, as measured by FLOPS, allows NSGA-Net to find competitive neural architectures near the Pareto front of both objectives on two different tasks, object classification and object alignment. NSGA-Net obtains networks that achieve 3.72% (at 4.5 million FLOP) error on CIFAR-10 classification and 8.64% (at 26.6 million FLOP) error on the CMU-Car alignment task. Code available at: https://github.com/ianwhale/nsga-net.",
"title": ""
},
{
"docid": "f7a42937973a45ed4fb5d23e3be316a9",
"text": "Domain specific information retrieval process has been a prominent and ongoing research in the field of natural language processing. Many researchers have incorporated different techniques to overcome the technical and domain specificity and provide a mature model for various domains of interest. The main bottleneck in these studies is the heavy coupling of domain experts, that makes the entire process to be time consuming and cumbersome. In this study, we have developed three novel models which are compared against a golden standard generated via the on line repositories provided, specifically for the legal domain. The three different models incorporated vector space representations of the legal domain, where document vector generation was done in two different mechanisms and as an ensemble of the above two. This study contains the research being carried out in the process of representing legal case documents into different vector spaces, whilst incorporating semantic word measures and natural language processing techniques. The ensemble model built in this study, shows a significantly higher accuracy level, which indeed proves the need for incorporation of domain specific semantic similarity measures into the information retrieval process. This study also shows, the impact of varying distribution of the word similarity measures, against varying document vector dimensions, which can lead to improvements in the process of legal information retrieval. keywords: Document Embedding, Deep Learning, Information Retrieval",
"title": ""
},
{
"docid": "fd2b1d2a4d44f0535ceb6602869ffe1c",
"text": "A conventional FCM algorithm does not fully utilize the spatial information in the image. In this paper, we present a fuzzy c-means (FCM) algorithm that incorporates spatial information into the membership function for clustering. The spatial function is the summation of the membership function in the neighborhood of each pixel under consideration. The advantages of the new method are the following: (1) it yields regions more homogeneous than those of other methods, (2) it reduces the spurious blobs, (3) it removes noisy spots, and (4) it is less sensitive to noise than other techniques. This technique is a powerful method for noisy image segmentation and works for both single and multiple-feature data with spatial information.",
"title": ""
},
{
"docid": "31346876446c21b92f088b852c0201b2",
"text": "In this paper, the closed-form design method of an Nway dual-band Wilkinson hybrid power divider is proposed. This symmetric structure including N groups of two sections of transmission lines and two isolated resistors is described which can split a signal into N equiphase equiamplitude parts at two arbitrary frequencies (dual-band) simultaneously, where N can be odd or even. Based on the rigorous evenand odd-mode analysis, the closed-form design equations are derived. For verification, various numerical examples are designed, calculated and compared while two practical examples including two ways and three ways dual-band microstrip power dividers are fabricated and measured. It is very interesting that this generalized power divider with analytical design equations can be designed for wideband applications when the frequency-ratio is relatively small. In addition, it is found that the conventional N-way hybrid Wilkinson power divider for single-band applications is a special case (the frequency-ratio equals to 3) of this generalized power divider.",
"title": ""
},
{
"docid": "26e90d8dca906c2e7dd023441ba4438a",
"text": "In this paper, we show that the handedness of a planar chiral checkerboard-like metasurface can be dynamically switched by modulating the local sheet impedance of the metasurface structure. We propose a metasurface design to realize the handedness switching and theoretically analyze its electromagnetic characteristic based on Babinet’s principle. Numerical simulations of the proposed metasurface are performed to validate the theoretical analysis. It is demonstrated that the polarity of asymmetric transmission for circularly polarized waves, which is determined by the planar chirality of the metasurface, is inverted by switching the sheet impedance at the interconnection points of the checkerboard-like structure. The physical origin of the asymmetric transmission is also discussed in terms of the surface current and charge distributions on the metasurface.",
"title": ""
},
{
"docid": "49db1291f3f52a09037d6cfd305e8b5f",
"text": "This paper examines cognitive beliefs and affect influencing ones intention to continue using (continuance) information systems (IS). Expectationconfirmation theory is adapted from the consumer behavior literature and integrated with theoretical and empirical findings from prior IS usage research to theorize a model of IS continuance. Five research hypotheses derived from this model are empirically validated using a field survey of online banking users. The results suggest that users continuance intention is determined by their satisfaction with IS use and perceived usefulness of continued IS use. User satisfaction, in turn, is influenced by their confirmation of expectation from prior IS use and perceived usefulness. Postacceptance perceived usefulness is influenced by Ron Weber was the accepting senior editor for this paper. users confirmation level. This study draws attention to the substantive differences between acceptance and continuance behaviors, theorizes and validates one of the earliest theoretical models of IS continuance, integrates confirmation and user satisfaction constructs within our current understanding of IS use, conceptualizes and creates an initial scale for measuring IS continuance, and offers an initial explanation for the acceptancediscontinuance anomaly.",
"title": ""
},
{
"docid": "a74b091706f4aeb384d2bf3d477da67d",
"text": "Amazon's Echo and its conversational agent Alexa open exciting opportunities for understanding how people perceive and interact with virtual agents. Drawing from user reviews of the Echo posted to Amazon.com, this case study explores the degree to which user reviews indicate personification of the device, sociability level of interactions, factors linked with personification, and influences on user satisfaction. Results indicate marked variance in how people refer to the device, with over half using the personified name Alexa but most referencing the device with object pronouns. Degree of device personification is linked with sociability of interactions: greater personification co-occurs with more social interactions with the Echo. Reviewers mentioning multiple member households are more likely to personify the device than reviewers mentioning living alone. Even after controlling for technical issues, personification predicts user satisfaction with the Echo.",
"title": ""
},
{
"docid": "5f66a3faa36f273831b13b4345c2bf15",
"text": "The structure of blood vessels in the sclerathe white part of the human eye, is unique for every individual, hence it is best suited for human identification. However, this is a challenging research because it has a high insult rate (the number of occasions the valid user is rejected). In this survey firstly a brief introduction is presented about the sclera based biometric authentication. In addition, a literature survey is presented. We have proposed simplified method for sclera segmentation, a new method for sclera pattern enhancement based on histogram equalization and line descriptor based feature extraction and pattern matching with the help of matching score between the two segment descriptors. We attempt to increase the awareness about this topic, as much of the research is not done in this area.",
"title": ""
},
{
"docid": "c5cb0ae3102fcae584e666a1ba3e73ed",
"text": "A new generation of computational cameras is emerging, spawned by the introduction of the Lytro light-field camera to the consumer market and recent accomplishments in the speed at which light can be captured. By exploiting the co-design of camera optics and computational processing, these cameras capture unprecedented details of the plenoptic function: a ray-based model for light that includes the color spectrum as well as spatial, temporal, and directional variation. Although digital light sensors have greatly evolved in the last years, the visual information captured by conventional cameras has remained almost unchanged since the invention of the daguerreotype. All standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons. In the process, all visual information is irreversibly lost, except for a two-dimensional, spatially varying subset: the common photograph.\n This course reviews the plenoptic function and discusses approaches for optically encoding high-dimensional visual information that is then recovered computationally in post-processing. It begins with an overview of the plenoptic dimensions and shows how much of this visual information is irreversibly lost in conventional image acquisition. Then it discusses the state of the art in joint optical modulation and computation reconstruction for acquisition of high-dynamic-range imagery and spectral information. It unveils the secrets behind imaging techniques that have recently been featured in the news and outlines other aspects of light that are of interest for various applications before concluding with question, answers, and a short discussion.",
"title": ""
},
{
"docid": "9570975ee04cd1fc689a00b4499c22fc",
"text": "Big Data is a phrase used to mean a massive volume of both structured and unstructured data that is so large it is difficult to process using traditional database and software techniques. In most enterprise scenarios the volume of data is too big or it moves too fast or it exceeds current processing capacity. Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications and services), [1][2] which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned, or third-party data centers [3] that may be located far from the user–ranging in distance from across a city to across the world. Cloud computing relies on sharing of resources to achieve coherence and economy of scale, similar to a utility (like the electricity grid) over an electricity network. This paper discusses approaches and environments for carrying out analytics on Clouds for Big Data applications. It revolves around four important areas of analytics and Big Data, namely (i) data management and supporting architectures; (ii) model development and scoring; (iii) visualisation and user interaction; and (iv) business models. Through a detailed survey, we identify possible gaps in technology and provide recommendations for the research community on future directions on Cloud-supported Big Data computing and analytics solutions.",
"title": ""
},
{
"docid": "26992fcd5b560f11eb388d27d51527e9",
"text": "The concept of digital twin, a kind of virtual things with the precise states of the corresponding physical systems, is suggested by industrial domains to accurately estimate the status and predict the operation of machines. Digital twin can be used for development of critical systems, such as self-driving cars and auto-production factories. There, however, will be so different digital twins in terms of resolution, complexity, modelling languages and formats. It is required to cooperate heterogeneous digital twins in standardized ways. Since a centralized digital twin system uses too big resources and energies, it is preferable to make large-scale digital twin system geographically and logically distributed over the Internet. In addition, efficient interworking functions between digital twins and the physical systems are required also. In this paper, we propose a novel architecture of large-scale digital twin platform including distributed digital twin cooperation framework, flexible data-centric communication middleware, and the platform based digital twin application to develop a reliable advanced driver assistance system.",
"title": ""
},
{
"docid": "34acaf35585fe19fea86f6f3c8aa8a0f",
"text": "This paper is concerned with deep reinforcement learning (deep RL) in continuous state and action space. It proposes a new method that can drastically speed up RL training for problems that have the property of state-action permissibility (SAP). This property says that after an action at is performed in a state st and the agent reaches the new state st+1, the agent can decide whether the action at is permissible or not permissible in state st . An action is not permissible in a state if the action can never lead to an optimal solution and thus should not have been tried. We incorporate the proposed method into a state-of-the-art deep RL algorithm to guide its training and apply it to solve the lane keeping (steering control) problem in self-driving or autonomous driving. It is shown that the proposed method can help speedup RL training markedly for the lane keeping task as compared to the RL algorithm without exploiting the SAP-based guidance and other baselines that employ constrained action space exploration strategies.",
"title": ""
},
{
"docid": "532d5655281bf409dd6a44c1f875cd88",
"text": "BACKGROUND\nOlder adults are at increased risk of experiencing loneliness and depression, particularly as they move into different types of care communities. Information and communication technology (ICT) usage may help older adults to maintain contact with social ties. However, prior research is not consistent about whether ICT use increases or decreases isolation and loneliness among older adults.\n\n\nOBJECTIVE\nThe purpose of this study was to examine how Internet use affects perceived social isolation and loneliness of older adults in assisted and independent living communities. We also examined the perceptions of how Internet use affects communication and social interaction.\n\n\nMETHODS\nOne wave of data from an ongoing study of ICT usage among older adults in assisted and independent living communities in Alabama was used. Regression analysis was used to determine the relationship between frequency of going online and isolation and loneliness (n=205) and perceptions of the effects of Internet use on communication and social interaction (n=60).\n\n\nRESULTS\nAfter controlling for the number of friends and family, physical/emotional social limitations, age, and study arm, a 1-point increase in the frequency of going online was associated with a 0.147-point decrease in loneliness scores (P=.005). Going online was not associated with perceived social isolation (P=.14). Among the measures of perception of the social effects of the Internet, each 1-point increase in the frequency of going online was associated with an increase in agreement that using the Internet had: (1) made it easier to reach people (b=0.508, P<.001), (2) contributed to the ability to stay in touch (b=0.516, P<.001), (3) made it easier to meet new people (b=0.297, P=.01, (4) increased the quantity of communication with others (b=0.306, P=.01), (5) made the respondent feel less isolated (b=0.491, P<.001), (6) helped the respondent feel more connected to friends and family (b=0.392, P=.001), and (7) increased the quality of communication with others (b=0.289, P=.01).\n\n\nCONCLUSIONS\nUsing the Internet may be beneficial for decreasing loneliness and increasing social contact among older adults in assisted and independent living communities.",
"title": ""
},
{
"docid": "2f23d51ffd54a6502eea07883709d016",
"text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.",
"title": ""
},
{
"docid": "ed05b17a9d8a3e330b098a7b0b0dcd34",
"text": "Accurate prediction of fault prone modules (a module is equivalent to a C function or a C+ + method) in software development process enables effective detection and identification of defects. Such prediction models are especially beneficial for large-scale systems, where verification experts need to focus their attention and resources to problem areas in the system under development. This paper presents a novel methodology for predicting fault prone modules, based on random forests. Random forests are an extension of decision tree learning. Instead of generating one decision tree, this methodology generates hundreds or even thousands of trees using subsets of the training data. Classification decision is obtained by voting. We applied random forests in five case studies based on NASA data sets. The prediction accuracy of the proposed methodology is generally higher than that achieved by logistic regression, discriminant analysis and the algorithms in two machine learning software packages, WEKA [I. H. Witten et al. (1999)] and See5. The difference in the performance of the proposed methodology over other methods is statistically significant. Further, the classification accuracy of random forests is more significant over other methods in larger data sets.",
"title": ""
},
{
"docid": "149fa8c20c5656373930474237337b21",
"text": "OBJECTIVES: To compare the predictive value of pH, base deficit and lactate for the occurrence of moderate-to-severe hypoxic ischaemic encephalopathy (HIE) and systemic complications of asphyxia in term infants with intrapartum asphyxia.STUDY DESIGN: We retrospectively reviewed the records of 61 full-term neonates (≥37 weeks gestation) suspected of having suffered from a significant degree of intrapartum asphyxia from a period of January 1997 to December 2001.The clinical signs of HIE, if any, were categorized using Sarnat and Sarnat classification as mild (stage 1), moderate (stage 2) or severe (stage 3). Base deficit, pH and plasma lactate levels were measured from indwelling arterial catheters within 1 hour after birth and thereafter alongwith every blood gas measurement. The results were correlated with the subsequent presence or absence of moderate-to-severe HIE by computing receiver operating characteristic curves.RESULTS: The initial lactate levels were significantly higher (p=0.001) in neonates with moderate-to-severe HIE (mean±SD=11.09±4.6) as compared to those with mild or no HIE (mean±SD=7.1±4.7). Also, the lactate levels took longer to normalize in these babies. A plasma lactate concentration >7.5±mmol/l was associated with moderate-or-severe HIE with a sensitivity of 94% and specificity of 67%. The sensitivity and negative predictive value of lactate was greater than that of the pH or base deficit.CONCLUSIONS: The highest recorded lactate level in the first hour of life and serial measurements of lactate are important predictors of moderate-to-severe HIE.",
"title": ""
},
{
"docid": "2d05142e12f63a354ec0c48436cd3697",
"text": "Author Name Disambiguation Neil R. Smalheiser and Vetle I. Torvik",
"title": ""
}
] |
scidocsrr
|
84ccd2ad9d82da02eecfcea23401f585
|
Learning of Coordination Policies for Robotic Swarms
|
[
{
"docid": "1847cce79f842a7d01f1f65721c1f007",
"text": "Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNN, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.",
"title": ""
}
] |
[
{
"docid": "97a13a2a11db1b67230ab1047a43e1d6",
"text": "Road detection from the perspective of moving vehicles is a challenging issue in autonomous driving. Recently, many deep learning methods spring up for this task, because they can extract high-level local features to find road regions from raw RGB data, such as convolutional neural networks and fully convolutional networks (FCNs). However, how to detect the boundary of road accurately is still an intractable problem. In this paper, we propose siamesed FCNs (named “s-FCN-loc”), which is able to consider RGB-channel images, semantic contours, and location priors simultaneously to segment the road region elaborately. To be specific, the s-FCN-loc has two streams to process the original RGB images and contour maps, respectively. At the same time, the location prior is directly appended to the siamesed FCN to promote the final detection performance. Our contributions are threefold: 1) An s-FCN-loc is proposed that learns more discriminative features of road boundaries than the original FCN to detect more accurate road regions. 2) Location prior is viewed as a type of feature map and directly appended to the final feature map in s-FCN-loc to promote the detection performance effectively, which is easier than other traditional methods, namely, different priors for different inputs (image patches). 3) The convergent speed of training s-FCN-loc model is 30% faster than the original FCN because of the guidance of highly structured contours. The proposed approach is evaluated on the KITTI road detection benchmark and one-class road detection data set, and achieves a competitive result with the state of the arts.",
"title": ""
},
{
"docid": "46a4e4dbcb9b6656414420a908b51cc5",
"text": "We review Bacry and Lévy-Leblond’s work on possible kinematics as applied to 2-dimensional spacetimes, as well as the nine types of 2-dimensional Cayley–Klein geometries, illustrating how the Cayley–Klein geometries give homogeneous spacetimes for all but one of the kinematical groups. We then construct a two-parameter family of Clifford algebras that give a unified framework for representing both the Lie algebras as well as the kinematical groups, showing that these groups are true rotation groups. In addition we give conformal models for these spacetimes.",
"title": ""
},
{
"docid": "2b3335d6fb1469c4848a201115a78e2c",
"text": "Laser grooving is used for the singulation of advanced CMOS wafers since it is believed that it exerts lower mechanical stress than traditional blade dicing. The very local heating of wafers, however, might result in high thermal stress around the heat affected zone. In this work we present a model to predict the temperature distribution, material removal, and the resulting stress, in a sandwiched structure of metals and dielectric materials that are commonly found in the back-end of line of semiconductor wafers. Simulation results on realistic three dimensional back-end structures reveal that the presence of metals clearly affects both the ablation depth, and the stress in the material. Experiments showed a similar observation for the ablation depth. The shape of the crater, however, was found to be more uniform than predicted by simulations, which is probably due to the redistribution of molten metal.",
"title": ""
},
{
"docid": "e561ff9b3f836c0d005db1ffdacd6f56",
"text": "A new era of Information Warfare has arrived. Various actors, including state-sponsored ones, are weaponizing information on Online Social Networks to run false information campaigns with targeted manipulation of public opinion on specific topics. These false information campaigns can have dire consequences to the public: mutating their opinions and actions, especially with respect to critical world events like major elections. Evidently, the problem of false information on the Web is a crucial one, and needs increased public awareness, as well as immediate attention from law enforcement agencies, public institutions, and in particular, the research community. In this paper, we make a step in this direction by providing a typology of the Web’s false information ecosystem, comprising various types of false information, actors, and their motives. We report a comprehensive overview of existing research on the false information ecosystem by identifying several lines of work: 1) how the public perceives false information; 2) understanding the propagation of false information; 3) detecting and containing false information on the Web; and 4) false information on the political stage. In this work, we pay particular attention to political false information as: 1) it can have dire consequences to the community (e.g., when election results are mutated) and 2) previous work show that this type of false information propagates faster and further when compared to other types of false information. Finally, for each of these lines of work, we report several future research directions that can help us better understand and mitigate the emerging problem of false information dissemination on the Web.",
"title": ""
},
{
"docid": "b759613b1eedd29d32fbbc118767b515",
"text": "Deep learning has been shown successful in a number of domains, ranging from acoustics, images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, a significant amount of research efforts have been devoted to this area, greatly advancing graph analyzing techniques. In this survey, we comprehensively review different kinds of deep learning methods applied to graphs. We divide existing methods into three main categories: semi-supervised methods including Graph Neural Networks and Graph Convolutional Networks, unsupervised methods including Graph Autoencoders, and recent advancements including Graph Recurrent Neural Networks and Graph Reinforcement Learning. We then provide a comprehensive overview of these methods in a systematic manner following their history of developments. We also analyze the differences of these methods and how to composite different architectures. Finally, we briefly outline their applications and discuss potential future directions.",
"title": ""
},
{
"docid": "473d8cbcd597c961819c5be6ab2e658e",
"text": "Mobile terrestrial laser scanners (MTLS), based on light detection and ranging sensors, are used worldwide in agricultural applications. MTLS are applied to characterize the geometry and the structure of plants and crops for technical and scientific purposes. Although MTLS exhibit outstanding performance, their high cost is still a drawback for most agricultural applications. This paper presents a low-cost alternative to MTLS based on the combination of a Kinect v2 depth sensor and a real time kinematic global navigation satellite system (GNSS) with extended color information capability. The theoretical foundations of this system are exposed along with some experimental results illustrating their performance and limitations. This study is focused on open-field agricultural applications, although most conclusions can also be extrapolated to similar outdoor uses. The developed Kinect-based MTLS system allows to select different acquisition frequencies and fields of view (FOV), from one to 512 vertical slices. The authors conclude that the better performance is obtained when a FOV of a single slice is used, but at the price of a very low measuring speed. With that particular configuration, plants, crops, and objects are reproduced accurately. Future efforts will be directed to increase the scanning efficiency by improving both the hardware and software components and to make it feasible using both partial and full FOV.",
"title": ""
},
{
"docid": "ade9860157680b2ca6820042f0cda302",
"text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this 4 OVERVIEW OF THE TOPIC large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &",
"title": ""
},
{
"docid": "dc8f5af4c7681fa2065a11c26cf05e2b",
"text": "Bitcoin is the first e-cash system to see widespread adoption. While Bitcoin offers the potential for new types of financial interaction, it has significant limitations regarding privacy. Specifically, because the Bitcoin transaction log is completely public, users' privacy is protected only through the use of pseudonyms. In this paper we propose Zerocoin, a cryptographic extension to Bitcoin that augments the protocol to allow for fully anonymous currency transactions. Our system uses standard cryptographic assumptions and does not introduce new trusted parties or otherwise change the security model of Bitcoin. We detail Zerocoin's cryptographic construction, its integration into Bitcoin, and examine its performance both in terms of computation and impact on the Bitcoin protocol.",
"title": ""
},
{
"docid": "c4caa735537ccd82c83a330fa85e142d",
"text": "We propose a unified product embedded representation that is optimized for the task of retrieval-based product recommendation. To this end, we introduce a new way to fuse modality-specific product embeddings into a joint product embedding, in order to leverage both product content information, such as textual descriptions and images, and product collaborative filtering signal. By introducing the fusion step at the very end of our architecture, we are able to train each modality separately, allowing us to keep a modular architecture that is preferable in real-world recommendation deployments. We analyze our performance on normal and hard recommendation setups such as cold-start and cross-category recommendations and achieve good performance on a large product shopping dataset.",
"title": ""
},
{
"docid": "8b3a58dc4f3aceae7723c17895775a1a",
"text": "While the technology acceptance model (TAM), introduced in 1986, continues to be the most widely applied theoretical model in the IS field, few previous efforts examined its accomplishments and limitations. This study traces TAM’s history, investigates its findings, and cautiously predicts its future trajectory. One hundred and one articles published by leading IS journals and conferences in the past eighteen years are examined and summarized. An openended survey of thirty-two leading IS researchers assisted in critically examining TAM and specifying future directions.",
"title": ""
},
{
"docid": "4107e9288ea64d039211acf48a091577",
"text": "The trisomy 18 syndrome can result from a full, mosaic, or partial trisomy 18. The main clinical findings of full trisomy 18 consist of prenatal and postnatal growth deficiency, characteristic facial features, clenched hands with overriding fingers and nail hypoplasia, short sternum, short hallux, major malformations, especially of the heart, andprofound intellectual disability in the surviving older children. The phenotype of partial trisomy 18 is extremely variable. The aim of this article is to systematically review the scientific literature on patients with partial trisomy 18 in order to identify regions of chromosome 18 that may be responsible for the specific clinical features of the trisomy 18 syndrome. We confirmed that trisomy of the short arm of chromosome 18 does not seem to cause the major features. However, we found candidate regions on the long arm of chromosome 18 for some of the characteristic clinical features, and a thus a phenotypic map is proposed. Our findings confirm the hypothesis that single critical regions/candidate genes are likely to be responsible for specific characteristics of the syndrome, while a single critical region for the whole Edwards syndrome phenotype is unlikely to exist.",
"title": ""
},
{
"docid": "a7ac6803295b7359f5c8c0fcdd26e0e7",
"text": "The Internet of Things (IoT), the idea of getting real-world objects connected with each other, will change the way users organize, obtain and consume information radically. Internet of Things (IoT) enables various applications (crop growth monitoring and selection, irrigation decision support, etc.) in Digital Agriculture domain. The Wireless Sensors Network (WSN) is widely used to build decision support systems. These systems overcomes many problems in the real-world. One of the most interesting fields having an increasing need of decision support systems is Precision Agriculture (PA). Through sensor networks, agriculture can be connected to the IoT, which allows us to create connections among agronomists, farmers and crops regardless of their geographical differences. With the help of this approach which provides real-time information about the lands and crops that will help farmers make right decisions. The major advantage is implementation of WSN in Precision Agriculture (PA) will optimize the usage of water fertilizers while maximizing the yield of the crops and also will help in analyzing the weather conditions of the field.",
"title": ""
},
{
"docid": "1d0241833add973cc7cf6117735b7a1a",
"text": "This paper describes the conception and the construction of a low cost spin coating machine incorporating inexpensive electronic components and open-source technology based on Arduino platform. We present and discuss the details of the electrical, mechanical and control parts. This system will coat thin film in a micro level thickness and the microcontroller ATM 328 circuit controls and adjusts the spinning speed. We prepare thin films with good uniformity for various thicknesses by this spin coating system. The thickness and uniformity of deposited films were verified by determining electronic absorption spectra. We show that thin film thickness depends on the spin speed in the range of 2000–3500 rpm. We compare the results obtained on TiO2 layers deposited by our developed system to those grown by using a standard commercial spin coating systems.",
"title": ""
},
{
"docid": "d6c95e47caf4e01fa5934b861a962f6e",
"text": "Whereas theoretical work suggests that deep architectures might be more efficient at representing highly-varying functions, training deep architectures was unsuccessful until the recent advent of algorithms based on unsupervised pretraining. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. Answering these questions is important if learning in deep architectures is to be further improved. We attempt to shed some light on these questions through extensive simulations. The experiments confirm and clarify the advantage of unsupervised pre-training. They demonstrate the robustness of the training procedure with respect to the random initialization, the positive effect of pre-training in terms of optimization and its role as a regularizer. We empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples.",
"title": ""
},
{
"docid": "08c97484fe3784e2f1fd42606b915f83",
"text": "In the present study we manipulated the importance of performing two event-based prospective memory tasks. In Experiment 1, the event-based task was assumed to rely on relatively automatic processes, whereas in Experiment 2 the event-based task was assumed to rely on a more demanding monitoring process. In contrast to the first experiment, the second experiment showed that importance had a positive effect on prospective memory performance. In addition, the occurrence of an importance effect on prospective memory performance seemed to be mainly due to the features of the prospective memory task itself, and not to the characteristics of the ongoing tasks that only influenced the size of the importance effect. The results suggest that importance instructions may improve prospective memory if the prospective task requires the strategic allocation of attentional monitoring resources.",
"title": ""
},
{
"docid": "da33a718aa9dbf6e9feaff5e63765639",
"text": " This paper introduces a new frequency-domain approach to describe the relationships (direction of information flow) between multivariate time series based on the decomposition of multivariate partial coherences computed from multivariate autoregressive models. We discuss its application and compare its performance to other approaches to the problem of determining neural structure relations from the simultaneous measurement of neural electrophysiological signals. The new concept is shown to reflect a frequency-domain representation of the concept of Granger causality.",
"title": ""
},
{
"docid": "9a4e9c73465d1026c2f5c91ec17eaf74",
"text": "Devising an expressive question taxonomy is a central problem in question generation. Through examination of a corpus of human-human taskoriented tutoring, we have found that existing question taxonomies do not capture all of the tutorial questions present in this form of tutoring. We propose a hierarchical question classification scheme for tutorial questions in which the top level corresponds to the tutor’s goal and the second level corresponds to the question type. The application of this hierarchical classification scheme to a corpus of keyboard-to-keyboard tutoring of introductory computer science yielded high inter-rater reliability, suggesting that such a scheme is appropriate for classifying tutor questions in design-oriented tutoring. We discuss numerous open issues that are highlighted by the current analysis.",
"title": ""
},
{
"docid": "db2e7cc9ea3d58e0c625684248e2ef80",
"text": "PURPOSE\nTo review applications of Ajzen's theory of planned behavior in the domain of health and to verify the efficiency of the theory to explain and predict health-related behaviors.\n\n\nMETHODS\nMost material has been drawn from Current Contents (Social and Behavioral Sciences and Clinical Medicine) from 1985 to date, together with all peer-reviewed articles cited in the publications thus identified.\n\n\nFINDINGS\nThe results indicated that the theory performs very well for the explanation of intention; an averaged R2 of .41 was observed. Attitude toward the action and perceived behavioral control were most often the significant variables responsible for this explained variation in intention. The prediction of behavior yielded an averaged R2 of .34. Intention remained the most important predictor, but in half of the studies reviewed perceived behavioral control significantly added to the prediction.\n\n\nCONCLUSIONS\nThe efficiency of the model seems to be quite good for explaining intention, perceived behavioral control being as important as attitude across health-related behavior categories. The efficiency of the theory, however, varies between health-related behavior categories.",
"title": ""
},
{
"docid": "4630ade03760cb8ec1da11b16703b3f1",
"text": "Dengue infection is a major cause of morbidity and mortality in Malaysia. To date, much research on dengue infection conducted in Malaysia have been published. One hundred and sixty six articles related to dengue in Malaysia were found from a search through a database dedicated to indexing all original data relevant to medicine published between the years 2000-2013. Ninety articles with clinical relevance and future research implications were selected and reviewed. These papers showed evidence of an exponential increase in the disease epidemic and a varying pattern of prevalent dengue serotypes at different times. The early febrile phase of dengue infection consist of an undifferentiated fever. Clinical suspicion and ability to identify patients at risk of severe dengue infection is important. Treatment of dengue infection involves judicious use of volume expander and supportive care. Potential future research areas are discussed to narrow our current knowledge gaps on dengue infection.",
"title": ""
},
{
"docid": "fdbdac5f319cd46aeb73be06ed64cbb9",
"text": "Recently deep neural networks (DNNs) have been used to learn speaker features. However, the quality of the learned features is not sufficiently good, so a complex back-end model, either neural or probabilistic, has to be used to address the residual uncertainty when applied to speaker verification. This paper presents a convolutional time-delay deep neural network structure (CT-DNN) for speaker feature learning. Our experimental results on the Fisher database demonstrated that this CT-DNN can produce high-quality speaker features: even with a single feature (0.3 seconds including the context), the EER can be as low as 7.68%. This effectively confirmed that the speaker trait is largely a deterministic short-time property rather than a longtime distributional pattern, and therefore can be extracted from just dozens of frames.",
"title": ""
}
] |
scidocsrr
|
d4306bb0059d1418f0cb09241742f867
|
Enterprise Architecture Management Patterns for Enterprise Architecture Visioning
|
[
{
"docid": "73fdbdbff06b57195cde51ab5135ccbe",
"text": "1 Abstract This paper describes five widely-applicable business strategy patterns. The initiate patterns where inspired Michael Porter's work on competitive strategy (1980). By applying the pattern form we are able to explore the strategies and consequences in a fresh light. The patterns form part of a larger endeavour to apply pattern thinking to the business domain. This endeavour seeks to map the business domain in patterns, this involves develop patterns, possibly based on existing literature, and mapping existing patterns into a coherent model of the business domain. If you find the paper interesting you might be interested in some more patterns that are currently (May 2005) in development. These describe in more detail how these strategies can be implemented: This paper is one of the most downloaded pieces on my website. I'd be interested to know more about who is downloading the paper, what use your making of it and any comments you have on it-allan@allankelly.net. Cost Leadership Build an organization that can produce your chosen product more cheaply than anyone else. You can then choose to undercut the opposition (and sell more) or sell at the same price (and make more profit per unit.) Differentiated Product Build a product that fulfils the same functions as your competitors but is clearly different, e.g. it is better quality, novel design, or carries a brand name. Customer will be prepared to pay more for your product than the competition. Market Focus You can't compete directly on cost or differentiation with the market leader; so, focus on a niche in the market. The niche will be smaller than the overall market (so sales will be lower) but the customer requirements will be different, serve these customers requirements better then the mass market and they will buy from you again and again. Sweet Spot Customers don't always want the best or the cheapest, so, produce a product that combines elements of differentiation with reasonable cost so you offer superior value. However, be careful, customer tastes",
"title": ""
}
] |
[
{
"docid": "3129b636e3739281ba59721765eeccb9",
"text": "Despite the rapid adoption of Facebook as a means of photo sharing, minimal research has been conducted to understand user gratification behind this activity. In order to address this gap, the current study examines users’ gratifications in sharing photos on Facebook by applying Uses and Gratification (U&G) theory. An online survey completed by 368 respondents identified six different gratifications, namely, affection, attention seeking, disclosure, habit, information sharing, and social influence, behind sharing digital photos on Facebook. Some of the study’s prominent findings were: age was in positive correlation with disclosure and social influence gratifications; gender differences were identified among habit and disclosure gratifications; number of photos shared was negatively correlated with habit and information sharing gratifications. The study’s implications can be utilized to refine existing and develop new features and services bridging digital photos and social networking services.",
"title": ""
},
{
"docid": "ae73f7c35c34050b87d8bf2bee81b620",
"text": "D esigning a complex Web site so that it readily yields its information is a difficult task. The designer must anticipate the users' needs and structure the site accordingly. Yet users may have vastly differing views of the site's information, their needs may change over time, and their usage patterns may violate the designer's initial expectations. As a result, Web sites are all too often fossils cast in HTML, while user navigation is idiosyncratic and evolving. Understanding user needs requires understanding how users view the data available and how they actually use the site. For a complex site this can be difficult since user tests are expensive and time-consuming, and the site's server logs contain massive amounts of data. We propose a Web management assistant: a system that can process massive amounts of data about site usage Examining the potential use of automated adaptation to improve Web sites for visitors.",
"title": ""
},
{
"docid": "251a47eb1a5307c5eba7372ce09ea641",
"text": "A new class of target link flooding attacks (LFA) can cut off the Internet connections of a target area without being detected because they employ legitimate flows to congest selected links. Although new mechanisms for defending against LFA have been proposed, the deployment issues limit their usages since they require modifying routers. In this paper, we propose LinkScope, a novel system that employs both the end-to-end and the hopby-hop network measurement techniques to capture abnormal path performance degradation for detecting LFA and then correlate the performance data and traceroute data to infer the target links or areas. Although the idea is simple, we tackle a number of challenging issues, such as conducting large-scale Internet measurement through noncooperative measurement, assessing the performance on asymmetric Internet paths, and detecting LFA. We have implemented LinkScope with 7174 lines of C codes and the extensive evaluation in a testbed and the Internet show that LinkScope can quickly detect LFA with high accuracy and low false positive rate.",
"title": ""
},
{
"docid": "33cab03ab9773efe22ba07dd461811ef",
"text": "This paper describes a real-time feature-based stereo SLAM system that is robust and accurate in a wide variety of conditions –indoors, outdoors, with dynamic objects, changing light conditions, fast robot motions and large-scale loops. Our system follows a parallel-tracking-and-mapping strategy: a tracking thread estimates the camera pose at frame rate; and a mapping thread updates a keyframe-based map at a lower frequency. The stereo constraints of our system allow a robust initialization –avoiding the well-known bootstrapping problem in monocular systems– and the recovery of the real scale. Both aspects are essential for its practical use in real robotic systems that interact with the physical world. In this paper we provide the implementation details, an exhaustive evaluation of the system in public datasets and a comparison of most state-of-the-art feature detectors and descriptors on the presented system. For the benefit of the community, its code for ROS (Robot Operating System) has been released.",
"title": ""
},
{
"docid": "815fe60934f0313c56e631d73b998c95",
"text": "The scientific credibility of findings from clinical trials can be undermined by a range of problems including missing data, endpoint switching, data dredging, and selective publication. Together, these issues have contributed to systematically distorted perceptions regarding the benefits and risks of treatments. While these issues have been well documented and widely discussed within the profession, legislative intervention has seen limited success. Recently, a method was described for using a blockchain to prove the existence of documents describing pre-specified endpoints in clinical trials. Here, we extend the idea by using smart contracts - code, and data, that resides at a specific address in a blockchain, and whose execution is cryptographically validated by the network - to demonstrate how trust in clinical trials can be enforced and data manipulation eliminated. We show that blockchain smart contracts provide a novel technological solution to the data manipulation problem, by acting as trusted administrators and providing an immutable record of trial history.",
"title": ""
},
{
"docid": "0a340a2dc4d9a6acd90d3bedad07f84a",
"text": "BACKGROUND\nKhat (Catha edulis) contains a psychoactive substance, cathinone, which produces central nervous system stimulation analogous to amphetamine. It is believed that khat chewing has a negative impact on the physical and mental health of individuals as well as the socioeconomic condition of the family and the society at large. There is lack of community based studies regarding the link between khat use and poor mental health. The objective of this study was to evaluate the association between khat use and mental distress and to determine the prevalence of mental distress and khat use in Jimma City.\n\n\nMETHODS\nA cross-sectional community-based study was conducted in Jimma City from October 15 to November 15, 2009. The study used a structured questionnaire and Self Reporting Questionnaire-20 designed by WHO and which has been translated into Amharic and validated in Ethiopia. By multi stage sampling, 1200 individuals were included in the study. Data analysis was done using SPSS for window version 13.\n\n\nRESULTS\nThe Khat use prevalence was found to be 37.8% during the study period. Majority of the khat users were males (73.5%), age group 18-24 (41.1%), Muslims (46.6%), Oromo Ethnic group (47.2%), single (51.4%), high school students (46.8%) and employed (80%). Using cut-off point 7 out of 20 on the Self Reporting Questionnaire-20, 25.8% of the study population was found to have mental distress. Males (26.6%), persons older than 55 years (36.4%), Orthodox Christians (28.4%), Kefficho Ethnic groups (36.4%), widowed (44.8%), illiterates (43.8%) and farmers (40.0%) had higher rates of mental distress. We found that mental distress and khat use have significant association (34.7% Vs 20.5%, P<0.001). There was also significant association between mental distress and frequency of khat use (41% Vs 31.1%, P<0.001)\n\n\nCONCLUSION\nThe high rate of khat use among the young persons calls for public intervention to prevent more serious forms of substance use disorders. Our findings suggest that persons who use khat suffer from higher rates of mental distress. However, causal association could not be established due to cross-sectional study design.",
"title": ""
},
{
"docid": "e914a66fc4c5b35e3fd24427ffdcbd96",
"text": "This paper proposes two control algorithms for a sensorless speed control of a PMSM. One is a new low pass filter. This filter is designed to have the variable cutoff frequency according to the rotor speed. And the phase delay angle is so small as to be ignored not only in the low speed region but also in the high speed region including the field weakening region. Sensorless control of a PMSM can be guaranteed without any delay angle by using the proposed low pass filter. The other is a new iterative sliding mode observer (I-SMO). Generally the sliding mode observer (SMO) has the attractive features of the robustness to disturbances, and parameter variations. In the high speed region the switching gain of SMO must be large enough to operate the sliding mode stably. But the estimated currents and back EMF can not help having much ripple or chattering components especially in the high speed region including the flux weakening region. Using I-SMO can reduce chattering components of the estimated currents and back EMF in all speed regions without any help of the expensive hardware such as the high performance DSP and A/D converter. Experimental results show the usefulness of the proposed two algorithms for the sensorless drive system of a PMSM.",
"title": ""
},
{
"docid": "70a94ef8bf6750cdb4603b34f0f1f005",
"text": "What does this paper demonstrate. We show that a very simple 2D architecture (in the sense that it does not make any assumption or reasoning about the 3D information of the object) generally used for object classification, if properly adapted to the specific task, can provide top performance also for pose estimation. More specifically, we demonstrate how a 1-vs-all classification framework based on a Fisher Vector (FV) [1] pyramid or convolutional neural network (CNN) based features [2] can be used for pose estimation. In addition, suppressing neighboring viewpoints during training seems key to get good results.",
"title": ""
},
{
"docid": "1bb694f68643eaf70e09ce086a77ea34",
"text": "If you get the printed book in on-line book store, you may also find the same problem. So, you must move store to store and search for the available there. But, it will not happen here. The book that we will offer right here is the soft file concept. This is what make you can easily find and get this information security principles and practice by reading this site. We offer you the best product, always and always.",
"title": ""
},
{
"docid": "d4a96cc393a3f1ca3bca94a57e07941e",
"text": "With the increasing number of scientific publications, research paper recommendation has become increasingly important for scientists. Most researchers rely on keyword-based search or following citations in other papers, in order to find relevant research articles. And usually they spend a lot of time without getting satisfactory results. This study aims to propose a personalized research paper recommendation system, that facilitate this task by recommending papers based on users' explicit and implicit feedback. The users will be allowed to explicitly specify the papers of interest. In addition, user activities (e.g., viewing abstracts or full-texts) will be analyzed in order to enhance users' profiles. Most of the current research paper recommendation and information retrieval systems use the classical bag-of-words methods, which don't consider the context of the words and the semantic similarity between the articles. This study will use Recurrent Neural Networks (RNNs) to discover continuous and latent semantic features of the papers, in order to improve the recommendation quality. The proposed approach utilizes PubMed so far, since it is frequently used by physicians and scientists, but it can easily incorporate other datasets in the future.",
"title": ""
},
{
"docid": "619165e7f74baf2a09271da789e724df",
"text": "MOST verbal communication occurs in contexts where the listener can see the speaker as well as hear him. However, speech perception is normally regarded as a purely auditory process. The study reported here demonstrates a previously unrecognised influence of vision upon speech perception. It stems from an observation that, on being shown a film of a young woman's talking head, in which repeated utterances of the syllable [ba] had been dubbed on to lip movements for [ga], normal adults reported hearing [da]. With the reverse dubbing process, a majority reported hearing [bagba] or [gaba]. When these subjects listened to the soundtrack from the film, without visual input, or when they watched untreated film, they reported the syllables accurately as repetitions of [ba] or [ga]. Subsequent replications confirm the reliability of these findings; they have important implications for the understanding of speech perception.",
"title": ""
},
{
"docid": "2e3f05ee44b276b51c1b449e4a62af94",
"text": "We make some simple extensions to the Active Shape Model of Cootes et al. [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using twoinstead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series.",
"title": ""
},
{
"docid": "04384b62c17f9ff323db4d51bea86fe9",
"text": "Imbalanced data widely exist in many high-impact applications. An example is in air traffic control, where among all three types of accident causes, historical accident reports with ‘personnel issues’ are much more than the other two types (‘aircraft issues’ and ‘environmental issues’) combined. Thus, the resulting data set of accident reports is highly imbalanced. On the other hand, this data set can be naturally modeled as a network, with each node representing an accident report, and each edge indicating the similarity of a pair of accident reports. Up until now, most existing work on imbalanced data analysis focused on the classification setting, and very little is devoted to learning the node representations for imbalanced networks. To bridge this gap, in this paper, we first propose Vertex-Diminished Random Walk (VDRW) for imbalanced network analysis. It is significantly different from the existing Vertex Reinforced Random Walk by discouraging the random particle to return to the nodes that have already been visited. This design is particularly suitable for imbalanced networks as the random particle is more likely to visit the nodes from the same class, which is a desired property for learning node representations. Furthermore, based on VDRW, we propose a semi-supervised network representation learning framework named ImVerde for imbalanced networks, where context sampling uses VDRW and the limited label information to create node-context pairs, and balanced-batch sampling adopts a simple under-sampling method to balance these pairs from different classes. Experimental results demonstrate that ImVerde based on VDRW outperforms stateof-the-art algorithms for learning network representations from imbalanced data.",
"title": ""
},
{
"docid": "8c658d7663f9849a0759160886fc5690",
"text": "The design and fabrication of a 76.5 GHz, planar, three beam antenna is presented. This antenna has greater than 31 dB of gain and sidelobes that are less than -29 dB below the main beam. This antenna demonstrates the ability to achieve very low sidelobes in a simple, compact, and planar structure. This is accomplished uniquely by feeding waveguide slots that are coupled to microstrip radiating elements. This illumination technique allows for a very low loss and highly efficient structure. Also, a novel beam-scanning concept is introduced. To orient a beam from bore sight it requires phase differences between the excitations of the successive elements. This is achieved by varying the width of the W-band waveguide. This simple, beam steering two-dimensional structure offers the advantage of easy manufacturing compared to present lens and alternative technologies.",
"title": ""
},
{
"docid": "eb861eed8718e227fc2615bb6fcf0841",
"text": "Immediate effects of verb-specific syntactic (subcategorization) information were found in a cross-modal naming experiment, a self-paced reading experiment, and an experiment in which eye movements were monitored. In the reading studies, syntactic misanalysis effects in sentence complements (e.g., \"The student forgot the solution was...\") occurred at the verb in the complement (e.g., was) for matrix verbs typically used with noun phrase complements but not for verbs typically used with sentence complements. In addition, a complementizer effect for sentence-complement-biased verbs was not due to syntactic misanalysis but was correlated with how strongly a particular verb prefers to be followed by the complementizer that. The results support models that make immediate use of lexically specific constraints, especially constraint-based models, but are problematic for lexical filtering models.",
"title": ""
},
{
"docid": "16932e01fdea801f28ec6c4194f70352",
"text": "Plum pox virus (PPV) causes the most economically-devastating viral disease in Prunus species. Unfortunately, few natural resistance genes are available for the control of PPV. Recessive resistance to some potyviruses is associated with mutations of eukaryotic translation initiation factor 4E (eIF4E) or its isoform eIF(iso)4E. In this study, we used an RNA silencing approach to manipulate the expression of eIF4E and eIF(iso)4E towards the development of PPV resistance in Prunus species. The eIF4E and eIF(iso)4E genes were cloned from plum (Prunus domestica L.). The sequence identity between plum eIF4E and eIF(iso)4E coding sequences is 60.4% at the nucleotide level and 52.1% at the amino acid level. Quantitative real-time RT-PCR analysis showed that these two genes have a similar expression pattern in different tissues. Transgenes allowing the production of hairpin RNAs of plum eIF4E or eIF(iso)4E were introduced into plum via Agrobacterium-mediated transformation. Gene expression analysis confirmed specific reduced expression of eIF4E or eIF(iso)4E in the transgenic lines and this was associated with the accumulation of siRNAs. Transgenic plants were challenged with PPV-D strain and resistance was evaluated by measuring the concentration of viral RNA. Eighty-two percent of the eIF(iso)4E silenced transgenic plants were resistant to PPV, while eIF4E silenced transgenic plants did not show PPV resistance. Physical interaction between PPV-VPg and plum eIF(iso)4E was confirmed. In contrast, no PPV-VPg/eIF4E interaction was observed. These results indicate that eIF(iso)4E is involved in PPV infection in plum, and that silencing of eIF(iso)4E expression can lead to PPV resistance in Prunus species.",
"title": ""
},
{
"docid": "faea3dad1f13b8c4be3d4d5ffa88dcf1",
"text": "Describing the latest advances in the field, Quantitative Risk Management covers the methods for market, credit and operational risk modelling. It places standard industry approaches on a more formal footing and explores key concepts such as loss distributions, risk measures and risk aggregation and allocation principles. The book’s methodology draws on diverse quantitative disciplines, from mathematical finance and statistics to econometrics and actuarial mathematics. A primary theme throughout is the need to satisfactorily address extreme outcomes and the dependence of key risk drivers. Proven in the classroom, the book also covers advanced topics like credit derivatives.",
"title": ""
},
{
"docid": "ae28bc02e9f0891d8338980cd169ada4",
"text": "We investigated the possibility of using a machine-learning scheme in conjunction with commercial wearable EEG-devices for translating listener's subjective experience of music into scores that can be used in popular on-demand music streaming services. Our study resulted into two variants, differing in terms of performance and execution time, and hence, subserving distinct applications in online streaming music platforms. The first method, NeuroPicks, is extremely accurate but slower. It is based on the well-established neuroscientific concepts of brainwave frequency bands, activation asymmetry index and cross frequency coupling (CFC). The second method, NeuroPicksVQ, offers prompt predictions of lower credibility and relies on a custom-built version of vector quantization procedure that facilitates a novel parameterization of the music-modulated brainwaves. Beyond the feature engineering step, both methods exploit the inherent efficiency of extreme learning machines (ELMs) so as to translate, in a personalized fashion, the derived patterns into a listener's score. NeuroPicks method may find applications as an integral part of contemporary music recommendation systems, while NeuroPicksVQ can control the selection of music tracks. Encouraging experimental results, from a pragmatic use of the systems, are presented.",
"title": ""
},
{
"docid": "4ed47f48df37717148d985ad927b813f",
"text": "Given an incorrect value produced during a failed program run (e.g., a wrong output value or a value that causes the program to crash), the backward dynamic slice of the value very frequently captures the faulty code responsible for producing the incorrect value. Although the dynamic slice often contains only a small percentage of the statements executed during the failed program run, the dynamic slice can still be large and thus considerable effort may be required by the programmer to locate the faulty code.In this paper we develop a strategy for pruning the dynamic slice to identify a subset of statements in the dynamic slice that are likely responsible for producing the incorrect value. We observe that some of the statements used in computing the incorrect value may also have been involved in computing correct values (e.g., a value produced by a statement in the dynamic slice of the incorrect value may also have been used in computing a correct output value prior to the incorrect value). For each such executed statement in the dynamic slice, using the value profiles of the executed statements, we compute a confidence value ranging from 0 to 1 - a higher confidence value corresponds to greater likelihood that the execution of the statement produced a correct value. Given a failed run involving execution of a single error, we demonstrate that the pruning of a dynamic slice by excluding only the statements with the confidence value of 1 is highly effective in reducing the size of the dynamic slice while retaining the faulty code in the slice. Our experiments show that the number of distinct statements in a pruned dynamic slice are 1.79 to 190.57 times less than the full dynamic slice. Confidence values also prioritize the statements in the dynamic slice according to the likelihood of them being faulty. We show that examining the statements in the order of increasing confidence values is an effective strategy for reducing the effort of fault location.",
"title": ""
},
{
"docid": "e76b94af2a322cb90114ab51fde86919",
"text": "In this paper, we introduce a new 2D modulation scheme referred to as OTFS (Orthogonal Time Frequency & Space) that multiplexes information QAM symbols over new class of carrier waveforms that correspond to localized pulses in a signal representation called the delay-Doppler representation. OTFS constitutes a far reaching generalization of conventional time and frequency modulations such as TDM and FDM and, from a broader perspective, it establishes a conceptual link between Radar and communication. The OTFS waveforms couple with the wireless channel in a way that directly captures the underlying physics, yielding a high-resolution delay-Doppler Radar image of the constituent reflectors. As a result, the time-frequency selective channel is converted into an invariant, separable and orthogonal interaction, where all received QAM symbols experience the same localized impairment and all the delay-Doppler diversity branches are coherently combined. The high resolution delay-Doppler separation of the reflectors enables OTFS to approach channel capacity with optimal performance-complexity tradeoff through linear scaling of spectral efficiency with the MIMO order and robustness to Doppler and multipath channel conditions. OTFS is an enabler for realizing the full promise of MUMIMO gains even in challenging 5G deployment settings where adaptation is unrealistic. 1. OTFS – A NEXT GENERATION MODULATION History teaches us that every transition to a new generation of wireless network involves a disruption in the underlying air interface: beginning with the transition from 2G networks based on single carrier GSM to 3G networks based on code division multiplexing (CDMA), then followed by the transition to contemporary 4G networks based on orthogonal frequency division multiplexing (OFDM). The decision to introduce a new air interface is made when the demands of a new generation of use cases cannot be met by legacy technology – in terms of performance, capabilities, or cost. As an example, the demands for higher capacity data services drove the transition from legacy interference-limited CDMA network (that have limited flexibility for adaptation and inferior achievable throughput) to a network based on an orthogonal narrowband OFDM that is optimally fit for opportunistic scheduling and achieves higher spectral efficiency. Emerging 5G networks are required to support diverse usage scenarios, as described for example in [1]. A fundamental requirement is multi-user MIMO, which holds the promise of massive increases in mobile broadband spectral efficiency using large numbers of antenna elements at the base-station in combination with advanced precoding techniques. This promise comes at the cost of very complex architectures that cannot practically achieve capacity using traditional OFDM techniques and suffers performance degradation in the presence of time and frequency selectivity ( [2] and [3]). Other important use cases include operation under non-trivial dynamic channel conditions (for example vehicle-to-vehicle and high-speed rail) where adaptation becomes unrealistic, rendering OFDM narrowband waveforms strictly suboptimal. As a result, one is once again faced with the dilemma of finding a better suited air interface where the new guiding philosophy is: When adaptation is not a possibility one should look for ways to eliminate the need to adapt. The challenge is to do that without sacrificing performance. 
To meet this challenge one should fuse together two contradictory principles – (1) the principle of spreading (as used in CDMA) to obtain resilience to narrowband interference and to exploit channel diversity gain for increased reliability under unpredictable channel conditions and (2) the principle of orthogonality (as used in OFDM) to simplify the channel coupling for achieving higher spectral densities with a superior performance-complexity tradeoff. OTFS is a modulation scheme that carries information QAM symbols over a new class of waveforms which are spread over both time and frequency while remaining roughly orthogonal to each other under general delay-Doppler channel impairments. The key characteristic of the OTFS waveforms is related to their optimal manner of interaction with the wireless reflectors. This interaction induces a simple and symmetric coupling",
"title": ""
}
] |
scidocsrr
|
fba44c92f0153a324d800ac71a54c886
|
Gender Representation in Cinematic Content: A Multimodal Approach
|
[
{
"docid": "e95541d0401a196b03b94dd51dd63a4b",
"text": "In the information age, computer applications have become part of modern life and this has in turn encouraged the expectations of friendly interaction with them. Speech, as “the” communication mode, has seen the successful development of quite a number of applications using automatic speech recognition (ASR), including command and control, dictation, dialog systems for people with impairments, translation, etc. But the actual challenge goes beyond the use of speech in control applications or to access information. The goal is to use speech as an information source, competing, for example, with text online. Since the technology supporting computer applications is highly dependent on the performance of the ASR system, research into ASR is still an active topic, as is shown by the range of research directions suggested in (Baker et al., 2009a, 2009b). Automatic speech recognition – the recognition of the information embedded in a speech signal and its transcription in terms of a set of characters, (Junqua & Haton, 1996) – has been object of intensive research for more than four decades, achieving notable results. It is only to be expected that speech recognition advances make spoken language as convenient and accessible as online text when the recognizers reach error rates near zero. But while digit recognition has already reached a rate of 99.6%, (Li, 2008), the same cannot be said of phone recognition, for which the best rates are still under 80% 1,(Mohamed et al., 2011; Siniscalchi et al., 2007). Speech recognition based on phones is very attractive since it is inherently free from vocabulary limitations. Large Vocabulary ASR (LVASR) systems’ performance depends on the quality of the phone recognizer. That is why research teams continue developing phone recognizers, in order to enhance their performance as much as possible. Phone recognition is, in fact, a recurrent problem for the speech recognition community. Phone recognition can be found in a wide range of applications. In addition to typical LVASR systems like (Morris & Fosler-Lussier, 2008; Scanlon et al., 2007; Schwarz, 2008), it can be found in applications related to keyword detection, (Schwarz, 2008), language recognition, (Matejka, 2009; Schwarz, 2008), speaker identification, (Furui, 2005) and applications for music identification and translation, (Fujihara & Goto, 2008; Gruhne et al., 2007). The challenge of building robust acoustic models involves applying good training algorithms to a suitable set of data. The database defines the units that can be trained and",
"title": ""
},
{
"docid": "9a5e04b2a6b8e81591a602b0dd81fa10",
"text": "Direct content analysis reveals important details about movies including those of gender representations and potential biases. We investigate the differences between male and female character depictions in movies, based on patterns of language used. Specifically, we use an automatically generated lexicon of linguistic norms characterizing gender ladenness. We use multivariate analysis to investigate gender depictions and correlate them with elements of movie production. The proposed metric differentiates between male and female utterances and exhibits some interesting interactions with movie genres and the screenplay writer gender.",
"title": ""
}
] |
[
{
"docid": "06e3d228e9fac29dab7180e56f087b45",
"text": "Curiosity is thought to be an intrinsically motivated driving force for seeking information. Thus, the opportunity for an information gain (IG) should instil curiosity in humans and result in information gathering actions. To investigate if, and how, information acts as an intrinsic reward, a search task was set in a context of blurred background images which could be revealed by iterative clicking. The search task was designed such that it prevented efficient IG about the underlying images. Participants therefore had to trade between clicking regions with high search target probability or high expected image content information. Image content IG was established from “information-maps” based on participants exploration with the intention of understanding (1) the main theme of the image and (2) how interesting the image might appear to others. Note that IG is in this thesis not identical with the information theoretic concept of information gain, the quantities are however probably related. It was hypothesised that participants would be distracted by visually informative regions and that images independently rated as more interesting would yield higher image based IG. It was also hypothesised that image based IG would increase as a function of time. Results show that participants sometimes explored images driven by curiosity, and that there was considerable individual variation in which images participants were curious about. Independent interest ratings did not account for image based IG. The level of IG increased over trials, interestingly without affecting participants’ performance on the visual search task designed to prevent IG. Results support that IG is rewarding as participants learned to optimize IG over trials without compromising performance on the extrinsically motivated search; managing to both keep the cake and eat it.",
"title": ""
},
{
"docid": "ba590a4ae3bab635a07054860222744a",
"text": "Interactive Strategy Training for Active Reading and Thinking (iSTART) is a Web-based application that provides young adolescent to college-age students with high-level reading strategy training to improve comprehension of science texts. iSTART is modeled after an effective, human-delivered intervention called self-explanation reading training (SERT), which trains readers to use active reading strategies to self-explain difficult texts more effectively. To make the training more widely available, the Web-based trainer has been developed. Transforming the training from a human-delivered application to a computer-based one has resulted in a highly interactive trainer that adapts its methods to the performance of the students. The iSTART trainer introduces the strategies in a simulated classroom setting with interaction between three animated characters-an instructor character and two student characters-and the human trainee. Thereafter, the trainee identifies the strategies in the explanations of a student character who is guided by an instructor character. Finally, the trainee practices self-explanation under the guidance of an instructor character. We describe this system and discuss how appropriate feedback is generated.",
"title": ""
},
{
"docid": "88128ec1201e2202f13f2c09da0f07f2",
"text": "A new mechanism is proposed for exciting the magnetic state of a ferromagnet. Assuming ballistic conditions and using WKB wave functions, we predict that a transfer of vectorial spin accompanies an electric current flowing perpendicular to two parallel magnetic films connected by a normal metallic spacer. This spin transfer drives motions of the two magnetization vectors within their instantaneously common plane. Consequent new mesoscopic precession and switching phenomena with potential applications are predicted. PACS: 75.50.Rr; 75.70.Cn A magnetic multilayer (MML) is composed of alternating ferromagnetic and paramagnetic sublayers whose thicknesses usually range between 1 and l0 nm. The discovery in 1988 of gian t magne tore s i s tance (GMR) in such multilayers stimulates much current research [1]. Although the initial reports dealt with currents flowing in the layer planes (CIP), the magnetoresistive phenomenon is known to be even stronger for currents flowing perpendicular to the plane (CPP) [2]. We predict here that the spinpolarized nature of such a perpendicular current generally creates a mutual transference of spin angular momentum between the magnetic sublayers which is manifested in their dynamic response. This response, which occurs only for CPP geometry, we propose to characterize as spin transfer . It can dominate the Larmor response to the magnetic field induced by * Fax: + 1-914-945-3291; email: slon@watson.ibm.com. the current when the magnetic sublayer thickness is about 1 nm and the smaller of its other two dimensions is less than 10= to 10 3 r im. On this mesoscopic scale, two new phenomena become possible: a steady precession driven by a constant current, and alternatively a novel form of switching driven by a pulsed current. Other forms of current-driven magnetic response without the use of any electromagnetically induced magnetic field are already known. Reports of both theory and experiments show how the exchange effect of external current flowing through a ferromagnetic domain wall causes it to move [3]. Even closer to the present subject is the magnetic response to tunneling current in the case of the sandwich structure f e r r o m a g n e t / i n s u l a t o r / f e r r o m a g n e t ( F / I / F ) predicted previously [4]. Unfortunately, theoretical relations indicated that the dissipation of energy, and therefore temperature rise, needed to produce more than barely observable spin-transfer through a tunneling barrier is prohibitively large. 0304-8853/96/$15.00 Copyright © 1996 Elsevier Science B.V. All rights reserved. PH S0304-8853(96)00062-5 12 ,/.C, Slo,cgewski / Journal of Magnetism and Magnetic Materials 159 (1996) L/ L7 However. the advent of multilayers incorporating very thin paramagnetic metallic spacers, rather than a barrier, places the realization of spin transfer in a different light. In the first place, the metallic spacer implies a low resistance and therefore low Ohmic dissipation for a given current, to which spin-transfer effects are proportional. Secondly, numerous experiments [5] and theories [6] show that the fundamental interlayer exchange coupling of RKKY type diminishes in strength and varies in sign as spacer thickness increases. Indeed, there exist experimental spacers which are thick enough (e.g. 4 nm) for the exchange coupling to be negligible even though spin relaxation is too weak to significantly diminish the GMR effect which relies on preservation of spin direction during electron transit across the spacer. 
Moreover, the same fact of long spin relaxation time in magnetic multilayers is illustrated on an even larger distance scale, an order of magnitude greater than the circa 10 nm electron mean free path, by spin injection experiments [7]. It follows, as we show below, that interesting current-driven spin-transfer effects are expected under laboratory conditions involving very small distance scales. We begin with simple arguments to explain current-driven spin transfer and establish its physical scale. We then sketch a detailed treatment and summarize its results. Finally, we predict two spin-transfer phenomena: steady magnetic precession driven by a constant current and a novel form of magnetic switching. We consider the five metallic regions represented schematically in Fig. 1. Layers A, B, and C are paramagnetic, whilst F I and F2 are ferromagnetic. The instantaneous macroscopic vectors hS~ and kS 2 forming the included angle 0 represent the respective total spin momenta per unit area of the ferromagnets. Now consider a flow of electrons moving rightward through the sandwich. The works on spin injection [7] show that if the thickness of spacer B is less than the spin-diffusion length, usually at least 100 nm, then some degree of spin polarization along the instantaneous axis parallel to the vector S~ of local ferromagnetic polarization in FI will be present in the electrons impinging on F2. This leads us to consider a three-layer (B, F2, C in Fig. 1) model in which an electron with initial spin state along the direction Sj is incident from S i ~ i S2 ~, EF=0J. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .",
"title": ""
},
{
"docid": "68257960bdbc6c4f326108ee7ba3e756",
"text": "In computer vision pixelwise dense prediction is the task of predicting a label for each pixel in the image. Convolutional neural networks achieve good performance on this task, while being computationally efficient. In this paper we carry these ideas over to the problem of assigning a sequence of labels to a set of speech frames, a task commonly known as framewise classification. We show that dense prediction view of framewise classification offers several advantages and insights, including computational efficiency and the ability to apply batch normalization. When doing dense prediction we pay specific attention to strided pooling in time and introduce an asymmetric dilated convolution, called time-dilated convolution, that allows for efficient and elegant implementation of pooling in time. We show that by using time-dilated convolutions with a very deep VGG-style CNN with batch normalization, we achieve best published single model accuracy result on the switchboard-2000 benchmark dataset.",
"title": ""
},
{
"docid": "90813d00050fdb1b8ce1a9dffe858d46",
"text": "Background: Diabetes mellitus is associated with biochemical and pathological alterations in the liver. The aim of this study was to investigate the effects of apple cider vinegar (ACV) on serum biochemical markers and histopathological changes in the liver of diabetic rats for 30 days. Effects were evaluated using streptozotocin (STZ)-induced diabetic rats as an experimental model. Materials and methods: Diabetes mellitus was induced by a single dose of STZ (65 mg/kg) given intraperitoneally. Thirty wistar rats were divided into three groups: control group, STZ-treated group and STZ plus ACV treated group (2 ml/kg BW). Animals were sacrificed 30 days post treatment. Results: Biochemical results indicated that, ACV caused a significant decrease in glucose, TC, LDL-c and a significant increase in HDL-c. Histopathological examination of the liver sections of diabetic rats showed fatty changes in the cytoplasm of the hepatocytes in the form of accumulation of lipid droplets, lymphocytic infiltration. Electron microscopic studies revealed aggregations of polymorphic mitochondria with apparent loss of their cristae and condensed matrices. Besides, the rough endoplasmic reticulum was proliferating and fragmented into smaller stacks. The cytoplasm of the hepatocytes exhibited vacuolations and displayed a large number of lipid droplets of different sizes. On the other hand, the liver sections of diabetic rats treated with ACV showed minimal toxic effects due to streptozotocin. These ultrastructural results revealed that treatment of diabetic rats with ACV led to apparent recovery of the injured hepatocytes. In prophetic medicine, Prophet Muhammad peace is upon him strongly recommended eating vinegar in the Prophetic Hadeeth: \"vinegar is the best edible\". Conclusion: This study showed that ACV, in early stages of diabetes inductioncan decrease the destructive progress of diabetes and cause hepatoprotection against the metabolic damages resulting from streptozotocininduced diabetes mellitus.",
"title": ""
},
{
"docid": "703696ca3af2a485ac34f88494210007",
"text": "Cells navigate environments, communicate and build complex patterns by initiating gene expression in response to specific signals. Engineers seek to harness this capability to program cells to perform tasks or create chemicals and materials that match the complexity seen in nature. This Review describes new tools that aid the construction of genetic circuits. Circuit dynamics can be influenced by the choice of regulators and changed with expression 'tuning knobs'. We collate the failure modes encountered when assembling circuits, quantify their impact on performance and review mitigation efforts. Finally, we discuss the constraints that arise from circuits having to operate within a living cell. Collectively, better tools, well-characterized parts and a comprehensive understanding of how to compose circuits are leading to a breakthrough in the ability to program living cells for advanced applications, from living therapeutics to the atomic manufacturing of functional materials.",
"title": ""
},
{
"docid": "3f0d37296258c68a20da61f34364405d",
"text": "Need to develop human body's posture supervised robots, gave the push to researchers to think over dexterous design of exoskeleton robots. It requires to develop quantitative techniques to assess motor function and generate the command for the robots to act accordingly with complex human structure. In this paper, we present a new technique for the upper limb power exoskeleton robot in which load is gripped by the human subject and not by the robot while the robot assists. Main challenge is to find non-biological signal based human desired motion intention to assist as needed. For this purpose, we used newly developed Muscle Circumference Sensor (MCS) instead of electromyogram (EMG) sensors. MCS together with the force sensors is used to estimate the human interactive force from which desired human motion is extracted using adaptive Radial Basis Function Neural Network (RBFNN). Developed Upper limb power exoskeleton has seven degrees of freedom (DOF) in which five DOF are passive while two are active. Active joints include shoulder and elbow in Sagittal plane while abduction and adduction motion in shoulder joint is provided by the passive joints. To ensure high quality performance model reference based adaptive impedance controller is employed. Exoskeleton performance is evaluated experimentally by a neurologically intact subject which validates the effectiveness.",
"title": ""
},
{
"docid": "3079e9dc5846c73c57f8d7fbf35d94a1",
"text": "Data mining techniques is rapidly increasing in the research of educational domains. Educational data mining aims to discover hidden knowledge and patterns about student performance. This paper proposes a student performance prediction model by applying two classification algorithms: KNN and Naïve Bayes on educational data set of secondary schools, collected from the ministry of education in Gaza Strip for 2015 year. The main objective of such classification may help the ministry of education to improve the performance due to early prediction of student performance. Teachers also can take the proper evaluation to improve student learning. The experimental results show that Naïve Bayes is better than KNN by receiving the highest accuracy value of 93.6%.",
"title": ""
},
{
"docid": "f5f70dca677752bcaa39db59988c088e",
"text": "To examine how inclusive our schools are after 25 years of educational reform, students with disabilities and their parents were asked to identify current barriers and provide suggestions for removing those barriers. Based on a series of focus group meetings, 15 students with mobility limitations (9-15 years) and 12 parents identified four categories of barriers at their schools: (a) the physical environment (e.g., narrow doorways, ramps); (b) intentional attitudinal barriers (e.g., isolation, bullying); (c) unintentional attitudinal barriers (e.g., lack of knowledge, understanding, or awareness); and (d) physical limitations (e.g., difficulty with manual dexterity). Recommendations for promoting accessibility and full participation are provided and discussed in relation to inclusive education efforts. Exceptional Children",
"title": ""
},
{
"docid": "6bfc3d00fe6e9fcdb09ad8993b733dfd",
"text": "This article presents the upper-torso design issue of Affeto who can physically interact with humans, which biases the perception of affinity beyond the uncanny valley effect. First, we review the effect and hypothesize that the experience of physical interaction with Affetto decreases the effect. Then, the reality of physical existence is argued with existing platforms. Next, the design concept and a very preliminary experiment are shown. Finally, future issues are given. I. THE UNCANNY VALLEY REVISITED The term “Uncanny” is a translation of Freud’s term “Der Unheimliche” and applied to a phenomenon noted by Masahiro Mori who mentioned that the presence of movement steepens the slopes of the uncanny valley (Figure 2 in [1]). Several studies on this effect can be summarised as follows1. 1) Multimodal impressions such as visual appearance, body motion, sounds (speech and others), and tactile sensation should be congruent to decrease the valley steepness. 2) Antipathetic expressions may exaggerate the valley effect. The current technologies enable us to minimize the gap caused by mismatch among cross-modal factors. Therefore, the valley effect is expected to be reduced gradually. For example, facial expressions and tactile sensations of Affetto [2] are realistic and congruent due to baby-like face skin mask of urethane elastomer gel (See Figure 1). Generated facial expressions almost conquered the uncanny valley. Further, baby-like facial expressions may contribute to the reduction of the valley effect due to 2). In addition to these, we suppose that the motor experience of physical interactions with robots biases the perception of affinity as motor experiences biases the perception of movements [3]. To verify this hypothesis, Affetto needs its body which realizes physical interactions naturally. The rest of this article is organized as follows. The next section argues about the reality of physical existence with existing platforms. Then, the design concept and a very preliminary experiment are shown, and the future issues are given.",
"title": ""
},
{
"docid": "f0365424e98ebcc0cb06ce51f65cbe7c",
"text": "The most important milestone in the field of magnetic sensors was that AMR sensors started to replace Hall sensors in many application, were larger sensitivity is an advantage. GMR and SDT sensor finally found limited applications. We also review the development in miniaturization of fluxgate sensors and briefly mention SQUIDs, resonant sensors, GMIs and magnetomechanical sensors.",
"title": ""
},
{
"docid": "316ead33d0313804b7aa95570427e375",
"text": "We consider an optimal control problem with a deterministic finite horizon and state variable dynamics given by a Markovswitching jump-diffusion stochastic differential equation. Our main results extend the dynamic programming technique to this larger family of stochastic optimal control problems. More specifically, we provide a detailed proof of Bellman’s optimality principle (or dynamic programming principle) and obtain the corresponding Hamilton-Jacobi-Belman equation, which turns out to be a partial integro-differential equation due to the extra terms arising from the Lévy process and the Markov process. As an application of our results, we study a finite horizon consumptioninvestment problem for a jump-diffusion financial market consisting of one risk-free asset and one risky asset whose coefficients are assumed to depend on the state of a continuous time finite state Markov process. We provide a detailed study of the optimal strategies for this problem, for the economically relevant families of power utilities and logarithmic utilities.",
"title": ""
},
{
"docid": "784c7c785b2e47fad138bba38b753f31",
"text": "A local linear wavelet neural network (LLWNN) is presented in this paper. The difference of the network with conventional wavelet neural network (WNN) is that the connection weights between the hidden layer and output layer of conventional WNN are replaced by a local linear model. A hybrid training algorithm of particle swarm optimization (PSO) with diversity learning and gradient descent method is introduced for training the LLWNN. Simulation results for the prediction of time-series show the feasibility and effectiveness of the proposed method. r 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f1977e5f8fbc0df4df0ac6bf1715c254",
"text": "Instabilities in MOS-based devices with various substrates ranging from Si, SiGe, IIIV to 2D channel materials, can be explained by defect levels in the dielectrics and non-radiative multi-phonon (NMP) barriers. However, recent results obtained on single defects have demonstrated that they can show a highly complex behaviour since they can transform between various states. As a consequence, detailed physical models are complicated and computationally expensive. As will be shown here, as long as only lifetime predictions for an ensemble of defects is needed, considerable simplifications are possible. We present and validate an oxide defect model that captures the essence of full physical models while reducing the complexity substantially. We apply this model to investigate the improvement in positive bias temperature instabilities due to a reliability anneal. Furthermore, we corroborate the simulated defect bands with prior defect-centric studies and perform lifetime projections.",
"title": ""
},
{
"docid": "7303f634355e24f0dba54daa29ed2737",
"text": "A power divider/combiner based on a double sided slotted waveguide geometry suitable for Ka-band applications is proposed. This structure allows up to 50% reduction of the total device length compared to previous designs of this type without compromising manufacturing complexity or combining efficiency. Efficient design guidelines based on an equivalent circuit technique are provided and the performance is demonstrated by means of a 12-way divider/combiner prototype operating in the range 29-31 GHz. Numerical simulations show that back to back insertion loss of 1.19 dB can be achieved, corresponding to a combining efficiency of 87%. The design is validated by means of manufacturing and testing an experimental prototype with measured back-to-back insertion loss of 1.83 dB with a 3 dB bandwidth of 20.8%, corresponding to a combining efficiency of 81%.",
"title": ""
},
{
"docid": "c30f721224317a41c1e316c158549d81",
"text": "The oxysterol receptor LXR is a key transcriptional regulator of lipid metabolism. LXR increases expression of SREBP-1, which in turn regulates at least 32 genes involved in lipid synthesis and transport. We recently identified 25-hydroxycholesterol-3-sulfate (25HC3S) as an important regulatory molecule in the liver. We have now studied the effects of 25HC3S and its precursor, 25-hydroxycholesterol (25HC), on lipid metabolism as mediated by the LXR/SREBP-1 signaling in macrophages. Addition of 25HC3S to human THP-1-derived macrophages markedly decreased nuclear LXR protein levels. 25HC3S administration was followed by dose- and time-dependent decreases in SREBP-1 mature protein and mRNA levels. 25HC3S decreased the expression of SREBP-1-responsive genes, acetyl-CoA carboxylase-1, and fatty acid synthase (FAS) as well as HMGR and LDLR, which are key proteins involved in lipid metabolism. Subsequently, 25HC3S decreased intracellular lipids and increased cell proliferation. In contrast to 25HC3S, 25HC acted as an LXR ligand, increasing ABCA1, ABCG1, SREBP-1, and FAS mRNA levels. In the presence of 25HC3S, 25HC, and LXR agonist T0901317, stimulation of LXR targeting gene expression was repressed. We conclude that 25HC3S acts in macrophages as a cholesterol satiety signal, downregulating cholesterol and fatty acid synthetic pathways via inhibition of LXR/SREBP signaling. A possible role of oxysterol sulfation is proposed.",
"title": ""
},
{
"docid": "33e45b66cca92f15270500c32a1c0b94",
"text": "We study a dataset of billions of program binary files that appeared on 100 million computers over the course of 12 months, discovering that 94% of these files were present on a single machine. Though malware polymorphism is one cause for the large number of singleton files, additional factors also contribute to polymorphism, given that the ratio of benign to malicious singleton files is 80:1. The huge number of benign singletons makes it challenging to reliably identify the minority of malicious singletons. We present a large-scale study of the properties, characteristics, and distribution of benign and malicious singleton files. We leverage the insights from this study to build a classifier based purely on static features to identify 92% of the remaining malicious singletons at a 1.4% percent false positive rate, despite heavy use of obfuscation and packing techniques by most malicious singleton files that we make no attempt to de-obfuscate. Finally, we demonstrate robustness of our classifier to important classes of automated evasion attacks.",
"title": ""
},
{
"docid": "7e2f657115b3c9163a7fe9b34d95a314",
"text": "Even though several youth fatal suicides have been linked with school victimization, there is lack of evidence on whether cyberbullying victimization causes students to adopt suicidal behaviors. To investigate this issue, I use exogenous state-year variation in cyberbullying laws and information on high school students from the Youth Risk Behavioral Survey within a bivariate probit framework, and complement these estimates with matching techniques. I find that cyberbullying has a strong impact on all suicidal behaviors: it increases suicidal thoughts by 14.5 percentage points and suicide attempts by 8.7 percentage points. Even if the focus is on statewide fatal suicide rates, cyberbullying still leads to significant increases in suicide mortality, with these effects being stronger for men than for women. Since cyberbullying laws have an effect on limiting cyberbullying, investing in cyberbullying-preventing strategies can improve individual health by decreasing suicide attempts, and increase the aggregate health stock by decreasing suicide rates.",
"title": ""
},
{
"docid": "f636eb06a1158f4593ce8027d6f274e7",
"text": "Various modifications of bagging for class imbalanced data are discussed. An experimental comparison of known bagging modifications shows that integrating with undersampling is more powerful than oversampling. We introduce Local-and-Over-All Balanced bagging where probability of sampling an example is tuned according to the class distribution inside its neighbourhood. Experiments indicate that this proposal is competitive to best undersampling bagging extensions.",
"title": ""
},
{
"docid": "bffbc725b52468b41c53b156f6eadedb",
"text": "This paper presents the design and experimental evaluation of an underwater robot that is propelled by a pair of lateral undulatory fins, inspired by the locomotion of rays and cuttlefish. Each fin mechanism is comprised of three individually actuated fin rays, which are interconnected by an elastic membrane. An on-board microcontroller generates the rays’ motion pattern that result in the fins’ undulations, through which propulsion is generated. The prototype, which is fully untethered and energetically autonomous, also integrates an Inertial Measurement Unit for navigation purposes, a wireless communication module, and a video camera for recording underwater footage. Due to its small size and low manufacturing cost, the developed prototype can also serve as an educational platform for underwater robotics.",
"title": ""
}
] |
scidocsrr
|
9f3f6a7f77273a5f2de21be1d5f5ae3d
|
Smart Grid Cybersecurity: Standards and Technical Countermeasures
|
[
{
"docid": "8d21369604ad890704d535785c8e3171",
"text": "With the integration of advanced computing and communication technologies, smart grid is considered as the next-generation power system, which promises self healing, resilience, sustainability, and efficiency to the energy critical infrastructure. The smart grid innovation brings enormous challenges and initiatives across both industry and academia, in which the security issue emerges to be a critical concern. In this paper, we present a survey of recent security advances in smart grid, by a data driven approach. Compared with existing related works, our survey is centered around the security vulnerabilities and solutions within the entire lifecycle of smart grid data, which are systematically decomposed into four sequential stages: 1) data generation; 2) data acquisition; 3) data storage; and 4) data processing. Moreover, we further review the security analytics in smart grid, which employs data analytics to ensure smart grid security. Finally, an effort to shed light on potential future research concludes this paper.",
"title": ""
}
] |
[
{
"docid": "081e474c622f122832490a54657e5051",
"text": "To defend a network from intrusion is a generic problem of all time. It is important to develop a defense mechanism to secure the network from anomalous activities. This paper presents a comprehensive survey of methods and systems introduced by researchers in the past two decades to protect network resources from intrusion. A detailed pros and cons analysis of these methods and systems is also reported in this paper. Further, this paper also provides a list of issues and research challenges in this evolving field of research. We believe that, this knowledge will help to create a defense system.",
"title": ""
},
{
"docid": "8b6d5e7526e58ce66cf897d17b094a91",
"text": "Regression testing is an expensive maintenance process used to revalidate modified software. Regression test selection (RTS) techniques try to lower the cost of regression testing by selecting and running a subset of the existing test cases. Many such techniques have been proposed and initial studies show that they can produce savings. We believe, however, that issues such as the frequency with which testing is done have a strong effect on the behavior of these techniques. Therefore, we conducted an experiment to assess the effects of test application frequency on the costs and benefits of regression test selection techniques. Our results expose essential tradeoffs that should be considered when using these techniques over a series of software releases.",
"title": ""
},
{
"docid": "5491dd183e386ada396b237a41d907aa",
"text": "The technique of scale multiplication is analyzed in the framework of Canny edge detection. A scale multiplication function is defined as the product of the responses of the detection filter at two scales. Edge maps are constructed as the local maxima by thresholding the scale multiplication results. The detection and localization criteria of the scale multiplication are derived. At a small loss in the detection criterion, the localization criterion can be much improved by scale multiplication. The product of the two criteria for scale multiplication is greater than that for a single scale, which leads to better edge detection performance. Experimental results are presented.",
"title": ""
},
{
"docid": "046f2b6ec65903d092f8576cd210d7ee",
"text": "Aim\nThe principal study objective was to investigate the pharmacokinetic characteristics and determine the absolute bioavailability and tolerability of a new sublingual (SL) buprenorphine wafer.\n\n\nMethods\nThe study was of open label, two-way randomized crossover design in 14 fasted healthy male and female volunteers. Each participant, under naltrexone block, received either a single intravenous dose of 300 mcg of buprenorphine as a constant infusion over five minutes or a sublingual dose of 800 mcg of buprenorphine in two treatment periods separated by a seven-day washout period. Blood sampling for plasma drug assay was taken on 16 occasions throughout a 48-hour period (predose and at 10, 20, 30, and 45 minutes, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, 24 and 48 hours postdose). The pharmacokinetic parameters were determined by noncompartmental analyses of the buprenorphine plasma concentration-time profiles. Local tolerability was assessed using modified Likert scales.\n\n\nResults\nThe absolute bioavailability of SL buprenorphine was 45.4% (95% confidence interval = 37.8-54.3%). The median times to peak plasma concentration were 10 minutes and 60 minutes after IV and SL administration, respectively. The peak plasma concentration was 2.65 ng/mL and 0.74 ng/mL after IV and SL administration, respectively. The half-lives were 9.1 hours and 11.2 hours after IV and SL administration, respectively. The wafer had very good local tolerability.\n\n\nConclusions\nThis novel sublingual buprenorphine wafer has high bioavailability and reduced Tmax compared with other SL tablet formulations of buprenorphine. The wafer displayed very good local tolerability. The results suggest that this novel buprenorphine wafer may provide enhanced clinical utility in the management of both acute and chronic pain.\n\n\nBackground\nBuprenorphine is approved for use in pain management and opioid addiction. Sublingual administration of buprenorphine is a simple and noninvasive route of administration and has been available for many years. Improved sublingual formulations may lead to increased utilization of this useful drug for acute and chronic pain management.",
"title": ""
},
{
"docid": "d32bdf27607455fb3416a4e3e3492f01",
"text": "Photo-editing software restricts the control of objects in a photograph to the 2D image plane. We present a method that enables users to perform the full range of 3D manipulations, including scaling, rotation, translation, and nonrigid deformations, to an object in a photograph. As 3D manipulations often reveal parts of the object that are hidden in the original photograph, our approach uses publicly available 3D models to guide the completion of the geometry and appearance of the revealed areas of the object. The completion process leverages the structure and symmetry in the stock 3D model to factor out the effects of illumination, and to complete the appearance of the object. We demonstrate our system by producing object manipulations that would be impossible in traditional 2D photo-editing programs, such as turning a car over, making a paper-crane flap its wings, or manipulating airplanes in a historical photograph to change its story.",
"title": ""
},
{
"docid": "bf8000b2119a5107041abf09762668ab",
"text": "With the popularity of social media, people are more and more interested in mining opinions from it. Learning from social media not only has value for research, but also good for business use. RepLab 2012 had Profiling task and Monitoring task to understand the company related tweets. Profiling task aims to determine the Ambiguity and Polarity for tweets. In order to determine this Ambiguity and Polarity for the tweets in RepLab 2012 Profiling task, we built Google Adwords Filter for Ambiguity and several approaches like SentiWordNet, Happiness Score and Machine Learning for Polarity. We achieved good performance in the training set, and the performance in test set is also acceptable.",
"title": ""
},
{
"docid": "8f6682ddcc435c95ae3ef35ebb84de7f",
"text": "A series of 59 patients was treated and operated on for pain felt over the area of the ischial tuberosity and radiating down the back of the thigh. This condition was labeled as the \"hamstring syndrome.\" Pain was typically incurred by assuming a sitting position, stretching the affected posterior thigh, and running fast. The patients usually had a history of recurrent hamstring \"tears.\" Their symptoms were caused by the tight, tendinous structures of the lateral insertion area of the hamstring muscles to the ischial tuberosity. Upon division of these structures, complete relief was obtained in 52 of the 59 patients.",
"title": ""
},
{
"docid": "9bd08edae8ab7b20aab40e24f6bdf968",
"text": "Personalized Web browsing and search hope to provide Web information that matches a user’s personal interests and thus provide more effective and efficient information access. A key feature in developing successful personalized Web applications is to build user profiles that accurately represent a user’ s interests. The main goal of this research is to investigate techniques that implicitly build ontology-based user profiles. We build the profiles without user interaction, automatically monitoring the user’s browsing habits. After building the initial profile from visited Web pages, we investigate techniques to improve the accuracy of the user profile. In particular, we focus on how quickly we can achieve profile stability, how to identify the most important concepts, the effect of depth in the concept-hierarchy on the importance of a concept, and how many levels from the hierarchy should be used to represent the user. Our major findings are that ranking the concepts in the profiles by number of documents assigned to them rather than by accumulated weights provides better profile accuracy. We are also able to identify stable concepts in the profile, thus allowing us to detect long-term user interests. We found that the accuracy of concept detection decreases as we descend them in the concept hierarchy, however this loss of accuracy must be balanced against the detailed view of the user available only through the inclusion of lower-level concepts.",
"title": ""
},
{
"docid": "90d5aca626d61806c2af3cc551b28c90",
"text": "This paper presents two novel approaches to increase performance bounds of image steganography under the criteria of minimizing distortion. First, in order to efficiently use the images’ capacities, we propose using parallel images in the embedding stage. The result is then used to prove sub-optimality of the message distribution technique used by all cost based algorithms including HUGO, S-UNIWARD, and HILL. Second, a new distribution approach is presented to further improve the security of these algorithms. Experiments show that this distribution method avoids embedding in smooth regions and thus achieves a better performance, measured by state-of-the-art steganalysis, when compared with the current used distribution.",
"title": ""
},
{
"docid": "cdf2235bea299131929700406792452c",
"text": "Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.",
"title": ""
},
{
"docid": "95be4f5132cde3c637c5ee217b5c8405",
"text": "In recent years, information communication and computation technologies are deeply converging, and various wireless access technologies have been successful in deployment. It can be predicted that the upcoming fifthgeneration mobile communication technology (5G) can no longer be defined by a single business model or a typical technical characteristic. 5G is a multi-service and multitechnology integrated network, meeting the future needs of a wide range of big data and the rapid development of numerous businesses, and enhancing the user experience by providing smart and customized services. In this paper, we propose a cloud-based wireless network architecture with four components, i.e., mobile cloud, cloud-based radio access network (Cloud RAN), reconfigurable network and big data centre, which is capable of providing a virtualized, reconfigurable, smart wireless network.",
"title": ""
},
{
"docid": "db26d71ec62388e5367eb0f2bb45ad40",
"text": "The linear programming (LP) is one of the most popular necessary optimization tool used for data analytics as well as in various scientific fields. However, the current state-of-art algorithms suffer from scalability issues when processing Big Data. For example, the commercial optimization software IBM CPLEX cannot handle an LP with more than hundreds of thousands variables or constraints. Existing algorithms are fundamentally hard to scale because they are inevitably too complex to parallelize. To address the issue, we study the possibility of using the Belief Propagation (BP) algorithm as an LP solver. BP has shown remarkable performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been done in this area. In particular, while it is generally believed that BP implicitly solves an optimization problem, it is not well understood under what conditions the solution to a BP converges to that of a corresponding LP formulation. Our efforts consist of two main parts. First, we perform a theoretic study and establish the conditions in which BP can solve LP [1,2]. Although there has been several works studying the relation between BP and LP for certain instances, our work provides a generic condition unifying all prior works for generic LP. Second, utilizing our theoretical results, we develop a practical BP-based parallel algorithms for solving generic LPs, and it shows 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-art exact algorithm [3, 4]. As a result of the study, the PIs have published two conference papers [1,3] and two follow-up journal papers [3,4] are under submission. We refer the readers to our published work [1,3] for details. Introduction: The main goal of our research is to develop a distributed and parallel algorithm for large-scale linear optimization (or programming). Considering the popularity and importance of linear optimizations in various fields, the proposed method has great potentials applicable to various big data analytics. Our approach is based on the Belief Propagation (BP) algorithm, which has shown remarkable performances on various machine learning tasks and naturally lends itself to fast parallel implementations. Our key contributions are summarized below: 1) We establish key theoretic foundations in the area of Belief Propagation. In particular, we show that BP converges to the solution of LP if some sufficient conditions are satisfied. Our DISTRIBUTION A. Approved for public release: distribution unlimited. conditions not only cover various prior studies including maximum weight matching, mincost network flow, shortest path, etc., but also discover new applications such as vertex cover and traveling salesman. 2) While the theoretic study provides understanding of the nature of BP, it falls short in slow convergence speed, oscillation and wrong convergence. To make BP-based algorithms more practical, we design a BP-based framework which uses BP as a ‘weight transformer’ to resolve the convergence issue of BP. We refer the readers to our published work [1, 3] for details. The rest of the report contains a summary of our work appeared in UAI (Uncertainty in Artificial Intelligence) and IEEE Conference in Big Data [1,3] and follow up work [2,4] under submission to major journals. 
Experiment: We first establish theoretical conditions when Belief Propagation (BP) can solve Linear Programming (LP), and second provide a practical distributed/parallel BP-based framework solving generic optimizations. We demonstrate the wide-applicability of our approach via popular combinatorial optimizations including maximum weight matching, shortest path, traveling salesman, cycle packing and vertex cover. Results and Discussion: Our contribution consists of two parts: Study 1 [1,2] looks at the theoretical conditions that BP converges to the solution of LP. Our theoretical result unify almost all prior result about BP for combinatorial optimization. Furthermore, our conditions provide a guideline for designing distributed algorithm for combinatorial optimization problems. Study 2 [3,4] focuses on building an optimal framework based on the theory of Study 1 for boosting the practical performance of BP. Our framework is generic, thus, it can be easily extended to various optimization problems. We also compare the empirical performance of our framework to other heuristics and state of the art algorithms for several combinatorial optimization problems. -------------------------------------------------------Study 1 -------------------------------------------------------We first introduce the background for our contributions. A joint distribution of � (binary) variables � = [��] ∈ {0,1}� is called graphical model (GM) if it factorizes as follows: for � = [��] ∈ {0,1}�, where ψψ� ,�� are some non-negative functions so called factors; � is a collection of subsets (each αα� is a subset of {1,⋯ ,�} with |��| ≥ 2; �� is the projection of � onto dimensions included in αα. Assignment �∗ is called maximum-a-posteriori (MAP) assignment if �∗maximizes the probability. The following figure depicts the graphical relation between factors � and variables �. DISTRIBUTION A. Approved for public release: distribution unlimited. Figure 1: Factor graph for the graphical model with factors αα1 = {1,3},�2 = {1,2,4},�3 = {2,3,4} Now we introduce the algorithm, (max-product) BP, for approximating MAP assignment in a graphical model. BP is an iterative procedure; at each iteration �, there are four messages between each variable �� and every associated αα ∈ ��, where ��: = {� ∈ �:� ∈ �}. Then, messages are updated as follows: Finally, given messages, BP marginal beliefs are computed as follows: Then, BP outputs the approximated MAP assignment ��� = [��] as Now, we are ready to introduce the main result of Study 1. Consider the following GM: for � = [��] ∈ {0,1}� and � = [��] ∈ ��, where the factor function ψψαα for αα ∈ � is defined as for some matrices ��,�� and vectors ��,��. Consider the Linear Programming (LP) corresponding the above GM: One can easily observe that the MAP assignments for GM corresponds to the (optimal) solution of the above LP if the LP has an integral solution �∗ ∈ {0,1}�. The following theorem is our main result of Study 1 which provide sufficient conditions so that BP can indeed find the LP solution DISTRIBUTION A. Approved for public release: distribution unlimited. Theorem 1 can be applied to several combinatorial optimization problems including matching, network flow, shortest path, vertex cover, etc. See [1,2] for the detailed proof of Theorem 1 and its applications to various combinatorial optimizations including maximum weight matching, min-cost network flow, shortest path, vertex cover and traveling salesman. 
-------------------------------------------------------Study 2 -------------------------------------------------------Study 2 mainly focuses on providing a distributed generic BP-based combinatorial optimization solver which has high accuracy and low computational complexity. In summary, the key contributions of Study 2 are as follows: 1) Practical BP-based algorithm design: To the best of our knowledge, this paper is the first to propose a generic concept for designing BP-based algorithms that solve large-scale combinatorial optimization problems. 2) Parallel implementation: We also demonstrate that the algorithm is easily parallelizable. For the maximum weighted matching problem, this translates to 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-art exact algorithm. 3) Extensive empirical evaluation: We evaluate our algorithms on three different combinatorial optimization problems on diverse synthetic and real-world data-sets. Our evaluation shows that the framework shows higher accuracy compared to other known heuristics. Designing a BP-based algorithm for some problem is easy in general. However (a) it might diverge or converge very slowly, (b) even if it converges quickly, the BP decision might be not correct, and (c) even worse, BP might produce an infeasible solution, i.e., it does not satisfy the constraints of the problem. DISTRIBUTION A. Approved for public release: distribution unlimited. Figure 2: Overview of our generic BP-based framework To address these issues, we propose a generic BP-based framework that provides highly accurate approximate solutions for combinatorial optimization problems. The framework has two steps, as shown in Figure 2. In the first phase, it runs a BP algorithm for a fixed number of iterations without waiting for convergence. Then, the second phase runs a known heuristic using BP beliefs instead of the original weights to output a feasible solution. Namely, the first and second phases are respectively designed for ‘BP weight transforming’ and ‘post-processing’. Note that our evaluation mainly uses the maximum weight matching problem. The formal description of the maximum weight matching (MWM) problem is as follows: Given a graph � = (�,�) and edge weights � = [��] ∈ �|�|, it finds a set of edges such that each vertex is connected to at most one edge in the set and the sum of edge weights in the set is maximized. The problem is formulated as the following IP (Integer Programming): where δδ(�) is the set of edges incident to vertex � ∈ �. In the following paragraphs, we describe the two phases in more detail in reverse order. We first describe the post-processing phase. As we mentioned, one of the main issue of a BP-based algorithm is that the decision on BP beliefs might give an infeasible solution. To resolve the issue, we use post-processing by utilizing existing heuristics to the given problem that find a feasible solution. Applying post-processing ensures that the solution is at least feasible. In addition, our key idea is to replace the original weights by the logarithm of BP beliefs, i.e. function of (3). After th",
"title": ""
},
{
"docid": "ce8f000fa9a9ec51b8b2b63e98cec5fb",
"text": "The Berlin Brain-Computer Interface (BBCI) project develops a noninvasive BCI system whose key features are 1) the use of well-established motor competences as control paradigms, 2) high-dimensional features from 128-channel electroencephalogram (EEG), and 3) advanced machine learning techniques. As reported earlier, our experiments demonstrate that very high information transfer rates can be achieved using the readiness potential (RP) when predicting the laterality of upcoming left- versus right-hand movements in healthy subjects. A more recent study showed that the RP similarly accompanies phantom movements in arm amputees, but the signal strength decreases with longer loss of the limb. In a complementary approach, oscillatory features are used to discriminate imagined movements (left hand versus right hand versus foot). In a recent feedback study with six healthy subjects with no or very little experience with BCI control, three subjects achieved an information transfer rate above 35 bits per minute (bpm), and further two subjects above 24 and 15 bpm, while one subject could not achieve any BCI control. These results are encouraging for an EEG-based BCI system in untrained subjects that is independent of peripheral nervous system activity and does not rely on evoked potentials even when compared to results with very well-trained subjects operating other BCI systems.",
"title": ""
},
{
"docid": "36b4097c3c394352dc2b7ac25ff4948f",
"text": "An important task of opinion mining is to extract people’s opinions on features of an entity. For example, the sentence, “I love the GPS function of Motorola Droid” expresses a positive opinion on the “GPS function” of the Motorola phone. “GPS function” is the feature. This paper focuses on mining features. Double propagation is a state-of-the-art technique for solving the problem. It works well for medium-size corpora. However, for large and small corpora, it can result in low precision and low recall. To deal with these two problems, two improvements based on part-whole and “no” patterns are introduced to increase the recall. Then feature ranking is applied to the extracted feature candidates to improve the precision of the top-ranked candidates. We rank feature candidates by feature importance which is determined by two factors: feature relevance and feature frequency. The problem is formulated as a bipartite graph and the well-known web page ranking algorithm HITS is used to find important features and rank them high. Experiments on diverse real-life datasets show promising results.",
"title": ""
},
{
"docid": "268e434cedbf5439612b2197be73a521",
"text": "We have recently developed a chaotic gas turbine whose rotational motion might simulate turbulent Rayleigh-Bénard convection. The nondimensionalized equations of motion of our turbine are expressed as a star network of N Lorenz subsystems, referred to as augmented Lorenz equations. Here, we propose an application of the augmented Lorenz equations to chaotic cryptography, as a type of symmetric secret-key cryptographic method, wherein message encryption is performed by superimposing the chaotic signal generated from the equations on a plaintext in much the same way as in one-time pad cryptography. The ciphertext is decrypted by unmasking the chaotic signal precisely reproduced with a secret key consisting of 2N-1 (e.g., N=101) real numbers that specify the augmented Lorenz equations. The transmitter and receiver are assumed to be connected via both a quantum communication channel on which the secret key is distributed using a quantum key distribution protocol and a classical data communication channel on which the ciphertext is transmitted. We discuss the security and feasibility of our cryptographic method.",
"title": ""
},
{
"docid": "62ff5888ad0c8065097603da8ff79cd6",
"text": "Modern Internet systems often combine different applications (e.g., DNS, web, and database), span different administrative domains, and function in the context of network mechanisms like tunnels, VPNs, NATs, and overlays. Diagnosing these complex systems is a daunting challenge. Although many diagnostic tools exist, they are typically designed for a specific layer (e.g., traceroute) or application, and there is currently no tool for reconstructing a comprehensive view of service behavior. In this paper we propose X-Trace, a tracing framework that provides such a comprehensive view for systems that adopt it. We have implemented X-Trace in several protocols and software systems, and we discuss how it works in three deployed scenarios: DNS resolution, a three-tiered photo-hosting website, and a service accessed through an overlay network.",
"title": ""
},
{
"docid": "3a4a875dc1cc491d8a7ce373043b3937",
"text": "In many outlier detection tasks, only training data belonging to one class, i.e., the positive class, is available. The task is then to predict a new data point as belonging either to the positive class or to the negative class, in which case the data point is considered an outlier. For this task, we propose a novel corrupted Generative Adversarial Network (CorGAN). In the adversarial process of training CorGAN, the Generator generates outlier samples for the negative class, and the Discriminator is trained to distinguish the positive training data from the generated negative data. The proposed framework is evaluated using an image dataset and a real-world network intrusion dataset. Our outlier-detection method achieves state-of-the-art performance on both tasks. Keywords—Outlier detection, generative adversary networks, semi-supervised learning.",
"title": ""
},
{
"docid": "20b7da7c9f630f12b0ef86d92ed7aa0f",
"text": "In this paper, a Rectangular Dielectric Resonator Antenna (RDRA) with a modified feeding line is designed and investigated at 28GHz. The modified feed line is designed to excite the DR with relative permittivity of 10 which contributes to a wide bandwidth operation. The proposed single RDRA has been fabricated and mounted on a RT/Duroid 5880 (εr = 2.2 and tanδ = 0.0009) substrate. The optimized single element has been applied to array structure to improve the gain and achieve the required gain performance. The radiation pattern, impedance bandwidth and gain are simulated and measured accordingly. The number of elements and element spacing are studied for an optimum performance. The proposed antenna obtains a reflection coefficient response from 27.0GHz to 29.1GHz which cover the desired frequency band. This makes the proposed antenna achieve 2.1GHz impedance bandwidth and gain of 12.1 dB. Thus, it has potential for millimeter wave and 5G applications.",
"title": ""
},
{
"docid": "b01436481aa77ebe7538e760132c5f3c",
"text": "We propose two algorithms based on Bregman iteration and operator splitting technique for nonlocal TV regularization problems. The convergence of the algorithms is analyzed and applications to deconvolution and sparse reconstruction are presented.",
"title": ""
},
{
"docid": "34f83c7dde28c720f82581804accfa71",
"text": "The main threats to human health from heavy metals are associated with exposure to lead, cadmium, mercury and arsenic. These metals have been extensively studied and their effects on human health regularly reviewed by international bodies such as the WHO. Heavy metals have been used by humans for thousands of years. Although several adverse health effects of heavy metals have been known for a long time, exposure to heavy metals continues, and is even increasing in some parts of the world, in particular in less developed countries, though emissions have declined in most developed countries over the last 100 years. Cadmium compounds are currently mainly used in re-chargeable nickel-cadmium batteries. Cadmium emissions have increased dramatically during the 20th century, one reason being that cadmium-containing products are rarely re-cycled, but often dumped together with household waste. Cigarette smoking is a major source of cadmium exposure. In non-smokers, food is the most important source of cadmium exposure. Recent data indicate that adverse health effects of cadmium exposure may occur at lower exposure levels than previously anticipated, primarily in the form of kidney damage but possibly also bone effects and fractures. Many individuals in Europe already exceed these exposure levels and the margin is very narrow for large groups. Therefore, measures should be taken to reduce cadmium exposure in the general population in order to minimize the risk of adverse health effects. The general population is primarily exposed to mercury via food, fish being a major source of methyl mercury exposure, and dental amalgam. The general population does not face a significant health risk from methyl mercury, although certain groups with high fish consumption may attain blood levels associated with a low risk of neurological damage to adults. Since there is a risk to the fetus in particular, pregnant women should avoid a high intake of certain fish, such as shark, swordfish and tuna; fish (such as pike, walleye and bass) taken from polluted fresh waters should especially be avoided. There has been a debate on the safety of dental amalgams and claims have been made that mercury from amalgam may cause a variety of diseases. However, there are no studies so far that have been able to show any associations between amalgam fillings and ill health. The general population is exposed to lead from air and food in roughly equal proportions. During the last century, lead emissions to ambient air have caused considerable pollution, mainly due to lead emissions from petrol. Children are particularly susceptible to lead exposure due to high gastrointestinal uptake and the permeable blood-brain barrier. Blood levels in children should be reduced below the levels so far considered acceptable, recent data indicating that there may be neurotoxic effects of lead at lower levels of exposure than previously anticipated. Although lead in petrol has dramatically decreased over the last decades, thereby reducing environmental exposure, phasing out any remaining uses of lead additives in motor fuels should be encouraged. The use of lead-based paints should be abandoned, and lead should not be used in food containers. In particular, the public should be aware of glazed food containers, which may leach lead into food. Exposure to arsenic is mainly via intake of food and drinking water, food being the most important source in most populations. 
Long-term exposure to arsenic in drinking-water is mainly related to increased risks of skin cancer, but also some other cancers, as well as other skin lesions such as hyperkeratosis and pigmentation changes. Occupational exposure to arsenic, primarily by inhalation, is causally associated with lung cancer. Clear exposure-response relationships and high risks have been observed.",
"title": ""
}
] |
scidocsrr
|
0d8bb5e4e9f9c79d2ac85ba47e2e990c
|
Image Segmentation using Fuzzy C Means Clustering: A survey
|
[
{
"docid": "2c8e7bfcd41924d0fe8f66166d366751",
"text": "-Many image segmentation techniques are available in the literature. Some of these techniques use only the gray level histogram, some use spatial details while others use fuzzy set theoretic approaches. Most of these techniques are not suitable for noisy environments. Some works have been done using the Markov Random Field (MRF) model which is robust to noise, but is computationally involved. Neural network architectures which help to get the output in real time because of their parallel processing ability, have also been used for segmentation and they work fine even when the noise level is very high. The literature on color image segmentation is not that rich as it is for gray tone images. This paper critically reviews and summarizes some of these techniques. Attempts have been made to cover both fuzzy and non-fuzzy techniques including color image segmentation and neural network based approaches. Adequate attention is paid to segmentation of range images and magnetic resonance images. It also addresses the issue of quantitative evaluation of segmentation results. Image segmentation Fuzzy sets Markov Random Field Thresholding Edge detection Clustering Relaxation",
"title": ""
}
] |
[
{
"docid": "9c0d65ee42ccfaa291b576568bad59e0",
"text": "BACKGROUND\nThe WHO International Classification of Diseases, 11th version (ICD-11), has proposed two related diagnoses following exposure to traumatic events; Posttraumatic Stress Disorder (PTSD) and Complex PTSD (CPTSD). We set out to explore whether the newly developed ICD-11 Trauma Questionnaire (ICD-TQ) can distinguish between classes of individuals according to the PTSD and CPTSD symptom profiles as per ICD-11 proposals based on latent class analysis. We also hypothesized that the CPTSD class would report more frequent and a greater number of different types of childhood trauma as well as higher levels of functional impairment. Methods Participants in this study were a sample of individuals who were referred for psychological therapy to a National Health Service (NHS) trauma centre in Scotland (N=193). Participants completed the ICD-TQ as well as measures of life events and functioning.\n\n\nRESULTS\nOverall, results indicate that using the newly developed ICD-TQ, two subgroups of treatment-seeking individuals could be empirically distinguished based on different patterns of symptom endorsement; a small group high in PTSD symptoms only and a larger group high in CPTSD symptoms. In addition, CPTSD was more strongly associated with more frequent and a greater accumulation of different types of childhood traumatic experiences and poorer functional impairment.\n\n\nLIMITATIONS\nSample predominantly consisted of people who had experienced childhood psychological trauma or been multiply traumatised in childhood and adulthood.\n\n\nCONCLUSIONS\nCPTSD is highly prevalent in treatment seeking populations who have been multiply traumatised in childhood and adulthood and appropriate interventions should now be developed to aid recovery from this debilitating condition.",
"title": ""
},
{
"docid": "e50b074abe37cc8caec8e3922347e0d9",
"text": "Subjectivity and sentiment analysis (SSA) has recently gained considerable attention, but most of the resources and systems built so far are tailored to English and other Indo-European languages. The need for designing systems for other languages is increasing, especially as blogging and micro-blogging websites become popular throughout the world. This paper surveys different techniques for SSA for Arabic. After a brief synopsis about Arabic, we describe the main existing techniques and test corpora for Arabic SSA that have been introduced in the literature.",
"title": ""
},
{
"docid": "6afad353d7dec9fce0e5e4531fd08cf3",
"text": "This paper describes some new developments in the application of power electronics to automotive power generation and control. A new load-matching technique is introduced that uses a simple switched-mode rectifier to achieve dramatic increases in peak and average power output from a conventional Lundell alternator, along with substantial improvements in efficiency. Experimental results demonstrate these capability improvements. Additional performance and functionality improvements of particular value for high-voltage (e.g., 42 V) alternators are also demonstrated. Tight load-dump transient suppression can be achieved using this new architecture. It is also shown that the alternator system can be used to implement jump charging (the charging of the high-voltage system battery from a low-voltage source). Dual-output extensions of the technique (e.g., 42/14 V) are also introduced. The new technology preserves the simplicity and low cost of conventional alternator designs, and can be implemented within the existing manufacturing infrastructure.",
"title": ""
},
{
"docid": "b09cacfb35cd02f6a5345c206347c6ae",
"text": "Facebook, as one of the most popular social networking sites among college students, provides a platform for people to manage others' impressions of them. People tend to present themselves in a favorable way on their Facebook profile. This research examines the impact of using Facebook on people's perceptions of others' lives. It is argued that those with deeper involvement with Facebook will have different perceptions of others than those less involved due to two reasons. First, Facebook users tend to base judgment on examples easily recalled (the availability heuristic). Second, Facebook users tend to attribute the positive content presented on Facebook to others' personality, rather than situational factors (correspondence bias), especially for those they do not know personally. Questionnaires, including items measuring years of using Facebook, time spent on Facebook each week, number of people listed as their Facebook \"friends,\" and perceptions about others' lives, were completed by 425 undergraduate students taking classes across various academic disciplines at a state university in Utah. Surveys were collected during regular class period, except for two online classes where surveys were submitted online. The multivariate analysis indicated that those who have used Facebook longer agreed more that others were happier, and agreed less that life is fair, and those spending more time on Facebook each week agreed more that others were happier and had better lives. Furthermore, those that included more people whom they did not personally know as their Facebook \"friends\" agreed more that others had better lives.",
"title": ""
},
{
"docid": "23ffdf5e7797e7f01c6d57f1e5546026",
"text": "Classroom experiments that evaluate the effectiveness of educational technologies do not typically examine the effects of classroom contextual variables (e.g., out-of-software help-giving and external distractions). Yet these variables may influence students' instructional outcomes. In this paper, we introduce the Spatial Classroom Log Explorer (SPACLE): a prototype tool that facilitates the rapid discovery of relationships between within-software and out-of-software events. Unlike previous tools for retrospective analysis, SPACLE replays moment-by-moment analytics about student and teacher behaviors in their original spatial context. We present a data analysis workflow using SPACLE and demonstrate how this workflow can support causal discovery. We share the results of our initial replay analyses using SPACLE, which highlight the importance of considering spatial factors in the classroom when analyzing ITS log data. We also present the results of an investigation into the effects of student-teacher interactions on student learning in K-12 blended classrooms, using our workflow, which combines replay analysis with SPACLE and causal modeling. Our findings suggest that students' awareness of being monitored by their teachers may promote learning, and that \"gaming the system\" behaviors may extend outside of educational software use.",
"title": ""
},
{
"docid": "71819107f543aa2b20b070e322cf1bbb",
"text": "Despite the recent success of end-to-end learned representations, hand-crafted optical flow features are still widely used in video analysis tasks. To fill this gap, we propose TVNet, a novel end-to-end trainable neural network, to learn optical-flow-like features from data. TVNet subsumes a specific optical flow solver, the TV-L1 method, and is initialized by unfolding its optimization iterations as neural layers. TVNet can therefore be used directly without any extra learning. Moreover, it can be naturally concatenated with other task-specific networks to formulate an end-to-end architecture, thus making our method more efficient than current multi-stage approaches by avoiding the need to pre-compute and store features on disk. Finally, the parameters of the TVNet can be further fine-tuned by end-to-end training. This enables TVNet to learn richer and task-specific patterns beyond exact optical flow. Extensive experiments on two action recognition benchmarks verify the effectiveness of the proposed approach. Our TVNet achieves better accuracies than all compared methods, while being competitive with the fastest counterpart in terms of features extraction time.",
"title": ""
},
{
"docid": "857e9430ebc5cf6aad2737a0ce10941e",
"text": "Despite a long tradition of effectiveness in laboratory tests, normative messages have had mixed success in changing behavior in field contexts, with some studies showing boomerang effects. To test a theoretical account of this inconsistency, we conducted a field experiment in which normative messages were used to promote household energy conservation. As predicted, a descriptive normative message detailing average neighborhood usage produced either desirable energy savings or the undesirable boomerang effect, depending on whether households were already consuming at a low or high rate. Also as predicted, adding an injunctive message (conveying social approval or disapproval) eliminated the boomerang effect. The results offer an explanation for the mixed success of persuasive appeals based on social norms and suggest how such appeals should be properly crafted.",
"title": ""
},
{
"docid": "166ea8466f5debc7c09880ba17c819e1",
"text": "Lymphoepithelioma-like carcinoma (LELCA) of the urinary bladder is a rare variant of bladder cancer characterized by a malignant epithelial component densely infiltrated by lymphoid cells. It is characterized by indistinct cytoplasmic borders and a syncytial growth pattern. These neoplasms deserve recognition and attention, chiefly because they may be responsive to chemotherapy. We report on the clinicopathologic features of 13 cases of LELCA recorded since 1981. The chief complaint in all 13 patients was hematuria. Their ages ranged from 58 years to 82 years. All tumors were muscle invasive. A significant lymphocytic reaction was present in all of these tumors. There were three pure LELCA and six predominant LELCA with a concurrent transitional cell carcinoma (TCC). The remainder four cases had a focal LELCA component admixed with TCC. Immunohistochemistry showed LELCA to be reactive against epithelial membrane antigen and several cytokeratins (CKs; AE1/AE3, AE1, AE3, CK7, and CK8). CK20 and CD44v6 stained focally. The lymphocytic component was composed of a mixture of T and B cells intermingled with some dendritic cells and histiocytes. Latent membrane protein 1 (LMP1) immunostaining and in situ hybridization for Epstein-Barr virus were negative in all 13 cases. DNA ploidy of these tumors gave DNA histograms with diploid peaks (n=7) or non-diploid peaks (aneuploid or tetraploid; n=6). All patients with pure and 66% with predominant LELCA were alive, while all patients having focal LELCA died of disease. Our data suggest that pure and predominant LELCA of the bladder appear to be morphologically and clinically different from other bladder (undifferentiated and poorly differentiated conventional TCC) carcinomas and should be recognized as separate clinicopathological variants of TCC with heavy lymphocytic reaction relevant in patient management.",
"title": ""
},
{
"docid": "6d50ff00babb00d36a30fdc769091b7e",
"text": "The purpose of Advanced Driver Assistance Systems (ADAS) is that driver error will be reduced or even eliminated, and efficiency in traffic and transport is enhanced. The benefits of ADAS implementations are potentially considerable because of a significant decrease in human suffering, economical cost and pollution. However, there are also potential problems to be expected, since the task of driving a ordinary motor vehicle is changing in nature, in the direction of supervising a (partly) automated moving vehicle.",
"title": ""
},
{
"docid": "bb295b25353ecdf85a104ee5a928c313",
"text": "There is growing conviction that the future of computing depends on our ability to exploit big data on theWeb to enhance intelligent systems. This includes encyclopedic knowledge for factual details, common sense for human-like reasoning and natural language generation for smarter communication. With recent chatbots conceivably at the verge of passing the Turing Test, there are calls for more common sense oriented alternatives, e.g., the Winograd Schema Challenge. The Aristo QA system demonstrates the lack of common sense in current systems in answering fourth-grade science exam questions. On the language generation front, despite the progress in deep learning, current models are easily confused by subtle distinctions that may require linguistic common sense, e.g.quick food vs. fast food. These issues bear on tasks such as machine translation and should be addressed using common sense acquired from text. Mining common sense from massive amounts of data and applying it in intelligent systems, in several respects, appears to be the next frontier in computing. Our brief overview of the state of Commonsense Knowledge (CSK) in Machine Intelligence provides insights into CSK acquisition, CSK in natural language, applications of CSK and discussion of open issues. This paper provides a report of a tutorial at a recent conference with a brief survey of topics.",
"title": ""
},
{
"docid": "066eef8e511fac1f842c699f8efccd6b",
"text": "In this paper, we propose a new model that is capable of recognizing overlapping mentions. We introduce a novel notion of mention separators that can be effectively used to capture how mentions overlap with one another. On top of a novel multigraph representation that we introduce, we show that efficient and exact inference can still be performed. We present some theoretical analysis on the differences between our model and a recently proposed model for recognizing overlapping mentions, and discuss the possible implications of the differences. Through extensive empirical analysis on standard datasets, we demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "c207f2c0dfc1ecee332df70ec5810459",
"text": "Hierarchical organization-the recursive composition of sub-modules-is ubiquitous in biological networks, including neural, metabolic, ecological, and genetic regulatory networks, and in human-made systems, such as large organizations and the Internet. To date, most research on hierarchy in networks has been limited to quantifying this property. However, an open, important question in evolutionary biology is why hierarchical organization evolves in the first place. It has recently been shown that modularity evolves because of the presence of a cost for network connections. Here we investigate whether such connection costs also tend to cause a hierarchical organization of such modules. In computational simulations, we find that networks without a connection cost do not evolve to be hierarchical, even when the task has a hierarchical structure. However, with a connection cost, networks evolve to be both modular and hierarchical, and these networks exhibit higher overall performance and evolvability (i.e. faster adaptation to new environments). Additional analyses confirm that hierarchy independently improves adaptability after controlling for modularity. Overall, our results suggest that the same force-the cost of connections-promotes the evolution of both hierarchy and modularity, and that these properties are important drivers of network performance and adaptability. In addition to shedding light on the emergence of hierarchy across the many domains in which it appears, these findings will also accelerate future research into evolving more complex, intelligent computational brains in the fields of artificial intelligence and robotics.",
"title": ""
},
{
"docid": "37ed4c0703266525a7d62ca98dd65e0f",
"text": "Social cognition in humans is distinguished by psychological processes that allow us to make inferences about what is going on inside other people-their intentions, feelings, and thoughts. Some of these processes likely account for aspects of human social behavior that are unique, such as our culture and civilization. Most schemes divide social information processing into those processes that are relatively automatic and driven by the stimuli, versus those that are more deliberative and controlled, and sensitive to context and strategy. These distinctions are reflected in the neural structures that underlie social cognition, where there is a recent wealth of data primarily from functional neuroimaging. Here I provide a broad survey of the key abilities, processes, and ways in which to relate these to data from cognitive neuroscience.",
"title": ""
},
{
"docid": "98729fc6a6b95222e6a6a12aa9a7ded7",
"text": "What good is self-control? We incorporated a new measure of individual differences in self-control into two large investigations of a broad spectrum of behaviors. The new scale showed good internal consistency and retest reliability. Higher scores on self-control correlated with a higher grade point average, better adjustment (fewer reports of psychopathology, higher self-esteem), less binge eating and alcohol abuse, better relationships and interpersonal skills, secure attachment, and more optimal emotional responses. Tests for curvilinearity failed to indicate any drawbacks of so-called overcontrol, and the positive effects remained after controlling for social desirability. Low self-control is thus a significant risk factor for a broad range of personal and interpersonal problems.",
"title": ""
},
{
"docid": "5c64b25ae243ad010ee15e39e5d824e3",
"text": "This paper examines the work and interactions between camera operators and a vision mixer during an ice hockey match, and presents an interaction analysis using video data. We analyze video-mediated indexical gestures in the collaborative production of live sport on television between distributed team members. The findings demonstrate how video forms the topic, resource and product of collabora-tion: whilst it shapes the nature of the work (editing), it is simultaneously also the primary resource for supporting mutual orientation and negotiating shot transitions between remote participants (co-ordination), as well as its end prod-uct (broadcast). Our analysis of current professional activi-ties is used to develop implications for the design of future services for live collaborative video production.",
"title": ""
},
{
"docid": "ec85dafd4c0f04d3e573941b397b3f10",
"text": "The future of communication resides in Internet of Things, which is certainly the most sought after technology today. The applications of IoT are diverse, and range from ordinary voice recognition to critical space programmes. Recently, a lot of efforts have been made to design operating systems for IoT devices because neither traditional Windows/Unix, nor the existing Real Time Operating Systems are able to meet the demands of heterogeneous IoT applications. This paper presents a survey of operating systems that have been designed so far for IoT devices and also outlines a generic framework that brings out the essential features desired in an OS tailored for IoT devices.",
"title": ""
},
{
"docid": "5ee5f4450ecc89b684e90e7b846f8365",
"text": "This study scrutinizes the predictive relationship between three referral channels, search engine, social medial, and third-party advertising, and online consumer search and purchase. The results derived from vector autoregressive models suggest that the three channels have differential predictive relationship with sale measures. The predictive power of the three channels is also considerably different in referring customers among competing online shopping websites. In the short run, referrals from all three channels have a significantly positive predictive relationship with the focal online store’s sales amount and volume, but having no significant relationship with conversion. Only referrals from search engines to the rival website have a significantly negative predictive relationship with the focal website’s sales and volume. In the long run, referrals from all three channels have a significant positive predictive relationship with the focal website’s sales, conversion and sales volume. In contrast, referrals from all three channels to the competing online stores have a significant negative predictive relationship with the focal website’s sales, conversion and sales volume. Our results also show that search engine referrals explains the most of the variance in sales, while social media referrals explains the most of the variance in conversion and third party ads referrals explains the most of the variance in sales volume. This study offers new insights for IT and marketing practitioners in respect to better and deeper understanding on marketing attribution and how different channels perform in order to optimize the media mix and overall performance.",
"title": ""
},
{
"docid": "1615e93f027c6f6f400ce1cc7a1bb8aa",
"text": "In the recent years, we have witnessed the rapid adoption of social media platforms, such as Twitter, Facebook and YouTube, and their use as part of the everyday life of billions of people worldwide. Given the habit of people to use these platforms to share thoughts, daily activities and experiences it is not surprising that the amount of user generated content has reached unprecedented levels, with a substantial part of that content being related to real-world events, i.e. actions or occurrences taking place at a certain time and location. Figure 1 illustrates three main categories of events along with characteristic photos from Flickr for each of them: a) news-related events, e.g. demonstrations, riots, public speeches, natural disasters, terrorist attacks, b) entertainment events, e.g. sports, music, live shows, exhibitions, festivals, and c) personal events, e.g. wedding, birthday, graduation ceremonies, vacations, and going out. Depending on the event, different types of multimedia and social media platform are more popular. For instance, news-related events are extensively published in the form of text updates, images and videos on Twitter and YouTube, entertainment and social events are often captured in the form of images and videos and shared on Flickr and YouTube, while personal events are mostly represented by images that are shared on Facebook and Instagram. Given the key role of events in our life, the task of annotating and organizing social media content around them is of crucial importance for ensuring real-time and future access to multimedia content about an event of interest. However, the vast amount of noisy and non-informative social media posts, in conjunction with their large scale, makes that task very challenging. For instance, in the case of popular events that are covered live on Twitter, there are often millions of posts referring to a single event, as in the case of the World Cup Final 2014 between Brazil and Germany, which produced approximately 32.1 million tweets with a rate of 618,725 tweets per minute. Processing, aggregating and selecting the most informative, entertaining and representative tweets among such a large dataset is a very challenging multimedia retrieval problem. In other",
"title": ""
},
{
"docid": "7203aedbdb4c3b42c34dafdefe082b63",
"text": "We discuss silver ink as a low cost option for manufacturing RFID tags at ultra high frequencies (UHF). An analysis of two different RFID tag antennas, made from silver ink and from copper, is presented at UHF. The influence of each material on tag performance is discussed along with simulation results and measurement data which are in good agreement. It is observed that RFID tag performance depends both on material and on the shape of the antenna. For some classes of antennas, silver ink with higher conductivity performs as well as copper, which makes it an attractive low cost alternative material to copper for RFID tag antennas.",
"title": ""
},
{
"docid": "e35194cb3fdd3edee6eac35c45b2da83",
"text": "The availability of high-resolution Digital Surface Models of coastal environments is of increasing interest for scientists involved in the study of the coastal system processes. Among the range of terrestrial and aerial methods available to produce such a dataset, this study tests the utility of the Structure from Motion (SfM) approach to low-altitude aerial imageries collected by Unmanned Aerial Vehicle (UAV). The SfM image-based approach was selected whilst searching for a rapid, inexpensive, and highly automated method, able to produce 3D information from unstructured aerial images. In particular, it was used to generate a dense point cloud and successively a high-resolution Digital Surface Models (DSM) of a beach dune system in Marina di Ravenna (Italy). The quality of the elevation dataset produced by the UAV-SfM was initially evaluated by comparison with point cloud generated by a Terrestrial Laser Scanning (TLS) surveys. Such a comparison served to highlight an average difference in the vertical values of 0.05 m (RMS = 0.19 m). However, although the points cloud comparison is the best approach to investigate the absolute or relative correspondence between UAV and TLS OPEN ACCESS Remote Sens. 2013, 5 6881 methods, the assessment of geomorphic features is usually based on multi-temporal surfaces analysis, where an interpolation process is required. DSMs were therefore generated from UAV and TLS points clouds and vertical absolute accuracies assessed by comparison with a Global Navigation Satellite System (GNSS) survey. The vertical comparison of UAV and TLS DSMs with respect to GNSS measurements pointed out an average distance at cm-level (RMS = 0.011 m). The successive point by point direct comparison between UAV and TLS elevations show a very small average distance, 0.015 m, with RMS = 0.220 m. Larger values are encountered in areas where sudden changes in topography are present. The UAV-based approach was demonstrated to be a straightforward one and accuracy of the vertical dataset was comparable with results obtained by TLS technology.",
"title": ""
}
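A small illustration of the vertical-comparison statistics reported above (mean difference and RMS between co-registered DSMs); the elevation grids and noise levels are synthetic placeholders, not the Marina di Ravenna survey data.

import numpy as np

# Two co-registered elevation grids standing in for the UAV-SfM and TLS DSMs.
dsm_uav = np.random.default_rng(1).normal(10.0, 0.5, size=(500, 500))
dsm_tls = dsm_uav + np.random.default_rng(2).normal(0.05, 0.2, size=(500, 500))

diff = dsm_uav - dsm_tls
mean_diff = diff.mean()                 # average vertical difference
rms = np.sqrt(np.mean(diff ** 2))       # root-mean-square of the differences
print(f"mean difference = {mean_diff:.3f} m, RMS = {rms:.3f} m")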
] |
scidocsrr
|
240a10a3748a237c47aff9013c7e3949
|
Examining Spectral Reflectance Saturation in Landsat Imagery and Corresponding Solutions to Improve Forest Aboveground Biomass Estimation
|
[
{
"docid": "59b10765f9125e9c38858af901a39cc7",
"text": "--------__------------------------------------__---------------",
"title": ""
},
{
"docid": "9a4ca8c02ffb45013115124011e7417e",
"text": "Now, we come to offer you the right catalogues of book to open. multisensor data fusion a review of the state of the art is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.",
"title": ""
}
] |
[
{
"docid": "edeefde21bbe1ace9a34a0ebe7bc6864",
"text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.",
"title": ""
},
{
"docid": "74287743f75368623da74e716ae8e263",
"text": "Organizations increasingly use social media and especially social networking sites (SNS) to support their marketing agenda, enhance collaboration, and develop new capabilities. However, the success of SNS initiatives is largely dependent on sustainable user participation. In this study, we argue that the continuance intentions of users may be gendersensitive. To theorize and investigate gender differences in the determinants of continuance intentions, this study draws on the expectation-confirmation model, the uses and gratification theory, as well as the self-construal theory and its extensions. Our survey of 488 users shows that while both men and women are motivated by the ability to selfenhance, there are some gender differences. Specifically, while women are mainly driven by relational uses, such as maintaining close ties and getting access to social information on close and distant networks, men base their continuance intentions on their ability to gain information of a general nature. Our research makes several contributions to the discourse in strategic information systems literature concerning the use of social media by individuals and organizations. Theoretically, it expands the understanding of the phenomenon of continuance intentions and specifically the role of the gender differences in its determinants. On a practical level, it delivers insights for SNS providers and marketers into how satisfaction and continuance intentions of male and female SNS users can be differentially promoted. Furthermore, as organizations increasingly rely on corporate social networks to foster collaboration and innovation, our insights deliver initial recommendations on how organizational social media initiatives can be supported with regard to gender-based differences. 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b6ec4629a39097178895762a35e0c7eb",
"text": "In this paper, we dedicate to the topic of aspect ranking, which aims to automatically identify important product aspects from online consumer reviews. The important aspects are identified according to two observations: (a) the important aspects of a product are usually commented by a large number of consumers; and (b) consumers’ opinions on the important aspects greatly influence their overall opinions on the product. In particular, given consumer reviews of a product, we first identify the product aspects by a shallow dependency parser and determine consumers’ opinions on these aspects via a sentiment classifier. We then develop an aspect ranking algorithm to identify the important aspects by simultaneously considering the aspect frequency and the influence of consumers’ opinions given to each aspect on their overall opinions. The experimental results on 11 popular products in four domains demonstrate the effectiveness of our approach. We further apply the aspect ranking results to the application of documentlevel sentiment classification, and improve the performance significantly.",
"title": ""
},
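A toy, hypothetical sketch of the ranking idea described above: combine aspect frequency with the influence of aspect-level opinions on the overall opinion. The aspect names, the opinion matrix, and the equal 0.5/0.5 weighting are assumptions for illustration only, not the paper's actual algorithm or data.

import numpy as np

aspects = ["battery", "screen", "price", "camera"]
# opinions[i, j] in {-1, 0, +1}: reviewer i's sentiment on aspect j (0 = not mentioned)
opinions = np.array([[ 1, 0, -1, 1],
                     [ 1, 1,  0, 0],
                     [-1, 1, -1, 1],
                     [ 1, 0, -1, 0],
                     [ 1, 1,  0, 1]], dtype=float)
overall = np.array([1, 1, -1, 1, 1], dtype=float)    # overall rating per review

freq = (opinions != 0).mean(axis=0)                  # how often each aspect is mentioned
X = np.hstack([opinions, np.ones((len(overall), 1))])
w, *_ = np.linalg.lstsq(X, overall, rcond=None)      # influence of aspect opinions on overall
influence = np.abs(w[:-1])

score = 0.5 * freq / freq.sum() + 0.5 * influence / influence.sum()
for a, s in sorted(zip(aspects, score), key=lambda t: -t[1]):
    print(f"{a}: {s:.3f}")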
{
"docid": "5b43cce2027f1e5afbf7985ca2d4af1a",
"text": "With Internet delivery of video content surging to an unprecedented level, video has become one of the primary sources for online advertising. In this paper, we present VideoSense as a novel contextual in-video advertising system, which automatically associates the relevant video ads and seamlessly inserts the ads at the appropriate positions within each individual video. Unlike most video sites which treat video advertising as general text advertising by displaying video ads at the beginning or the end of a video or around a video, VideoSense aims to embed more contextually relevant ads at less intrusive positions within the video stream. Specifically, given a Web page containing an online video, VideoSense is able to extract the surrounding text related to this video, detect a set of candidate ad insertion positions based on video content discontinuity and attractiveness, select a list of relevant candidate ads according to multimodal relevance. To support contextual advertising, we formulate this task as a nonlinear 0-1 integer programming problem by maximizing contextual relevance while minimizing content intrusiveness at the same time. The experiments proved the effectiveness of VideoSense for online video service.",
"title": ""
},
{
"docid": "b5de3747c17f6913539b62377f9af5c4",
"text": "In this paper, we propose a novel embedding model, named ConvKB, for knowledge base completion. Our model ConvKB advances state-of-the-art models by employing a convolutional neural network, so that it can capture global relationships and transitional characteristics between entities and relations in knowledge bases. In ConvKB, each triple (head entity, relation, tail entity) is represented as a 3-column matrix where each column vector represents a triple element. This 3-column matrix is then fed to a convolution layer where multiple filters are operated on the matrix to generate different feature maps. These feature maps are then concatenated into a single feature vector representing the input triple. The feature vector is multiplied with a weight vector via a dot product to return a score. This score is then used to predict whether the triple is valid or not. Experiments show that ConvKB obtains better link prediction and triple classification results than previous state-of-the-art models on benchmark datasets WN18RR, FB15k-237, WN11 and FB13. We further apply our ConvKB to a search personalization problem which aims to tailor the search results to each specific user based on the user’s personal interests and preferences. In particular, we model the potential relationship between the submitted query, the user and the search result (i.e., document) as a triple (query, user, document) on which the ConvKB is able to work. Experimental results on query logs from a commercial web search engine show that ConvKB achieves better performances than the standard ranker as well as strong search personalization baselines.",
"title": ""
},
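A minimal NumPy sketch of a ConvKB-style scoring function as described above: the triple as a d x 3 matrix, filters convolved over its rows, feature maps concatenated and dotted with a weight vector. The 1 x 3 filter shape, ReLU nonlinearity, dimensions, and random parameters are illustrative assumptions, not trained values.

import numpy as np

def convkb_score(h, r, t, filters, w):
    """Toy ConvKB-style score for one triple; parameters are random placeholders."""
    A = np.stack([h, r, t], axis=1)                 # d x 3 matrix for the triple
    feature_maps = [np.maximum(A @ f, 0.0) for f in filters]   # each 1x3 filter over rows
    v = np.concatenate(feature_maps)                # single feature vector
    return float(v @ w)                             # dot product with weight vector

d, n_filters = 8, 4
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, d))
filters = rng.normal(size=(n_filters, 3))
w = rng.normal(size=d * n_filters)
print("triple score:", round(convkb_score(h, r, t, filters, w), 3))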
{
"docid": "32a2bfb7a26631f435f9cb5d825d8da2",
"text": "An important aspect for the task of grammatical error correction (GEC) that has not yet been adequately explored is adaptation based on the native language (L1) of writers, despite the marked influences of L1 on second language (L2) writing. In this paper, we adapt a neural network joint model (NNJM) using L1-specific learner text and integrate it into a statistical machine translation (SMT) based GEC system. Specifically, we train an NNJM on general learner text (not L1-specific) and subsequently train on L1-specific data using a Kullback-Leibler divergence regularized objective function in order to preserve generalization of the model. We incorporate this adapted NNJM as a feature in an SMT-based English GEC system and show that adaptation achieves significant F0.5 score gains on English texts written by L1 Chinese, Russian, and Spanish writers.",
"title": ""
},
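A hedged sketch of the kind of KL-regularized adaptation objective described above: cross-entropy on L1-specific data plus a KL term that keeps the adapted distribution close to the general model's output. The shapes, softmax outputs, and lambda value are assumptions for illustration.

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def adapted_loss(logits_adapted, logits_general, labels, lam=0.5):
    """Cross-entropy on L1-specific examples plus KL(general || adapted)."""
    p_a = softmax(logits_adapted)
    p_g = softmax(logits_general)
    n = len(labels)
    ce = -np.log(p_a[np.arange(n), labels] + 1e-12).mean()
    kl = np.sum(p_g * (np.log(p_g + 1e-12) - np.log(p_a + 1e-12)), axis=1).mean()
    return ce + lam * kl

rng = np.random.default_rng(0)
logits_a = rng.normal(size=(4, 10))    # adapted model outputs on L1-specific data
logits_g = rng.normal(size=(4, 10))    # general (unadapted) model outputs
labels = np.array([1, 3, 5, 7])
print("regularized loss:", round(adapted_loss(logits_a, logits_g, labels), 3))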
{
"docid": "15ada8f138d89c52737cfb99d73219f0",
"text": "A dual-band circularly polarized stacked annular-ring patch antenna is presented in this letter. This antenna operates at both the GPS L1 frequency of 1575 MHz and L2 frequency of 1227 MHz, whose frequency ratio is about 1.28. The proposed antenna is formed by two concentric annular-ring patches that are placed on opposite sides of a substrate. Wide axial-ratio bandwidths (larger than 2%), determined by 3-dB axial ratio, are achieved at both bands. The measured gains at 1227 and 1575 MHz are about 6 and 7 dBi, respectively, with the loss of substrate taken into consideration. Both simulated and measured results are presented. The method of varying frequency ratio is also discussed.",
"title": ""
},
{
"docid": "8e794530be184686a49e5ced6ac6521d",
"text": "A key feature of the immune system is its ability to induce protective immunity against pathogens while maintaining tolerance towards self and innocuous environmental antigens. Recent evidence suggests that by guiding cells to and within lymphoid organs, CC-chemokine receptor 7 (CCR7) essentially contributes to both immunity and tolerance. This receptor is involved in organizing thymic architecture and function, lymph-node homing of naive and regulatory T cells via high endothelial venules, as well as steady state and inflammation-induced lymph-node-bound migration of dendritic cells via afferent lymphatics. Here, we focus on the cellular and molecular mechanisms that enable CCR7 and its two ligands, CCL19 and CCL21, to balance immunity and tolerance.",
"title": ""
},
{
"docid": "eb6823bcc7e01dbdc9a21388bde0ce4f",
"text": "This paper extends previous research on two approaches to human-centred automation: (1) intermediate levels of automation (LOAs) for maintaining operator involvement in complex systems control and facilitating situation awareness; and (2) adaptive automation (AA) for managing operator workload through dynamic control allocations between the human and machine over time. Some empirical research has been conducted to examine LOA and AA independently, with the objective of detailing a theory of human-centred automation. Unfortunately, no previous work has studied the interaction of these two approaches, nor has any research attempted to systematically determine which LOAs should be used in adaptive systems and how certain types of dynamic function allocations should be scheduled over time. The present research briefly reviews the theory of humancentred automation and LOA and AA approaches. Building on this background, an initial study was presented that attempts to address the conjuncture of these two approaches to human-centred automation. An experiment was conducted in which a dual-task scenario was used to assess the performance, SA and workload effects of low, intermediate and high LOAs, which were dynamically allocated (as part of an AA strategy) during manual system control for various cycle times comprising 20, 40 and 60% of task time. The LOA and automation allocation cycle time (AACT) combinations were compared to completely manual control and fully automated control of a dynamic control task performed in conjunction with an embedded secondary monitoring task. Results revealed LOA to be the driving factor in determining primary task performance and SA. Low-level automation produced superior performance and intermediate LOAs facilitated higher SA, but this was not associated with improved performance or reduced workload. The AACT was the driving factor in perceptions of primary task workload and secondary task performance. When a greater percentage of primary task time was automated, operator perceptual resources were freed-up and monitoring performance on the secondary task improved. Longer automation cycle times than have previously been studied may have benefits for overall human–machine system performance. The combined effect of LOA and AA on all measures did not appear to be ‘additive’ in nature. That is, the LOA producing the best performance (low level automation) did not do so at the AACT, which produced superior performance (maximum cycle time). In general, the results are supportive of intermediate LOAs and AA as approaches to human-centred automation, but each appears to provide different benefits to human–machine system performance. This work provides additional information for a developing theory of human-centred automation. Theor. Issues in Ergon. Sci., 2003, 1–40, preview article",
"title": ""
},
{
"docid": "2fe1ed0f57e073372e4145121e87d7c6",
"text": "Information visualization (InfoVis), the study of transforming data, information, and knowledge into interactive visual representations, is very important to users because it provides mental models of information. The boom in big data analytics has triggered broad use of InfoVis in a variety of domains, ranging from finance to sports to politics. In this paper, we present a comprehensive survey and key insights into this fast-rising area. The research on InfoVis is organized into a taxonomy that contains four main categories, namely empirical methodologies, user interactions, visualization frameworks, and applications, which are each described in terms of their major goals, fundamental principles, recent trends, and state-of-the-art approaches. At the conclusion of this survey, we identify existing technical challenges and propose directions for future research.",
"title": ""
},
{
"docid": "a28c252f9f3e96869c72e6e41146b5bc",
"text": "Technically, a feature represents a distinguishing property, a recognizable measurement, and a functional component obtained from a section of a pattern. Extracted features are meant to minimize the loss of important information embedded in the signal. In addition, they also simplify the amount of resources needed to describe a huge set of data accurately. This is necessary to minimize the complexity of implementation, to reduce the cost of information processing, and to cancel the potential need to compress the information. More recently, a variety of methods have been widely used to extract the features from EEG signals, among these methods are time frequency distributions (TFD), fast fourier transform (FFT), eigenvector methods (EM), wavelet transform (WT), and auto regressive method (ARM), and so on. In general, the analysis of EEG signal has been the subject of several studies, because of its ability to yield an objective mode of recording brain stimulation which is widely used in brain-computer interface researches with application in medical diagnosis and rehabilitation engineering. The purposes of this paper, therefore, shall be discussing some conventional methods of EEG feature extraction methods, comparing their performances for specific task, and finally, recommending the most suitable method for feature extraction based on performance.",
"title": ""
},
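A small example of one of the feature-extraction routes mentioned above (FFT-based band power); the sampling rate, band edges, and synthetic signal are assumptions, and real pipelines typically add windowing and artifact handling.

import numpy as np

def band_powers(signal, fs, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
    """Average spectral power in the classic delta/theta/alpha/beta bands."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands]

fs = 256                                    # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
print("band powers:", np.round(band_powers(eeg, fs), 3))   # alpha band should dominate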
{
"docid": "040329beb0f4688ced46d87a51dac169",
"text": "We present a characterization methodology for fast direct measurement of the charge accumulated on Floating Gate (FG) transistors of Flash EEPROM cells. Using a Scanning Electron Microscope (SEM) in Passive Voltage Contrast (PVC) mode we were able to distinguish between '0' and '1' bit values stored in each memory cell. Moreover, it was possible to characterize the remaining charge on the FG; thus making this technique valuable for Failure Analysis applications for data retention measurements in Flash EEPROM. The technique is at least two orders of magnitude faster than state-of-the-art Scanning Probe Microscopy (SPM) methods. Only a relatively simple backside sample preparation is necessary for accessing the FG of memory transistors. The technique presented was successfully implemented on a 0.35 μm technology node microcontroller and a 0.21 μm smart card integrated circuit. We also show the ease of such technique to cover all cells of a memory (using intrinsic features of SEM) and to automate memory cells characterization using standard image processing technique.",
"title": ""
},
{
"docid": "067e24b29aae26865c858d6b8e60b135",
"text": "In this paper, we present an optimization path of stress memorization technique (SMT) for 45nm node and below using a nitride capping layer. We demonstrate that the understanding of coupling between nitride properties, dopant activation and poly-silicon gate mechanical stress allows enhancing nMOS performance by 7% without pMOS degradation. In contrast to previously reported works on SMT (Chen et al., 2004) - (Singh et al., 2005), a low-cost process compatible with consumer electronics requirements has been successfully developed",
"title": ""
},
{
"docid": "715fda02bad1633be9097cc0a0e68c8d",
"text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules. These new measures are computed considering uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.",
"title": ""
},
{
"docid": "26b13a3c03014fc910ed973c264e4c9d",
"text": "Deep convolutional neural networks (CNNs) have shown great potential for numerous real-world machine learning applications, but performing inference in large CNNs in real-time remains a challenge. We have previously demonstrated that traditional CNNs can be converted into deep spiking neural networks (SNNs), which exhibit similar accuracy while reducing both latency and computational load as a consequence of their data-driven, event-based style of computing. Here we provide a novel theory that explains why this conversion is successful, and derive from it several new tools to convert a larger and more powerful class of deep networks into SNNs. We identify the main sources of approximation errors in previous conversion methods, and propose simple mechanisms to fix these issues. Furthermore, we develop spiking implementations of common CNN operations such as max-pooling, softmax, and batch-normalization, which allow almost loss-less conversion of arbitrary CNN architectures into the spiking domain. Empirical evaluation of different network architectures on the MNIST and CIFAR10 benchmarks leads to the best SNN results reported to date.",
"title": ""
},
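A minimal sketch of the rate-coding idea behind CNN-to-SNN conversion discussed above: integrate-and-fire neurons driven by a constant input approximately reproduce ReLU activations (clipped at one spike per timestep), which is one reason activation/weight normalization matters. Weights, inputs, threshold, and simulation length are illustrative assumptions, not the paper's conversion procedure.

import numpy as np

def if_layer_rates(x, W, b, T=1000, v_thresh=1.0):
    """Simulate one layer of integrate-and-fire neurons for T timesteps."""
    a = W @ x + b                       # analog activation being approximated
    v = np.zeros_like(a)
    spikes = np.zeros_like(a)
    for _ in range(T):
        v += a                          # constant input current each step
        fired = v >= v_thresh
        spikes += fired
        v[fired] -= v_thresh            # reset by subtraction
    return spikes / T                   # firing rate per timestep

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=5)
W = rng.uniform(-0.3, 0.3, size=(3, 5))   # small weights keep activations below 1
b = np.zeros(3)
print("ReLU activations:", np.round(np.maximum(W @ x + b, 0.0), 3))
print("IF firing rates :", np.round(if_layer_rates(x, W, b), 3))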
{
"docid": "82119f5c85eaa2c4a76b2c7b0561375c",
"text": "A system is described that integrates vision and tactile sensing in a robotics environment to perform object recognition tasks. It uses multiple sensor systems (active touch and passive stereo vision) to compute three dimensional primitives that can be matched against a model data base of complex curved surface objects containing holes and cavities. The low level sensing elements provide local surface and feature matches which arc constrained by relational criteria embedded in the models. Once a model has been invoked, a verification procedure establishes confidence measures for a correct recognition. The three dimen* sional nature of the sensed data makes the matching process more robust as does the system's ability to sense visually occluded areas with touch. The model is hierarchic in nature and allows matching at different levels to provide support or inhibition for recognition. 1. INTRODUCTION Robotic systems are being designed and built to perform complex tasks such as object recognition, grasping, parts manipulation, inspection and measurement. In the case of object recognition, many systems have been designed that have tried to exploit a single sensing modality [1,2,3,4,5,6]. Single sensor systems are necessarily limited in their power. The approach described here to overcome the inherent limitations of a single sensing modality is to integrate multiple sensing modalities (passive stereo vision and active tactile sensing) for object recognition. The advantages of multiple sensory systems in a task like this are many. Multiple sensor systems supply redundant and complementary kinds of data that can be integrated to create a more coherent understanding of a scene. The inclusion of multiple sensing systems is becoming more apparent as research continues in distributed systems and parallel approaches to problem solving. The redundancy and support for a hypothesis that comes from more than one sensing subsystem is important in establishing confidence measures during a recognition process, just as the disagreement between two sensors will inhibit a hypothesis and point to possible sensing or reasoning error. The complementary nature of these sensors allows more powerful matching primitives to be used. The primitives that are the outcome of sensing with these complementary sensors are throe dimensional in nature, providing stronger invariants and a more natural way to recognize objects which are also three dimensional in nature [7].",
"title": ""
},
{
"docid": "ed22fe0d13d4450005abe653f41df2c0",
"text": "Polycystic ovary syndrome (PCOS) is a complex endocrine disorder affecting 5-10 % of women of reproductive age. It generally manifests with oligo/anovulatory cycles, hirsutism and polycystic ovaries, together with a considerable prevalence of insulin resistance. Although the aetiology of the syndrome is not completely understood yet, PCOS is considered a multifactorial disorder with various genetic, endocrine and environmental abnormalities. Moreover, PCOS patients have a higher risk of metabolic and cardiovascular diseases and their related morbidity, if compared to the general population.",
"title": ""
},
{
"docid": "d07281bab772b6ba613f9526d418661e",
"text": "GSM (Global Services of Mobile Communications) 1800 licenses were granted in the beginning of the 2000’s in Turkey. Especially in the installation phase of the wireless telecom services, fraud usage can be an important source of revenue loss. Fraud can be defined as a dishonest or illegal use of services, with the intention to avoid service charges. Fraud detection is the name of the activities to identify unauthorized usage and prevent losses for the mobile network operators’. Mobile phone user’s intentions may be predicted by the call detail records (CDRs) by using data mining (DM) techniques. This study compares various data mining techniques to obtain the best practical solution for the telecom fraud detection and offers the Adaptive Neuro Fuzzy Inference (ANFIS) method as a means to efficient fraud detection. In the test run, shown that ANFIS has provided sensitivity of 97% and specificity of 99%, where it classified 98.33% of the instances correctly.",
"title": ""
},
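For reference, figures of this kind relate to a binary confusion matrix as in the tiny sketch below; the counts are made up, and only the formulas for sensitivity, specificity, and accuracy are the point.

# Hypothetical confusion-matrix counts for a fraud detector.
tp, fn, tn, fp = 97, 3, 99, 1
sensitivity = tp / (tp + fn)      # fraction of fraudulent cases caught
specificity = tn / (tn + fp)      # fraction of legitimate cases passed
accuracy = (tp + tn) / (tp + fn + tn + fp)
print(sensitivity, specificity, accuracy)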
{
"docid": "0e2a2a32923d8e9fa5779e80e6090dba",
"text": "The most powerful and common approach to countering the threats to network / information security is encryption [1]. Even though it is very powerful, the cryptanalysts are very intelligent and they were working day and night to break the ciphers. To make a stronger cipher it is recommended that to use: More stronger and complicated encryption algorithms, Keys with more number of bits (Longer keys), larger block size as input to process, use authentication and confidentiality and secure transmission of keys. It is for sure that if we follow all the mentioned principles we can make a very stronger cipher. With this we have the following problems: It is a time consuming process for both encryption and decryption, It is difficult for the crypt analyzer to analyze the problem. Also suffers with the problems in the existing system. The main objective of this paper is to solve all these problems and to bring the revolution in the Network security with a new substitution technique [3] is ‘color substitution technique’ and named as a “Play color cipher”.",
"title": ""
}
] |
scidocsrr
|
1e0b95ca31bb557a980e9560c4e479c5
|
Trilinear Tensor: The Fundamental Construct of Multiple-view Geometry and Its Applications
|
[
{
"docid": "5aa5ebf7727ea1b5dcf4d8f74b13cb29",
"text": "Visual object recognition requires the matching of an image with a set of models stored in memory. In this paper, we propose an approach to recognition in which a 3-D object is represented by the linear combination of 2-D images of the object. IfJLk{M1,.” .Mk} is the set of pictures representing a given object and P is the 2-D image of an object to be recognized, then P is considered to be an instance of M if P= C~=,aiMi for some constants (pi. We show that this approach handles correctly rigid 3-D transformations of objects with sharp as well as smooth boundaries and can also handle nonrigid transformations. The paper is divided into two parts. In the first part, we show that the variety of views depicting the same object under different transformations can often be expressed as the linear combinations of a small number of views. In the second part, we suggest how this linear combination property may be used in the recognition process.",
"title": ""
}
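A minimal NumPy sketch of the linear-combination test described above: solve for coefficients that express a new view P in terms of stored views M1..Mk and inspect the residual. The "images" here are random vectors standing in for real flattened views.

import numpy as np

rng = np.random.default_rng(0)
n_pixels, k = 1000, 3
M = rng.normal(size=(n_pixels, k))                   # columns are the stored views
a_true = np.array([0.5, -1.0, 2.0])
P = M @ a_true + 0.01 * rng.normal(size=n_pixels)    # novel view of the same object

a, *_ = np.linalg.lstsq(M, P, rcond=None)            # best-fitting coefficients
residual = np.linalg.norm(M @ a - P) / np.linalg.norm(P)
print("recovered coefficients:", np.round(a, 3), "relative residual:", round(residual, 4))
# A small residual supports treating P as an instance of the stored model.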
] |
[
{
"docid": "2476c8b7f6fe148ab20c29e7f59f5b23",
"text": "A high temperature, wire-bondless power electronics module with a double-sided cooling capability is proposed and successfully fabricated. In this module, a low-temperature co-fired ceramic (LTCC) substrate was used as the dielectric and chip carrier. Conducting vias were created on the LTCC carrier to realize the interconnection. The absent of a base plate reduced the overall thermal resistance and also improved the fatigue life by eliminating a large-area solder layer. Nano silver paste was used to attach power devices to the DBC substrate as well as to pattern the gate connection. Finite element simulations were used to compare the thermal performance to several reported double-sided power modules. Electrical measurements of a SiC MOSFET and SiC diode switching position demonstrated the functionality of the module.",
"title": ""
},
{
"docid": "65ed76ddd6f7fd0aea717d2e2643dd16",
"text": "In semi-supervised learning, a number of labeled examples are usually required for training an initial weakly useful predictor which is in turn used for exploiting the unlabeled examples. However, in many real-world applications there may exist very few labeled training examples, which makes the weakly useful predictor difficult to generate, and therefore these semisupervised learning methods cannot be applied. This paper proposes a method working under a two-view setting. By taking advantages of the correlations between the views using canonical component analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "8e0b61e82179cc39b4df3d06448a3d14",
"text": "The antibacterial activity and antioxidant effect of the compounds α-terpineol, linalool, eucalyptol and α-pinene obtained from essential oils (EOs), against pathogenic and spoilage forming bacteria were determined. The antibacterial activities of these compounds were observed in vitro on four Gram-negative and three Gram-positive strains. S. putrefaciens was the most resistant bacteria to all tested components, with MIC values of 2% or higher, whereas E. coli O157:H7 was the most sensitive strain among the tested bacteria. Eucalyptol extended the lag phase of S. Typhimurium, E. coli O157:H7 and S. aureus at the concentrations of 0.7%, 0.6% and 1%, respectively. In vitro cell growth experiments showed the tested compounds had toxic effects on all bacterial species with different level of potency. Synergistic and additive effects were observed at least one dose pair of combination against S. Typhimurium, E. coli O157:H7 and S. aureus, however antagonistic effects were not found in these combinations. The results of this first study are encouraging for further investigations on mechanisms of antimicrobial activity of these EO components.",
"title": ""
},
{
"docid": "204ad3064d559c345caa2c6d1a140582",
"text": "In this paper, a face recognition method based on Convolution Neural Network (CNN) is presented. This network consists of three convolution layers, two pooling layers, two full-connected layers and one Softmax regression layer. Stochastic gradient descent algorithm is used to train the feature extractor and the classifier, which can extract the facial features and classify them automatically. The Dropout method is used to solve the over-fitting problem. The Convolution Architecture For Feature Extraction framework (Caffe) is used during the training and testing process. The face recognition rate of the ORL face database and AR face database based on this network is 99.82% and 99.78%.",
"title": ""
},
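A hedged PyTorch sketch loosely following the described layout (three convolution layers, two pooling layers, two fully connected layers, dropout, a softmax output, and SGD training). The input size, channel widths, class count, and learning rate are assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class FaceCNN(nn.Module):
    """Three conv layers, two pooling layers, two FC layers, dropout, softmax head."""
    def __init__(self, n_classes=40):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 16 -> 8
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Dropout(0.5),                        # over-fitting control
            nn.Linear(128, n_classes),              # softmax is applied inside the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = FaceCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.01)   # stochastic gradient descent
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(8, 1, 32, 32)                        # assumed 32x32 grayscale faces
y = torch.randint(0, 40, (8,))
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print("training loss:", float(loss))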
{
"docid": "c8948a93e138ca0ac8cae3247dc9c81a",
"text": "Sharpness is an important determinant in visual assessment of image quality. The human visual system is able to effortlessly detect blur and evaluate sharpness of visual images, but the underlying mechanism is not fully understood. Existing blur/sharpness evaluation algorithms are mostly based on edge width, local gradient, or energy reduction of global/local high frequency content. Here we understand the subject from a different perspective, where sharpness is identified as strong local phase coherence (LPC) near distinctive image features evaluated in the complex wavelet transform domain. Previous LPC computation is restricted to be applied to complex coefficients spread in three consecutive dyadic scales in the scale-space. Here we propose a flexible framework that allows for LPC computation in arbitrary fractional scales. We then develop a new sharpness assessment algorithm without referencing the original image. We use four subject-rated publicly available image databases to test the proposed algorithm, which demonstrates competitive performance when compared with state-of-the-art algorithms.",
"title": ""
},
{
"docid": "34bd41f7384d6ee4d882a39aec167b3e",
"text": "This paper presents a robust feedback controller for ball and beam system (BBS). The BBS is a nonlinear system in which a ball has to be balanced on a particular beam position. The proposed nonlinear controller designed for the BBS is based upon Backstepping control technique which guarantees the boundedness of tracking error. To tackle the unknown disturbances, an external disturbance estimator (EDE) has been employed. The stability analysis of the overall closed loop robust control system has been worked out in the sense of Lyapunov theory. Finally, the simulation studies have been done to demonstrate the suitability of proposed scheme.",
"title": ""
},
{
"docid": "bc8950644ded24618a65c4fcef302044",
"text": "Child maltreatment is a pervasive problem in our society that has long-term detrimental consequences to the development of the affected child such as future brain growth and functioning. In this paper, we surveyed empirical evidence on the neuropsychological effects of child maltreatment, with a special emphasis on emotional, behavioral, and cognitive process–response difficulties experienced by maltreated children. The alteration of the biochemical stress response system in the brain that changes an individual’s ability to respond efficiently and efficaciously to future stressors is conceptualized as the traumatic stress response. Vulnerable brain regions include the hypothalamic–pituitary–adrenal axis, the amygdala, the hippocampus, and prefrontal cortex and are linked to children’s compromised ability to process both emotionally-laden and neutral stimuli in the future. It is suggested that information must be garnered from varied literatures to conceptualize a research framework for the traumatic stress response in maltreated children. This research framework suggests an altered developmental trajectory of information processing and emotional dysregulation, though much debate still exists surrounding the correlational nature of empirical studies, the potential of resiliency following childhood trauma, and the extent to which early interventions may facilitate recovery.",
"title": ""
},
{
"docid": "f4baeef21537029511a59edbbe7f2741",
"text": "Software testing requires the use of a model to guide such efforts as test selection and test verification. Often, such models are implicit, existing only in the head of a human tester, applying test inputs in an ad hoc fashion. The mental model testers build encapsulates application behavior, allowing testers to understand the application’s capabilities and more effectively test its range of possible behaviors. When these mental models are written down, they become sharable, reusable testing artifacts. In this case, testers are performing what has become to be known as model-based testing. Model-based testing has recently gained attention with the popularization of models (including UML) in software design and development. There are a number of models of software in use today, a few of which make good models for testing. This paper introduces model-based testing and discusses its tasks in general terms with finite state models (arguably the most popular software models) as examples. In addition, advantages, difficulties, and shortcoming of various model-based approaches are concisely presented. Finally, we close with a discussion of where model-based testing fits in the present and future of software engineering.",
"title": ""
},
{
"docid": "dc2e98a7fbaf8b3dedd6eaf34730a9d3",
"text": "Cultural issues impact on health care, including individuals’ health care behaviours and beliefs. Hasidic Jews, with their strict religious observance, emphasis on kabbalah, cultural insularity and spiritual leader, their Rebbe, comprise a distinct cultural group. The reviewed studies reveal that Hasidic Jews may seek spiritual healing and incorporate religion in their explanatory models of illness; illness attracts stigma; psychiatric patients’ symptomatology may have religious content; social and cultural factors may challenge health care delivery. The extant research has implications for clinical practice. However, many studies exhibited methodological shortcomings with authors providing incomplete analyses of the extent to which findings are authentically Hasidic. High-quality research is required to better inform the provision of culturally competent care to Hasidic patients.",
"title": ""
},
{
"docid": "17b66811d671fbe77a935a9028c954ce",
"text": "Research in management information systems often examines computer literacy as an independent variable. Study subjects may be asked to self-report their computer literacy and that literacy is then utilized as a research variable. However, it is not known whether self-reported computer literacy is a valid measure of a subject’s actual computer literacy. The research presented in this paper examined the question of whether self-reported computer literacy can be a reliable indication of actual computer literacy and therefore valid for use in empirical research. Study participants were surveyed and asked to self-report their level of computer literacy. Following, subjects were tested to determine an objective measure of computer literacy. The data analysis determined that self-reported computer literacy is not reliable. Results of this research are important for academic programs, for businesses, and for future empirical studies in management information systems.",
"title": ""
},
{
"docid": "ac29c2091012ccfac993cc706eadbf3c",
"text": "In this study 40 genotypes in a randomized complete block design with three replications for two years were planted in the region of Ardabil. The yield related data and its components over the years of the analysis of variance were combined.Results showed that there was a significant difference between genotypes and genotype interaction in the environment. MLR and ANN methods were used to predict yield in barley. The fitted model in a yield predicting linear regression method was as follows: Reg = 1.75 + 0.883 X1 + 0.05017X2 +1.984X3 Also, yield prediction based on multi-layer neural network (ANN) using the Matlab Perceptron type software with one hidden layer including 15 neurons and using algorithm after error propagation learning method and hyperbolic tangent function was implemented, in both methods absolute values of relative error as a deviation index in order to estimate and using duad t test of mean deviation index of the two estimates was examined. Results showed that in the ANN technique the mean deviation index of estimation significantly was one-third (1 / 3) of its rate in the MLR, because there was a significant interaction between genotype and environment and its impact on estimation by MLR method.Therefore, when the genotype environment interaction is significant, in the yield prediction in instead of the regression is recommended of a neural network approach due to high yield and more velocity in the estimation to be used.",
"title": ""
},
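A small illustration of the reported regression equation and of the deviation index used to compare methods; the predictor values and observed yields are hypothetical, and the identities of X1-X3 are not specified here.

import numpy as np

def predict_yield_mlr(x1, x2, x3):
    # The reported fitted linear model for barley yield prediction.
    return 1.75 + 0.883 * x1 + 0.05017 * x2 + 1.984 * x3

observed = np.array([5.4, 5.0, 6.2])                 # hypothetical observed yields
predicted = np.array([predict_yield_mlr(1.1, 20.0, 0.9),
                      predict_yield_mlr(0.9, 18.5, 0.8),
                      predict_yield_mlr(1.4, 22.0, 1.1)])

# Deviation index: absolute value of the relative error.
deviation = np.abs(predicted - observed) / observed
print("mean deviation index:", round(deviation.mean(), 4))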
{
"docid": "3a6a97b2705d90b031ab1e065281465b",
"text": "Common (Cinnamomum verum, C. zeylanicum) and cassia (C. aromaticum) cinnamon have a long history of use as spices and flavouring agents. A number of pharmacological and clinical effects have been observed with their use. The objective of this study was to systematically review the scientific literature for preclinical and clinical evidence of safety, efficacy, and pharmacological activity of common and cassia cinnamon. Using the principles of evidence-based practice, we searched 9 electronic databases and compiled data according to the grade of evidence found. One pharmacological study on antioxidant activity and 7 clinical studies on various medical conditions were reported in the scientific literature including type 2 diabetes (3), Helicobacter pylori infection (1), activation of olfactory cortex of the brain (1), oral candidiasis in HIV (1), and chronic salmonellosis (1). Two of 3 randomized clinical trials on type 2 diabetes provided strong scientific evidence that cassia cinnamon demonstrates a therapeutic effect in reducing fasting blood glucose by 10.3%–29%; the third clinical trial did not observe this effect. Cassia cinnamon, however, did not have an effect at lowering glycosylated hemoglobin (HbA1c). One randomized clinical trial reported that cassia cinnamon lowered total cholesterol, low-density lipoprotein cholesterol, and triglycerides; the other 2 trials, however, did not observe this effect. There was good scientific evidence that a species of cinnamon was not effective at eradicating H. pylori infection. Common cinnamon showed weak to very weak evidence of efficacy in treating oral candidiasis in HIV patients and chronic",
"title": ""
},
{
"docid": "e971fd6eac427df9a68f10cad490b2db",
"text": "We present a corpus of 5,000 richly annotated abstracts of medical articles describing clinical randomized controlled trials. Annotations include demarcations of text spans that describe the Patient population enrolled, the Interventions studied and to what they were Compared, and the Outcomes measured (the 'PICO' elements). These spans are further annotated at a more granular level, e.g., individual interventions within them are marked and mapped onto a structured medical vocabulary. We acquired annotations from a diverse set of workers with varying levels of expertise and cost. We describe our data collection process and the corpus itself in detail. We then outline a set of challenging NLP tasks that would aid searching of the medical literature and the practice of evidence-based medicine.",
"title": ""
},
{
"docid": "a55224bcd659f67314e7ef31e0fd0756",
"text": "Dopamine neurons located in the midbrain play a role in motivation that regulates approach behavior (approach motivation). In addition, activation and inactivation of dopamine neurons regulate mood and induce reward and aversion, respectively. Accumulating evidence suggests that such motivational role of dopamine neurons is not limited to those located in the ventral tegmental area, but also in the substantia nigra. The present paper reviews previous rodent work concerning dopamine's role in approach motivation and the connectivity of dopamine neurons, and proposes two working models: One concerns the relationship between extracellular dopamine concentration and approach motivation. High, moderate and low concentrations of extracellular dopamine induce euphoric, seeking and aversive states, respectively. The other concerns circuit loops involving the cerebral cortex, basal ganglia, thalamus, epithalamus, and midbrain through which dopaminergic activity alters approach motivation. These models should help to generate hypothesis-driven research and provide insights for understanding altered states associated with drugs of abuse and affective disorders.",
"title": ""
},
{
"docid": "af836023436eaa65ef55f9928312e73f",
"text": "We present a probabilistic approach to learning a Gaussian Process classifier in the presence of unlabeled data. Our approach involves a “null category noise model” (NCNM) inspired by ordered categorical noise models. The noise model reflects an assumption that the data density is lower between the class-conditional densities. We illustrate our approach on a toy problem and present comparative results for the semi-supervised classification of handwritten digits.",
"title": ""
},
{
"docid": "43f2dcf2f2260ff140e20380d265105b",
"text": "As ontologies are the backbone of the Semantic Web, they attract much attention from researchers and engineers in many domains. This results in an increasing number of ontologies and semantic web applications. The number and complexity of such ontologies makes it hard for developers of ontologies and tools to decide which ontologies to use and reuse. To simplify the problem, a modularization algorithm can be used to partition ontologies into sets of modules. In order to evaluate the quality of modularization, we propose a new evaluation metric that quantifies the goodness of ontology modularization. In particular, we investigate the ontology module homogeneity, which assesses module cohesion, and the ontology module heterogeneity, which appraises module coupling. The experimental results demonstrate that the proposed metric is effective.",
"title": ""
},
{
"docid": "d74131a431ca54f45a494091e576740c",
"text": "In today’s highly competitive business environments with shortened product and technology life cycle, it is critical for software industry to continuously innovate. This goal can be achieved by developing a better understanding and control of the activities and determinants of innovation. Innovation measurement initiatives assess innovation capability, output and performance to help develop such an understanding. This study explores various aspects relevant to innovation measurement ranging from definitions, measurement frameworks and metrics that have been proposed in literature and used in practice. A systematic literature review followed by an online questionnaire and interviews with practitioners and academics were employed to identify a comprehensive definition of innovation that can be used in software industry. The metrics for the evaluation of determinants, inputs, outputs and performance were also aggregated and categorised. Based on these findings, a conceptual model of the key measurable elements of innovation was constructed from the findings of the systematic review. The model was further refined after feedback from academia and industry through interviews.",
"title": ""
},
{
"docid": "8a32bdadcaa2c94f83e95c19e400835b",
"text": "Create a short summary of your paper (200 words), double-spaced. Your summary will say something like: In this action research study of my classroom of 7 grade mathematics, I investigated ______. I discovered that ____________. As a result of this research, I plan to ___________. You now begin your paper. Pages should be numbered, with the first page of text following the abstract as page one. (In Microsoft Word: after your abstract, rather than inserting a “page break” insert a “section break” to start on the next page; this will allow you to start the 3 page being numbered as page 1). You should divide this report of your research into sections. We should be able to identity the following sections and you may use these headings (headings should be bold, centered, and capitalized). Consider the page length to be a minimum.",
"title": ""
},
{
"docid": "c0a51f27931d8314b73a7de969bdfb08",
"text": "Organizations need practical security benchmarking tools in order to plan effective security strategies. This paper explores a number of techniques that can be used to measure security within an organization. It proposes a benchmarking methodology that produces results that are of strategic importance to both decision makers and technology implementers.",
"title": ""
},
{
"docid": "27c2c015c6daaac99b34d00845ec646c",
"text": "Virtual worlds, such as Second Life and Everquest, have grown into virtual game communities that have economic potential. In such communities, virtual items are bought and sold between individuals for real money. The study detailed in this paper aims to identify, model and test the individual determinants for the decision to purchase virtual items within virtual game communities. A comprehensive understanding of these key determinants will enable researchers to further the understanding of player behavior towards virtual item transactions, which are an important aspect of the economic system within virtual games and often raise one of the biggest challenges for game community operators. A model will be developed via a mixture of new constructs and established theories, including the theory of planned behavior (TPB), the technology acceptance model (TAM), trust theory and unified theory of acceptance and use of technology (UTAUT). For this purpose the research uses a sequential, multi-method approach in two phases: combining the use of inductive, qualitative data from focus groups and expert interviews in phase one; and deductive, quantitative survey data in phase two. The final model will hopefully provide an impetus to further research in the area of virtual game community transaction behavior. The paper rounds off with a discussion of further research challenges in this area over the next seven years.",
"title": ""
}
] |
scidocsrr
|
ca1f6291672f5740f5a37125c49d166a
|
Improving Knowledge Graph Embedding Using Simple Constraints
|
[
{
"docid": "5b8b04f29032a6ca94815676d4c4118f",
"text": "Representation learning of knowledge graphs aims to encode both entities and relations into a continuous low-dimensional vector space. Most existing methods only concentrate on learning representations with structured information located in triples, regardless of the rich information located in hierarchical types of entities, which could be collected in most knowledge graphs. In this paper, we propose a novel method named Type-embodied Knowledge Representation Learning (TKRL) to take advantages of hierarchical entity types. We suggest that entities should have multiple representations in different types. More specifically, we consider hierarchical types as projection matrices for entities, with two type encoders designed to model hierarchical structures. Meanwhile, type information is also utilized as relation-specific type constraints. We evaluate our models on two tasks including knowledge graph completion and triple classification, and further explore the performances on long-tail dataset. Experimental results show that our models significantly outperform all baselines on both tasks, especially with long-tail distribution. It indicates that our models are capable of capturing hierarchical type information which is significant when constructing representations of knowledge graphs. The source code of this paper can be obtained from https://github.com/thunlp/TKRL.",
"title": ""
},
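A hedged NumPy sketch in the spirit of the type-projection idea above: a translation-style score in which type-specific matrices project the head and tail entities before measuring how well head + relation matches tail. All embeddings and matrices are random placeholders, not learned TKRL parameters.

import numpy as np

def type_projected_score(h, r, t, M_rh, M_rt):
    """Higher (less negative) means the triple is considered more plausible."""
    return -np.linalg.norm(M_rh @ h + r - M_rt @ t, ord=1)

d = 8
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, d))          # entity and relation embeddings
M_rh, M_rt = rng.normal(size=(2, d, d))    # hierarchical-type projection matrices
print("plausibility score:", round(type_projected_score(h, r, t, M_rh, M_rt), 3))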
{
"docid": "18ad179d4817cb391ac332dcbfe13788",
"text": "Many papers have been published on the knowledge base completion task in the past few years. Most of these introduce novel architectures for relation learning that are evaluated on standard datasets such as FB15k and WN18. This paper shows that the accuracy of almost all models published on the FB15k can be outperformed by an appropriately tuned baseline — our reimplementation of the DistMult model. Our findings cast doubt on the claim that the performance improvements of recent models are due to architectural changes as opposed to hyperparameter tuning or different training objectives. This should prompt future research to re-consider how the performance of models is evaluated and reported.",
"title": ""
}
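For concreteness, the DistMult baseline mentioned above scores a triple with a simple trilinear product, as in this minimal sketch (random, untuned embeddings):

import numpy as np

def distmult_score(h, r, t):
    """DistMult scores a triple (h, r, t) as the trilinear product <h, r, t>."""
    return float(np.sum(h * r * t))

rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 50))         # 50-dimensional embeddings (assumed)
print("score:", round(distmult_score(h, r, t), 3))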
] |
[
{
"docid": "6a1d534737dcbe75ff7a7ac975bcc5ec",
"text": "Crime is one of the most important social problems in the country, affecting public safety, children development, and adult socioeconomic status. Understanding what factors cause higher crime is critical for policy makers in their efforts to reduce crime and increase citizens' life quality. We tackle a fundamental problem in our paper: crime rate inference at the neighborhood level. Traditional approaches have used demographics and geographical influences to estimate crime rates in a region. With the fast development of positioning technology and prevalence of mobile devices, a large amount of modern urban data have been collected and such big data can provide new perspectives for understanding crime. In this paper, we used large-scale Point-Of-Interest data and taxi flow data in the city of Chicago, IL in the USA. We observed significantly improved performance in crime rate inference compared to using traditional features. Such an improvement is consistent over multiple years. We also show that these new features are significant in the feature importance analysis.",
"title": ""
},
{
"docid": "20a90ed3aa2b428b19e85aceddadce90",
"text": "Deep learning has been a groundbreaking technology in various fields as well as in communications systems. In spite of the notable advancements of deep neural network (DNN) based technologies in recent years, the high computational complexity has been a major obstacle to apply DNN in practical communications systems which require real-time operation. In this sense, challenges regarding practical implementation must be addressed before the proliferation of DNN-based intelligent communications becomes a reality. To the best of the authors’ knowledge, for the first time, this article presents an efficient learning architecture and design strategies including link level verification through digital circuit implementations using hardware description language (HDL) to mitigate this challenge and to deduce feasibility and potential of DNN for communications systems. In particular, DNN is applied for an encoder and a decoder to enable flexible adaptation with respect to the system environments without needing any domain specific information. Extensive investigations and interdisciplinary design considerations including the DNN-based autoencoder structure, learning framework, and low-complexity digital circuit implementations for real-time operation are taken into account by the authors which ascertains the use of DNN-based communications in practice.",
"title": ""
},
{
"docid": "43397d704a8fc64ec150c847d77280d5",
"text": "During the development or maintenance of an Android app, the app developer needs to determine the app's security and privacy requirements such as permission requirements. Permission requirements include two folds. First, what permissions (i.e., access to sensitive resources, e.g., location or contact list) the app needs to request. Second, how to explain the reason of permission usages to users. In this paper, we focus on the multiple challenges that developers face when creating permission-usage explanations. We propose a novel framework, CLAP, that mines potential explanations from the descriptions of similar apps. CLAP leverages information retrieval and text summarization techniques to find frequent permission usages. We evaluate CLAP on a large dataset containing 1.4 million Android apps. The evaluation results outperform existing state-of-the-art approaches, showing great promise of CLAP as a tool for assisting developers and permission requirements discovery.",
"title": ""
},
{
"docid": "4e0a3dd1401a00ddc9d0620de93f4ecc",
"text": "The spatial-numerical association of response codes (SNARC) effect is the tendency for humans to respond faster to relatively larger numbers on the left or right (or with the left or right hand) and faster to relatively smaller numbers on the other side. This effect seems to occur due to a spatial representation of magnitude either in occurrence with a number line (wherein participants respond to relatively larger numbers faster on the right), other representations such as clock faces (responses are reversed from number lines), or culturally specific reading directions, begging the question as to whether the effect may be limited to humans. Given that a SNARC effect has emerged via a quantity judgement task in Western lowland gorillas and orangutans (Gazes et al., Cog 168:312–319, 2017), we examined patterns of response on a quantity discrimination task in American black bears, Western lowland gorillas, and humans for evidence of a SNARC effect. We found limited evidence for SNARC effect in American black bears and Western lowland gorillas. Furthermore, humans were inconsistent in direction and strength of effects, emphasizing the importance of standardizing methodology and analyses when comparing SNARC effects between species. These data reveal the importance of collecting data with humans in analogous procedures when testing nonhumans for effects assumed to bepresent in humans.",
"title": ""
},
{
"docid": "e8e658d677a3b1a23650b25edd32fc84",
"text": "The aim of the study is to facilitate the suture on the sacral promontory for laparoscopic sacrocolpopexy. We hypothesised that a new method of sacral anchorage using a biosynthetic material, the polyether ether ketone (PEEK) harpoon, might be adequate because of its tensile strength, might reduce complications owing to its well-known biocompatibility, and might shorten the duration of surgery. We verified the feasibility of insertion and quantified the stress resistance of the harpoons placed in the promontory in nine fresh cadavers, using four stress tests in each case. Mean values were analysed and compared using the Wilcoxon and Fisher’s exact tests. The harpoon resists for at least 30 s against a pulling force of 1 N, 5 N and 10 N. Maximum tensile strength is 21 N for the harpoon and 32 N for the suture. Harpoons broke in 6 % and threads in 22 % of cases. Harpoons detached owing to ligament rupture in 64 % of the cases. Regarding failures of the whole complex, the failure involves the harpoon in 92 % of cases and the thread in 56 %. The four possible placements of the harpoon in the promontory were equally safe in terms of resistance to traction. The PEEK harpoon can be easily anchored in the promontory. Thread is more resistant to traction than the harpoon, but the latter makes the surgical technique easier. Any of the four locations tested is feasible for anchoring the device.",
"title": ""
},
{
"docid": "5320ff5b9e2a3d0d206bb74ed0e047cd",
"text": "To the Editor: How do Shai et al. (July 17 issue)1 explain why the subjects in their study regained weight between month 6 and month 24, despite a reported reduction of 300 to 600 calories per day? Contributing possibilities may include the notion that a food-frequency questionnaire cannot precisely determine energy or macronutrient intake but, rather, ascertains general dietary patterns. Certain populations may underreport intake2,3 and have a decreased metabolic rate. The authors did not measure body composition, which is critical for documenting weight-loss components. In addition, the titles of the diets that are described in the article are misleading. Labeling the “low-carbohydrate” diet as such is questionable, since 40 to 42% of calories were from carbohydrates from month 6 to month 24, and data regarding ketosis support this view. Participants in the low-fat and Mediterranean-diet groups consumed between 30% and 33% of calories from fat and did not increase fiber consumption, highlighting the importance of diet quality. Furthermore, the authors should have provided baseline values and P values for within-group changes from baseline (see Table 2 of the article). Contrary to the authors’ assertion, it is not surprising that the effects on many biomarkers were minimal, since the dietary changes were minimal. The absence of biologically significant weight loss (2 to 4% after 2 years) highlights the fact that energy restriction and weight loss in themselves may minimally affect metabolic outcomes and that lifestyle changes must incorporate physical activity to optimize the reduction in the risk of chronic disease.4,5 Christian K. Roberts, Ph.D. R. James Barnard, Ph.D. Daniel M. Croymans, B.S.",
"title": ""
},
{
"docid": "39321bc85746dc43736a0435c939c7da",
"text": "We use recent network calculus results to study some properties of lossless multiplexing as it may be used in guaranteed service networks. We call network calculus a set of results that apply min-plus algebra to packet networks. We provide a simple proof that shaping a traffic stream to conform to a burstiness constraint preserves the original constraints satisfied by the traffic stream We show how all rate-based packet schedulers can be modeled with a simple rate latency service curve. Then we define a general form of deterministic effective bandwidth and equivalent capacity. We find that call acceptance regions based on deterministic criteria (loss or delay) are convex, in contrast to statistical cases where it is the complement of the region which is convex. We thus find that, in general, the limit of the call acceptance region based on statistical multiplexing when the loss probability target tends to 0 may be strictly larger than the call acceptance region based on lossless multiplexing. Finally, we consider the problem of determining the optimal parameters of a variable bit rate (VBR) connection when it is used as a trunk, or tunnel, given that the input traffic is known. We find that there is an optimal peak rate for the VBR trunk, essentially insensitive to the optimization criteria. For a linear cost function, we find an explicit algorithm for the optimal remaining parameters of the VBR trunk.",
"title": ""
},
{
"docid": "4457aa3443d756a4afeb76f0571d3e25",
"text": "THE AMOUNT OF DATA BEING DIGITALLY COLLECTED AND stored is vast and expanding rapidly. As a result, the science of data management and analysis is also advancing to enable organizations to convert this vast resource into information and knowledge that helps them achieve their objectives. Computer scientists have invented the term big data to describe this evolving technology. Big data has been successfully used in astronomy (eg, the Sloan Digital Sky Survey of telescopic information), retail sales (eg, Walmart’s expansive number of transactions), search engines (eg, Google’s customization of individual searches based on previous web data), and politics (eg, a campaign’s focus of political advertisements on people most likely to support their candidate based on web searches). In this Viewpoint, we discuss the application of big data to health care, using an economic framework to highlight the opportunities it will offer and the roadblocks to implementation. We suggest that leveraging the collection of patient and practitioner data could be an important way to improve quality and efficiency of health care delivery. Widespread uptake of electronic health records (EHRs) has generated massive data sets. A survey by the American Hospital Association showed that adoption of EHRs has doubled from 2009 to 2011, partly a result of funding provided by the Health Information Technology for Economic and Clinical Health Act of 2009. Most EHRs now contain quantitative data (eg, laboratory values), qualitative data (eg, text-based documents and demographics), and transactional data (eg, a record of medication delivery). However, much of this rich data set is currently perceived as a byproduct of health care delivery, rather than a central asset to improve its efficiency. The transition of data from refuse to riches has been key in the big data revolution of other industries. Advances in analytic techniques in the computer sciences, especially in machine learning, have been a major catalyst for dealing with these large information sets. These analytic techniques are in contrast to traditional statistical methods (derived from the social and physical sciences), which are largely not useful for analysis of unstructured data such as text-based documents that do not fit into relational tables. One estimate suggests that 80% of business-related data exist in an unstructured format. The same could probably be said for health care data, a large proportion of which is text-based. In contrast to most consumer service industries, medicine adopted a practice of generating evidence from experimental (randomized trials) and quasi-experimental studies to inform patients and clinicians. The evidence-based movement is founded on the belief that scientific inquiry is superior to expert opinion and testimonials. In this way, medicine was ahead of many other industries in terms of recognizing the value of data and information guiding rational decision making. However, health care has lagged in uptake of newer techniques to leverage the rich information contained in EHRs. There are 4 ways big data may advance the economic mission of health care delivery by improving quality and efficiency. First, big data may greatly expand the capacity to generate new knowledge. The cost of answering many clinical questions prospectively, and even retrospectively, by collecting structured data is prohibitive. 
Analyzing the unstructured data contained within EHRs using computational techniques (eg, natural language processing to extract medical concepts from free-text documents) permits finer data acquisition in an automated fashion. For instance, automated identification within EHRs using natural language processing was superior in detecting postoperative complications compared with patient safety indicators based on discharge coding. Big data offers the potential to create an observational evidence base for clinical questions that would otherwise not be possible and may be especially helpful with issues of generalizability. The latter issue limits the application of conclusions derived from randomized trials performed on a narrow spectrum of participants to patients who exhibit very different characteristics. Second, big data may help with knowledge dissemination. Most physicians struggle to stay current with the latest evidence guiding clinical practice. The digitization of medical literature has greatly improved access; however, the sheer",
"title": ""
},
{
"docid": "4ba91dc010d3ecbdb39306e9f35f9612",
"text": "Privacy aware anonymous trading for smart grid using digital currency has received very low attention so far. In this paper, we analyze the possibility of Bitcoin serving as the user friendly and effective privacy aware trading currency to facilitate energy exchange for smart grid.",
"title": ""
},
{
"docid": "9a332d9ffe0e08cc688a8644de736202",
"text": "Applications are increasingly using XML to represent semi-structured data and, consequently, a large amount of XML documents is available worldwide. As XML documents evolve over time, comparing XML documents to understand their evolution becomes fundamental. The main focus of existing research for comparing XML documents resides in identifying syntactic changes. However, a deeper notion of the change meaning is usually desired. This paper presents an inference-based XML evolution approach using Prolog to deal with this problem. Differently from existing XML diff approaches, our approach composes multiple syntactic changes, which usually have a common purpose, to infer semantic changes. We evaluated our approach through ten versions of an employment XML document. In this evaluation, we could observe that each new version introduced syntactic changes that could be summarized into semantic changes.",
"title": ""
},
{
"docid": "fcf8649ff7c2972e6ef73f837a3d3f4d",
"text": "The kitchen environment is one of the scenarios in the home where users can benefit from Ambient Assisted Living (AAL) applications. Moreover, it is the place where old people suffer from most domestic injuries. This paper presents a novel design, implementation and assessment of a Smart Kitchen which provides Ambient Assisted Living services; a smart environment that increases elderly and disabled people's autonomy in their kitchen-related activities through context and user awareness, appropriate user interaction and artificial intelligence. It is based on a modular architecture which integrates a wide variety of home technology (household appliances, sensors, user interfaces, etc.) and associated communication standards and media (power line, radio frequency, infrared and cabled). Its software architecture is based on the Open Services Gateway initiative (OSGi), which allows building a complex system composed of small modules, each one providing the specific functionalities required, and can be easily scaled to meet our needs. The system has been evaluated by a large number of real users (63) and carers (31) in two living labs in Spain and UK. Results show a large potential of system functionalities combined with good usability and physical, sensory and cognitive accessibility.",
"title": ""
},
{
"docid": "e43eaf919d7bb920177c164c5eeddca2",
"text": "In today's era AMBA (advanced microcontroller bus architecture) specifications have gone far beyond the Microcontrollers. In this paper, AMBA (Advanced Microcontroller Bus Architecture) ASB APB (Advanced system bus - Advanced Peripheral Bus) is implemented. The goal of the proposed paper is to synthesis, simulate complex interface between AMBA ASB and APB. The methodology adopted for the proposed paper is Verilog language with finite state machine models designed in ModelSim Version 10.3 and Xilinx-ISE design suite, version 13.4 is used to extract synthesis, design utilization summary and power reports. For the implementation APB Bridge, arbiter and decoder are designed. In AMBA ASB APB module, master gets into contact with APB bus. Arbiter determines master's status and priority and then, starts communicating with the bus. For selecting a bus slave, decoder uses the accurate address lines and an acknowledgement is given back to the bus master by the slave. An RTL view and an extracted design summary of AMBA ASB APB module at system on chip are shown in result section of the paper. Higher design complexities of SoCs architectures introduce the power consumption into picture. The various power components contribute in the power consumptions which are extracted by the power reports. So, power reports generate a better understanding of the power utilization to the designers. These are clocks total power which consumes of 0.66 mW, hierarchy total power which consumes of 1.05 mW, hierarchy total logical power which consumes of 0.30 mW and hierarchy total signal power which consumes of 0.74 mW powers in the proposed design. Graph is also plotted for clear understanding of the breakdown of powers.",
"title": ""
},
{
"docid": "b56a6ce08cf00fefa1a1b303ebf21de9",
"text": "Freesound is an online collaborative sound database where people with diverse interests share recorded sound samples under Creative Commons licenses. It was started in 2005 and it is being maintained to support diverse research projects and as a service to the overall research and artistic community. In this demo we want to introduce Freesound to the multimedia community and show its potential as a research resource. We begin by describing some general aspects of Freesound, its architecture and functionalities, and then explain potential usages that this framework has for research applications.",
"title": ""
},
{
"docid": "370e1428067483a4a0871cedb5aef639",
"text": "Interactive Game-Based Learning might be used to raise the awareness of students concerning questions of sustainability. Sustainability is a very complex topic. By interacting with a simulation game, students can get a more detailed and holistic conception of how sustainability can be achieved in everyday purchasing situations. The SuLi (Sustainable Living) game was developed to achieve this goal. In an evaluation study we found evidence that SuLi is an interesting alternative to more traditional approaches to learning. Nevertheless, there are still many open questions, as, e.g., whether one should combine simulation games with other forms of teaching and learning or how to design simulation games so that students really acquire detailed concepts of the domain.",
"title": ""
},
{
"docid": "009f83c48787d956b8ee79c1d077d825",
"text": "Learning salient representations of multiview data is an essential step in many applications such as image classification, retrieval, and annotation. Standard predictive methods, such as support vector machines, often directly use all the features available without taking into consideration the presence of distinct views and the resultant view dependencies, coherence, and complementarity that offer key insights to the semantics of the data, and are therefore offering weak performance and are incapable of supporting view-level analysis. This paper presents a statistical method to learn a predictive subspace representation underlying multiple views, leveraging both multiview dependencies and availability of supervising side-information. Our approach is based on a multiview latent subspace Markov network (MN) which fulfills a weak conditional independence assumption that multiview observations and response variables are conditionally independent given a set of latent variables. To learn the latent subspace MN, we develop a large-margin approach which jointly maximizes data likelihood and minimizes a prediction loss on training data. Learning and inference are efficiently done with a contrastive divergence method. Finally, we extensively evaluate the large-margin latent MN on real image and hotel review datasets for classification, regression, image annotation, and retrieval. Our results demonstrate that the large-margin approach can achieve significant improvements in terms of prediction performance and discovering predictive latent subspace representations.",
"title": ""
},
{
"docid": "382ee4c7c870f9d05dee5546a664c553",
"text": "Models based on the bivariate Poisson distribution are used for modelling sports data. Independent Poisson distributions are usually adopted to model the number of goals of two competing teams. We replace the independence assumption by considering a bivariate Poisson model and its extensions. The models proposed allow for correlation between the two scores, which is a plausible assumption in sports with two opposing teams competing against each other. The effect of introducing even slight correlation is discussed. Using just a bivariate Poisson distribution can improve model fit and prediction of the number of draws in football games.The model is extended by considering an inflation factor for diagonal terms in the bivariate joint distribution.This inflation improves in precision the estimation of draws and, at the same time, allows for overdispersed, relative to the simple Poisson distribution, marginal distributions. The properties of the models proposed as well as interpretation and estimation procedures are provided. An illustration of the models is presented by using data sets from football and water-polo.",
"title": ""
},
{
"docid": "450b6ce3f24cbab0a7fb718a9d0e9bea",
"text": "A new level shifter used in multiple voltage digital circuits is presented. It combines the merit of conventional level shifter and single supply level shifter, which can shifter any voltage level signal to a desired higher level with low leakage current. The circuits was designed in 180nm CMOS technology and simulated in SPICE. The simulation results showed that the proposed level shifter circuit has 36% leakage power dissipation reduction compared to the conventional level shifter",
"title": ""
},
{
"docid": "2e6c8d94c988ec48ef3dccaf8a4ff7e7",
"text": "We present a photometric stereo method for non-diffuse materials that does not require an explicit reflectance model or reference object. By computing a data-dependent rotation of RGB color space, we show that the specular reflection effects can be separated from the much simpler, diffuse (approximately Lambertian) reflection effects for surfaces that can be modeled with dichromatic reflectance. Images in this transformed color space are used to obtain photometric reconstructions that are independent of the specular reflectance. In contrast to other methods for highlight removal based on dichromatic color separation (e.g., color histogram analysis and/or polarization), we do not explicitly recover the specular and diffuse components of an image. Instead, we simply find a transformation of color space that yields more direct access to shape information. The method is purely local and is able to handle surfaces with arbitrary texture.",
"title": ""
},
{
"docid": "a7287ea0f78500670fb32fc874968c54",
"text": "Image captioning is a challenging task where the machine automatically describes an image by sentences or phrases. It often requires a large number of paired image-sentence annotations for training. However, a pre-trained captioning model can hardly be applied to a new domain in which some novel object categories exist, i.e., the objects and their description words are unseen during model training. To correctly caption the novel object, it requires professional human workers to annotate the images by sentences with the novel words. It is labor expensive and thus limits its usage in real-world applications. In this paper, we introduce the zero-shot novel object captioning task where the machine generates descriptions without extra training sentences about the novel object. To tackle the challenging problem, we propose a Decoupled Novel Object Captioner (DNOC) framework that can fully decouple the language sequence model from the object descriptions. DNOC has two components. 1) A Sequence Model with the Placeholder (SM-P) generates a sentence containing placeholders. The placeholder represents an unseen novel object. Thus, the sequence model can be decoupled from the novel object descriptions. 2) A key-value object memory built upon the freely available detection model, contains the visual information and the corresponding word for each object. A query generated from the SM-P is used to retrieve the words from the object memory. The placeholder will further be filled with the correct word, resulting in a caption with novel object descriptions. The experimental results on the held-out MSCOCO dataset demonstrate the ability of DNOC in describing novel concepts.",
"title": ""
},
{
"docid": "e6b9a05ecc3fd48df50aa769ce05b6a6",
"text": "This paper presents an interactive exoskeleton device for hand rehabilitation, iHandRehab, which aims to satisfy the essential requirements for both active and passive rehabilitation motions. iHandRehab is comprised of exoskeletons for the thumb and index finger. These exoskeletons are driven by distant actuation modules through a cable/sheath transmission mechanism. The exoskeleton for each finger has 4 degrees of freedom (DOF), providing independent control for all finger joints. The joint motion is accomplished by a parallelogram mechanism so that the joints of the device and their corresponding finger joints have the same angular displacement when they rotate. Thanks to this design, the joint angles can be measured by sensors real time and high level motion control is therefore made very simple without the need of complicated kinematics. The paper also discusses important issues when the device is used by different patients, including its adjustable joint range of motion (ROM) and adjustable range of phalanx length (ROPL). Experimentally collected data show that the achieved ROM is close to that of a healthy hand and the ROPL covers the size of a typical hand, satisfying the size need of regular hand rehabilitation. In order to evaluate the performance when it works as a haptic device in active mode, the equivalent moment of inertia (MOI) of the device is calculated. The results prove that the device has low inertia which is critical in order to obtain good backdrivability. Experimental analysis shows that the influence of friction accounts for a large portion of the driving torque and warrants future investigation.",
"title": ""
}
] |
scidocsrr
|
87fa281fc1b05466979cc4b3577e5e96
|
From Shapeshifter to Lava Monster: Gender Stereotypes in Disney’s Moana
|
[
{
"docid": "6f1d7e2faff928c80898bfbf05ac0669",
"text": "This study examined level of engagement with Disney Princess media/products as it relates to gender-stereotypical behavior, body esteem (i.e. body image), and prosocial behavior during early childhood. Participants consisted of 198 children (Mage = 58 months), who were tested at two time points (approximately 1 year apart). Data consisted of parent and teacher reports, and child observations in a toy preference task. Longitudinal results revealed that Disney Princess engagement was associated with more female gender-stereotypical behavior 1 year later, even after controlling for initial levels of gender-stereotypical behavior. Parental mediation strengthened associations between princess engagement and adherence to female gender-stereotypical behavior for both girls and boys, and for body esteem and prosocial behavior for boys only.",
"title": ""
}
] |
[
{
"docid": "2b169a32d20bb4af5527be41837f17f7",
"text": "This paper introduces a two-switch flyback-forward pulse-width modulated (PWM) DC-DC converter along with the steady-state analysis, simplified design procedure, and experimental verification. The proposed converter topology is the result of integrating the secondary sides of the two-switch flyback and the two-switch forward converters in an anti-parallel connection, while retaining the two-main switches and the clamping diodes on a single winding primary side. The hybrid two-switch flyback-forward converter shares the semiconductor devices on the primary side and the magnetic component on the secondary side resulting in a low volume DC-DC converter with reduced switch voltage stress. Simulation and experimental results are given for a 10-V/30-W, 100 kHz laboratory prototype to verify the theoretical analysis.",
"title": ""
},
{
"docid": "aae7c62819cb70e21914486ade94a762",
"text": "From failure experience on power transformers very often it was suspected that inrush currents, occurring when energizing unloaded transformers, were the reason for damage. In this paper it was investigated how mechanical forces within the transformer coils build up under inrush compared to those occurring at short circuit. 2D and 3D computer modeling for a real 268 MVA, 525/17.75 kV three-legged step up transformer were employed. The results show that inrush current peaks of 70% of the rated short circuit current cause local forces in the same order of magnitude as those at short circuit. The resulting force summed up over the high voltage coil is even three times higher. Although inrush currents are normally smaller, the forces can have similar amplitudes as those at short circuit, with longer exposure time, however. Therefore, care has to be taken to avoid such high inrush currents. Today controlled switching offers an elegant and practical solution.",
"title": ""
},
{
"docid": "0fcefddfe877b804095838eb9de9581d",
"text": "This paper examines the torque ripple and cogging torque variation in surface-mounted permanent-magnet synchronous motors (PMSMs) with skewed rotor. The effect of slot/pole combinations and magnet shapes on the magnitude and harmonic content of torque waveforms in a PMSM drive has been studied. Finite element analysis and experimental results show that the skewing with steps does not necessarily reduce the torque ripple but may cause it to increase for certain magnet designs and configurations. The electromagnetic torque waveforms, including cogging torque, have been analyzed for four different PMSM configurations having the same envelop dimensions and output requirements.",
"title": ""
},
{
"docid": "857e9430ebc5cf6aad2737a0ce10941e",
"text": "Despite a long tradition of effectiveness in laboratory tests, normative messages have had mixed success in changing behavior in field contexts, with some studies showing boomerang effects. To test a theoretical account of this inconsistency, we conducted a field experiment in which normative messages were used to promote household energy conservation. As predicted, a descriptive normative message detailing average neighborhood usage produced either desirable energy savings or the undesirable boomerang effect, depending on whether households were already consuming at a low or high rate. Also as predicted, adding an injunctive message (conveying social approval or disapproval) eliminated the boomerang effect. The results offer an explanation for the mixed success of persuasive appeals based on social norms and suggest how such appeals should be properly crafted.",
"title": ""
},
{
"docid": "95d1a35068e7de3293f8029e8b8694f9",
"text": "Botnet is one of the major threats on the Internet for committing cybercrimes, such as DDoS attacks, stealing sensitive information, spreading spams, etc. It is a challenging issue to detect modern botnets that are continuously improving for evading detection. In this paper, we propose a machine learning based botnet detection system that is shown to be effective in identifying P2P botnets. Our approach extracts convolutional version of effective flow-based features, and trains a classification model by using a feed-forward artificial neural network. The experimental results show that the accuracy of detection using the convolutional features is better than the ones using the traditional features. It can achieve 94.7% of detection accuracy and 2.2% of false positive rate on the known P2P botnet datasets. Furthermore, our system provides an additional confidence testing for enhancing performance of botnet detection. It further classifies the network traffic of insufficient confidence in the neural network. The experiment shows that this stage can increase the detection accuracy up to 98.6% and decrease the false positive rate up to 0.5%.",
"title": ""
},
{
"docid": "4cde522275c034a8025c75d144a74634",
"text": "Novel sentence detection aims at identifying novel information from an incoming stream of sentences. Our research applies named entity recognition (NER) and part-of-speech (POS) tagging on sentence-level novelty detection and proposes a mixed method to utilize these two techniques. Furthermore, we discuss the performance when setting different history sentence sets. Experimental results of different approaches on TREC'04 Novelty Track show that our new combined method outperforms some other novelty detection methods in terms of precision and recall. The experimental observations of each approach are also discussed.",
"title": ""
},
{
"docid": "d1525fdab295a16d5610210e80fb8104",
"text": "The analysis of big data requires powerful, scalable, and accurate data analytics techniques that the traditional data mining and machine learning do not have as a whole. Therefore, new data analytics frameworks are needed to deal with the big data challenges such as volumes, velocity, veracity, variety of the data. Distributed data mining constitutes a promising approach for big data sets, as they are usually produced in distributed locations, and processing them on their local sites will reduce significantly the response times, communications, etc. In this paper, we propose to study the performance of a distributed clustering, called Dynamic Distributed Clustering (DDC). DDC has the ability to remotely generate clusters and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated the DDC using two types of communications (synchronous and asynchronous), and tested using various load distributions. The experimental results show that the approach has super-linear speed-up, scales up very well, and can take advantage of the recent programming models, such as MapReduce model, as its results are not affected by the types of communications.",
"title": ""
},
{
"docid": "1982db485fbef226a5a1b839fa9bf12e",
"text": "The photopigment in the human eye that transduces light for circadian and neuroendocrine regulation, is unknown. The aim of this study was to establish an action spectrum for light-induced melatonin suppression that could help elucidate the ocular photoreceptor system for regulating the human pineal gland. Subjects (37 females, 35 males, mean age of 24.5 +/- 0.3 years) were healthy and had normal color vision. Full-field, monochromatic light exposures took place between 2:00 and 3:30 A.M. while subjects' pupils were dilated. Blood samples collected before and after light exposures were quantified for melatonin. Each subject was tested with at least seven different irradiances of one wavelength with a minimum of 1 week between each nighttime exposure. Nighttime melatonin suppression tests (n = 627) were completed with wavelengths from 420 to 600 nm. The data were fit to eight univariant, sigmoidal fluence-response curves (R(2) = 0.81-0.95). The action spectrum constructed from these data fit an opsin template (R(2) = 0.91), which identifies 446-477 nm as the most potent wavelength region providing circadian input for regulating melatonin secretion. The results suggest that, in humans, a single photopigment may be primarily responsible for melatonin suppression, and its peak absorbance appears to be distinct from that of rod and cone cell photopigments for vision. The data also suggest that this new photopigment is retinaldehyde based. These findings suggest that there is a novel opsin photopigment in the human eye that mediates circadian photoreception.",
"title": ""
},
{
"docid": "a70d064af5e8c5842b8ca04abc3fb2d6",
"text": "In the current scenario of cloud computing, heterogeneous resources are located in various geographical locations requiring security-aware resource management to handle security threats. However, existing techniques are unable to protect systems from security attacks. To provide a secure cloud service, a security-based resource management technique is required that manages cloud resources automatically and delivers secure cloud services. In this paper, we propose a self-protection approach in cloud resource management called SECURE, which offers self-protection against security attacks and ensures continued availability of services to authorized users. The performance of SECURE has been evaluated using SNORT. The experimental results demonstrate that SECURE performs effectively in terms of both the intrusion detection rate and false positive rate. Further, the impact of security on quality of service (QoS) has been analyzed.",
"title": ""
},
{
"docid": "170e2b0f15d9485bb3c00026c6c384a8",
"text": "Chatbots are a rapidly expanding application of dialogue systems with companies switching to bot services for customer support, and new applications for users interested in casual conversation. One style of casual conversation is argument; many people love nothing more than a good argument. Moreover, there are a number of existing corpora of argumentative dialogues, annotated for agreement and disagreement, stance, sarcasm and argument quality. This paper introduces Debbie, a novel arguing bot, that selects arguments from conversational corpora, and aims to use them appropriately in context. We present an initial working prototype of Debbie, with some preliminary evaluation and describe future work.",
"title": ""
},
{
"docid": "8244bb1d75e550beb417049afb1ff9d5",
"text": "Electronically available data on the Web is exploding at an ever increasing pace. Much of this data is unstructured, which makes searching hard and traditional database querying impossible. Many Web documents, however, contain an abundance of recognizable constants that together describe the essence of a document’s content. For these kinds of data-rich, multiple-record documents (e.g. advertisements, movie reviews, weather reports, travel information, sports summaries, financial statements, obituaries, and many others) we can apply a conceptual-modeling approach to extract and structure data automatically. The approach is based on an ontology—a conceptual model instance—that describes the data of interest, including relationships, lexical appearance, and context keywords. By parsing the ontology, we can automatically produce a database scheme and recognizers for constants and keywords, and then invoke routines to recognize and extract data from unstructured documents and structure it according to the generated database scheme. Experiments show that it is possible to achieve good recall and precision ratios for documents that are rich in recognizable constants and narrow in ontological breadth. Our approach is less labor-intensive than other approaches that manually or semiautomatically generate wrappers, and it is generally insensitive to changes in Web-page format.",
"title": ""
},
{
"docid": "4ecb2bd91312598428745851cac90d64",
"text": "In large parking area attached to shopping malls and so on, it is difficult to find a vacant parking space. In addition, searching for parking space during long time leads to drivers stress and wasteful energy loss. In order to solve these problems, the navigation system in parking area by using ZigBee networks is proposed in this paper. The ZigBee is expected to realize low power consumption wireless system with low cost. Moreover, the ZigBee can form ad-hoc network easily and more than 65000 nodes can connect at the same time. Therefore, it is suitable for usage in the large parking area. In proposed system, the shortest route to the vacant parking space is transmitted to the own vehicle by the ZigBee ad-hoc network. Thus, the efficient guide is provided to the drivers. To show the effectiveness of the proposed parking system, the average time for arrival in the parking area is evaluated, and the performance of the vehicles that equips the ZigBee terminals is compared with the ordinary vehicles that do not equip the ZigBee terminals.",
"title": ""
},
{
"docid": "c998270736000da12e509103af2c70ec",
"text": "Flash memory grew from a simple concept in the early 1980s to a technology that generated close to $23 billion in worldwide revenue in 2007, and this represents one of the many success stories in the semiconductor industry. This success was made possible by the continuous innovation of the industry along many different fronts. In this paper, the history, the basic science, and the successes of flash memories are briefly presented. Flash memories have followed the Moore’s Law scaling trend for which finer line widths, achieved by improved lithographic resolution, enable more memory bits to be produced for the same silicon area, reducing cost per bit. When looking toward the future, significant challenges exist to the continued scaling of flash memories. In this paper, I discuss possible areas that need development in order to overcome some of the size-scaling challenges. Innovations are expected to continue in the industry, and flash memories will continue to follow the historical trend in cost reduction of semiconductor memories through the rest of this decade.",
"title": ""
},
{
"docid": "d1756aa5f0885157bdad130d96350cd3",
"text": "In this paper, we describe the winning approach for the RecSys Challenge 2015. Our key points are (1) two-stage classification, (2) massive usage of categorical features, (3) strong classifiers built by gradient boosting and (4) threshold optimization based directly on the competition score. We describe our approach and discuss how it can be used to build scalable personalization systems.",
"title": ""
},
{
"docid": "e9b036925d05faa55b55ec8711715296",
"text": "Chest X-rays is one of the most commonly available and affordable radiological examinations in clinical practice. While detecting thoracic diseases on chest X-rays is still a challenging task for machine intelligence, due to 1) the highly varied appearance of lesion areas on X-rays from patients of different thoracic disease and 2) the shortage of accurate pixel-level annotations by radiologists for model training. Existing machine learning methods are unable to deal with the challenge that thoracic diseases usually happen in localized disease-specific areas. In this article, we propose a weakly supervised deep learning framework equipped with squeeze-and-excitation blocks, multi-map transfer and max-min pooling for classifying common thoracic diseases as well as localizing suspicious lesion regions on chest X-rays. The comprehensive experiments and discussions are performed on the ChestX-ray14 dataset. Both numerical and visual results have demonstrated the effectiveness of proposed model and its better performance against the state-of-the-art pipelines.",
"title": ""
},
{
"docid": "940e7dc630b7dcbe097ade7abb2883a4",
"text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depthbased 3D proposal generation.",
"title": ""
},
{
"docid": "fb7961117dae98e770e0fe84c33673b9",
"text": "Named-Entity Recognition (NER) aims at identifying the fragments of a given text that mention a given entity of interest. This manuscript presents our Minimal named-Entity Recognizer (MER), designed with flexibility, autonomy and efficiency in mind. To annotate a given text, MER only requires a lexicon (text file) with the list of terms representing the entities of interest; and a GNU Bash shell grep and awk tools. MER was deployed in a cloud infrastructure using multiple Virtual Machines to work as an annotation server and participate in the Technical Interoperability and Performance of annotation Servers (TIPS) task of BioCreative V.5. Preliminary results show that our solution processed each document (text retrieval and annotation) in less than 3 seconds on average without using any type of cache. MER is publicly available in a GitHub repository (https://github.com/lasigeBioTM/MER) and through a RESTful Web service (http://labs.fc.ul.pt/mer/).",
"title": ""
},
{
"docid": "26b0fd17e691a1a95e4c08aa53167b43",
"text": "We propose Teacher-Student Curriculum Learning (TSCL), a framework for automatic curriculum learning, where the Student tries to learn a complex task and the Teacher automatically chooses subtasks from a given set for the Student to train on. We describe a family of Teacher algorithms that rely on the intuition that the Student should practice more those tasks on which it makes the fastest progress, i.e. where the slope of the learning curve is highest. In addition, the Teacher algorithms address the problem of forgetting by also choosing tasks where the Student’s performance is getting worse. We demonstrate that TSCL matches or surpasses the results of carefully hand-crafted curricula in two tasks: addition of decimal numbers with LSTM and navigation in Minecraft. Using our automatically generated curriculum enabled to solve a Minecraft maze that could not be solved at all when training directly on solving the maze, and the learning was an order of magnitude faster than uniform sampling of subtasks.",
"title": ""
},
{
"docid": "428c480be4ae3d2043c9f5485087c4af",
"text": "Current difference-expansion (DE) embedding techniques perform one layer embedding in a difference image. They do not turn to the next difference image for another layer embedding unless the current difference image has no expandable differences left. The obvious disadvantage of these techniques is that image quality may have been severely degraded even before the later layer embedding begins because the previous layer embedding has used up all expandable differences, including those with large magnitude. Based on integer Haar wavelet transform, we propose a new DE embedding algorithm, which utilizes the horizontal as well as vertical difference images for data hiding. We introduce a dynamical expandable difference search and selection mechanism. This mechanism gives even chances to small differences in two difference images and effectively avoids the situation that the largest differences in the first difference image are used up while there is almost no chance to embed in small differences of the second difference image. We also present an improved histogram-based difference selection and shifting scheme, which refines our algorithm and makes it resilient to different types of images. Compared with current algorithms, the proposed algorithm often has better embedding capacity versus image quality performance. The advantage of our algorithm is more obvious near the embedding rate of 0.5 bpp.",
"title": ""
}
] |
scidocsrr
|
c21dbc365f1389c48f46aefc1c982337
|
Clustering high-dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering
|
[
{
"docid": "44c0237251d54d6ccccd883bf14c6ff6",
"text": "In this paper, we propose a new method for indexing large amounts of point and spatial data in highdimensional space. An analysis shows that index structures such as the R*-tree are not adequate for indexing high-dimensional data sets. The major problem of R-tree-based index structures is the overlap of the bounding boxes in the directory, which increases with growing dimension. To avoid this problem, we introduce a new organization of the directory which uses a split algorithm minimizing overlap and additionally utilizes the concept of supernodes. The basic idea of overlap-minimizing split and supernodes is to keep the directory as hierarchical as possible, and at the same time to avoid splits in the directory that would result in high overlap. Our experiments show that for high-dimensional data, the X-tree outperforms the well-known R*-tree and the TV-tree by up to two orders of magnitude.",
"title": ""
},
{
"docid": "bc49930fa967b93ed1e39b3a45237652",
"text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).",
"title": ""
},
{
"docid": "0e644fc1c567356a2e099221a774232c",
"text": "We present a coupled two-way clustering approach to gene microarray data analysis. The main idea is to identify subsets of the genes and samples, such that when one of these is used to cluster the other, stable and significant partitions emerge. The search for such subsets is a computationally complex task. We present an algorithm, based on iterative clustering, that performs such a search. This analysis is especially suitable for gene microarray data, where the contributions of a variety of biological mechanisms to the gene expression levels are entangled in a large body of experimental data. The method was applied to two gene microarray data sets, on colon cancer and leukemia. By identifying relevant subsets of the data and focusing on them we were able to discover partitions and correlations that were masked and hidden when the full dataset was used in the analysis. Some of these partitions have clear biological interpretation; others can serve to identify possible directions for future research.",
"title": ""
}
] |
[
{
"docid": "c15093ead030ba1aa020a99c312109fa",
"text": "Analysts report spending upwards of 80% of their time on problems in data cleaning. The data cleaning process is inherently iterative, with evolving cleaning workflows that start with basic exploratory data analysis on small samples of dirty data, then refine analysis with more sophisticated/expensive cleaning operators (i.e., crowdsourcing), and finally apply the insights to a full dataset. While an analyst often knows at a logical level what operations need to be done, they often have to manage a large search space of physical operators and parameters. We present Wisteria, a system designed to support the iterative development and optimization of data cleaning workflows, especially ones that utilize the crowd. Wisteria separates logical operations from physical implementations, and driven by analyst feedback, suggests optimizations and/or replacements to the analyst’s choice of physical implementation. We highlight research challenges in sampling, in-flight operator replacement, and crowdsourcing. We overview the system architecture and these techniques, then propose a demonstration designed to showcase how Wisteria can improve iterative data analysis and cleaning. The code is available at: http://www.sampleclean.org.",
"title": ""
},
{
"docid": "1e865bd59571b6c1b1012f229efde437",
"text": "Do we really need 3D labels in order to learn how to predict 3D? In this paper, we show that one can learn a mapping from appearance to 3D properties without ever seeing a single explicit 3D label. Rather than use explicit supervision, we use the regularity of indoor scenes to learn the mapping in a completely unsupervised manner. We demonstrate this on both a standard 3D scene understanding dataset as well as Internet images for which 3D is unavailable, precluding supervised learning. Despite never seeing a 3D label, our method produces competitive results.",
"title": ""
},
{
"docid": "a4d7596cfcd4a9133c5677a481c88cf0",
"text": "The understanding of where humans look in a scene is a problem of great interest in visual perception and computer vision. When eye-tracking devices are not a viable option, models of human attention can be used to predict fixations. In this paper we give two contribution. First, we show a model of visual attention that is simply based on deep convolutional neural networks trained for object classification tasks. A method for visualizing saliency maps is defined which is evaluated in a saliency prediction task. Second, we integrate the information of these maps with a bottom-up differential model of eye-movements to simulate visual attention scanpaths. Results on saliency prediction and scores of similarity with human scanpaths demonstrate the effectiveness of this model.",
"title": ""
},
{
"docid": "2ecd815af00b9961259fa9b2a9185483",
"text": "This paper describes the current development status of a mobile robot designed to inspect the outer surface of large oil ship hulls and floating production storage and offloading platforms. These vessels require a detailed inspection program, using several nondestructive testing techniques. A robotic crawler designed to perform such inspections is presented here. Locomotion over the hull is provided through magnetic tracks, and the system is controlled by two networked PCs and a set of custom hardware devices to drive motors, video cameras, ultrasound, inertial platform, and other devices. Navigation algorithm uses an extended-Kalman-filter (EKF) sensor-fusion formulation, integrating odometry and inertial sensors. It was shown that the inertial navigation errors can be decreased by selecting appropriate Q and R matrices in the EKF formulation.",
"title": ""
},
{
"docid": "da4bac81f8544eb729c7e0aafe814927",
"text": "This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval. We propose DeepHash: a hashing scheme based on deep networks. Key to making DeepHash work at extremely low bitrates are three important considerations – regularization, depth and fine-tuning – each requiring solutions specific to the hashing problem. In-depth evaluation shows that our scheme consistently outperforms state-of-the-art methods across all data sets for both Fisher Vectors and Deep Convolutional Neural Network features, by up to 20% over other schemes. The retrieval performance with 256-bit hashes is close to that of the uncompressed floating point features – a remarkable 512× compression.",
"title": ""
},
{
"docid": "c7048e00cdb56e2f1085d23b9317c147",
"text": "`Design-for-Assembly (DFA)\" is an engineering concept concerned with improving product designs for easier and less costly assembly operations. Much of academic or industrial eeorts in this area have been devoted to the development of analysis tools for measuring the \\assemblability\" of a design. On the other hand, little attention has been paid to the actual redesign process. The goal of this paper is to develop a computer-aided tool for assisting designers in redesigning a product for DFA. One method of redesign, known as the \\replay and modify\" paradigm, is to replay a previous design plan, and modify the plan wherever necessary and possible, in accordance to the original design intention, for newly speciied design goals 24]. The \\replay and modify\" paradigm is an eeective redesign method because it ooers a more global solution than simple local patch-ups. For such a paradigm, design information, such as the design plan and design rationale, must be recorded during design. Unfortunately, such design information is not usually available in practice. To handle the potential absence of the required design information and support the \\replay and modify\" paradigm, the redesign process is modeled as a reverse engineering activity. Reverse engineering roughly refers to an activity of inferring the process, e.g. the design plan, used in creating a given design, and using the inferred knowledge for design recreation or redesign. In this paper, the development of an interactive computer-aided redesign tool for Design-for-Assembly, called REVENGE (REVerse ENGineering), is presented. The architecture of REVENGE is composed of mainly four activities: design analysis, knowledge acquisition, design plan reconstruction, and case-based design modiication. First a DFA analysis is performed to uncover any undesirable aspects of the design with respect to its assemblability. REVENGE , then, interactively solicits designers for useful design information that might not be available from standard design documents such as design rationale. Then, a heuristic algorithm reconstructs a default design plan. A default design plan is a sequence of probable design actions that might have led to the original design. DFA problems identiied during the analysis stage are mapped to the portion of the design plan from which they might have originated. Problems that originate from the earlier portion of the design plan are attacked rst. A case-based approach is used to solve each problem by retrieving a similar redesign case and adapting it to the current situation. REVENGE has been implemented, and has been tested …",
"title": ""
},
{
"docid": "109cf07cb1c8fcfbd6979922d3eee381",
"text": "—Presently, information retrieval can be accomplished simply and rapidly with the use of search engines. This allows users to specify the search criteria as well as specific keywords to obtain the required results. Additionally, an index of search engines has to be updated on most recent information as it is constantly changed over time. Particularly, information retrieval results as documents are typically too extensive, which affect on accessibility of the required results for searchers. Consequently, a similarity measurement between keywords and index terms is essentially performed to facilitate searchers in accessing the required results promptly. Thus, this paper proposed the similarity measurement method between words by deploying Jaccard Coefficient. Technically, we developed a measure of similarity Jaccard with Prolog programming language to compare similarity between sets of data. Furthermore, the performance of this proposed similarity measurement method was accomplished by employing precision, recall, and F-measure. Precisely, the test results demonstrated the awareness of advantage and disadvantages of the measurement which were adapted and applied to a search for meaning by using Jaccard similarity coefficient.",
"title": ""
},
{
"docid": "9d3c3a3fa17f47da408be1e24d2121cc",
"text": "In this letter, compact substrate integrated waveguide (SIW) power dividers are presented. Both equal and unequal power divisions are considered. A quarter-wavelength long wedge shape SIW structure is used for the power division. Direct coaxial feed is used for the input port and SIW-tomicrostrip transitions are used for the output ports. Four-way equal, unequal and an eight-way equal division power dividers are presented. The four-way and the eight-way power dividers provide -10 dB input matching bandwidth of 39.3% and 13%, respectively, at the design frequency f0 = 2.4 GHz. The main advantage of the power dividers is their compact sizes. Including the microstrip to SIW transitions, size is reduced by at least 46% compared to other reported miniaturized SIW power dividers.",
"title": ""
},
{
"docid": "fd63f9b9454358810a68fc003452509b",
"text": "The years that students spend in college are perhaps the most influential years on the rest of their lives. College students face many different decisions day in and day out that may determine how successful they will be in the future. They will choose majors, whether or not to play a sport, which clubs to join, whether they should join a fraternity or sorority, which classes to take, and how much time to spend studying. It is unclear what aspects of college will benefit a person the most down the road. Are some majors better than others? Is earning a high GPA important? Or will simply getting a degree be enough to make a good living? These are a few of the many questions that college students have.",
"title": ""
},
{
"docid": "1a5d9971b674a8d54a0aae7091b02aff",
"text": "Controlling the electric appliances is the essential technique in the home automation, and wireless communication between the residence gateway and electric appliances is one of the most important parts in the home network system. In these days, most of the electric appliances are controlled by infrared remote controllers. However, it is very difficult to connect most of the electric appliances to a home network, since the communication protocols are different. In this paper, we propose an integrated remote controller to control electric appliances in the home network with no extra attachment of communication device to the appliances using ZigBee protocol and infrared remote controller technology. The integrated remote controller system for home automation is composed of integrated remote controller, ZigBee to infrared converter, and ZigBee power adapter. ZigBee power adapter is introduced for some appliances which do not have even infrared remote device to be connected in home network. This paper presents a prototype of the proposed system and shows a scheme for the implementation. It provides high flexibility for the users to configure and manage a home network in order to control electric appliances.",
"title": ""
},
{
"docid": "ce1e222bae70cdc4ac22189e4fd9c69f",
"text": "In the era of big data, the amount of data that individuals and enterprises hold is increasing, and the efficiency and effectiveness of data analysis are increasingly demanding. Collaborative deep learning, as a machine learning framework that can share users' data and improve learning efficiency, has drawn more and more attention and started to be applied in practical problems. In collaborative deep learning, data sharing and interaction among multi users may lead data leakage especially when data are very sensitive to the user. Therefore, how to protect the data privacy when processing collaborative deep learning becomes an important problem. In this paper, we review the current state of art researches in this field and summarize the application of privacy-preserving technologies in two phases of collaborative deep learning. Finally we discuss the future direction and trend on this problem.",
"title": ""
},
{
"docid": "1aede573b82b9776ac4e4db11cef4157",
"text": "In this work, we have designed and implemented a microcontroller-based embedded system for blood pressure monitoring through a PhotoPlethysmoGraphic (PPG) technique. In our system, it is possible to perform PPG measurements via reflectance mode. Hardware novelty of our system consists in the adoption of Silicon PhotoMultiplier detectors. The signal received from the photodetector is used to calculate the instantaneous heart rate and therefore the heart rate variability. The obtained results show that, by using our system, it is possible to easily extract both the PPG and the breath signal. These signals can be used to monitor the patients during the convalescence both in hospital and at home.",
"title": ""
},
{
"docid": "c19658ecdae085902d936f615092fbe5",
"text": "Predicting student attrition is an intriguing yet challenging problem for any academic institution. Classimbalanced data is a common in the field of student retention, mainly because a lot of students register but fewer students drop out. Classification techniques for imbalanced dataset can yield deceivingly high prediction accuracy where the overall predictive accuracy is usually driven by the majority class at the expense of having very poor performance on the crucial minority class. In this study, we compared different data balancing techniques to improve the predictive accuracy in minority class while maintaining satisfactory overall classification performance. Specifically, we tested three balancing techniques—oversampling, under-sampling and synthetic minority over-sampling (SMOTE)—along with four popular classification methods—logistic regression, decision trees, neuron networks and support vector machines. We used a large and feature rich institutional student data (between the years 2005 and 2011) to assess the efficacy of both balancing techniques as well as prediction methods. The results indicated that the support vector machine combined with SMOTE data-balancing technique achieved the best classification performance with a 90.24% overall accuracy on the 10-fold holdout sample. All three data-balancing techniques improved the prediction accuracy for the minority class. Applying sensitivity analyses on developed models, we also identified the most important variables for accurate prediction of student attrition. Application of these models has the potential to accurately predict at-risk students and help reduce student dropout rates. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "39a44520b1df1529ea7f89335fc6a19c",
"text": "An area-efficient cross feedforward cascode compensation (CFCC) technique is presented for a three-stage amplifier. The proposed amplifier is capable of driving heavy capacitive load at low power consumption but not dedicated to heavy load currents or heavy resistive loading. The CFCC technique enables the nondominant complex poles of the amplifier to be located at high frequencies, resulting in bandwidth extension. The amplifier can be stabilized with a cascode compensation capacitor of only 1.15 pF when driving a 500-pF capacitive load, greatly reducing the overall area of the amplifier. In addition, the presence of two left-hand-plane (LHP) zeros in the proposed scheme improves the phase margin and relaxes the stability criteria. The proposed technique has been implemented and fabricated in a UMC 65-nm CMOS process and it achieves a 2-MHz gain-bandwidth product (GBW) when driving a 500-pF capacitive load by consuming only 20.4 μW at a 1.2-V supply. The proposed compensation technique compares favorably in terms of figures-of-merit (FOM) to previously reported works. Most significantly, the CFCC amplifier achieves the highest load capacitance to total compensation capacitance ratio (CL/CT) of all its counterparts.",
"title": ""
},
{
"docid": "814c69ae155f69ee481255434039b00c",
"text": "The introduction of semantics on the web will lead to a new generation of services based on content rather than on syntax. Search engines will provide topic-based searches, retrieving resources conceptually related to the user informational need. Queries will be expressed in several ways, and will be mapped on the semantic level defining topics that must be retrieved from the web. Moving towards this new Web era, effective semantic search engines will provide means for successful searches avoiding the heavy burden experimented by users in a classical query-string based search task. In this paper we propose a search engine based on web resource semantics. Resources to be retrieved are semantically annotated using an existing open semantic elaboration platform and an ontology is used to describe the knowledge domain into which perform queries. Ontology navigation provides semantic level reasoning in order to retrieve meaningful resources with respect to a given information request.",
"title": ""
},
{
"docid": "fa7177c3e65ea78911a953ef75c7cdac",
"text": "Schizophrenia for many patients is a lifelong mental disorder with significant consequences on most functional domains. One fifth to one third of patients with schizophrenia experience persistent psychotic symptoms despite adequate trials of antipsychotic treatment, and are considered to have treatment-resistant schizophrenia (TRS). Clozapine is the only medication to demonstrate efficacy for psychotic symptoms in such patients. However, clozapine is not effective in 40%-70% of patients with TRS and it has significant limitations in terms of potentially life-threatening side effects and the associated monitoring. Accordingly, a number of pharmacological and non-pharmacological biological approaches for clozapine-resistant TRS have emerged. This article provides a brief updated critical review of recent therapeutic strategies for TRS, particularly for clozapine-resistant TRS, which include pharmacotherapy, electroconvulsive therapy, repetitive transcranial magnetic stimulation, and transcranial direct current stimulation.",
"title": ""
},
{
"docid": "573bc5d62ce73cd2dc352bece75cedcf",
"text": "Software deobfuscation is a crucial activity in security analysis and especially, in malware analysis. While standard static and dynamic approaches suffer from well-known shortcomings, Dynamic Symbolic Execution (DSE) has recently been proposed has an interesting alternative, more robust than static analysis and more complete than dynamic analysis. Yet, DSE addresses certain kinds of questions encountered by a reverser namely feasibility questions. Many issues arising during reverse, e.g. detecting protection schemes such as opaque predicates fall into the category of infeasibility questions. In this article, we present the Backward-Bounded DSE, a generic, precise, efficient and robust method for solving infeasibility questions. We demonstrate the benefit of the method for opaque predicates and call stack tampering, and give some insight for its usage for some other protection schemes. Especially, the technique has successfully been used on state-of-the-art packers as well as on the government-grade X-Tunnel malware – allowing its entire deobfuscation. Backward-Bounded DSE does not supersede existing DSE approaches, but rather complements them by addressing infeasibility questions in a scalable and precise manner. Following this line, we propose sparse disassembly, a combination of Backward-Bounded DSE and static disassembly able to enlarge dynamic disassembly in a guaranteed way, hence getting the best of dynamic and static disassembly. This work paves the way for robust, efficient and precise disassembly tools for heavily-obfuscated binaries.",
"title": ""
},
{
"docid": "0414688abd9c2471bbcbe06a56b134ca",
"text": "We provide new theoretical insights on why overparametrization is effective in learning neural networks. For a k hidden node shallow network with quadratic activation and n training data points, we show as long as k ≥ √ 2n, over-parametrization enables local search algorithms to find a globally optimal solution for general smooth and convex loss functions. Further, despite that the number of parameters may exceed the sample size, using theory of Rademacher complexity, we show with weight decay, the solution also generalizes well if the data is sampled from a regular distribution such as Gaussian. To prove when k ≥ √ 2n, the loss function has benign landscape properties, we adopt an idea from smoothed analysis, which may have other applications in studying loss surfaces of neural networks.",
"title": ""
},
{
"docid": "b4409a8e8a47bc07d20cebbfaccb83fd",
"text": "We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals.",
"title": ""
},
{
"docid": "9eacc5f0724ff8fe2152930980dded4b",
"text": "A computer-controlled adjustable nanosecond pulse generator based on high-voltage MOSFET is designed in this paper, which owns stable performance and miniaturization profile of 32×30×7 cm3. The experiment results show that the pulser can generate electrical pulse with Gaussian rising time of 20 nanosecond, section-adjustable index falling time of 40–200 nanosecond, continuously adjustable repitition frequency of 0–5 kHz, quasi-continuously adjustable amplitude of 0–1 kV at 50 Ω load. And the pulser could meet the requiremen.",
"title": ""
}
] |
scidocsrr
|
dfa37f61a1e9fd66981f5ad550705234
|
Visualizing Bitcoin Flows of Ransomware: WannaCry One Week Later
|
[
{
"docid": "32ca9711622abd30c7c94f41b91fa3f6",
"text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.",
"title": ""
}
] |
[
{
"docid": "8ebff9573757d0b79236b35e42a3a7c6",
"text": "Joint multichannel enhancement and acoustic modeling using neural networks has shown promise over the past few years. However, one shortcoming of previous work [1, 2, 3] is that the filters learned during training are fixed for decoding, potentially limiting the ability of these models to adapt to previously unseen or changing conditions. In this paper we explore a neural network adaptive beamforming (NAB) technique to address this issue. Specifically, we use LSTM layers to predict time domain beamforming filter coefficients at each input frame. These filters are convolved with the framed time domain input signal and summed across channels, essentially performing FIR filter-andsum beamforming using the dynamically adapted filter. The beamformer output is passed into a waveform CLDNN acoustic model [4] which is trained jointly with the filter prediction LSTM layers. We find that the proposed NAB model achieves a 12.7% relative improvement in WER over a single channel model [4] and reaches similar performance to a “factored” model architecture which utilizes several fixed spatial filters [3] on a 2,000-hour Voice Search task, with a 17.9% decrease in computational cost.",
"title": ""
},
{
"docid": "62b2daec701f43a3282076639d01e475",
"text": "Several hundred plant and herb species that have potential as novel antiviral agents have been studied, with surprisingly little overlap. A wide variety of active phytochemicals, including the flavonoids, terpenoids, lignans, sulphides, polyphenolics, coumarins, saponins, furyl compounds, alkaloids, polyines, thiophenes, proteins and peptides have been identified. Some volatile essential oils of commonly used culinary herbs, spices and herbal teas have also exhibited a high level of antiviral activity. However, given the few classes of compounds investigated, most of the pharmacopoeia of compounds in medicinal plants with antiviral activity is still not known. Several of these phytochemicals have complementary and overlapping mechanisms of action, including antiviral effects by either inhibiting the formation of viral DNA or RNA or inhibiting the activity of viral reproduction. Assay methods to determine antiviral activity include multiple-arm trials, randomized crossover studies, and more compromised designs such as nonrandomized crossovers and pre- and post-treatment analyses. Methods are needed to link antiviral efficacy/potency- and laboratory-based research. Nevertheless, the relative success achieved recently using medicinal plant/herb extracts of various species that are capable of acting therapeutically in various viral infections has raised optimism about the future of phyto-antiviral agents. As this review illustrates, there are innumerable potentially useful medicinal plants and herbs waiting to be evaluated and exploited for therapeutic applications against genetically and functionally diverse viruses families such as Retroviridae, Hepadnaviridae and Herpesviridae",
"title": ""
},
{
"docid": "b8b1c342a2978f74acd38bed493a77a5",
"text": "With the rapid growth of battery-powered portable electronics, an efficient power management solution is necessary for extending battery life. Generally, basic switching regulators, such as buck and boost converters, may not be capable of using the entire battery output voltage range (e.g., 2.5-4.7 V for Li-ion batteries) to provide a fixed output voltage (e.g., 3.3 V). In this paper, an average-current-mode noninverting buck-boost dc-dc converter is proposed. It is not only able to use the full output voltage range of a Li-ion battery, but it also features high power efficiency and excellent noise immunity. The die area of this chip is 2.14 × 1.92 mm2, fabricated by using TSMC 0.35 μm 2P4M 3.3 V/5 V mixed-signal polycide process. The input voltage of the converter may range from 2.3 to 5 V with its output voltage set to 3.3 V, and its switching frequency is 500 kHz. Moreover, it can provide up to 400-mA load current, and the maximal measured efficiency is 92.01%.",
"title": ""
},
{
"docid": "b72bc9ee1c32ec3d268abd1d3e51db25",
"text": "As a newly developing academic domain, researches on Mobile learning are still in their initial stage. Meanwhile, M-blackboard comes from Mobile learning. This study attempts to discover the factors impacting the intention to adopt mobile blackboard. Eleven selected model on the Mobile learning adoption were comprehensively reviewed. From the reviewed articles, the most factors are identified. Also, from the frequency analysis, the most frequent factors in the Mobile blackboard or Mobile learning adoption studies are performance expectancy, effort expectancy, perceived playfulness, facilitating conditions, self-management, cost and past experiences. The descriptive statistic was performed to gather the respondents’ demographic information. It also shows that the respondents agreed on nearly every statement item. Pearson correlation and regression analysis were also conducted.",
"title": ""
},
{
"docid": "e3537eb7ab5da891aea70306c548f8c6",
"text": "In recent era of ubiquitous computing the internet of things and sensor networks are researched widely. The deployment of the wireless sensor networks in the harsh environments ascends issues associated with delay clustering approaches, packet drop, delay, energy, link quality, mobility and coverage. Various research studies are proposing routing protocols clustering algorithm with research goal for reduction in terms of energy and delay. This paper focuses on delay and energy by introducing threshold based scheme. Furthermore energy and delay efficient routing protocol is proposed for cluster head selection in the heterogeneous wireless sensor networks. We have introduced delay and energy based adaptive threshold scheme in this paper to solve this problem. Furthermore this study presents new routing algorithm which contains energy and delay and velocity threshold based cluster-head election scheme. The cluster head is selected according to distance, velocity and energy where probability is set for the residual energy. The nodes are classified into normal, advanced and herculean levels. This paper presents new routing protocol named as energy and delay efficient routing protocol (EDERP). The MATLAB is used for simulation and comparison of the routing protocol with other protocols. The simulations results indicate that this protocol is effective regarding delay and energy.",
"title": ""
},
{
"docid": "ff72ade7fdfba55c0f6ab7b5f8b74eb7",
"text": "Automatic detection of facial features in an image is important stage for various facial image interpretation work, such as face recognition, facial expression recognition, 3Dface modeling and facial features tracking. Detection of facial features like eye, pupil, mouth, nose, nostrils, lip corners, eye corners etc., with different facial expression and illumination is a challenging task. In this paper, we presented different methods for fully automatic detection of facial features. Viola-Jones' object detector along with haar-like cascaded features are used to detect face, eyes and nose. Novel techniques using the basic concepts of facial geometry, are proposed to locate the mouth position, nose position and eyes position. The estimation of detection region for features like eye, nose and mouth enhanced the detection accuracy significantly. An algorithm, using the H-plane of the HSV color space is proposed for detecting eye pupil from the eye detected region. FEI database of frontal face images is mainly used to test the algorithm. Proposed algorithm is tested over 100 frontal face images with two different facial expression (neutral face and smiling face). The results obtained are found to be 100% accurate for lip, lip corners, nose and nostrils detection. The eye corners, and eye pupil detection is giving approximately 95% accurate results.",
"title": ""
},
{
"docid": "3d93c45e2374a7545c6dff7de0714352",
"text": "Building an interest model is the key to realize personalized text recommendation. Previous interest models neglect the fact that a user may have multiple angles of interest. Different angles of interest provide different requests and criteria for text recommendation. This paper proposes an interest model that consists of two kinds of angles: persistence and pattern, which can be combined to form complex angles. The model uses a new method to represent the long-term interest and the short-term interest, and distinguishes the interest in object and the interest in the link structure of objects. Experiments with news-scale text data show that the interest in object and the interest in link structure have real requirements, and it is effective to recommend texts according to the angles. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "425bbea2a6aff317c83e73738bca89ed",
"text": "Classical rate-distortion theory requires specifying a source distribution. Instead, we analyze rate-distortion properties of individual objects using the recently developed algorithmic rate-distortion theory. The latter is based on the noncomputable notion of Kolmogorov complexity. To apply the theory we approximate the Kolmogorov complexity by standard data compression techniques, and perform a number of experiments with lossy compression and denoising of objects from different domains. We also introduce a natural generalization to lossy compression with side information. To maintain full generality we need to address a difficult searching problem. While our solutions are therefore not time efficient, we do observe good denoising and compression performance.",
"title": ""
},
{
"docid": "dc41eb4913c47c4b64d3ca4c1dac6e8d",
"text": "Applied Geostatistics with SGeMS: A User's Guide PetraSim: A graphical user interface for the TOUGH2 family of multiphase flow and transport codes. Applied Geostatistics with SGeMS: A User's Guide · Certain Death in Sierra Treatise on Fungi as Experimental Systems for Basic and Applied Research. Baixe grátis o arquivo SGeMS User's Guide enviado para a disciplina de Applied Geostatistics with SGeMS: A Users' Guide · S-GeMS Tutorial Notes. Applied Geostatistics with SGeMS: A User's Guide · Certain Death in Sierra Leone: Introduction to Stochastic Calculus Applied to Finance, Second Edition. Build Native Cross-Platform Apps with Appcelerator: A beginner's guide for Web Developers Applied GeostAtistics with SGeMS: A User's guide (Repost).",
"title": ""
},
{
"docid": "4f686e9f37ec26070d0d280b98f78673",
"text": "State-of-the-art visual perception models for a wide range of tasks rely on supervised pretraining. ImageNet classification is the de facto pretraining task for these models. Yet, ImageNet is now nearly ten years old and is by modern standards “small”. Even so, relatively little is known about the behavior of pretraining with datasets that are multiple orders of magnitude larger. The reasons are obvious: such datasets are difficult to collect and annotate. In this paper, we present a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images. Our experiments demonstrate that training for large-scale hashtag prediction leads to excellent results. We show improvements on several image classification and object detection tasks, and report the highest ImageNet-1k single-crop, top-1 accuracy to date: 85.4% (97.6% top-5). We also perform extensive experiments that provide novel empirical data on the relationship between large-scale pretraining and transfer learning performance.",
"title": ""
},
{
"docid": "0958001e0a54cd0d3dc20864e65cf2a8",
"text": "Credit card fraud resulted in the loss of $3 billion to North American financial institutions in 2017. The rise of digital payments systems such as Apple Pay, Android Pay, and Venmo has meant that loss due to fraudulent activity is expected to increase. Deep Learning presents a promising solution to the problem of credit card fraud detection by enabling institutions to make optimal use of their historic customer data as well as real-time transaction details that are recorded at the time of the transaction. In 2017, a study found that a Deep Learning approach provided comparable results to prevailing fraud detection methods such as Gradient Boosted Trees and Logistic Regression. However, Deep Learning encompasses a number of topologies. Additionally, the various parameters used to construct the model (e.g. the number of neurons in the hidden layer of a neural network) also influence its results. In this paper, we evaluate a subsection of Deep Learning topologies — from the general artificial neural network to topologies with built-in time and memory components such as Long Short-term memory — and different parameters with regard to their efficacy in fraud detection on a dataset of nearly 80 million credit card transactions that have been pre-labeled as fraudulent and legitimate. We utilize a high performance, distributed cloud computing environment to navigate past common fraud detection problems such as class imbalance and scalability. Our analysis provides a comprehensive guide to sensitivity analysis of model parameters with regard to performance in fraud detection. We also present a framework for parameter tuning of Deep Learning topologies for credit card fraud detection to enable financial institutions to reduce losses by preventing fraudulent activity.",
"title": ""
},
{
"docid": "6c9f3107fbf14f5bef1b8edae1b9d059",
"text": "Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.",
"title": ""
},
{
"docid": "68489ec6e39ffd95d5df7d6817474cde",
"text": "Foster B-trees are a new variant of B-trees that combines advantages of prior B-tree variants optimized for many-core processors and modern memory hierarchies with flash storage and nonvolatile memory. Specific goals include: (i) minimal concurrency control requirements for the data structure, (ii) efficient migration of nodes to new storage locations, and (iii) support for continuous and comprehensive self-testing. Like Blink-trees, Foster B-trees optimize latching without imposing restrictions or specific designs on transactional locking, for example, key range locking. Like write-optimized B-trees, and unlike Blink-trees, Foster B-trees enable large writes on RAID and flash devices as well as wear leveling and efficient defragmentation. Finally, they support continuous and inexpensive yet comprehensive verification of all invariants, including all cross-node invariants of the B-tree structure. An implementation and a performance evaluation show that the Foster B-tree supports high concurrency and high update rates without compromising consistency, correctness, or read performance.",
"title": ""
},
{
"docid": "9f84630422777d869edd7167ff6da443",
"text": "Video surveillance, closed-circuit TV and IP-camera systems became virtually omnipresent and indispensable for many organizations, businesses, and users. Their main purpose is to provide physical security, increase safety, and prevent crime. They also became increasingly complex, comprising many communication means, embedded hardware and non-trivial firmware. However, most research to date focused mainly on the privacy aspects of such systems, and did not fully address their issues related to cyber-security in general, and visual layer (i.e., imagery semantics) attacks in particular. In this paper, we conduct a systematic review of existing and novel threats in video surveillance, closed-circuit TV and IP-camera systems based on publicly available data. The insights can then be used to better understand and identify the security and the privacy risks associated with the development, deployment and use of these systems. We study existing and novel threats, along with their existing or possible countermeasures, and summarize this knowledge into a comprehensive table that can be used in a practical way as a security checklist when assessing cyber-security level of existing or new CCTV designs and deployments. We also provide a set of recommendations and mitigations that can help improve the security and privacy levels provided by the hardware, the firmware, the network communications and the operation of video surveillance systems. We hope the findings in this paper will provide a valuable knowledge of the threat landscape that such systems are exposed to, as well as promote further research and widen the scope of this field beyond its current boundaries.",
"title": ""
},
{
"docid": "984dba43888e7a3572d16760eba6e9a5",
"text": "This study developed an integrated model to explore the antecedents and consequences of online word-of-mouth in the context of music-related communication. Based on survey data from college students, online word-of-mouth was measured with two components: online opinion leadership and online opinion seeking. The results identified innovativeness, Internet usage, and Internet social connection as significant predictors of online word-of-mouth, and online forwarding and online chatting as behavioral consequences of online word-of-mouth. Contrary to the original hypothesis, music involvement was found not to be significantly related to online word-of-mouth. Theoretical implications of the findings and future research directions are discussed.",
"title": ""
},
{
"docid": "4e0ff4875a4dff6863734c964db54540",
"text": "We present a personalized recommender system using neural network for recommending products, such as eBooks, audio-books (“Anonymous audio book service”), Mobile Apps, Video and Music. It produces recommendations based on user consumption history: purchases, listens or watches. Our key contribution is to formulate recommendation problem as a model that encodes historical behavior to predict the future behavior using soft data split, combining predictor and autoencoder models. We introduce convolutional layer for learning the importance (time decay) of the purchases depending on their purchase date and demonstrate that the shape of the time decay function can be well approximated by a parametrical function. We present offline experimental results showing that neural networks with two hidden layers can capture seasonality changes, and at the same time outperform other modeling techniques, including our recommender in production. Most importantly, we demonstrate that our model can be scaled to all digital categories. Finally, we show online A/B test results, discuss key improvements to the neural network model, and describe our production pipeline.",
"title": ""
},
{
"docid": "76c6ad5e97d5296a9be841c3d3552a27",
"text": "In fish as in mammals, virus infections induce changes in the expression of many host genes. Studies conducted during the last fifteen years revealed a major contribution of the interferon system in fish antiviral response. This review describes the screening methods applied to compare the impact of virus infections on the transcriptome in different fish species. These approaches identified a \"core\" set of genes that are strongly induced in most viral infections. The \"core\" interferon-induced genes (ISGs) are generally conserved in vertebrates, some of them inhibiting a wide range of viruses in mammals. A selection of ISGs -PKR, vig-1/viperin, Mx, ISG15 and finTRIMs - is further analyzed here to illustrate the diversity and complexity of the mechanisms involved in establishing an antiviral state. Most of the ISG-based pathways remain to be directly determined in fish. Fish ISGs are often duplicated and the functional specialization of multigenic families will be of particular interest for future studies.",
"title": ""
},
{
"docid": "e089c8d35bd77e1947d11207a7905617",
"text": "Real-time monitoring of groups and their rich contexts will be a key building block for futuristic, group-aware mobile services. In this paper, we propose GruMon, a fast and accurate group monitoring system for dense and complex urban spaces. GruMon meets the performance criteria of precise group detection at low latencies by overcoming two critical challenges of practical urban spaces, namely (a) the high density of crowds, and (b) the imprecise location information available indoors. Using a host of novel features extracted from commodity smartphone sensors, GruMon can detect over 80% of the groups, with 97% precision, using 10 minutes latency windows, even in venues with limited or no location information. Moreover, in venues where location information is available, GruMon improves the detection latency by up to 20% using semantic information and additional sensors to complement traditional spatio-temporal clustering approaches. We evaluated GruMon on data collected from 258 shopping episodes from 154 real participants, in two large shopping complexes in Korea and Singapore. We also tested GruMon on a large-scale dataset from an international airport (containing ≈37K+ unlabelled location traces per day) and a live deployment at our university, and showed both GruMon's potential performance at scale and various scalability challenges for real-world dense environment deployments.",
"title": ""
},
{
"docid": "9b4ffbbcd97e94524d2598cd862a400a",
"text": "Head pose monitoring is an important task for driver assistance systems, since it is a key indicator for human attention and behavior. However, current head pose datasets either lack complexity or do not adequately represent the conditions that occur while driving. Therefore, we introduce DriveAHead, a novel dataset designed to develop and evaluate head pose monitoring algorithms in real driving conditions. We provide frame-by-frame head pose labels obtained from a motion-capture system, as well as annotations about occlusions of the driver's face. To the best of our knowledge, DriveAHead is the largest publicly available driver head pose dataset, and also the only one that provides 2D and 3D data aligned at the pixel level using the Kinect v2. Existing performance metrics are based on the mean error without any consideration of the bias towards one position or another. Here, we suggest a new performance metric, named Balanced Mean Angular Error, that addresses the bias towards the forward looking position existing in driving datasets. Finally, we present the Head Pose Network, a deep learning model that achieves better performance than current state-of-the-art algorithms, and we analyze its performance when using our dataset.",
"title": ""
},
{
"docid": "5b92aa85d93c2fbb09df5a0b96fc9c1f",
"text": "Social networking services have been prevalent at many online communities such as Twitter.com and Weibo.com, where millions of users keep interacting with each other every day. One interesting and important problem in the social networking services is to rank users based on their vitality in a timely fashion. An accurate ranking list of user vitality could benefit many parties in social network services such as the ads providers and site operators. Although it is very promising to obtain a vitality-based ranking list of users, there are many technical challenges due to the large scale and dynamics of social networking data. In this paper, we propose a unique perspective to achieve this goal, which is quantifying user vitality by analyzing the dynamic interactions among users on social networks. Examples of social network include but are not limited to social networks in microblog sites and academical collaboration networks. Intuitively, if a user has many interactions with his friends within a time period and most of his friends do not have many interactions with their friends simultaneously, it is very likely that this user has high vitality. Based on this idea, we develop quantitative measurements for user vitality and propose our first algorithm for ranking users based vitality. Also, we further consider the mutual influence between users while computing the vitality measurements and propose the second ranking algorithm, which computes user vitality in an iterative way. Other than user vitality ranking, we also introduce a vitality prediction problem, which is also of great importance for many applications in social networking services. Along this line, we develop a customized prediction model to solve the vitality prediction problem. To evaluate the performance of our algorithms, we collect two dynamic social network data sets. The experimental results with both data sets clearly demonstrate the advantage of our ranking and prediction methods.",
"title": ""
}
] |
scidocsrr
|
7cb9a42193b0eb31d61a415b67ed3363
|
Information Theoretic Measures for Clusterings Comparison: Variants, Properties, Normalization and Correction for Chance
|
[
{
"docid": "2e99e535f2605e88571407142e4927ee",
"text": "Stability is a common tool to verify the validity of sample based algorithms. In clustering it is widely used to tune the parameters of the algorithm, such as the number k of clusters. In spite of the popularity of stability in practical applications, there has been very little theoretical analysis of this notion. In this paper we provide a formal definition of stability and analyze some of its basic properties. Quite surprisingly, the conclusion of our analysis is that for large sample size, stability is fully determined by the behavior of the objective function which the clustering algorithm is aiming to minimize. If the objective function has a unique global minimizer, the algorithm is stable, otherwise it is unstable. In particular we conclude that stability is not a well-suited tool to determine the number of clusters it is determined by the symmetries of the data which may be unrelated to clustering parameters. We prove our results for center-based clusterings and for spectral clustering, and support our conclusions by many examples in which the behavior of stability is counter-intuitive.",
"title": ""
},
{
"docid": "4cd09cc6aa67d1314ca5de09d1240b65",
"text": "A new class of metrics appropriate for measuring effective similarity relations between sequences, say one type of similarity per metric, is studied. We propose a new \"normalized information distance\", based on the noncomputable notion of Kolmogorov complexity, and show that it minorizes every metric in the class (that is, it is universal in that it discovers all effective similarities). We demonstrate that it too is a metric and takes values in [0, 1]; hence it may be called the similarity metric. This is a theory foundation for a new general practical tool. We give two distinctive applications in widely divergent areas (the experiments by necessity use just computable approximations to the target notions). First, we computationally compare whole mitochondrial genomes and infer their evolutionary history. This results in a first completely automatic computed whole mitochondrial phylogeny tree. Secondly, we give fully automatically computed language tree of 52 different language based on translated versions of the \"Universal Declaration of Human Rights\".",
"title": ""
},
{
"docid": "335847313ee670dc0648392c91d8567a",
"text": "Several large scale data mining applications, such as text c ategorization and gene expression analysis, involve high-dimensional data that is also inherentl y directional in nature. Often such data is L2 normalized so that it lies on the surface of a unit hyperspher e. Popular models such as (mixtures of) multi-variate Gaussians are inadequate for characteri zing such data. This paper proposes a generative mixture-model approach to clustering directional data based on the von Mises-Fisher (vMF) distribution, which arises naturally for data distributed on the unit hypersphere. In particular, we derive and analyze two variants of the Expectation Maximiza tion (EM) framework for estimating the mean and concentration parameters of this mixture. Nume rical estimation of the concentration parameters is non-trivial in high dimensions since it i nvolves functional inversion of ratios of Bessel functions. We also formulate two clustering algorit hms corresponding to the variants of EM that we derive. Our approach provides a theoretical basis fo r the use of cosine similarity that has been widely employed by the information retrieval communit y, and obtains the spherical kmeans algorithm (kmeans with cosine similarity) as a special case of both variants. Empirical results on clustering of high-dimensional text and gene-expression d ata based on a mixture of vMF distributions show that the ability to estimate the concentration pa rameter for each vMF component, which is not present in existing approaches, yields superior resu lts, especially for difficult clustering tasks in high-dimensional spaces.",
"title": ""
}
] |
[
{
"docid": "c9e1c4b2a043ba43fbd07b05e8742e41",
"text": "BACKGROUND\nThere has been research on the use of offline video games for therapeutic purposes but online video game therapy is still fairly under-researched. Online therapeutic interventions have only recently included a gaming component. Hence, this review represents a timely first step toward taking advantage of these recent technological and cultural innovations, particularly for the treatment of special-needs groups such as the young, the elderly and people with various conditions such as ADHD, anxiety and autism spectrum disorders.\n\n\nMATERIAL\nA review integrating research findings on two technological advances was conducted: the home computer boom of the 1980s, which triggered a flood of research on therapeutic video games for the treatment of various mental health conditions; and the rise of the internet in the 1990s, which caused computers to be seen as conduits for therapeutic interaction rather than replacements for the therapist.\n\n\nDISCUSSION\nWe discuss how video games and the internet can now be combined in therapeutic interventions, as attested by a consideration of pioneering studies.\n\n\nCONCLUSION\nFuture research into online video game therapy for mental health concerns might focus on two broad types of game: simple society games, which are accessible and enjoyable to players of all ages, and online worlds, which offer a unique opportunity for narrative content and immersive remote interaction with therapists and fellow patients. Both genres might be used for assessment and training purposes, and provide an unlimited platform for social interaction. The mental health community can benefit from more collaborative efforts between therapists and engineers, making such innovations a reality.",
"title": ""
},
{
"docid": "e9dc7d048b53ec9649dec65e05a77717",
"text": "Recent advances in object detection have exploited object proposals to speed up object searching. However, many of existing object proposal generators have strong localization bias or require computationally expensive diversification strategies. In this paper, we present an effective approach to address these issues. We first propose a simple and useful localization bias measure, called superpixel tightness. Based on the characteristics of superpixel tightness distribution, we propose an effective method, namely multi-thresholding straddling expansion (MTSE) to reduce localization bias via fast diversification. Our method is essentially a box refinement process, which is intuitive and beneficial, but seldom exploited before. The greatest benefit of our method is that it can be integrated into any existing model to achieve consistently high recall across various intersection over union thresholds. Experiments on PASCAL VOC dataset demonstrates that our approach improves numerous existing models significantly with little computational overhead.",
"title": ""
},
{
"docid": "bde253462808988038235a46791affc1",
"text": "Power electronic Grid-Connected Converters (GCCs) are widely applied as grid interface in renewable energy sources. This paper proposes an extended Direct Power Control with Space Vector Modulation (DPC-SVM) scheme with improved operation performance under grid distortions. The real-time operated DPC-SVM scheme has to execute several important tasks as: space vector pulse width modulation, active and reactive power feedback control, grid current harmonics and voltage dips compensation. Thus, development and implementation of the DPC-SVM algorithm using single chip floating-point microcontroller TMS320F28335 is described. It combines large peripheral equipment, typical for microcontrollers, with high computation capacity characteristic for Digital Signal Processors (DSPs). The novelty of the proposed system lies in extension of the generic DPC-SVM scheme by additional higher harmonic and voltage dips compensation modules and implementation of the whole algorithm in a single chip floating point microcontroller. Overview of the laboratory setup, description of basic algorithm subtasks sequence, software optimization as well as execution time of specific program modules on fixed-point and floating-point processors are discussed. Selected oscillograms illustrating operation and robustness of the developed algorithm used in 5 kVA laboratory model of the GCC are presented.",
"title": ""
},
{
"docid": "0123fd04bc65b8dfca7ff5c058d087da",
"text": "The authors forward the hypothesis that social exclusion is experienced as painful because reactions to rejection are mediated by aspects of the physical pain system. The authors begin by presenting the theory that overlap between social and physical pain was an evolutionary development to aid social animals in responding to threats to inclusion. The authors then review evidence showing that humans demonstrate convergence between the 2 types of pain in thought, emotion, and behavior, and demonstrate, primarily through nonhuman animal research, that social and physical pain share common physiological mechanisms. Finally, the authors explore the implications of social pain theory for rejection-elicited aggression and physical pain disorders.",
"title": ""
},
{
"docid": "127ba400911644a0a4e2d0f7bbb694b2",
"text": "From autonomous cars and adaptive email-filters to predictive policing systems, machine learning (ML) systems are increasingly ubiquitous; they outperform humans on specific tasks [Mnih et al., 2013, Silver et al., 2016, Hamill, 2017] and often guide processes of human understanding and decisions [Carton et al., 2016, Doshi-Velez et al., 2014]. The deployment of ML systems in complex applications has led to a surge of interest in systems optimized not only for expected task performance but also other important criteria such as safety [Otte, 2013, Amodei et al., 2016, Varshney and Alemzadeh, 2016], nondiscrimination [Bostrom and Yudkowsky, 2014, Ruggieri et al., 2010, Hardt et al., 2016], avoiding technical debt [Sculley et al., 2015], or providing the right to explanation [Goodman and Flaxman, 2016]. For ML systems to be used safely, satisfying these auxiliary criteria is critical. However, unlike measures of performance such as accuracy, these criteria often cannot be completely quantified. For example, we might not be able to enumerate all unit tests required for the safe operation of a semi-autonomous car or all confounds that might cause a credit scoring system to be discriminatory. In such cases, a popular fallback is the criterion of interpretability : if the system can explain its reasoning, we then can verify whether that reasoning is sound with respect to these auxiliary criteria. Unfortunately, there is little consensus on what interpretability in machine learning is and how to evaluate it for benchmarking. Current interpretability evaluation typically falls into two categories. The first evaluates interpretability in the context of an application: if the system is useful in either a practical application or a simplified version of it, then it must be somehow interpretable (e.g. Ribeiro et al. [2016], Lei et al. [2016], Kim et al. [2015a], Doshi-Velez et al. [2015], Kim et al. [2015b]). The second evaluates interpretability via a quantifiable proxy: a researcher might first claim that some model class—e.g. sparse linear models, rule lists, gradient boosted trees—are interpretable and then present algorithms to optimize within that class (e.g. Bucilu et al. [2006], Wang et al. [2017], Wang and Rudin [2015], Lou et al. [2012]). To large extent, both evaluation approaches rely on some notion of “you’ll know it when you see it.” Should we be concerned about a lack of rigor? Yes and no: the notions of interpretability above appear reasonable because they are reasonable: they meet the first test of having facevalidity on the correct test set of subjects: human beings. However, this basic notion leaves many kinds of questions unanswerable: Are all models in all defined-to-be-interpretable model classes equally interpretable? Quantifiable proxies such as sparsity may seem to allow for comparison, but how does one think about comparing a model sparse in features to a model sparse in prototypes? Moreover, do all applications have the same interpretability needs? If we are to move this field forward—to compare methods and understand when methods may generalize—we need to formalize these notions and make them evidence-based. The objective of this review is to chart a path toward the definition and rigorous evaluation of interpretability. The need is urgent: recent European Union regulation will require algorithms",
"title": ""
},
{
"docid": "42b8163ac8544dae2060f903c377b201",
"text": "Cloud storage systems are currently very popular, generating a large amount of traffic. Indeed, many companies offer this kind of service, including worldwide providers such as Dropbox, Microsoft and Google. These companies, as well as new providers entering the market, could greatly benefit from knowing typical workload patterns that their services have to face in order to develop more cost-effective solutions. However, despite recent analyses of typical usage patterns and possible performance bottlenecks, no previous work investigated the underlying client processes that generate workload to the system. In this context, this paper proposes a hierarchical two-layer model for representing the Dropbox client behavior. We characterize the statistical parameters of the model using passive measurements gathered in 3 different network vantage points. Our contributions can be applied to support the design of realistic synthetic workloads, thus helping in the development and evaluation of new, well-performing personal cloud storage services.",
"title": ""
},
{
"docid": "c3152bfcbae60b5b5aaa1c64146538d8",
"text": "BACKGROUND AND PURPOSE\nIn clinical trials and observational studies there is considerable inconsistency in the use of definitions to describe delayed cerebral ischemia (DCI) after aneurysmal subarachnoid hemorrhage. A major cause for this inconsistency is the combining of radiographic evidence of vasospasm with clinical features of cerebral ischemia, although multiple factors may contribute to DCI. The second issue is the variability and overlap of terms used to describe each phenomenon. This makes comparisons among studies difficult.\n\n\nMETHODS\nAn international ad hoc panel of experts involved in subarachnoid hemorrhage research developed and proposed a definition of DCI to be used as an outcome measure in clinical trials and observational studies. We used a consensus-building approach.\n\n\nRESULTS\nIt is proposed that in observational studies and clinical trials aiming to investigate strategies to prevent DCI, the 2 main outcome measures should be: (1) cerebral infarction identified on CT or MRI or proven at autopsy, after exclusion of procedure-related infarctions; and (2) functional outcome. Secondary outcome measure should be clinical deterioration caused by DCI, after exclusion of other potential causes of clinical deterioration. Vasospasm on angiography or transcranial Doppler can also be used as an outcome measure to investigate proof of concept but should be interpreted in conjunction with DCI or functional outcome.\n\n\nCONCLUSIONS\nThe proposed measures reflect the most relevant morphological and clinical features of DCI without regard to pathogenesis to be used as an outcome measure in clinical trials and observational studies.",
"title": ""
},
{
"docid": "f354fec9ea2fc5d78f105cd1921a5137",
"text": "Network embedding has recently attracted lots of attentions in data mining. Existing network embedding methods mainly focus on networks with pairwise relationships. In real world, however, the relationships among data points could go beyond pairwise, i.e., three or more objects are involved in each relationship represented by a hyperedge, thus forming hyper-networks. These hyper-networks pose great challenges to existing network embedding methods when the hyperedges are indecomposable, that is to say, any subset of nodes in a hyperedge cannot form another hyperedge. These indecomposable hyperedges are especially common in heterogeneous networks. In this paper, we propose a novel Deep Hyper-Network Embedding (DHNE) model to embed hypernetworks with indecomposable hyperedges. More specifically, we theoretically prove that any linear similarity metric in embedding space commonly used in existing methods cannot maintain the indecomposibility property in hypernetworks, and thus propose a new deep model to realize a non-linear tuplewise similarity function while preserving both local and global proximities in the formed embedding space. We conduct extensive experiments on four different types of hyper-networks, including a GPS network, an online social network, a drug network and a semantic network. The empirical results demonstrate that our method can significantly and consistently outperform the state-of-the-art algorithms.",
"title": ""
},
{
"docid": "265d69d874481270c26eb371ca05ac51",
"text": "A compact dual-band dual-polarized antenna is proposed in this paper. The two pair dipoles with strong end coupling are used for the lower frequency band, and cross-placed patch dipoles are used for the upper frequency band. The ends of the dipoles for lower frequency band are bent to increase the coupling between adjacent dipoles, which can benefit the compactness and bandwidth of the antenna. Breaches are introduced at the ends of the dipoles of the upper band, which also benefit the compactness and matching of the antenna. An antenna prototype was fabricated and measured. The measured results show that the antenna can cover from 790 MHz to 960 MHz (19.4%) for lower band and from 1710 MHz to 2170 MHz (23.7%) for upper band with VSWR < 1.5. It is expected to be a good candidate design for base station antennas.",
"title": ""
},
{
"docid": "744d7ce024289df3f32c0d5d3ec6becf",
"text": "Three homeotic mutants, aristapedia (ssa and ssa-UCl) and Nasobemia (Ns) which involve antenna-leg transformations were analyzed with respect to their time of expression. In particular we studied the question of whether these mutations are expressed when the mutant cells pass through additional cell divisions in culture. Mutant antennal discs were cultured in vivo and allowed to duplicate the antennal anlage. Furthermore, regeneration of the mutant antennal anlage was obtained by culturing eye discs and a particular fragment of the eye disc. Both duplicated and regenerated antennae showed at least a partial transformation into leg structures which indicates that the mutant gene is expressed during proliferation in culture.",
"title": ""
},
{
"docid": "ac8cef535e5038231cdad324325eaa37",
"text": "There are mainly two types of Emergent Self-Organizing Maps (ESOM) grid structures in use: hexgrid (honeycomb like) and quadgrid (trellis like) maps. In addition to that, the shape of the maps may be square or rectangular. This work investigates the effects of these different map layouts. Hexgrids were found to have no convincing advantage over quadgrids. Rectangular maps, however, are distinctively superior to square maps. Most surprisingly, rectangular maps outperform square maps for isotropic data, i.e. data sets with no particular primary direction.",
"title": ""
},
{
"docid": "2409f9a37398dbff4306930280c76e81",
"text": "OBJECTIVES\nThe dose-response relationship for hand-transmitted vibration has been investigated extensively in temperate environments. Since the clinical features of hand-arm vibration syndrome (HAVS) differ between the temperate and tropical environment, we conducted this study to investigate the dose-response relationship of HAVS in a tropical environment.\n\n\nMETHODS\nA total of 173 male construction, forestry and automobile manufacturing plant workers in Malaysia were recruited into this study between August 2011 and 2012. The participants were interviewed for history of vibration exposure and HAVS symptoms, followed by hand functions evaluation and vibration measurement. Three types of vibration doses-lifetime vibration dose (LVD), total operating time (TOT) and cumulative exposure index (CEI)-were calculated and its log values were regressed against the symptoms of HAVS. The correlation between each vibration exposure dose and the hand function evaluation results was obtained.\n\n\nRESULTS\nThe adjusted prevalence ratio for finger tingling and numbness was 3.34 (95% CI 1.27 to 8.98) for subjects with lnLVD≥20 ln m(2) s(-4) against those <16 ln m(2) s(-4). Similar dose-response pattern was found for CEI but not for TOT. No subject reported white finger. The prevalence of finger coldness did not increase with any of the vibration doses. Vibrotactile perception thresholds correlated moderately with lnLVD and lnCEI.\n\n\nCONCLUSIONS\nThe dose-response relationship of HAVS in a tropical environment is valid for finger tingling and numbness. The LVD and CEI are more useful than TOT when evaluating the dose-response pattern of a heterogeneous group of vibratory tools workers.",
"title": ""
},
{
"docid": "10124ea154b8704c3a6aaec7543ded57",
"text": "Tomato bacterial wilt and canker, caused by Clavibacter michiganensis subsp. michiganensis (Cmm) is considered one of the most important bacterial diseases of tomato worldwide. During the last two decades, severe outbreaks have occurred in greenhouses in the horticultural belt of Buenos Aires-La Plata, Argentina. Cmm strains collected in this area over a period of 14 years (2000–2013) were characterized for genetic diversity by rep-PCR genomic fingerprinting and level of virulence in order to have a better understanding of the source of inoculum and virulence variability. Analyses of BOX-, ERIC- and REP-PCR fingerprints revealed that the strains were genetically diverse; the same three fingerprint types were obtained in all three cases. No relationship could be established between rep-PCR clustering and the year, location or greenhouse origin of isolates, which suggests different sources of inoculum. However, in a few cases, bacteria with identical fingerprint types were isolated from the same greenhouse in different years. Despite strains differing in virulence, particularly within BOX-PCR groups, putative virulence genes located in plasmids (celA, pat-1) or in a pathogenicity island in the chromosome (tomA, chpC, chpG and ppaA) were detected in all strains. Our results suggest that new strains introduced every year via seed importation might be coexisting with others persisting locally. This study highlights the importance of preventive measures to manage tomato bacterial wilt and canker.",
"title": ""
},
{
"docid": "bbfc488e55fe2dfaff2af73a75c31edd",
"text": "This overview covers a wide range of cannabis topics, initially examining issues in dispensaries and self-administration, plus regulatory requirements for production of cannabis-based medicines, particularly the Food and Drug Administration \"Botanical Guidance.\" The remainder pertains to various cannabis controversies that certainly require closer examination if the scientific, consumer, and governmental stakeholders are ever to reach consensus on safety issues, specifically: whether botanical cannabis displays herbal synergy of its components, pharmacokinetics of cannabis and dose titration, whether cannabis medicines produce cyclo-oxygenase inhibition, cannabis-drug interactions, and cytochrome P450 issues, whether cannabis randomized clinical trials are properly blinded, combatting the placebo effect in those trials via new approaches, the drug abuse liability (DAL) of cannabis-based medicines and their regulatory scheduling, their effects on cognitive function and psychiatric sequelae, immunological effects, cannabis and driving safety, youth usage, issues related to cannabis smoking and vaporization, cannabis concentrates and vape-pens, and laboratory analysis for contamination with bacteria and heavy metals. Finally, the issue of pesticide usage on cannabis crops is addressed. New and disturbing data on pesticide residues in legal cannabis products in Washington State are presented with the observation of an 84.6% contamination rate including potentially neurotoxic and carcinogenic agents. With ongoing developments in legalization of cannabis in medical and recreational settings, numerous scientific, safety, and public health issues remain.",
"title": ""
},
{
"docid": "753e0af8b59c8bfd13b63c3add904ffe",
"text": "Background: Surgery of face and parotid gland may cause injury to branches of the facial nerve, which results in paralysis of muscles of facial expression. Knowledge of branching patterns of the facial nerve and reliable landmarks of the surrounding structures are essential to avoid this complication. Objective: Determine the facial nerve branching patterns, the course of the marginal mandibular branch (MMB), and the extraparotid ramification in relation to the lateral palpebral line (LPL). Materials and methods: One hundred cadaveric half-heads were dissected for determining the facial nerve branching patterns according to the presence of anastomosis between branches. The course of the MMB was followed until it entered the depressor anguli oris in 49 specimens. The vertical distance from the mandibular angle to this branch was measured. The horizontal distance from the LPL to the otobasion superious (LPL-OBS) and the apex of the parotid gland (LPL-AP) were measured in 52 specimens. Results: The branching patterns of the facial nerve were categorized into six types. The least common (1%) was type I (absent of anastomosis), while type V, the complex pattern was the most common (29%). Symmetrical branching pattern occurred in 30% of cases. The MMB was coursing below the lower border of the mandible in 57% of cases. The mean vertical distance was 0.91±0.22 cm. The mean horizontal distances of LPL-OBS and LPLAP were 7.24±0.6 cm and 3.95±0.96 cm, respectively. The LPL-AP length was 54.5±11.4% of LPL-OBS. Conclusion: More complex branching pattern of the facial nerve was found in this population and symmetrical branching pattern occurred less of ten. The MMB coursed below the lower border of the angle of mandible with a mean vertical distance of one centimeter. The extraparotid ramification of the facial nerve was located in the area between the apex of the parotid gland and the LPL.",
"title": ""
},
{
"docid": "9c18c6c79c8588e587dc1061eae7fa21",
"text": "BACKGROUND\nThe safety and tolerability of the selective serotonin reuptake inhibitors and the newer atypical agents have led to a significant increase in antidepressant use. These changes raise concern as to the likelihood of a corresponding increase in adverse behavioral reactions attributable to these drugs.\n\n\nMETHOD\nAll admissions to a university-based general hospital psychiatric unit during a 14-month period were reviewed.\n\n\nRESULTS\nForty-three (8.1%) of 533 patients were found to have been admitted owing to antidepressant-associated mania or psychosis.\n\n\nCONCLUSION\nDespite the positive changes in the side effect profile of antidepressant drugs, the rate of admissions due to antidepressant-associated adverse behavioral effects remains significant.",
"title": ""
},
{
"docid": "dbb4540af2166d4292253b17ce1ff68f",
"text": "On average, men outperform women on mental rotation tasks. Even boys as young as 4 1/2 perform better than girls on simplified spatial transformation tasks. The goal of our study was to explore ways of improving 5-year-olds' performance on a spatial transformation task and to examine the strategies children use to solve this task. We found that boys performed better than girls before training and that both boys and girls improved with training, whether they were given explicit instruction or just practice. Regardless of training condition, the more children gestured about moving the pieces when asked to explain how they solved the spatial transformation task, the better they performed on the task, with boys gesturing about movement significantly more (and performing better) than girls. Gesture thus provides useful information about children's spatial strategies, raising the possibility that gesture training may be particularly effective in improving children's mental rotation skills.",
"title": ""
},
{
"docid": "773a46b340c1d98012c8c00c72308359",
"text": "The complexity of many image processing applications and their stringent performance requirements have come to a point where they can no longer meet the real time deadlines, if implemented on conventional architectures based on a single general-purpose processor. Acceleration of these algorithms can be done by parallel computing. Parallelism can be accomplished both at hardware and software levels by various tools and methodologies. The various methods hence discussed prove to be helpful and thus a combination of both the custom hardware and software tool helps in speeding up the image processing algorithm. Different methodologies that can be used for parallel computation are discussed.",
"title": ""
},
{
"docid": "8b4285fa5b46b2eb58a06e5f5ba46b1e",
"text": "Many firms develop an information technology strategy that includes the use of business intelligence software in the decision making process. In order to really achieve a solid return on investment on this type of software, the firm should have at least 10 years of detailed data on sales, purchases, staff costs, and other items that impact the overall cost of providing a service or good. Data cubes and reports can then be built to show trends, identify product success and failures, and provide a more holistic view of company activity. This paper describes such software “Business Intelligence System for Banking and Finance”.",
"title": ""
},
{
"docid": "ee38062c7c479cfc9d8e9fc0982a9ae3",
"text": "Integrating data from heterogeneous sources is often modeled as merging graphs. Given two ormore “compatible”, but not-isomorphic graphs, the first step is to identify a graph alignment, where a potentially partial mapping of vertices between two graphs is computed. A significant portion of the literature on this problem only takes the global structure of the input graphs into account. Only more recent ones additionally use vertex and edge attributes to achieve a more accurate alignment. However, these methods are not designed to scale to map large graphs arising in many modern applications. We propose a new iterative graph aligner, gsaNA, that uses the global structure of the graphs to significantly reduce the problem size and align large graphs with a minimal loss of information. Concretely, we show that our proposed technique is highly flexible, can be used to achieve higher recall, and it is orders of magnitudes faster than the current state of the art techniques. ACM Reference format: Abdurrahman Yaşar and Ümit V. Çatalyürek. 2018. An Iterative Global Structure-Assisted Labeled Network Aligner. In Proceedings of Special Interest Group on Knowledge Discovery and Data Mining, London, England, August 18 (SIGKDD’18), 10 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn",
"title": ""
}
] |
scidocsrr
|
6c4862cfa183d0dbb0e5ae84cd089947
|
On the Unfairness of Blockchain
|
[
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
}
] |
[
{
"docid": "5e58638e766904eb84380b53cae60df2",
"text": "BACKGROUND\nAneurysmal subarachnoid hemorrhage (SAH) accounts for 5% of strokes and carries a poor prognosis. It affects around 6 cases per 100,000 patient years occurring at a relatively young age.\n\n\nMETHODS\nCommon risk factors are the same as for stroke, and only in a minority of the cases, genetic factors can be found. The overall mortality ranges from 32% to 67%, with 10-20% of patients with long-term dependence due to brain damage. An explosive headache is the most common reported symptom, although a wide spectrum of clinical disturbances can be the presenting symptoms. Brain computed tomography (CT) allow the diagnosis of SAH. The subsequent CT angiography (CTA) or digital subtraction angiography (DSA) can detect vascular malformations such as aneurysms. Non-aneurysmal SAH is observed in 10% of the cases. In patients surviving the initial aneurysmal bleeding, re-hemorrhage and acute hydrocephalus can affect the prognosis.\n\n\nRESULTS\nAlthough occlusion of an aneurysm by surgical clipping or endovascular procedure effectively prevents rebleeding, cerebral vasospasm and the resulting cerebral ischemia occurring after SAH are still responsible for the considerable morbidity and mortality related to such a pathology. A significant amount of experimental and clinical research has been conducted to find ways in preventing these complications without sound results.\n\n\nCONCLUSIONS\nEven though no single pharmacological agent or treatment protocol has been identified, the main therapeutic interventions remain ineffective and limited to the manipulation of systemic blood pressure, alteration of blood volume or viscosity, and control of arterial dioxide tension.",
"title": ""
},
{
"docid": "af63f1e1efbb15f2f41a91deb6ec1e32",
"text": "OBJECTIVES\n: A systematic review of the literature to determine the ability of dynamic changes in arterial waveform-derived variables to predict fluid responsiveness and compare these with static indices of fluid responsiveness. The assessment of a patient's intravascular volume is one of the most difficult tasks in critical care medicine. Conventional static hemodynamic variables have proven unreliable as predictors of volume responsiveness. Dynamic changes in systolic pressure, pulse pressure, and stroke volume in patients undergoing mechanical ventilation have emerged as useful techniques to assess volume responsiveness.\n\n\nDATA SOURCES\n: MEDLINE, EMBASE, Cochrane Register of Controlled Trials and citation review of relevant primary and review articles.\n\n\nSTUDY SELECTION\n: Clinical studies that evaluated the association between stroke volume variation, pulse pressure variation, and/or stroke volume variation and the change in stroke volume/cardiac index after a fluid or positive end-expiratory pressure challenge.\n\n\nDATA EXTRACTION AND SYNTHESIS\n: Data were abstracted on study design, study size, study setting, patient population, and the correlation coefficient and/or receiver operating characteristic between the baseline systolic pressure variation, stroke volume variation, and/or pulse pressure variation and the change in stroke index/cardiac index after a fluid challenge. When reported, the receiver operating characteristic of the central venous pressure, global end-diastolic volume index, and left ventricular end-diastolic area index were also recorded. Meta-analytic techniques were used to summarize the data. Twenty-nine studies (which enrolled 685 patients) met our inclusion criteria. Overall, 56% of patients responded to a fluid challenge. The pooled correlation coefficients between the baseline pulse pressure variation, stroke volume variation, systolic pressure variation, and the change in stroke/cardiac index were 0.78, 0.72, and 0.72, respectively. The area under the receiver operating characteristic curves were 0.94, 0.84, and 0.86, respectively, compared with 0.55 for the central venous pressure, 0.56 for the global end-diastolic volume index, and 0.64 for the left ventricular end-diastolic area index. The mean threshold values were 12.5 +/- 1.6% for the pulse pressure variation and 11.6 +/- 1.9% for the stroke volume variation. The sensitivity, specificity, and diagnostic odds ratio were 0.89, 0.88, and 59.86 for the pulse pressure variation and 0.82, 0.86, and 27.34 for the stroke volume variation, respectively.\n\n\nCONCLUSIONS\n: Dynamic changes of arterial waveform-derived variables during mechanical ventilation are highly accurate in predicting volume responsiveness in critically ill patients with an accuracy greater than that of traditional static indices of volume responsiveness. This technique, however, is limited to patients who receive controlled ventilation and who are not breathing spontaneously.",
"title": ""
},
{
"docid": "37e561a8dd29299dee5de2cb7781c5a3",
"text": "The management of knowledge and experience are key means by which systematic software development and process improvement occur. Within the domain of software engineering (SE), quality continues to remain an issue of concern. Although remedies such as fourth generation programming languages, structured techniques and object-oriented technology have been promoted, a \"silver bullet\" has yet to be found. Knowledge management (KM) gives organisations the opportunity to appreciate the challenges and complexities inherent in software development. We report on two case studies that investigate KM in SE at two IT organisations. Structured interviews were conducted, with the assistance of a qualitative questionnaire. The results were used to describe current practices for KM in SE, to investigate the nature of KM activities in these organisations, and to explain the impact of leadership, technology, culture and measurement as enablers of the KM process for SE.",
"title": ""
},
{
"docid": "a1fef597312118f53e6b1468084a9300",
"text": "The design of highly emissive and stable blue emitters for organic light emitting diodes (OLEDs) is still a challenge, justifying the intense research activity of the scientific community in this field. Recently, a great deal of interest has been devoted to the elaboration of emitters exhibiting a thermally activated delayed fluorescence (TADF). By a specific molecular design consisting into a minimal overlap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) due to a spatial separation of the electron-donating and the electron-releasing parts, luminescent materials exhibiting small S1-T1 energy splitting could be obtained, enabling to thermally upconvert the electrons from the triplet to the singlet excited states by reverse intersystem crossing (RISC). By harvesting both singlet and triplet excitons for light emission, OLEDs competing and sometimes overcoming the performance of phosphorescence-based OLEDs could be fabricated, justifying the interest for this new family of materials massively popularized by Chihaya Adachi since 2012. In this review, we proposed to focus on the recent advances in the molecular design of blue TADF emitters for OLEDs during the last few years.",
"title": ""
},
{
"docid": "979b0feaadefcf8494af4667cfe9a1ff",
"text": "We study fairness within the stochastic,multi-armed bandit (MAB) decision making framework. We adapt the fairness framework of “treating similar individuals similarly” [5] to this seing. Here, an ‘individual’ corresponds to an arm and two arms are ‘similar’ if they have a similar quality distribution. First, we adopt a smoothness constraint that if two arms have a similar quality distribution then the probability of selecting each arm should be similar. In addition, we dene the fairness regret, which corresponds to the degree to which an algorithm is not calibrated, where perfect calibration requires that the probability of selecting an arm is equal to the probability with which the arm has the best quality realization. We show that a variation on ompson sampling satises smooth fairness for total variation distance, and give an Õ((kT )2/3) bound on fairness regret. is complements prior work [12], which protects an on-average beer arm from being less favored. We also explain how to extend our algorithm to the dueling bandit seing. ACM Reference format: Yang Liu, Goran Radanovic, Christos Dimitrakakis, DebmalyaMandal, andDavid C. Parkes. 2017. Calibrated Fairness in Bandits. In Proceedings of FAT-ML, Calibrated Fairness in Bandits, September 2017 (FAT-ML17), 7 pages. DOI: 10.1145/nnnnnnn.nnnnnnn",
"title": ""
},
{
"docid": "e83622a6c195b63f9a20306af8aade18",
"text": "BACKGROUND\nPelvic floor muscle training is the most commonly recommended physical therapy treatment for women with stress leakage of urine. It is also used in the treatment of women with mixed incontinence, and less commonly for urge incontinence. Adjuncts, such as biofeedback or electrical stimulation, are also commonly used with pelvic floor muscle training. The content of pelvic floor muscle training programmes is highly variable.\n\n\nOBJECTIVES\nTo determine the effects of pelvic floor muscle training for women with symptoms or urodynamic diagnoses of stress, urge and mixed incontinence, in comparison to no treatment or other treatment options.\n\n\nSEARCH STRATEGY\nSearch strategy: We searched the Cochrane Incontinence Group trials register (May 2000), Medline (1980 to 1998), Embase (1980 to 1998), the database of the Dutch National Institute of Allied Health Professions (to 1998), the database of the Cochrane Rehabilitation and Related Therapies Field (to 1998), Physiotherapy Index (to 1998) and the reference lists of relevant articles. We handsearched the proceedings of the International Continence Society (1980 to 2000). We contacted investigators in the field to locate studies. Date of the most recent searches: May 2000.\n\n\nSELECTION CRITERIA\nRandomised trials in women with symptoms or urodynamic diagnoses of stress, urge or mixed incontinence that included pelvic floor muscle training in at least one arm of the trial.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo reviewers assessed all trials for inclusion/exclusion and methodological quality. Data were extracted by the lead reviewer onto a standard form and cross checked by another. Disagreements were resolved by discussion. Data were processed as described in the Cochrane Handbook. Sensitivity analysis on the basis of diagnosis was planned and undertaken where appropriate.\n\n\nMAIN RESULTS\nForty-three trials met the inclusion criteria. The primary or only reference for 15 of these was a conference abstract. The pelvic floor muscle training programs, and comparison interventions, varied markedly. Outcome measures differed between trials, and methods of data reporting varied, making the data difficult to combine. Many of the trials were small. Allocation concealment was adequate in five trials, and nine trials used assessors masked to group allocation. Thirteen trials reported that there were no losses to follow up, seven trials had dropout rates of less than 10%, but in the remaining trials the proportion of dropouts ranged from 12% to 41%. Pelvic floor muscle training was better than no treatment or placebo treatments for women with stress or mixed incontinence. 'Intensive' appeared to be better than 'standard' pelvic floor muscle training. PFMT may be more effective than some types of electrical stimulation but there were problems in combining the data from these trials. There is insufficient evidence to determine if pelvic floor muscle training is better or worse than other treatments. The effect of adding pelvic floor muscle training to other treatments (e.g. electrical stimulation, behavioural training) is not clear due to the limited amount of evidence available. Evidence of the effect of adding other adjunctive treatments to PFMT (e.g. vaginal cones, intravaginal resistance) is equally limited. The effectiveness of biofeedback assisted PFMT is not clear, but on the basis of the evidence available there did not appear to be any benefit over PFMT alone at post treatment assessment. 
Long-term outcomes of pelvic floor muscle training are unclear. Side effects of pelvic floor muscle training were uncommon and reversible. A number of the formal comparisons should be viewed with caution due to statistical heterogeneity, lack of statistical independence, and the possibility of spurious confidence intervals in some instances.\n\n\nREVIEWER'S CONCLUSIONS\nPelvic floor muscle training appeared to be an effective treatment for adult women with stress or mixed incontinence. Pelvic floor muscle training was better than no treatment or placebo treatments. The limitations of the evidence available mean that is difficult to judge if pelvic floor muscle training was better or worse than other treatments. Most trials to date have studied the effect of treatment in younger, premenopausal women. The role of pelvic floor muscle training for women with urge incontinence alone remains unclear. Many of the trials were small with poor reporting of allocation concealment and masking of outcome assessors. In addition there was a lack of consistency in the choice and reporting of outcome measures that made data difficult to combine. Methodological problems limit the confidence that can be placed in the findings of the review. Further, large, high quality trials are necessary.",
"title": ""
},
{
"docid": "3202cd03c9af446bd6bc2ca0b384c2ac",
"text": "ABSTRACT\nSurgical correction for nonsyndromic craniosynostosis has continued to evolve over the last century. The criterion standard has remained open correction of the cranial deformities, and many techniques have been described that yield satisfactory results. However, technology has allowed for minimally invasive techniques to be developed with the aid of endoscopic visualization. With proper patient selection and the aid of postoperative helmet therapy, there is increasing evidence that supports these techniques' safety and efficacy. In this article, our purpose was to describe our algorithm for treating nonsyndromic craniosynostosis at Rady Children's Hospital.",
"title": ""
},
{
"docid": "0dac38edf20c2a89a9eb46cd1300162c",
"text": "Common software weaknesses, such as improper input validation, integer overflow, can harm system security directly or indirectly, causing adverse effects such as denial-of-service, execution of unauthorized code. Common Weakness Enumeration (CWE) maintains a standard list and classification of common software weakness. Although CWE contains rich information about software weaknesses, including textual descriptions, common sequences and relations between software weaknesses, the current data representation, i.e., hyperlined documents, does not support advanced reasoning tasks on software weaknesses, such as prediction of missing relations and common consequences of CWEs. Such reasoning tasks become critical to managing and analyzing large numbers of common software weaknesses and their relations. In this paper, we propose to represent common software weaknesses and their relations as a knowledge graph, and develop a translation-based, description-embodied knowledge representation learning method to embed both software weaknesses and their relations in the knowledge graph into a semantic vector space. The vector representations (i.e., embeddings) of software weaknesses and their relations can be exploited for knowledge acquisition and inference. We conduct extensive experiments to evaluate the performance of software weakness and relation embeddings in three reasoning tasks, including CWE link prediction, CWE triple classification, and common consequence prediction. Our knowledge graph embedding approach outperforms other description- and/or structure-based representation learning methods.",
"title": ""
},
{
"docid": "cf4089c8c3b8408e2d2966e3abd8af09",
"text": "The deployment of wireless sensor networks and mobile ad-hoc networks in applications such as emergency services, warfare and health monitoring poses the threat of various cyber hazards, intrusions and attacks as a consequence of these networks’ openness. Among the most significant research difficulties in such networks safety is intrusion detection, whose target is to distinguish between misuse and abnormal behavior so as to ensure secure, reliable network operations and services. Intrusion detection is best delivered by multi-agent system technologies and advanced computing techniques. To date, diverse soft computing and machine learning techniques in terms of computational intelligence have been utilized to create Intrusion Detection and Prevention Systems (IDPS), yet the literature does not report any state-ofthe-art reviews investigating the performance and consequences of such techniques solving wireless environment intrusion recognition issues as they gain entry into cloud computing. The principal contribution of this paper is a review and categorization of existing IDPS schemes in terms of traditional artificial computational intelligence with a multi-agent support. The significance of the techniques and methodologies and their performance and limitations are additionally analyzed in this study, and the limitations are addressed as challenges to obtain a set of requirements for IDPS in establishing a collaborative-based wireless IDPS (Co-WIDPS) architectural design. It amalgamates a fuzzy reinforcement learning knowledge management by creating a far superior technological platform that is far more accurate in detecting attacks. In conclusion, we elaborate on several key future research topics with the potential to accelerate the progress and deployment of computational intelligence based Co-WIDPSs. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "91eac59a625914805a22643c6fe79ad1",
"text": "Channel state information at the transmitter (CSIT) is essential for frequency-division duplexing (FDD) massive MIMO systems, but conventional solutions involve overwhelming overhead both for downlink channel training and uplink channel feedback. In this letter, we propose a joint CSIT acquisition scheme to reduce the overhead. Particularly, unlike conventional schemes where each user individually estimates its own channel and then feed it back to the base station (BS), we propose that all scheduled users directly feed back the pilot observation to the BS, and then joint CSIT recovery can be realized at the BS. We further formulate the joint CSIT recovery problem as a low-rank matrix completion problem by utilizing the low-rank property of the massive MIMO channel matrix, which is caused by the correlation among users. Finally, we propose a hybrid low-rank matrix completion algorithm based on the singular value projection to solve this problem. Simulations demonstrate that the proposed scheme can provide accurate CSIT with lower overhead than conventional schemes.",
"title": ""
},
{
"docid": "60c8a335245e28f2a9ac24edd73eee5a",
"text": "Papulopustular rosacea (PPR) is a common facial skin disease, characterized by erythema, telangiectasia, papules and pustules. Its physiopathology is still being discussed, but recently several molecular features of its inflammatory process have been identified: an overproduction of Toll-Like receptors 2, of a serine protease, and of abnormal forms of cathelicidin. The two factors which stimulate the Toll-like receptors to induce cathelicidin expression are skin infection and cutaneous barrier disruption: these two conditions are, at least theoretically, fulfilled by Demodex, which is present in high density in PPR and creates epithelial breaches by eating cells. So, the major pathogenic mechanisms of Demodex and its role in PPR are reviewed here in the context of these recent discoveries. In this review, the inflammatory process of PPR appears to be a consequence of the proliferation of Demodex, and strongly supports the hypothesis that: (1) in the first stage a specific (innate or acquired) immune defect against Demodex allows the proliferation of the mite; (2) in the second stage, probably when some mites penetrate into the dermis, the immune system is suddenly stimulated and gives rise to an exaggerated immune response against the Demodex, resulting in the papules and the pustules of the rosacea. In this context, it would be very interesting to study the immune molecular features of this first stage, named \"pityriasis folliculorum\", where the Demodex proliferate profusely with no, or a low immune reaction from the host: this entity appears to be a missing link in the understanding of rosacea.",
"title": ""
},
{
"docid": "06b43bbf61791a76c3455cb4d591d71e",
"text": "We present a feature-based framework that combines spatial feature clustering, guided sampling for pose generation, and model updating for 3D object recognition and pose estimation. Existing methods fails in case of repeated patterns or multiple instances of the same object, as they rely only on feature discriminability for matching and on the estimator capabilities for outlier rejection. We propose to spatially separate the features before matching to create smaller clusters containing the object. Then, hypothesis generation is guided by exploiting cues collected offand on-line, such as feature repeatability, 3D geometric constraints, and feature occurrence frequency. Finally, while previous methods overload the model with synthetic features for wide baseline matching, we claim that continuously updating the model representation is a lighter yet reliable strategy. The evaluation of our algorithm on challenging video sequences shows the improvement provided by our contribution.",
"title": ""
},
{
"docid": "d488d9d754c360efb3910c83e3175756",
"text": "The most common question asked by patients with inflammatory bowel disease (IBD) is, \"Doctor, what should I eat?\" Findings from epidemiology studies have indicated that diets high in animal fat and low in fruits and vegetables are the most common pattern associated with an increased risk of IBD. Low levels of vitamin D also appear to be a risk factor for IBD. In murine models, diets high in fat, especially saturated animal fats, also increase inflammation, whereas supplementation with omega 3 long-chain fatty acids protect against intestinal inflammation. Unfortunately, omega 3 supplements have not been shown to decrease the risk of relapse in patients with Crohn's disease. Dietary intervention studies have shown that enteral therapy, with defined formula diets, helps children with Crohn's disease and reduces inflammation and dysbiosis. Although fiber supplements have not been shown definitively to benefit patients with IBD, soluble fiber is the best way to generate short-chain fatty acids such as butyrate, which has anti-inflammatory effects. Addition of vitamin D and curcumin has been shown to increase the efficacy of IBD therapy. There is compelling evidence from animal models that emulsifiers in processed foods increase risk for IBD. We discuss current knowledge about popular diets, including the specific carbohydrate diet and diet low in fermentable oligo-, di-, and monosaccharides and polyols. We present findings from clinical and basic science studies to help gastroenterologists navigate diet as it relates to the management of IBD.",
"title": ""
},
{
"docid": "20c2aea79b80c93783aa3f82a8aa2625",
"text": "The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.",
"title": ""
},
{
"docid": "009543f9b54e116f379c95fe255e7e03",
"text": "With technology migration into nano and molecular scales several hybrid CMOS/nano logic and memory architectures have been proposed that aim to achieve high device density with low power consumption. The discovery of the memristor has further enabled the realization of denser nanoscale logic and memory systems by facilitating the implementation of multilevel logic. This work describes the design of such a multilevel nonvolatile memristor memory system, and the design constraints imposed in the realization of such a memory. In particular, the limitations on load, bank size, number of bits achievable per device, placed by the required noise margin for accurately reading and writing the data stored in a device are analyzed. Also analyzed are the nondisruptive read and write methodologies for the hybrid multilevel memristor memory to program and read the memristive information without corrupting it. This work showcases two write methodologies that leverage the best traits of memristors when used in either linear (low power) or nonlinear drift (fast speeds) modes. The system can therefore be tailored depending on the required performance parameters of a given application for a fast memory or a slower but very energy-efficient system. We propose for the first time, a hybrid memory that aims to incorporate the area advantage provided by the utilization of multilevel logic and nanoscale memristive devices in conjunction with CMOS for the realization of a high density nonvolatile multilevel memory.",
"title": ""
},
{
"docid": "3436b24142bfce01eadd6f7a1d6f1dd1",
"text": "Partial discharge (PD) detection has been widely applied to high voltage cable systems for several decades. In this paper, three kinds of insulation defects in XLPE cables are designed and tested at step-wise DC voltage. The PD developing progress of each defect cable is divided into two stages based on the severity degree of PDs. Based on the compressed sensing (CS) theory, a novel method used for recognizing PD patterns at DC voltage is proposed. Firstly, both the statistical features of PD repetition rate and the norm characteristics of time domain features are extracted to create a high-dimensional feature space. Then each test sample from the feature space is sparsely represented as linear combinations of training samples, and the sufficiently sparse one is obtained via 1-norm minimization. Finally, the PD pattern can be recognized by minimizing the residuals between the test sample and the recovered one. The experimental data is analyzed by the proposed method, and the results show that the patterns of both PD source and PD stage are recognized precisely, when the combination solution of features and the 1-norm minimization algorithm are determined appropriately.",
"title": ""
},
{
"docid": "7e720290d507c3370fc50782df3e90c4",
"text": "Photobacterium damselae subsp. piscicida is the causative agent of pasteurellosis in wild and farmed marine fish worldwide. Although serologically homogeneous, recent molecular advances have led to the discovery of distinct genetic clades, depending on geographical origin. Further details of the strategies for host colonisation have arisen including information on the role of capsule, susceptibility to oxidative stress, confirmation of intracellular survival in host epithelial cells, and induced apoptosis of host macrophages. This improved understanding has given rise to new ideas and advances in vaccine technologies, which are reviewed in this paper.",
"title": ""
},
{
"docid": "241a1589619c2db686675327cab1e8da",
"text": "This paper describes a simple computational model of joint torque and impedance in human arm movements that can be used to simulate three-dimensional movements of the (redundant) arm or leg and to design the control of robots and human-machine interfaces. This model, based on recent physiological findings, assumes that (1) the central nervous system learns the force and impedance to perform a task successfully in a given stable or unstable dynamic environment and (2) stiffness is linearly related to the magnitude of the joint torque and increased to compensate for environment instability. Comparison with existing data shows that this simple model is able to predict impedance geometry well.",
"title": ""
},
{
"docid": "02f97b35b014a55b4a36e22981877784",
"text": "BACKGROUND\nCough is an extremely common problem in pediatrics, mostly triggered and perpetuated by inflammatory processes or mechanical irritation leading to viscous mucous production and increased sensitivity of the cough receptors. Protecting the mucosa might be very useful in limiting the contact with micro-organisms and irritants thus decreasing the inflammation and mucus production. Natural molecular complexes can act as a mechanical barrier limiting cough stimuli with a non pharmacological approach but with an indirect anti-inflammatory action.\n\n\nOBJECTIVE\nAim of the study was to assess the efficacy of a medical device containing natural functional components in the treatment of cough persisting more than 7 days.\n\n\nMETHODS\nIn this randomized, parallel groups, double-blind vs. placebo study, children with cough persisting more than 7 days were enrolled. The clinical efficacy of the study product was assessed evaluating changes in day- and night-time cough scores after 4 and 8 days (t4 and t8) of product administration.\n\n\nRESULTS\nIn the inter-group analysis, in the study product group compared with the placebo group, a significant difference (t4 study treatment vs. t4 placebo, p = 0.03) was observed at t4 in night-time cough score.Considering the intra-group analysis, only the study product group registered a significant improvement from t0 to t4 in both day-time (t0 vs. t4, p = 0.04) and night-time (t0 vs. t4, p = 0.003) cough scores.A significant difference, considering the study product, was also found in the following intra-group analyses: day-time scores at t4 vs. t8 (p =0.01) and at t0 vs. t8 (p = 0.001); night-time scores at t4 vs. t8 (p = 0.05), and at t0 vs. t8 (p = 0.005). Considering a subgroup of patients with higher cough (≥ 3) scores, 92.9% of them in the study product group improved at t0 vs. t4 day-time.\n\n\nCONCLUSIONS\nGrintuss® pediatric syrup showed to possess an interesting profile of efficacy and safety in the treatment of cough persisting more than 7 days.",
"title": ""
},
{
"docid": "06113aca54d87ade86127f2844df6bfd",
"text": "A growing number of people use social networking sites to foster social relationships among each other. While the advantages of the provided services are obvious, drawbacks on a users' privacy and arising implications are often neglected. In this paper we introduce a novel attack called automated social engineering which illustrates how social networking sites can be used for social engineering. Our approach takes classical social engineering one step further by automating tasks which formerly were very time-intensive. In order to evaluate our proposed attack cycle and our prototypical implementation (ASE bot), we conducted two experiments. Within the first experiment we examine the information gathering capabilities of our bot. The second evaluation of our prototype performs a Turing test. The promising results of the evaluation highlight the possibility to efficiently and effectively perform social engineering attacks by applying automated social engineering bots.",
"title": ""
}
] |
scidocsrr
|
39d0cf3b8a14d45ab3abdf72f558ee55
|
Social Network De-anonymization with Overlapping Communities: Analysis, Algorithm and Experiments
|
[
{
"docid": "0bf5a87d971ff2dca4c8dfa176316663",
"text": "A crucial privacy-driven issue nowadays is re-identifying anonymized social networks by mapping them to correlated cross-domain auxiliary networks. Prior works are typically based on modeling social networks as random graphs representing users and their relations, and subsequently quantify the quality of mappings through cost functions that are proposed without sufficient rationale. Also, it remains unknown how to algorithmically meet the demand of such quantifications, i.e., to find the minimizer of the cost functions. We address those concerns in a more realistic social network modeling parameterized by community structures that can be leveraged as side information for de-anonymization. By Maximum A Posteriori (MAP) estimation, our first contribution is new and well justified cost functions, which, when minimized, enjoy superiority to previous ones in finding the correct mapping with the highest probability. The feasibility of the cost functions is then for the first time algorithmically characterized. While proving the general multiplicative inapproximability, we are able to propose two algorithms, which, respectively, enjoy an -additive approximation and a conditional optimality in carrying out successful user re-identification. Our theoretical findings are empirically validated, with a notable dataset extracted from rare true cross-domain networks that reproduce genuine social network de-anonymization. Both theoretical and empirical observations also manifest the importance of community information in enhancing privacy inferencing.",
"title": ""
}
] |
[
{
"docid": "afe26c28b56a511452096bfc211aed97",
"text": "System testing is concerned with testing an entire system based on its specifications. In the context of object-oriented, UML development, this means that system test requirements are derived from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, class diagrams, and possibly Object Constraint Language (OCL) expressions across all these artifacts. Our goal here is to support the derivation of functional system test requirements, which will be transformed into test cases, test oracles, and test drivers once we have detailed design information. In this paper, we describe a methodology in a practical way and illustrate it with an example. In this context, we address testability and automation issues, as the ultimate goal is to fully support system testing activities with high-capability tools.",
"title": ""
},
{
"docid": "59c83aa2f97662c168316f1a4525fd4d",
"text": "Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.",
"title": ""
},
{
"docid": "f577f970f841d8dee34e524ba661e727",
"text": "The rapid growth in the amount of user-generated content (UGCs) online necessitates for social media companies to automatically extract knowledge structures (concepts) from user-generated images (UGIs) and user-generated videos (UGVs) to provide diverse multimedia-related services. For instance, recommending preference-aware multimedia content, the understanding of semantics and sentics from UGCs, and automatically computing tag relevance for UGIs are benefited from knowledge structures extracted from multiple modalities. Since contextual information captured by modern devices in conjunction with a media item greatly helps in its understanding, we leverage both multimedia content and contextual information (eg., spatial and temporal metadata) to address above-mentioned social media problems in our doctoral research. We present our approaches, results, and works in progress on these problems.",
"title": ""
},
{
"docid": "2390d3d6c51c4a6857c517eb2c2cb3c0",
"text": "It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. In general, each of these approaches extends a conventional process modeling language with constructs to capture customizable process models. A customizable process model represents a family of process variants in a way that a model of each variant can be derived by adding or deleting fragments according to customization options or according to a domain model. This survey draws up a systematic inventory of approaches to customizable process modeling and provides a comparative evaluation with the aim of identifying common and differentiating modeling features, providing criteria for selecting among multiple approaches, and identifying gaps in the state of the art. The survey puts into evidence an abundance of customizable process-modeling languages, which contrasts with a relative scarcity of available tool support and empirical comparative evaluations.",
"title": ""
},
{
"docid": "75a15ef2ce8dd6b4c58a36b9fd352d18",
"text": "Business growth and technology advancements have resulted in growing amounts of enterprise data. To gain valuable business insight and competitive advantage, businesses demand the capability of performing real-time analytics on such data. This, however, involves expensive query operations that are very time consuming on traditional CPUs. Additionally, in traditional database management systems (DBMS), the CPU resources are dedicated to mission-critical transactional workloads. Offloading expensive analytics query operations to a co-processor can allow efficient execution of analytics workloads in parallel with transactional workloads.\n In this paper, we present a Field Programmable Gate Array (FPGA) based acceleration engine for database operations in analytics queries. The proposed solution provides a mechanism for a DBMS to seamlessly harness the FPGA compute power without requiring any changes in the application or the existing data layout. Using a software-programmed query control block, the accelerator can be tailored to execute different queries without reconfiguration. Our prototype is implemented in a PCIe-attached FPGA system and is integrated into a commercial DBMS platform. The results demonstrate up to 94% CPU savings on real customer data compared to the baseline software cost with up to an order of magnitude speedup in the offloaded computations and up to 6.2x improvement in end-to-end performance.",
"title": ""
},
{
"docid": "a2a0ff72b88d766ab5eb087c14d88b03",
"text": "Next-generation non-volatile memory (NVM) technologies, such as phase-change memory and memristors, can enable computer systems infrastructure to continue keeping up with the voracious appetite of data-centric applications for large, cheap, and fast storage. Persistent memory has emerged as a promising approach to accessing emerging byte-addressable non-volatile memory through processor load/store instructions. Due to lack of commercially available NVM, system software researchers have mainly relied on emulation to model persistent memory performance. However, existing emulation approaches are either too simplistic, or too slow to emulate large-scale workloads, or require special hardware. To fill this gap and encourage wider adoption of persistent memory, we developed a performance emulator for persistent memory, called Quartz. Quartz enables an efficient emulation of a wide range of NVM latencies and bandwidth characteristics for performance evaluation of emerging byte-addressable NVMs and their impact on applications performance (without modifying or instrumenting their source code) by leveraging features available in commodity hardware. Our emulator is implemented on three latest Intel Xeon-based processor architectures: Sandy Bridge, Ivy Bridge, and Haswell. To assist researchers and engineers in evaluating design decisions with emerging NVMs, we extend Quartz for emulating the application execution on future systems with two types of memory: fast, regular volatile DRAM and slower persistent memory. We evaluate the effectiveness of our approach by using a set of specially designed memory-intensive benchmarks and real applications. The accuracy of the proposed approach is validated by running these programs both on our emulation platform and a multisocket (NUMA) machine that can support a range of memory latencies. We show that Quartz can emulate a range of performance characteristics with low overhead and good accuracy (with emulation errors 0.2% - 9%).",
"title": ""
},
{
"docid": "c2816721fa6ccb0d676f7fdce3b880d4",
"text": "Due to the achievements in the Internet of Things (IoT) field, Smart Objects are often involved in business processes. However, the integration of IoT with Business Process Management (BPM) is far from mature: problems related to process compliance and Smart Objects configuration with respect to the process requirements have not been fully addressed yet; also, the interaction of Smart Objects with multiple business processes that belong to different stakeholders is still under investigation. My PhD thesis aims to fill this gap by extending the BPM lifecycle, with particular focus on the design and analysis phase, in order to explicitly support IoT and its requirements.",
"title": ""
},
{
"docid": "313761d2cdb224253f87fe4b33977b85",
"text": "In this paper we described an authorship attribution system for Bengali blog texts. We have presented a new Bengali blog corpus of 3000 passages written by three authors. Our study proposes a text classification system, based on lexical features such as character bigrams and trigrams, word n-grams (n = 1, 2, 3) and stop words, using four classifiers. We achieve best results (more than 99%) on the held-out dataset using Multi layered Perceptrons (MLP) amongst the four classifiers, which indicates MLP can produce very good results for big data sets and lexical n-gram based features can be the best features for any authorship attribution system.",
"title": ""
},
{
"docid": "636cb349f6a8dcdde70ee39b663dbdbe",
"text": "Estimation and modelling problems as they arise in many data analysis areas often turn out to be unstable and/or intractable by standard numerical methods. Such problems frequently occur in fitting of large data sets to a certain model and in predictive learning. Heuristics are general recommendations based on practical statistical evidence, in contrast to a fixed set of rules that cannot vary, although guarantee to give the correct answer. Although the use of these methods became more standard in several fields of sciences, their use for estimation and modelling in statistics appears to be still limited. This paper surveys a set of problem-solving strategies, guided by heuristic information, that are expected to be used more frequently. The use of recent advances in different fields of large-scale data analysis is promoted focusing on applications in medicine, biology and technology.",
"title": ""
},
{
"docid": "187c696aeb78607327fd817dfa9446ba",
"text": "OBJECTIVE\nThe integration of SNOMED CT into the Unified Medical Language System (UMLS) involved the alignment of two views of synonymy that were different because the two vocabulary systems have different intended purposes and editing principles. The UMLS is organized according to one view of synonymy, but its structure also represents all the individual views of synonymy present in its source vocabularies. Despite progress in knowledge-based automation of development and maintenance of vocabularies, manual curation is still the main method of determining synonymy. The aim of this study was to investigate the quality of human judgment of synonymy.\n\n\nDESIGN\nSixty pairs of potentially controversial SNOMED CT synonyms were reviewed by 11 domain vocabulary experts (six UMLS editors and five noneditors), and scores were assigned according to the degree of synonymy.\n\n\nMEASUREMENTS\nThe synonymy scores of each subject were compared to the gold standard (the overall mean synonymy score of all subjects) to assess accuracy. Agreement between UMLS editors and noneditors was measured by comparing the mean synonymy scores of editors to noneditors.\n\n\nRESULTS\nAverage accuracy was 71% for UMLS editors and 75% for noneditors (difference not statistically significant). Mean scores of editors and noneditors showed significant positive correlation (Spearman's rank correlation coefficient 0.654, two-tailed p < 0.01) with a concurrence rate of 75% and an interrater agreement kappa of 0.43.\n\n\nCONCLUSION\nThe accuracy in the judgment of synonymy was comparable for UMLS editors and nonediting domain experts. There was reasonable agreement between the two groups.",
"title": ""
},
{
"docid": "a23949a678e49a7e1495d98aae3adef2",
"text": "The continued increase in the usage of Small Scale Digital Devices (SSDDs) to browse the web has made mobile devices a rich potential for digital evidence. Issues may arise when suspects attempt to hide their browsing habits using applications like Orweb - which intends to anonymize network traffic as well as ensure that no browsing history is saved on the device. In this work, the researchers conducted experiments to examine if digital evidence could be reconstructed when the Orweb browser is used as a tool to hide web browsing activates on an Android smartphone. Examinations were performed on both a non-rooted and a rooted Samsung Galaxy S2 smartphone running Android 2.3.3. The results show that without rooting the device, no private web browsing traces through Orweb were found. However, after rooting the device, the researchers were able to locate Orweb browser history, and important corroborative digital evidence was found.",
"title": ""
},
{
"docid": "176dfaa0457b06aee41014ad0f895c13",
"text": "The generalized feedback shift register pseudorandom number algorithm has several advantages over all other pseudorandom number generators. These advantages are: (1) it produces multidimensional pseudorandom numbers; (2) it has an arbitrarily long period independent of the word size of the computer on which it is implemented; (3) it is faster than other pseudorandom number generators; (4) the “same” floating-point pseudorandom number sequence is obtained on any machine, that is, the high order mantissa bits of each pseudorandom number agree on all machines— examples are given for IBM 360, Sperry-Rand-Univac 1108, Control Data 6000, and Hewlett-Packard 2100 series computers; (5) it can be coded in compiler languages (it is portable); (6) the algorithm is easily implemented in microcode and has been programmed for an Interdata computer.",
"title": ""
},
{
"docid": "99bd8339f260784fff3d0a94eb04f6f4",
"text": "Reinforcement learning algorithms discover policies that maximize reward, but do not necessarily guarantee safety during learning or execution phases. We introduce a new approach to learn optimal policies while enforcing properties expressed in temporal logic. To this end, given the temporal logic specification that is to be obeyed by the learning system, we propose to synthesize a reactive system called a shield. The shield monitors the actions from the learner and corrects them only if the chosen action causes a violation of the specification. We discuss which requirements a shield must meet to preserve the convergence guarantees of the learner. Finally, we demonstrate the versatility of our approach on several challenging reinforcement learning scenarios.",
"title": ""
},
{
"docid": "2ef92113a901df268261be56f5110cfa",
"text": "This paper studies the problem of finding a priori shortest paths to guarantee a given likelihood of arriving on-time in a stochastic network. Such ‘‘reliable” paths help travelers better plan their trips to prepare for the risk of running late in the face of stochastic travel times. Optimal solutions to the problem can be obtained from local-reliable paths, which are a set of non-dominated paths under first-order stochastic dominance. We show that Bellman’s principle of optimality can be applied to construct local-reliable paths. Acyclicity of local-reliable paths is established and used for proving finite convergence of solution procedures. The connection between the a priori path problem and the corresponding adaptive routing problem is also revealed. A label-correcting algorithm is proposed and its complexity is analyzed. A pseudo-polynomial approximation is proposed based on extreme-dominance. An extension that allows travel time distribution functions to vary over time is also discussed. We show that the time-dependent problem is decomposable with respect to arrival times and therefore can be solved as easily as its static counterpart. Numerical results are provided using typical transportation networks. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0b4c57b93a0da45e6561a0a13a4e4005",
"text": "Scientific article recommendation problem deals with recommending similar scientific articles given a query article. It can be categorized as a content based similarity system. Recent advancements in representation learning methods have proven to be effective in modeling distributed representations in different modalities like images, languages, speech, networks etc. The distributed representations obtained using such techniques in turn can be used to calculate similarities. In this paper, we address the problem of scientific paper recommendation through a novel method which aims to combine multimodal distributed representations, which in this case are: 1. distributed representations of paper’s content, and 2. distributed representation of the graph constructed from the bibliographic network. Through experiments we demonstrate that our method outperforms the state-of-the-art distributed representation methods in text and graph, by 29.6% and 20.4%, both in terms of precision and mean-average-precision respectively.",
"title": ""
},
{
"docid": "461062a51b0c33fcbb0f47529f3a6fba",
"text": "Release of ATP from astrocytes is required for Ca2+ wave propagation among astrocytes and for feedback modulation of synaptic functions. However, the mechanism of ATP release and the source of ATP in astrocytes are still not known. Here we show that incubation of astrocytes with FM dyes leads to selective labelling of lysosomes. Time-lapse confocal imaging of FM dye-labelled fluorescent puncta, together with extracellular quenching and total-internal-reflection fluorescence microscopy (TIRFM), demonstrated directly that extracellular ATP or glutamate induced partial exocytosis of lysosomes, whereas an ischaemic insult with potassium cyanide induced both partial and full exocytosis of these organelles. We found that lysosomes contain abundant ATP, which could be released in a stimulus-dependent manner. Selective lysis of lysosomes abolished both ATP release and Ca2+ wave propagation among astrocytes, implicating physiological and pathological functions of regulated lysosome exocytosis in these cells.",
"title": ""
},
{
"docid": "8721382dd1674fac3194d015b9c64f94",
"text": "fines excipients as “substances, other than the active drug substance of finished dosage form, which have been appropriately evaluated for safety and are included in a drug delivery system to either aid the processing of the drug delivery system during its manufacture; protect; support; enhance stability, bioavailability, or patient acceptability; assist in product identification; or enhance any other attributes of the overall safety and effectiveness of the drug delivery system during storage or use” (1). This definition implies that excipients serve a purpose in a formulation and contrasts with the old terminology, inactive excipients, which hints at the property of inertness. With a literal interpretation of this definition, an excipient can include diverse molecules or moieties such as replication incompetent viruses (adenoviral or retroviral vectors), bacterial protein components, monoclonal antibodies, bacteriophages, fusion proteins, and molecular chimera. For example, using gene-directed enzyme prodrug therapy, research indicated that chimera containing a transcriptional regulatory DNA sequence capable of being selectively activated in mammalian cells was linked to a sequence that encodes a -lactamase enzyme and delivered to target cells (2). The expressed enzyme in the targeted cells catalyzes the conversion of a subsequently administered prodrug to a toxic agent. A similar purpose is achieved by using an antibody conjugated to an enzyme followed by the administration of a noncytotoxic substance that is converted in vivo by the enzyme to its toxic form (3). In these examples, the chimera or the enzyme-linked antibody would qualify as excipients. Furthermore, many emerging delivery systems use a drug or gene covalently linked to the molecules, polymers, antibody, or chimera responsible for drug targeting, internalization, or transfection. Conventional wisdom dictates that such an entity be classified as the active substance or prodrug for regulatory purposes and be subject to one set of specifications for the entire molecule. The fact remains, however, that only a discrete part of this prodrug is responsible for the therapeutic effect, and a similar effect may be obtained by physically entrapping the drug as opposed to covalent conjugation. The situation is further complicated when fusion proteins are used as a combination of drug and delivery system or when the excipients themselves",
"title": ""
},
{
"docid": "7363b433f17e1f3dfecc805b58a8706b",
"text": "Mobile Edge Computing (MEC) consists of deploying computing resources (CPU, storage) at the edge of mobile networks; typically near or with eNodeBs. Besides easing the deployment of applications and services requiring low access to the remote server, such as Virtual Reality and Vehicular IoT, MEC will enable the development of context-aware and context-optimized applications, thanks to the Radio API (e.g. information on user channel quality) exposed by eNodeBs. Although ETSI is defining the architecture specifications, solutions to integrate MEC to the current 3GPP architecture are still open. In this paper, we fill this gap by proposing and implementing a Software Defined Networking (SDN)-based MEC framework, compliant with both ETSI and 3GPP architectures. It provides the required data-plane flexibility and programmability, which can on-the-fly improve the latency as a function of the network deployment and conditions. To illustrate the benefit of using SDN concept for the MEC framework, we present the details of software architecture as well as performance evaluations.",
"title": ""
},
{
"docid": "93dd0ad4eb100d4124452e2f6626371d",
"text": "The role of background music in audience responses to commercials (and other marketing elements) has received increasing attention in recent years. This article extends the discussion of music’s influence in two ways: (1) by using music theory to analyze and investigate the effects of music’s structural profiles on consumers’ moods and emotions and (2) by examining the relationship between music’s evoked moods that are congruent versus incongruent with the purchase occasion and the resulting effect on purchase intentions. The study reported provides empirical support for the notion that when music is used to evoke emotions congruent with the symbolic meaning of product purchase, the likelihood of purchasing is enhanced. D 2003 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
38dbf7dd1f5690bc2d8cb2b98a2cdabf
|
Formal Verification of Neural Network Controlled Autonomous Systems
|
[
{
"docid": "711b8ac941db1e6e1eef093ca340717b",
"text": "Deep neural networks (DNNs) have a wide range of applications, and software employing them must be thoroughly tested, especially in safety critical domains. However, traditional software testing methodology, including test coverage criteria and test case generation algorithms, cannot be applied directly to DNNs. This paper bridges this gap. First, inspired by the traditional MC/DC coverage criterion, we propose a set of four test criteria that are tailored to the distinct features of DNNs. Our novel criteria are incomparable and complement each other. Second, for each criterion, we give an algorithm for generating test cases based on linear programming (LP). The algorithms produce a new test case (i.e., an input to the DNN) by perturbing a given one. They encode the test requirement and a fragment of the DNN by fixing the activation pattern obtained from the given input example, and then minimize the difference between the new and the current inputs. Finally, we validate our method on a set of networks trained on the MNIST dataset. The utility of our method is shown experimentally with four objectives: (1) bug finding; (2) DNN safety statistics; (3) testing efficiency and (4) DNN internal structure analysis.",
"title": ""
},
{
"docid": "c85ee4139239b17d98b0d77836e00b72",
"text": "We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.",
"title": ""
},
{
"docid": "50d6f6a65099ce0ffb804f15a9adcaa1",
"text": "Machine Learning (ML) algorithms are now used in a wide range of application domains in society. Naturally, software implementations of these algorithms have become ubiquitous. Faults in ML software can cause substantial losses in these application domains. Thus, it is very critical to conduct effective testing of ML software to detect and eliminate its faults. However, testing ML software is difficult, partly because producing test oracles used for checking behavior correctness (such as using expected properties or expected test outputs) is challenging. In this paper, we propose an approach of multiple-implementation testing to test supervised learning software, a major type of ML software. In particular, our approach derives a test input’s proxy oracle from the majority-voted output running the test input of multiple implementations of the same algorithm (based on a pre-defined percentage threshold). Our approach reports likely those test inputs whose outputs (produced by an implementation under test) are different from the majority-voted outputs as failing tests. We evaluate our approach on two highly popular supervised learning algorithms: k-Nearest Neighbor (kNN) and Naive Bayes (NB). Our results show that our approach is highly effective in detecting faults in real-world supervised learning software. In particular, our approach detects 13 real faults and 1 potential fault from 19 kNN implementations and 16 real faults from 7 NB implementations. Our approach can even detect 7 real faults and 1 potential fault among the three popularly used open-source ML projects (Weka, RapidMiner,",
"title": ""
}
] |
[
{
"docid": "3ba10a680a5204b8242203e053fc3379",
"text": "Recommender system has been more and more popular and widely used in many applications recently. The increasing information available, not only in quantities but also in types, leads to a big challenge for recommender system that how to leverage these rich information to get a better performance. Most traditional approaches try to design a specific model for each scenario, which demands great efforts in developing and modifying models. In this technical report, we describe our implementation of feature-based matrix factorization. This model is an abstract of many variants of matrix factorization models, and new types of information can be utilized by simply defining new features, without modifying any lines of code. Using the toolkit, we built the best single model reported on track 1 of KDDCup’11.",
"title": ""
},
{
"docid": "1a8e9b74d4c1a32299ca08e69078c70c",
"text": "Semantic Textual Similarity (STS) measures the degree of semantic equivalence between two segments of text, even though the similar context is expressed using different words. The textual segments are word phrases, sentences, paragraphs or documents. The similarity can be measured using lexical, syntactic and semantic information embedded in the sentences. The STS task in SemEval workshop is viewed as a regression problem, where real-valued output is clipped to the range 0-5 on a sentence pair. In this paper, empirical evaluations are carried using lexical, syntactic and semantic features on STS 2016 dataset. A new syntactic feature, Phrase Entity Alignment (PEA) is proposed. A phrase entity is a conceptual unit in a sentence with a subject or an object and its describing words. PEA aligns phrase entities present in the sentences based on their similarity scores. STS score is measured by combing the similarity scores of all aligned phrase entities. The impact of PEA on semantic textual equivalence is depicted using Pearson correlation between system generated scores and the human annotations. The proposed system attains a mean score of 0.7454 using random forest regression model. The results indicate that the system using the lexical, syntactic and semantic features together with PEA feature perform comparably better than existing systems.",
"title": ""
},
{
"docid": "5878d3cdbf74928fa002ab21cc62612f",
"text": "We focus on the multi-label categorization task for short texts and explore the use of a hierarchical structure (HS) of categories. In contrast to the existing work using non-hierarchical flat model, the method leverages the hierarchical relations between the categories to tackle the data sparsity problem. The lower the HS level, the worse the categorization performance. Because lower categories are fine-grained and the amount of training data per category is much smaller than that in an upper level. We propose an approach which can effectively utilize the data in the upper levels to contribute categorization in the lower levels by applying a Convolutional Neural Network (CNN) with a finetuning technique. The results using two benchmark datasets show that the proposed method, Hierarchical Fine-Tuning based CNN (HFTCNN) is competitive with the state-of-the-art CNN based methods.",
"title": ""
},
{
"docid": "2136c0e78cac259106d5424a2985e5d7",
"text": "Stylistic composition is a creative musical activity, in which students as well as renowned composers write according to the style of another composer or period. We describe and evaluate two computational models of stylistic composition, called Racchman-Oct2010 and Racchmaninof-Oct2010. The former is a constrained Markov model and the latter embeds this model in an analogy-based design system. Racchmaninof-Oct2010 applies a pattern discovery algorithm called SIACT and a perceptually validated formula for rating pattern importance, to guide the generation of a new target design from an existing source design. A listening study is reported concerning human judgments of music excerpts that are, to varying degrees, in the style of mazurkas by Frédédric Chopin (1810-1849). The listening study acts as an evaluation of the two computational models and a third, benchmark system called Experiments in Musical Intelligence (EMI). Judges’ responses indicate that some aspects of musical style, such as phrasing and rhythm, are being modeled effectively by our algorithms. Judgments are also used to identify areas for future improvements. We discuss the broader implications of this work for the fields of engineering and design, where there is potential to make use of our models of hierarchical repetitive structure. Data and code to accompany this paper are available from www.tomcollinsresearch.net",
"title": ""
},
{
"docid": "bb80720ee3797314c71cf33f984ac094",
"text": "This article reviews eight proposed strategies for solving the Symbol Grounding Problem (SGP), which was given its classic formulation in Harnad (1990). After a concise introduction, we provide an analysis of the requirement that must be satisfied by any hypothesis seeking to solve the SGP, the zero semantical commitment condition. We then use it to assess the eight strategies, which are organised into three main approaches: representationalism, semi-representationalism and nonrepresentationalism. The conclusion is that all the strategies are semantically committed and hence that none of them provides a valid solution to the SGP, which remains an open problem.",
"title": ""
},
{
"docid": "e28b0ab1bedd60ba83b8a575431ad549",
"text": "The Decision Model and Notation (DMN) is a standard notation to specify decision logic in business applications. A central construct in DMN is a decision table. The rising use of DMN decision tables to capture and to automate everyday business decisions fuels the need to support analysis tasks on decision tables. This paper presents an opensource DMN editor to tackle three analysis tasks: detection of overlapping rules, detection of missing rules and simplification of decision tables via rule merging. The tool has been tested on large decision tables derived from a credit lending data-set.",
"title": ""
},
{
"docid": "cf24e793c307a7a6af53f160012ee926",
"text": "This work presents a single- and dual-port fully integrated millimeter-wave ultra-broadband vector network analyzer. Both circuits, realized in a commercial 0.35-μm SiGe:C technology with an ft/fmax of 170/250 GHz, cover an octave frequency bandwidth between 50-100 GHz. The presented chips can be configured to measure complex scattering parameters of external devices or determine the permittivity of different materials using an integrated millimeter-wave dielectric sensor. Both devices are based on a heterodyne architecture that achieves a receiver dynamic range of 57-72.5 dB over the complete design frequency range. Two integrated frequency synthesizer modules are included in each chip that enable the generation of the required test and local-oscillator millimeter-wave signals. A measurement 3σ statistical phase error lower than 0.3 ° is achieved. Automated measurement of changes in the dielectric properties of different materials is demonstrated using the proposed systems. The single- and dual-port network analyzer chips have a current consumption of 600 and 700 mA, respectively, drawn from a single 3.3-V supply.",
"title": ""
},
{
"docid": "472f2d8adb1c35fa7d4195323e53a8c2",
"text": "Serverless computing promises to provide applications with cost savings and extreme elasticity. Unfortunately, slow application and container initialization can hurt common-case latency on serverless platforms. In this work, we analyze Linux container primitives, identifying scalability bottlenecks related to storage and network isolation. We also analyze Python applications from GitHub and show that importing many popular libraries adds about 100 ms to startup. Based on these findings, we implement SOCK, a container system optimized for serverless workloads. Careful avoidance of kernel scalability bottlenecks gives SOCK an 18× speedup over Docker. A generalized-Zygote provisioning strategy yields an additional 3× speedup. A more sophisticated three-tier caching strategy based on Zygotes provides a 45× speedup over SOCK without Zygotes. Relative to AWS Lambda and OpenWhisk, OpenLambda with SOCK reduces platform overheads by 2.8× and 5.3× respectively in an image processing case study.",
"title": ""
},
{
"docid": "3cbc035529138be1d6f8f66a637584dd",
"text": "Regression models such as the Cox proportional hazards model have had increasing use in modelling and estimating the prognosis of patients with a variety of diseases. Many applications involve a large number of variables to be modelled using a relatively small patient sample. Problems of overfitting and of identifying important covariates are exacerbated in analysing prognosis because the accuracy of a model is more a function of the number of events than of the sample size. We used a general index of predictive discrimination to measure the ability of a model developed on training samples of varying sizes to predict survival in an independent test sample of patients suspected of having coronary artery disease. We compared three methods of model fitting: (1) standard 'step-up' variable selection, (2) incomplete principal components regression, and (3) Cox model regression after developing clinical indices from variable clusters. We found regression using principal components to offer superior predictions in the test sample, whereas regression using indices offers easily interpretable models nearly as good as the principal components models. Standard variable selection has a number of deficiencies.",
"title": ""
},
{
"docid": "6789e2e452a19da3a00b95a27994ee62",
"text": "Reflection in healthcare education is an emerging topic with many recently published studies and reviews. This current systematic review of reviews (umbrella review) of this field explores the following aspects: which definitions and models are currently in use; how reflection impacts design, evaluation, and assessment; and what future challenges must be addressed. Nineteen reviews satisfying the inclusion criteria were identified. Emerging themes include the following: reflection is currently regarded as self-reflection and critical reflection, and the epistemology-of-practice notion is less in tandem with the evidence-based medicine paradigm of modern science than expected. Reflective techniques that are recognised in multiple settings (e.g., summative, formative, group vs. individual) have been associated with learning, but assessment as a research topic, is associated with issues of validity, reliability, and reproducibility. Future challenges include the epistemology of reflection in healthcare education and the development of approaches for practising and assessing reflection without loss of theoretical background.",
"title": ""
},
{
"docid": "b83e537a2c8dcd24b096005ef0cb3897",
"text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.",
"title": ""
},
{
"docid": "6b2c009eca44ea374bb5f1164311e593",
"text": "The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. In this paper, we propose a finger-based ECG biometric system, that uses signals collected at the fingers, through a minimally intrusive 1-lead ECG setup recurring to Ag/AgCl electrodes without gel as interface with the skin. The collected signal is significantly more noisy than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications.",
"title": ""
},
{
"docid": "7974d3e3e9c431256ee35c3032288bd1",
"text": "Nowadays, the usage of mobile device among the community worldwide has been tremendously increased. With this proliferation of mobile devices, more users are able to access the internet for variety of online application and services. As the use of mobile devices and applications grows, the rate of vulnerabilities exploitation and sophistication of attack towards the mobile user are increasing as well. To date, Google's Android Operating System (OS) are among the widely used OS for the mobile devices, the openness design and ease of use have made them popular among developer and user. Despite the advantages the android-based mobile devices have, it also invited the malware author to exploit the mobile application on the market. Prior to this matter, this research focused on investigating the behaviour of mobile malware through hybrid approach. The hybrid approach correlates and reconstructs the result from the static and dynamic malware analysis in producing a trace of malicious event. Based on the finding, this research proposed a general mobile malware behaviour model that can contribute in identifying the key features in detecting mobile malware on an Android Platform device.",
"title": ""
},
{
"docid": "8addf385803074288c1a07df92ed1b9f",
"text": "In a permanent magnet synchronous motor where inductances vary as a function of rotor angle, the 2 phase (d-q) equivalent circuit model is commonly used for simplicity and intuition. In this article, a two phase model for a PM synchronous motor is derived and the properties of the circuits and variables are discussed in relation to the physical 3 phase entities. Moreover, the paper suggests methods of obtaining complete model parameters from simple laboratory tests. Due to the lack of developed procedures in the past, obtaining model parameters were very difficult and uncertain, because some model parameters are not directly measurable and vary depending on the operating conditions. Formulation is mainly for interior permanent magnet synchronous motors but can also be applied to surface permanent magnet motors.",
"title": ""
},
{
"docid": "1d084096acea83a62ecc6b010b302622",
"text": "The investigation of human activity patterns from location-based social networks like Twitter is an established approach of how to infer relationships and latent information that characterize urban structures. Researchers from various disciplines have performed geospatial analysis on social media data despite the data’s high dimensionality, complexity and heterogeneity. However, user-generated datasets are of multi-scale nature, which results in limited applicability of commonly known geospatial analysis methods. Therefore in this paper, we propose a geographic, hierarchical self-organizing map (Geo-H-SOM) to analyze geospatial, temporal and semantic characteristics of georeferenced tweets. The results of our method, which we validate in a case study, demonstrate the ability to explore, abstract and cluster high-dimensional geospatial and semantic information from crowdsourced data. ARTICLE HISTORY Received 8 April 2015 Accepted 19 September 2015",
"title": ""
},
{
"docid": "cec2212f74766872cb46947f59f355a9",
"text": "A Boltzmann game is an n-player repeated game, in which Boltzmann machines are employed by players to choose their optimal strategy for each round of the game. Players only have knowledge about the machine they have selected and their own strategy set. Information about other the players and the game’s pay-off function are concealed from all players. Players therefore select their strategies independent of the choices made by their opponents. A player’s pay-off, on the other hand, will be affected by the choices made by other players playing the game. As an example of this game, we play a repeated zero-sum matrix game between two Boltzmann machines. We show that a saddle point will exist for this type of Boltzmann game.",
"title": ""
},
{
"docid": "b397d82e24f527148cb46fbabda2b323",
"text": "This paper describes Illinois corn yield estimation using deep learning and another machine learning, SVR. Deep learning is a technique that has been attracting attention in recent years of machine learning, it is possible to implement using the Caffe. High accuracy estimation of crop yield is very important from the viewpoint of food security. However, since every country prepare data inhomogeneously, the implementation of the crop model in all regions is difficult. Deep learning is possible to extract important features for estimating the object from the input data, so it can be expected to reduce dependency of input data. The network model of two InnerProductLayer was the best algorithm in this study, achieving RMSE of 6.298 (standard value). This study highlights the advantages of deep learning for agricultural yield estimating.",
"title": ""
},
{
"docid": "12f8d5a55ba9b1e773fbab5429880db6",
"text": "Addiction is associated with neuroplasticity in the corticostriatal brain circuitry that is important for guiding adaptive behaviour. The hierarchy of corticostriatal information processing that normally permits the prefrontal cortex to regulate reinforcement-seeking behaviours is impaired by chronic drug use. A failure of the prefrontal cortex to control drug-seeking behaviours can be linked to an enduring imbalance between synaptic and non-synaptic glutamate, termed glutamate homeostasis. The imbalance in glutamate homeostasis engenders changes in neuroplasticity that impair communication between the prefrontal cortex and the nucleus accumbens. Some of these pathological changes are amenable to new glutamate- and neuroplasticity-based pharmacotherapies for treating addiction.",
"title": ""
},
{
"docid": "1f4c0407c8da7b5fe685ad9763be937b",
"text": "As the dominant mobile computing platform, Android has become a prime target for cyber-security attacks. Many of these attacks are manifested at the application level, and through the exploitation of vulnerabilities in apps downloaded from the popular app stores. Increasingly, sophisticated attacks exploit the vulnerabilities in multiple installed apps, making it extremely difficult to foresee such attacks, as neither the app developers nor the store operators know a priori which apps will be installed together. This paper presents an approach that allows the end-users to safeguard a given bundle of apps installed on their device from such attacks. The approach, realized in a tool, called DROIDGUARD, combines static code analysis with lightweight formal methods to automatically infer security-relevant properties from a bundle of apps. It then uses a constraint solver to synthesize possible security exploits, from which fine-grained security policies are derived and automatically enforced to protect a given device. In our experiments with over 4,000 Android apps, DROIDGUARD has proven to be highly effective at detecting previously unknown vulnerabilities as well as preventing their exploitation.",
"title": ""
},
{
"docid": "2c4a2d41653f05060ff69f1c9ad7e1a6",
"text": "Until recently the information technology (IT)-centricity was the prevailing paradigm in cyber security that was organized around confidentiality, integrity and availability of IT assets. Despite of its widespread usage, the weakness of IT-centric cyber security became increasingly obvious with the deployment of very large IT infrastructures and introduction of highly mobile tactical missions where the IT-centric cyber security was not able to take into account the dynamics of time and space bound behavior of missions and changes in their operational context. In this paper we will show that the move from IT-centricity towards to the notion of cyber attack resilient missions opens new opportunities in achieving the completion of mission goals even if the IT assets and services that are supporting the missions are under cyber attacks. The paper discusses several fundamental architectural principles of achieving cyber attack resilience of missions, including mission-centricity, survivability through adaptation, synergistic mission C2 and mission cyber security management, and the real-time temporal execution of the mission tasks. In order to achieve the overall system resilience and survivability under a cyber attack, both, the missions and the IT infrastructure are considered as two interacting adaptable multi-agent systems. While the paper is mostly concerned with the architectural principles of achieving cyber attack resilient missions, several models and algorithms that support resilience of missions are discussed in fairly detailed manner.",
"title": ""
}
] |
scidocsrr
|
16ea2e00dc098cc1c71b4f810a20e172
|
Cyber-bullying taxonomies: Definition, forms, consequences and mitigation strategies
|
[
{
"docid": "64bdb5647b7b05c96de8c0d8f6f00eed",
"text": "Cyberbullying is a reality of the digital age. To address this phenomenon, it becomes imperative to understand exactly what cyberbullying is. Thus, establishing a workable and theoretically sound definition is essential. This article contributes to the existing literature in relation to the definition of cyberbullying. The specific elements of repetition, power imbalance, intention, and aggression, regarded as essential criteria of traditional face-to-face bullying, are considered in the cyber context. It is posited that the core bullying elements retain their importance and applicability in relation to cyberbullying. The element of repetition is in need of redefining, given the public nature of material in the online environment. In this article, a clear distinction between direct and indirect cyberbullying is made and a model definition of cyberbullying is offered. Overall, the analysis provided lends insight into how the essential bullying elements have evolved and should apply in our parallel cyber universe.",
"title": ""
},
{
"docid": "117f529b96afc67e1a9ba3058f83049f",
"text": "Data from 53 focus groups, which involved students from 10 to 18 years old, show that youngsters often interpret \"cyberbullying\" as \"Internet bullying\" and associate the phenomenon with a wide range of practices. In order to be considered \"true\" cyberbullying, these practices must meet several criteria. They should be intended to hurt (by the perpetrator) and perceived as hurtful (by the victim); be part of a repetitive pattern of negative offline or online actions; and be performed in a relationship characterized by a power imbalance (based on \"real-life\" power criteria, such as physical strength or age, and/or on ICT-related criteria such as technological know-how and anonymity).",
"title": ""
},
{
"docid": "06b0708250515510b8a3fc302045fe4b",
"text": "While the subject of cyberbullying of children and adolescents has begun to be addressed, less attention and research have focused on cyberbullying in the workplace. Male-dominated workplaces such as manufacturing settings are found to have an increased risk of workplace bullying, but the prevalence of cyberbullying in this sector is not known. This exploratory study investigated the prevalence and methods of face-to-face bullying and cyberbullying of males at work. One hundred three surveys (a modified version of the revised Negative Acts Questionnaire [NAQ-R]) were returned from randomly selected members of the Australian Manufacturing Workers' Union (AMWU). The results showed that 34% of respondents were bullied face-to-face, and 10.7% were cyberbullied. All victims of cyberbullying also experienced face-to-face bullying. The implications for organizations' \"duty of care\" in regard to this new form of bullying are indicated.",
"title": ""
}
] |
[
{
"docid": "9fac5ac1de2ae70964bdb05643d41a68",
"text": "A long-standing goal in the field of artificial intelligence is to develop agents that can perceive and understand the rich visual world around us and who can communicate with us about it in natural language. Significant strides have been made towards this goal over the last few years due to simultaneous advances in computing infrastructure, data gathering and algorithms. The progress has been especially rapid in visual recognition, where computers can now classify images into categories with a performance that rivals that of humans, or even surpasses it in some cases such as classifying breeds of dogs. However, despite much encouraging progress, most of the advances in visual recognition still take place in the context of assigning one or a few discrete labels to an image (e.g. person, boat, keyboard, etc.). In this dissertation we develop models and techniques that allow us to connect the domain of visual data and the domain of natural language utterances, enabling translation between elements of the two domains. In particular, first we introduce a model that embeds both images and sentences into a common multi-modal embedding space. This space then allows us to identify images that depict an arbitrary sentence description and conversely, we can identify sentences that describe any image. Second, we develop an image captioning model that takes an image and directly generates a sentence description without being constrained a finite collection of human-written sentences to choose from. Lastly, we describe a model that can take an image and both localize and describe all if its salient parts. We demonstrate that this model can also be used backwards to take any arbitrary description (e.g. white tennis shoes) and e ciently localize the described concept in a large collection of images. We argue that these models, the techniques they take advantage of internally and the interactions they enable are a stepping stone towards artificial intelligence and that connecting images and natural language o↵ers many practical benefits and immediate valuable applications. From the modeling perspective, instead of designing and staging explicit algorithms to process images and sentences in complex processing pipelines, our contribution lies in the design of hybrid convolutional and recurrent neural network architectures that connect visual data and natural language utterances with a single network. Therefore, the computational processing of images,",
"title": ""
},
{
"docid": "a14ac26274448e0a7ecafdecae4830f9",
"text": "Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.",
"title": ""
},
{
"docid": "6fed39aba9c72f21c553a82d97a2cb23",
"text": "This paper presents a position sensorless closed loop control of a switched reluctance linear motor. The aim of the proposed control is to damp the position of the studied motor. Indeed, the position oscillations can harm some applications requiring high position precision. Moreover, they can induce the linear switched reluctance motor to an erratic working. The proposed control solution is based on back Electromotive Forces which give information about the oscillatory behaviour of the studied motor and avoid the use of a cumbersome and expensive position linear sensor. The determination of the designed control law parameters was based on the singular perturbation theory. The efficiency of the proposed control solution was proven by simulations and experimental tests.",
"title": ""
},
{
"docid": "273153d0cf32162acb48ed989fa6d713",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "327042fae16e69b15a4e8ea857ccdb18",
"text": "Do countries with lower policy-induced barriers to international trade grow faster, once other relevant country characteristics are controlled for? There exists a large empirical literature providing an affirmative answer to this question. We argue that methodological problems with the empirical strategies employed in this literature leave the results open to diverse interpretations. In many cases, the indicators of \"openness\" used by researchers are poor measures of trade barriers or are highly correlated with other sources of bad economic performance. In other cases, the methods used to ascertain the link between trade policy and growth have serious shortcomings. Papers that we review include Dollar (1992), Ben-David (1993), Sachs and Warner (1995), and Edwards (1998). We find little evidence that open trade policies--in the sense of lower tariff and non-tariff barriers to trade--are significantly associated with economic growth. Francisco Rodríguez Dani R odrik Department of Economics John F. Kennedy School of Government University of Maryland Harvard University College Park, MD 20742 79 Kennedy Street Cambridge, MA 02138 Phone: (301) 405-3480 Phone: (617) 495-9454 Fax: (301) 405-3542 Fax: (617) 496-5747 TRADE POLICY AND ECONOMIC GROWTH: A SKEPTIC'S GUIDE TO THE CROSS-NATIONAL EVIDENCE \"It isn't what we don't know that kills us. It's what we know that ain't so.\" -Mark Twain",
"title": ""
},
{
"docid": "5b545c14a8784383b8d921eb27991749",
"text": "In this chapter, neural networks are used to predict the future stock prices and develop a suitable trading system. Wavelet analysis is used to de-noise the time series and the results are compared with the raw time series prediction without wavelet de-noising. Standard and Poor 500 (S&P 500) is used in experiments. We use a gradual data sub-sampling technique, i.e., training the network mostly with recent data, but without neglecting past data. In addition, effects of NASDAQ 100 are studied on prediction of S&P 500. A daily trading strategy is employed to buy/sell according to the predicted prices and to calculate the directional efficiency and the rate of returns for different periods. There are numerous exchange traded funds (ETF’s), which attempt to replicate the performance of S&P 500 by holding the same stocks in the same proportions as the index, and therefore, giving the same percentage returns as S&P 500. Therefore, this study can be used to help invest in any of the various ETFs, which replicates the performance of S&P 500. The experimental results show that neural networks, with appropriate training and input data, can be used to achieve high profits by investing in ETFs based on S&P 500.",
"title": ""
},
{
"docid": "ed012eec144e6f2f0257141404563928",
"text": "This paper presents a new direct active and reactive power control (DPC) of grid-connected doubly fed induction generator (DFIG)-based wind turbine systems. The proposed DPC strategy employs a nonlinear sliding-mode control scheme to directly calculate the required rotor control voltage so as to eliminate the instantaneous errors of active and reactive powers without involving any synchronous coordinate transformations. Thus, no extra current control loops are required, thereby simplifying the system design and enhancing the transient performance. Constant converter switching frequency is achieved by using space vector modulation, which eases the designs of the power converter and the ac harmonic filter. Simulation results on a 2-MW grid-connected DFIG system are provided and compared with those of classic voltage-oriented vector control (VC) and conventional lookup table (LUT) DPC. The proposed DPC provides enhanced transient performance similar to the LUT DPC and keeps the steady-state harmonic spectra at the same level as the VC strategy.",
"title": ""
},
{
"docid": "900190a904f64de86745048eabc630b8",
"text": "A new methodology for designing and implementing high-efficiency broadband Class-E power amplifiers (PAs) using high-order low-pass filter-prototype is proposed in this paper. A GaN transistor is used in this work, which is carefully modeled and characterized to prescribe the optimal output impedance for the broadband Class-E operation. A sixth-order low-pass filter-matching network is designed and implemented for the output matching, which provides optimized fundamental and harmonic impedances within an octave bandwidth (L-band). Simulation and experimental results show that an optimal Class-E PA is realized from 1.2 to 2 GHz (50%) with a measured efficiency of 80%-89%, which is the highest reported today for such a bandwidth. An overall PA bandwidth of 0.9-2.2 GHz (84%) is measured with 10-20-W output power, 10-13-dB gain, and 63%-89% efficiency throughout the band. Furthermore, the Class-E PA is characterized through measurements using constant-envelop global system for mobile communications signals, indicating a favorable adjacent channel power ratio from -40 to -50 dBc within the entire bandwidth.",
"title": ""
},
{
"docid": "f622860032b9a4dd054082be0741f18d",
"text": "Full Metal Jacket is a general-purpose visual dataflow language currently being developed on top of Emblem, a Lisp dialect strongly influenced by Common Lisp but smaller and more type-aware, and with support for CLOS-style object orientation, graphics, event handling and multi-threading. Methods in Full Metal Jacket Jacket are directed acyclic graphs. Data arriving at ingates from the calling method flows along edges through vertices, at which it gets transformed by applying Emblem functions or methods, or methods defined in Full Metal Jacket, before it finally arrives at outgates where it is propagated back upwards to the calling method. The principal difference between Full Metal Jacket and existing visual dataflow languages such as Prograph is that Full Metal Jacket is a pure dataflow language, with no special syntax being required for control constructs such as loops or conditionals, which resemble ordinary methods except in the number of times they generate outputs. This uniform syntax means that, like Lisp and Prolog, methods in Full Metal Jacket are themselves data structures and can be manipulated as such.",
"title": ""
},
{
"docid": "00bcab0936aa36b94b67ce38fc89cd2e",
"text": "Introduction Forth interpreters can utilize several techniques for implementing threaded code. We will classify these techniques to better understand the mechanisms underlying \"threaded interpretive languages\", or TILs. One basic assumption we will make is that the TIL is implemented on a typical microprocessor (which is usually the case). The following are the elements of any threaded interpretive language. These must be designed together to make the interpreter work.",
"title": ""
},
{
"docid": "92207faaa63e33f51a5c924dbbd4855a",
"text": "A significant body of research, spanning approximately the last 25 years, has focused upon the task of developing a better understanding of tumor growth through the use of in vitro mathematical models. Although such models are useful for simulation, in vivo growth differs in significant ways due to the variety of competing biological, biochemical, and mechanical factors present in a living biological system. An in vivo, macroscopic, primary brain tumor growth model is developed, incorporating previous in vitro growth pattern research as well as scientific investigations into the biological and biochemical factors that affect in vivo neoplastic growth. The tumor growth potential model presents an integrated, universal framework that can be employed to predict the direction and extent of spread of a primary brain tumor with respect to time for a specific patient. This framework may be extended as necessary to include the results of current and future research into parameters affecting neoplastic proliferation. The patient-specific primary brain tumor growth model is expected to have multiple clinical uses, including: predictive modeling, tumor boundary delineation, growth pattern research, improved radiation surgery planning, and expert diagnostic assistance.",
"title": ""
},
{
"docid": "7631efaa3ee171a320bd6173a3cfc3fd",
"text": "In his classic article D’Amico states: “When canines are in normal interlocking position, the lateral and forward movement is limited so that when an attempt is made to move the mandible laterally or forward, there is an involuntary reaction when the canines come in contact. The reaction is an immediate break in the tension of the temporal and masseter muscles, thus reducing the magnitude of the applied force. Regardless of how hard the individual tries to tense these muscles, as long as the canines are in contact, it is impossible for these muscles to assume full tension.” He continues: “The length of the roots of the canines and the anatomical structure of the supporting alveolar process gives testimony to nature’s intention as to the function intended. What may appear as trauma as they come in contact is not trauma at all, because when contact is made, muscular tension is involuntarily reduced, thus reducing the magnitude of applied force.”",
"title": ""
},
{
"docid": "6c8b83e0e02e5c0230d57e4885d27e02",
"text": "Contemporary conceptions of physical education pedagogy stress the importance of considering students’ physical, affective, and cognitive developmental states in developing curricula (Aschebrock, 1999; Crum, 1994; Grineski, 1996; Humel, 2000; Hummel & Balz, 1995; Jones & Ward, 1998; Kurz, 1995; Siedentop, 1996; Virgilio, 2000). Sport and physical activity preference is one variable that is likely to change with development. Including activities preferred by girls and boys in physical education curricula could produce several benefits, including greater involvement in lessons and increased enjoyment of physical education (Derner, 1994; Greenwood, Stillwell, & Byars, 2000; Knitt et al., 2000; Lee, Fredenburg, Belcher, & Cleveland, 1999; Sass H. & Sass I., 1986; Strand & Scatling, 1994; Volke, Poszony, & Stumpf, 1985). These are significant goals, because preference for physical activity and enjoyment of physical education are important predictors for overall physical activity participation (Sallis et al., 1999a, b). Although physical education curricula should be based on more than simply students’ preferences, student preferences can inform the design of physical education, other schoolbased physical activity programs, and programs sponsored by other agencies. Young people’s physical activity and sport preferences are likely to vary by age, sex, socio-economic status and nationality. Although several studies have been conducted over many years (Greller & Cochran, 1995; Hoffman & Harris, 2000; Kotonski-Immig, 1994; Lamprecht, Ruschetti, & Stamm, 1991; Strand & Scatling, 1994; Taks, Renson, & Vanreusel, 1991; Telama, 1978; Walton et al., 1999), current understanding of children’s preferences in specific sports and movement activities is limited. One of the main limitations is the cross-sectional nature of the data, so the stability of sport and physical activity preferences over time is not known. The main aim of the present research is to describe the levels and trends in the development of sport and physical activity preferences in girls and boys over a period of five years, from the age of 10 to 14. Further, the study aims to establish the stability of preferences over time.",
"title": ""
},
{
"docid": "9b2066a48425cee0d2e31a48e13e5456",
"text": "© 2013 Emerenciano et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Biofloc Technology (BFT): A Review for Aquaculture Application and Animal Food Industry",
"title": ""
},
{
"docid": "5585cc22a0af9cf00656ac04b14ade5a",
"text": "Side-channel attacks pose a critical threat to the deployment of secure embedded systems. Differential-power analysis is a technique relying on measuring the power consumption of device while it computes a cryptographic primitive, and extracting the secret information from it exploiting the knowledge of the operations involving the key. There is no open literature describing how to properly employ Digital Signal Processing (DSP) techniques in order to improve the effectiveness of the attacks. This paper presents a pre-processing technique based on DSP, reducing the number of traces needed to perform an attack by an order of magnitude with respect to the results obtained with raw datasets, and puts it into practical use attacking a commercial 32-bit software implementation of AES running on a Cortex-M3 CPU. The main contribution of this paper is proposing a leakage model for software implemented cryptographic primitives and an effective framework to extract it.",
"title": ""
},
{
"docid": "b45608b866edf56dbafe633824719dd6",
"text": "classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.",
"title": ""
},
{
"docid": "165aa4bad30a95866be4aff878fbd2cf",
"text": "This paper reviews some recent developments in digital currency, focusing on platform-sponsored currencies such as Facebook Credits. In a model of platform management, we find that it will not likely be profitable for such currencies to expand to become fully convertible competitors to state-sponsored currencies. JEL Classification: D42, E4, L51 Bank Classification: bank notes, economic models, payment clearing and settlement systems * Rotman School of Management, University of Toronto and NBER (Gans) and Bank of Canada (Halaburda). The views here are those of the authors and no responsibility for them should be attributed to the Bank of Canada. We thank participants at the NBER Economics of Digitization Conference, Warren Weber and Glen Weyl for helpful comments on an earlier draft of this paper. Please send any comments to joshua.gans@gmail.com.",
"title": ""
},
{
"docid": "ce098e1e022235a2c322a231bff8da6c",
"text": "In recent years, due to the development of three-dimensional scanning technology, the opportunities for real objects to be three-dimensionally measured, taken into the PC as point cloud data, and used for various contents are increasing. However, the point cloud data obtained by three-dimensional scanning has many problems such as data loss due to occlusion or the material of the object to be measured, and occurrence of noise. Therefore, it is necessary to edit the point cloud data obtained by scanning. Particularly, since the point cloud data obtained by scanning contains many data missing, it takes much time to fill holes. Therefore, we propose a method to automatically filling hole obtained by three-dimensional scanning. In our method, a surface is generated from a point in the vicinity of a hole, and a hole region is filled by generating a point sequence on the surface. This method is suitable for processing to fill a large number of holes because point sequence interpolation can be performed automatically for hole regions without requiring user input.",
"title": ""
},
{
"docid": "5b73883a0bec8434fef8583143dac645",
"text": "RC4 is the most widely deployed stream cipher in software applications. In this paper we describe a major statistical weakness in RC4, which makes it trivial to distinguish between short outputs of RC4 and random strings by analyzing their second bytes. This weakness can be used to mount a practical ciphertext-only attack on RC4 in some broadcast applications, in which the same plaintext is sent to multiple recipients under different keys.",
"title": ""
},
{
"docid": "af359933fad5d689718e2464d9c4966c",
"text": "Distant supervision can effectively label data for relation extraction, but suffers from the noise labeling problem. Recent works mainly perform soft bag-level noise reduction strategies to find the relatively better samples in a sentence bag, which is suboptimal compared with making a hard decision of false positive samples in sentence level. In this paper, we introduce an adversarial learning framework, which we named DSGAN, to learn a sentencelevel true-positive generator. Inspired by Generative Adversarial Networks, we regard the positive samples generated by the generator as the negative samples to train the discriminator. The optimal generator is obtained until the discrimination ability of the discriminator has the greatest decline. We adopt the generator to filter distant supervision training dataset and redistribute the false positive instances into the negative set, in which way to provide a cleaned dataset for relation classification. The experimental results show that the proposed strategy significantly improves the performance of distant supervision relation extraction comparing to state-of-the-art systems.",
"title": ""
}
] |
scidocsrr
|
245988ae1d9ae4110048135ec0581fb2
|
Multimethod Longitudinal HIV Drug Resistance Analysis in Antiretroviral-Therapy-Naive Patients.
|
[
{
"docid": "7fe1cea4990acabf7bc3c199d3c071ce",
"text": "Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net.",
"title": ""
}
] |
[
{
"docid": "1390f0c41895ecabbb16c54684b88ca1",
"text": "Deep learning has proven to be a powerful tool for computer vision and has seen widespread adoption for numerous tasks. However, deep learning algorithms are known to be vulnerable to adversarial examples. These adversarial inputs are created such that, when provided to a deep learning algorithm, they are very likely to be mislabeled. This can be problematic when deep learning is used to assist in safety critical decisions. Recent research has shown that classifiers can be attacked by physical adversarial examples under various physical conditions. Given the fact that stateof-the-art objection detection algorithms are harder to be fooled by the same set of adversarial examples, here we show that these detectors can also be attacked by physical adversarial examples. In this note, we briefly show both static and dynamic test results. We design an algorithm that produces physical adversarial inputs, which can fool the YOLO object detector and can also attack Faster-RCNN with relatively high success rate based on transferability. Furthermore, our algorithm can compress the size of the adversarial inputs to stickers that, when attached to the targeted object, result in the detector either mislabeling or not detecting the object a high percentage of the time. This note provides a small set of results. Our upcoming paper will contain a thorough evaluation on other object detectors, and will present the algorithm.",
"title": ""
},
{
"docid": "d6dba7a89bc123bc9bb616df6faee2bc",
"text": "Continuing interest in digital games indicated that it would be useful to update [Authors’, 2012] systematic literature review of empirical evidence about the positive impacts an d outcomes of games. Since a large number of papers was identified in th e period from 2009 to 2014, the current review focused on 143 papers that provided higher quality evidence about the positive outcomes of games. [Authors’] multidimensional analysis of games and t heir outcomes provided a useful framework for organising the varied research in this area. The mo st frequently occurring outcome reported for games for learning was knowledge acquisition, while entertain me t games addressed a broader range of affective, behaviour change, perceptual and cognitive and phys iological outcomes. Games for learning were found across varied topics with STEM subjects and health the most popular. Future research on digital games would benefit from a systematic programme of experi m ntal work, examining in detail which game features are most effective in promoting engagement and supporting learning.",
"title": ""
},
{
"docid": "7064d73864a64e2b76827e3252390659",
"text": "Abstmct-In his original paper on the subject, Shannon found upper and lower bounds for the entropy of printed English based on the number of trials required for a subject to guess subsequent symbols in a given text. The guessing approach precludes asymptotic consistency of either the upper or lower bounds except for degenerate ergodic processes. Shannon’s technique of guessing the next symbol is altered by having the subject place sequential bets on the next symbol of text. lf S,, denotes the subject’s capital after n bets at 27 for 1 odds, and lf it is assumed that the subject hnows the underlying prpbabillty distribution for the process X, then the entropy estimate ls H,(X) =(l -(l/n) log,, S,) log, 27 bits/symbol. If the subject does npt hnow the true probabllty distribution for the stochastic process, then Z&(X! ls an asymptotic upper bound for the true entropy. ff X is stationary, EH,,(X)+H(X), H(X) bell the true entropy of the process. Moreovzr, lf X is ergodic, then by the SLOW McMilhm-Brebnan theorem H,,(X)+H(X) with probability one. Preliminary indications are that English text has au entropy of approximately 1.3 bits/symbol, which agrees well with Shannon’s estimate.",
"title": ""
},
{
"docid": "ac1f2a1a96ab424d9b69276efd4f1ed4",
"text": "This paper describes various systems from the University of Minnesota, Duluth that participated in the CLPsych 2015 shared task. These systems learned decision lists based on lexical features found in training data. These systems typically had average precision in the range of .70 – .76, whereas a random baseline attained .47 – .49.",
"title": ""
},
{
"docid": "cf131167592f02790a1b4e38ed3b5375",
"text": "Monocular 3D facial shape reconstruction from a single 2D facial image has been an active research area due to its wide applications. Inspired by the success of deep neural networks (DNN), we propose a DNN-based approach for End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Different from recent works that reconstruct and refine the 3D face in an iterative manner using both an RGB image and an initial 3D facial shape rendering, our DNN model is end-to-end, and thus the complicated 3D rendering process can be avoided. Moreover, we integrate in the DNN architecture two components, namely a multi-task loss function and a fusion convolutional neural network (CNN) to improve facial expression reconstruction. With the multi-task loss function, 3D face reconstruction is divided into neutral 3D facial shape reconstruction and expressive 3D facial shape reconstruction. The neutral 3D facial shape is class-specific. Therefore, higher layer features are useful. In comparison, the expressive 3D facial shape favors lower or intermediate layer features. With the fusion-CNN, features from different intermediate layers are fused and transformed for predicting the 3D expressive facial shape. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction.",
"title": ""
},
{
"docid": "7cd091555dd870cc1a71a4318bb5ff8d",
"text": "This paper presents the design and simulation of a wideband, medium gain, light weight, wide bandwidth pyramidal horn antenna feed for microwave applications. The horn was designed using approximation method to calculate the gain in mat lab and simulated using CST microwave studio. The proposed antenna operates within 1-2 GHz (L-band). The horn is supported by a rectangular wave guide. It is linearly polarized and shows wide bandwidth with a gain of 15.3dB. The horn is excited with the monopole which is loaded with various top hat loading such as rectangular disc, circular disc, annular disc, L-type, T-type, Cone shape, U-shaped plates etc. and checked their performances for return loss as well as bandwidth. The circular disc and annular ring gives the low return loss and wide bandwidth as well as low VSWR. The annular ring gave good VSWR and return loss compared to the circular disc. The far field radiation pattern is obtained as well as Efield & H-field analysis for L-band pyramidal horn has been observed, simulated and optimized using CST Microwave Studio. The simulation results show that the pyramidal horn structure exhibits low VSWR as well as good radiation pattern over L-band.",
"title": ""
},
{
"docid": "55ada092fd628aead0fd64d20eff7b69",
"text": "BER estimation from measured EVM values is shown experimentally for QPSK and 16QAM optical signals with 28 GBd. Various impairments, such as gain imbalance, quadrature error and timing skew, are introduced into the transmitted signal in order to evaluate the robustness of the method. The EVM was measured using two different real-time sampling systems and the EVM measurement accuracy is discussed.",
"title": ""
},
{
"docid": "6d552edc0d60470ce942b9d57b6341e3",
"text": "A rich element of cooperative games are mechanics that communicate. Unlike automated awareness cues and synchronous verbal communication, cooperative communication mechanics enable players to share information and direct action by engaging with game systems. These include both explicitly communicative mechanics, such as built-in pings that direct teammates' attention to specific locations, and emergent communicative mechanics, where players develop their own conventions about the meaning of in-game activities, like jumping to get attention. We use a grounded theory approach with 40 digital games to identify and classify the types of cooperative communication mechanics game designers might use to enable cooperative play. We provide details on the classification scheme and offer a discussion on the implications of cooperative communication mechanics.",
"title": ""
},
{
"docid": "aa5daa83656a2265dc27ec6ee5e3c1cb",
"text": "Firms traditionally rely on interviews and focus groups to identify customer needs for marketing strategy and product development. User-generated content (UGC) is a promising alternative source for identifying customer needs. However, established methods are neither efficient nor effective for large UGC corpora because much content is non-informative or repetitive. We propose a machine-learning approach to facilitate qualitative analysis by selecting content for efficient review. We use a convolutional neural network to filter out non-informative content and cluster dense sentence embeddings to avoid sampling repetitive content. We further address two key questions: Are UGCbased customer needs comparable to interview-based customer needs? Do the machine-learning methods improve customer-need identification? These comparisons are enabled by a custom dataset of customer needs for oral care products identified by professional analysts using industry-standard experiential interviews. The analysts also coded 12,000 UGC sentences to identify which previously identified customer needs and/or new customer needs were articulated in each sentence. We show that (1) UGC is at least as valuable as a source of customer needs for product development, likely morevaluable, than conventional methods, and (2) machine-learning methods improve efficiency of identifying customer needs from UGC (unique customer needs per unit of professional services cost).",
"title": ""
},
{
"docid": "4d5119db64e4e0a31064bd22b47e2534",
"text": "Reliability and scalability of an application is dependent on how its application state is managed. To run applications at massive scale requires one to operate datastores that can scale to operate seamlessly across thousands of servers and can deal with various failure modes such as server failures, datacenter failures and network partitions. The goal of Amazon DynamoDB is to eliminate this complexity and operational overhead for our customers by offering a seamlessly scalable database service. In this talk, I will talk about how developers can build applications on DynamoDB without having to deal with the complexity of operating a large scale database.",
"title": ""
},
{
"docid": "6d3410de121ffe037eafd5f30daa7252",
"text": "One of the more important issues in the development of larger scale complex systems (product development period of two or more years) is accommodating changes to requirements. Requirements gathered for larger scale systems evolve during lengthy development periods due to changes in software and business environments, new user needs and technological advancements. Agile methods, which focus on accommodating change even late in the development lifecycle, can be adopted for the development of larger scale systems. However, as currently applied, these practices are not always suitable for the development of such systems. We propose a soft-structured framework combining the principles of agile and conventional software development that addresses the issue of rapidly changing requirements for larger scale systems. The framework consists of two parts: (1) a soft-structured requirements gathering approach that reflects the agile philosophy i.e., the Agile Requirements Generation Model and (2) a tailored development process that can be applied to either small or larger scale systems.",
"title": ""
},
{
"docid": "ce99ce3fb3860e140164e7971291f0fa",
"text": "We describe the development and psychometric characteristics of the Generalized Workplace Harassment Questionnaire (GWHQ), a 29-item instrument developed to assess harassing experiences at work in five conceptual domains: verbal aggression, disrespect, isolation/exclusion, threats/bribes, and physical aggression. Over 1700 current and former university employees completed the GWHQ at three time points. Factor analytic results at each wave of data suggested a five-factor solution that did not correspond to the original five conceptual factors. We suggest a revised scoring scheme for the GWHQ utilizing four of the empirically extracted factors: covert hostility, verbal hostility, manipulation, and physical hostility. Covert hostility was the most frequently experienced type of harassment, followed by verbal hostility, manipulation, and physical hostility. Verbal hostility, covert hostility, and manipulation were found to be significant predictors of psychological distress.",
"title": ""
},
{
"docid": "e06646b7d2bd6ee83c4d557f4215e143",
"text": "Deep generative models have been praised for their ability to learn smooth latent representation of images, text, and audio, which can then be used to generate new, plausible data. However, current generative models are unable to work with graphs due to their unique characteristics—their underlying structure is not Euclidean or grid-like, they remain isomorphic under permutation of the nodes labels, and they come with a different number of nodes and edges. In this paper, we propose NeVAE, a novel variational autoencoder for graphs, whose encoder and decoder are specially designed to account for the above properties by means of several technical innovations. In addition, by using masking, the decoder is able to guarantee a set of local structural and functional properties in the generated graphs. Experiments reveal that our model is able to learn and mimic the generative process of several well-known random graph models and can be used to discover new molecules more effectively than several state of the art methods. Moreover, by utilizing Bayesian optimization over the continuous latent representation of molecules our model finds, we can also identify molecules that maximize certain desirable properties more effectively than alternatives.",
"title": ""
},
{
"docid": "ddb0a3bc0a9367a592403d0fc0cec0a5",
"text": "Fluorescence microscopy is a powerful quantitative tool for exploring regulatory networks in single cells. However, the number of molecular species that can be measured simultaneously is limited by the spectral overlap between fluorophores. Here we demonstrate a simple but general strategy to drastically increase the capacity for multiplex detection of molecules in single cells by using optical super-resolution microscopy (SRM) and combinatorial labeling. As a proof of principle, we labeled mRNAs with unique combinations of fluorophores using fluorescence in situ hybridization (FISH), and resolved the sequences and combinations of fluorophores with SRM. We measured mRNA levels of 32 genes simultaneously in single Saccharomyces cerevisiae cells. These experiments demonstrate that combinatorial labeling and super-resolution imaging of single cells is a natural approach to bring systems biology into single cells.",
"title": ""
},
{
"docid": "7a67bccffa6222f8129a90933962e285",
"text": "BACKGROUND\nPast research has found that playing a classic prosocial video game resulted in heightened prosocial behavior when compared to a control group, whereas playing a classic violent video game had no effect. Given purported links between violent video games and poor social behavior, this result is surprising. Here our aim was to assess whether this finding may be due to the specific games used. That is, modern games are experienced differently from classic games (more immersion in virtual environments, more connection with characters, etc.) and it may be that playing violent video games impacts prosocial behavior only when contemporary versions are used.\n\n\nMETHODS AND FINDINGS\nExperiments 1 and 2 explored the effects of playing contemporary violent, non-violent, and prosocial video games on prosocial behavior, as measured by the pen-drop task. We found that slight contextual changes in the delivery of the pen-drop task led to different rates of helping but that the type of game played had little effect. Experiment 3 explored this further by using classic games. Again, we found no effect.\n\n\nCONCLUSIONS\nWe failed to find evidence that playing video games affects prosocial behavior. Research on the effects of video game play is of significant public interest. It is therefore important that speculation be rigorously tested and findings replicated. Here we fail to substantiate conjecture that playing contemporary violent video games will lead to diminished prosocial behavior.",
"title": ""
},
{
"docid": "25c41bdba8c710b663cb9ad634b7ae5d",
"text": "Massive data streams are now fundamental to many data processing applications. For example, Internet routers produce large scale diagnostic data streams. Such streams are rarely stored in traditional databases, and instead must be processed “on the fly” as they are produced. Similarly, sensor networks produce multiple data streams of observations from their sensors. There is growing focus on manipulating data streams, and hence, there is a need to identify basic operations of interest in managing data streams, and to support them efficiently. We propose computation of the Hamming norm as a basic operation of interest. The Hamming norm formalises ideas that are used throughout data processing. When applied to a single stream, the Hamming norm gives the number of distinct items that are present in that data stream, which is a statistic of great interest in databases. When applied to a pair of streams, the Hamming norm gives an important measure of (dis)similarity: the number of unequal item counts in the two streams. Hamming norms have many uses in comparing data streams. We present a novel approximation technique for estimating the Hamming norm for massive data streams; this relies on what we call the “ l0 sketch” and we prove its accuracy. We test our approximation method on a large quantity of synthetic and real stream data, and show that the estimation is accurate to within a few percentage points. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 28th VLDB Conference, Hong Kong, China, 2002",
"title": ""
},
{
"docid": "fa63fbdfc0be5f2675c5f65ee0798f88",
"text": "Twitter is a micro blogging site where users review or tweet their approach i.e., opinion towards the service providers twitter page in words and it is useful to analyze the sentiments from it. Analyze means finding approach of users or customers where it is positive, negative, neutral, or in between positive-neutral or in between negative-neutral and represent it. In such a system or tool tweets are fetch from twitter regarding shopping websites, or any other twitter pages like some business, mobile brands, cloth brands, live events like sport match, election etc. get the polarity of it. These results will help the service provider to find out about the customers view toward their products.",
"title": ""
},
{
"docid": "db806183810547435075eb6edd28d630",
"text": "Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues.,,We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. Additionally to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping nice interpretable fusion relations. We show how the Tucker decomposition framework generalizes some of the latest VQA architectures, providing state-of-the-art results.",
"title": ""
},
{
"docid": "a2c7ee4e586bc456ad6bfcdf3b1cc84b",
"text": "We present a taxonomy of the Artificial Intelligence (AI) methods currently applied for algorithmic music composition. The area known as algorithmic music composition concerns the research on processes of composing music pieces automatically by a computer system. The use of AI for algorithmic music consists on the application of AI techniques as the main tools for the composition generation. There are several models of AI used in music composition such as: heuristics in evolutionary algorithms, neural networks, stochastic methods, generative models, agents, decision trees, declarative programming and grammatical representation. In this survey we present the trending in techniques for automatic music composition. We summarized several research projects of the last seven years and highlight the directions of music composition based on AI techniques.",
"title": ""
},
{
"docid": "7feea3bcba08a889ba779a23f79556d7",
"text": "In this report, monodispersed ultra-small Gd2O3 nanoparticles capped with hydrophobic oleic acid (OA) were synthesized with average particle size of 2.9 nm. Two methods were introduced to modify the surface coating to hydrophilic for bio-applications. With a hydrophilic coating, the polyvinyl pyrrolidone (PVP) coated Gd2O3 nanoparticles (Gd2O3-PVP) showed a reduced longitudinal T1 relaxation time compared with OA and cetyltrimethylammonium bromide (CTAB) co-coated Gd2O3 (Gd2O3-OA-CTAB) in the relaxation study. The Gd2O3-PVP was thus chosen for its further application study in MRI with an improved longitudinal relaxivity r1 of 12.1 mM(-1) s(-1) at 7 T, which is around 3 times as that of commercial contrast agent Magnevist(®). In vitro cell viability in HK-2 cell indicated negligible cytotoxicity of Gd2O3-PVP within preclinical dosage. In vivo MR imaging study of Gd2O3-PVP nanoparticles demonstrated considerable signal enhancement in the liver and kidney with a long blood circulation time. Notably, the OA capping agent was replaced by PVP through ligand exchange on the Gd2O3 nanoparticle surface. The hydrophilic PVP grants the Gd2O3 nanoparticles with a polar surface for bio-application, and the obtained Gd2O3-PVP could be used as an in vivo indicator of reticuloendothelial activity.",
"title": ""
}
] |
scidocsrr
|
e7039d49d9949422b44e0a2def7834e2
|
Automatic Transcription of Guitar Chords and Fingering From Audio
|
[
{
"docid": "e8933b0afcd695e492d5ddd9f87aeb81",
"text": "This article proposes a method for the automatic transcription of the melody, bass line, and chords in polyphonic pop music. The method uses a frame-wise pitch-salience estimator as a feature extraction front-end. For the melody and bass-line transcription, this is followed by acoustic modeling of note events and musicological modeling of note transitions. The acoustic models include a model for the target notes (i.e., melody or bass notes) and a background model. The musicological model involves key estimation and note bigrams that determine probabilities for transitions between target notes. A transcription of the melody or the bass line is obtained using Viterbi search via the target and the background note models. The performance of the melody and the bass-line transcription is evaluated using approximately 8.5 hours of realistic polyphonic music. The chord transcription maps the pitch salience estimates to a pitch-class representation and uses trained chord models and chord-transition probabilities to produce a transcription consisting of major and minor triads. For chords, the evaluation material consists of the first eight Beatles albums. The method is computationally efficient and allows causal implementation, so it can process streaming audio. Transcription of music refers to the analysis of an acoustic music signal for producing a parametric representation of the signal. The representation may be a music score with a meticulous arrangement for each instrument or an approximate description of melody and chords in the piece, for example. The latter type of transcription is commonly used in commercial songbooks of pop music and is usually sufficient for musicians or music hobbyists to play the piece. On the other hand, more detailed transcriptions are often employed in classical music to preserve the exact arrangement of the composer.",
"title": ""
}
] |
[
{
"docid": "63c62168e217ed4c50cf5dba6a187722",
"text": "Statistics is an important part in big data because many statistical methods are used for big data analysis. The aim of statistics is to estimate population using the sample extracted from the population, so statistics is to analyze not the population but the sample. But in big data environment, we can get the big data set closed to the population by the advanced computing systems such as cloud computing and high-speed internet. According to the circumstances, we can analyze entire part of big data like the population of statistics. But we may be impossible to analyze the entire data because of its huge data volume. So, in this paper, we propose a new analytical methodology for big data analysis in regression problem for reducing the computing burden. We call this a divided regression analysis. To verify the performance of our divided regression model, we carry out experiment and simulation.",
"title": ""
},
{
"docid": "1184260e77b2f6eaab97c0b9e2a43afc",
"text": "In pervasive and ubiquitous computing systems, human activity recognition has immense potential in a large number of application domains. Current activity recognition techniques (i) do not handle variations in sequence, concurrency and interleaving of complex activities; (ii) do not incorporate context; and (iii) require large amounts of training data. There is a lack of a unifying theoretical framework which exploits both domain knowledge and data-driven observations to infer complex activities. In this article, we propose, develop and validate a novel Context-Driven Activity Theory (CDAT) for recognizing complex activities. We develop a mechanism using probabilistic and Markov chain analysis to discover complex activity signatures and generate complex activity definitions. We also develop a Complex Activity Recognition (CAR) algorithm. It achieves an overall accuracy of 95.73% using extensive experimentation with real-life test data. CDAT utilizes context and links complex activities to situations, which reduces inference time by 32.5% and also reduces training data by 66%.",
"title": ""
},
{
"docid": "1cf07400a152ea6bfac75c75bfb1eb7b",
"text": "Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.",
"title": ""
},
{
"docid": "bdfa9a484a2bca304c0a8bbd6dcd7f1a",
"text": "We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.",
"title": ""
},
{
"docid": "b8c5aa7628cf52fac71b31bb77ccfac0",
"text": "Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster. Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions. This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills. We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence. Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience. While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution – that of using experience replay buffers for all past events – with a mixture of onand off-policy learning, leveraging behavioral cloning. We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities. When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited size buffer to perform almost as well as an unbounded one.",
"title": ""
},
{
"docid": "a4e1f420dfc3b1b30a58ec3e60288761",
"text": "Despite recent advances in uncovering the quantitative features of stationary human activity patterns, many applications, from pandemic prediction to emergency response, require an understanding of how these patterns change when the population encounters unfamiliar conditions. To explore societal response to external perturbations we identified real-time changes in communication and mobility patterns in the vicinity of eight emergencies, such as bomb attacks and earthquakes, comparing these with eight non-emergencies, like concerts and sporting events. We find that communication spikes accompanying emergencies are both spatially and temporally localized, but information about emergencies spreads globally, resulting in communication avalanches that engage in a significant manner the social network of eyewitnesses. These results offer a quantitative view of behavioral changes in human activity under extreme conditions, with potential long-term impact on emergency detection and response.",
"title": ""
},
{
"docid": "073756896638d2846da173eec98bd8db",
"text": "The DJI Phantom III drone has already been used for malicious activities (to drop bombs, remote surveillance and plane watching) in 2016 and 2017. At the time of writing, DJI was the drone manufacturer with the largest market share. Our work presents the primary thorough forensic analysis of the DJI Phantom III drone, and the primary account for proprietary file structures stored by the examined drone. It also presents the forensically sound open source tool DRone Open source Parser (DROP) that parses proprietary DAT files extracted from the drone's nonvolatile internal storage. These DAT files are encrypted and encoded. The work also shares preliminary findings on TXT files, which are also proprietary, encrypted, encoded, files found on the mobile device controlling the drone. These files provided a slew of data such as GPS locations, battery, flight time, etc. By extracting data from the controlling mobile device, and the drone, we were able to correlate data and link the user to a specific device based on extracted metadata. Furthermore, results showed that the best mechanism to forensically acquire data from the tested drone is to manually extract the SD card by disassembling the drone. Our findings illustrated that the drone should not be turned on as turning it on changes data on the drone by creating a new DAT file, but may also delete stored data if the drone's internal storage is full. © 2017 The Author(s). Published by Elsevier Ltd. on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "0a3713459412d3278a19a3ff8855a6ba",
"text": "a Universidad Autónoma del Estado de Hidalgo, Escuela Superior de Tizayuca, Carretera Federal Pachuca – Tizayuca km 2.5, CP 43800, Tizayuca, Hidalgo, Mexico b Universidad Autónoma del Estado de México, Av. Jardín Zumpango s/n, Fraccionamiento El Tecojote, CP 56259, Texcoco-Estado de México, Mexico c Centro de Investigación y de Estudios Avanzados del IPN, Departamento de Computación, Av. Instituto Politécnico Nacional 2508, San Pedro Zacatenco, CP 07360, México DF, Mexico",
"title": ""
},
{
"docid": "12b94323c586de18e8de02e5a065903d",
"text": "Species of lactic acid bacteria (LAB) represent as potential microorganisms and have been widely applied in food fermentation worldwide. Milk fermentation process has been relied on the activity of LAB, where transformation of milk to good quality of fermented milk products made possible. The presence of LAB in milk fermentation can be either as spontaneous or inoculated starter cultures. Both of them are promising cultures to be explored in fermented milk manufacture. LAB have a role in milk fermentation to produce acid which is important as preservative agents and generating flavour of the products. They also produce exopolysaccharides which are essential as texture formation. Considering the existing reports on several health-promoting properties as well as their generally recognized as safe (GRAS) status of LAB, they can be widely used in the developing of new fermented milk products.",
"title": ""
},
{
"docid": "a20302dfa51ad50db7d67526f9390743",
"text": "Stochastic Gradient Descent (SGD) is a popular optimization method which has been applied to many important machine learning tasks such as Support Vector Machines and Deep Neural Networks. In order to parallelize SGD, minibatch training is often employed. The standard approach is to uniformly sample a minibatch at each step, which often leads to high variance. In this paper we propose a stratified sampling strategy, which divides the whole dataset into clusters with low within-cluster variance; we then take examples from these clusters using a stratified sampling technique. It is shown that the convergence rate can be significantly improved by the algorithm. Encouraging experimental results confirm the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "acf4645478c28811d41755b0ed81fb39",
"text": "Make more knowledge even in less time every day. You may not always spend your time and money to go abroad and get the experience and knowledge by yourself. Reading is a good alternative to do in getting this desirable knowledge and experience. You may gain many things from experiencing directly, but of course it will spend much money. So here, by reading social network data analytics social network data analytics, you can take more advantages with limited budget.",
"title": ""
},
{
"docid": "440858614aba25dfa9039b20a1caefc4",
"text": "A natural image usually conveys rich semantic content and can be viewed from different angles. Existing image description methods are largely restricted by small sets of biased visual paragraph annotations, and fail to cover rich underlying semantics. In this paper, we investigate a semi-supervised paragraph generative framework that is able to synthesize diverse and semantically coherent paragraph descriptions by reasoning over local semantic regions and exploiting linguistic knowledge. The proposed Recurrent Topic-Transition Generative Adversarial Network (RTT-GAN) builds an adversarial framework between a structured paragraph generator and multi-level paragraph discriminators. The paragraph generator generates sentences recurrently by incorporating region-based visual and language attention mechanisms at each step. The quality of generated paragraph sentences is assessed by multi-level adversarial discriminators from two aspects, namely, plausibility at sentence level and topic-transition coherence at paragraph level. The joint adversarial training of RTT-GAN drives the model to generate realistic paragraphs with smooth logical transition between sentence topics. Extensive quantitative experiments on image and video paragraph datasets demonstrate the effectiveness of our RTT-GAN in both supervised and semi-supervised settings. Qualitative results on telling diverse stories for an image verify the interpretability of RTT-GAN.",
"title": ""
},
{
"docid": "ff429302ec983dd1203ac6dd97506ef8",
"text": "Financial crises have occurred for many centuries. They are often preceded by a credit boom and a rise in real estate and other asset prices, as in the current crisis. They are also often associated with severe disruption in the real economy. This paper surveys the theoretical and empirical literature on crises. The first explanation of banking crises is that they are a panic. The second is that they are part of the business cycle. Modeling crises as a global game allows the two to be unified. With all the liquidity problems in interbank markets that have occurred during the current crisis, there is a growing literature on this topic. Perhaps the most serious market failure associated with crises is contagion, and there are many papers on this important topic. The relationship between asset price bubbles, particularly in real estate, and crises is discussed at length. Disciplines Economic Theory | Finance | Finance and Financial Management This journal article is available at ScholarlyCommons: http://repository.upenn.edu/fnce_papers/403 Financial Crises: Theory and Evidence Franklin Allen University of Pennsylvania Ana Babus Cambridge University Elena Carletti European University Institute",
"title": ""
},
{
"docid": "1de10e40580ba019045baaa485f8e729",
"text": "Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling method, indicating the potential application of the proposed method in the future clinical studies.",
"title": ""
},
{
"docid": "b4b4a50e4fa554b8e155a18f80b5744e",
"text": "Recent advances in software-defined networking (SDN), particularly OpenFlow [5], have made it possible to implement and deploy sophisticated network policies with relatively simple programs. The simplicity arises in large part due to a simple “match/action” interface that OpenFlow provides, by which a programmer can specify actions to take on packets that match particular characteristics (e.g., “forward all port-53 traffic on a faster path”). To date, however, the space of such policies that can be easily implemented in an SDN centers on the control plane—while OpenFlow provides a rich control plane API, it permits very narrow control on the data plane. Expanding the match/action interface could make it possible for network operators to implement more sophisticated policies, e.g., that perform deep packet inspection and operate at the application layer. Yet, expanding OpenFlow’s specification is an arduous process, requiring standardization and hardware support—going down that path would, we believe, ultimately result in vertically integrated hardware, the very fate that OpenFlow was arguably designed to avoid. On the other end of the spectrum we have middleboxes: computing devices that sit on traffic flows’ paths, and that have no inherent restrictions on what they can process or store. Middleboxes have historically been vertically integrated, thus, although middlebox manufacturers can create a wide range of data processing devices, network operators remain faced with several key challenges:",
"title": ""
},
{
"docid": "07b889a2b1a18bc1f91021f3b889474a",
"text": "In this study, we show a correlation between electrical properties (relative permittivity-εr and conductivity-σ) of blood plasma and plasma glucose concentration. In order to formulate that correlation, we performed electrical property measurements on blood samples collected from 10 adults between the ages of 18 and 40 at University of Alabama Birmingham (UAB) Children's hospital. The measurements are conducted between 500 MHz and 20 GHz band. Using the data obtained from measurements, we developed a single-pole Cole-Cole model for εr and σ as a function of plasma blood glucose concentration. To provide an application, we designed a microstrip patch antenna that can be used to predict the glucose concentration within a given plasma sample. Simulation results regarding antenna design and its performance are also presented.",
"title": ""
},
{
"docid": "2e11a8170ec8b2547548091443d46cc6",
"text": "This chapter presents the theory of the Core Elements of the Gaming Experience (CEGE). The CEGE are the necessary but not sufficient conditions to provide a positive experience while playing video-games. This theory, formulated using qualitative methods, is presented with the aim of studying the gaming experience objectively. The theory is abstracted using a model and implemented in questionnaire. This chapter discusses the formulation of the theory, introduces the model, and shows the use of the questionnaire in an experiment to differentiate between two different experiences. In loving memory of Samson Cairns 4.1 The Experience of Playing Video-games The experience of playing video-games is usually understood as the subjective relation between the user and the video-game beyond the actual implementation of the game. The implementation is bound by the speed of the microprocessors of the gaming console, the ergonomics of the controllers, and the usability of the interface. Experience is more than that, it is also considered as a personal relationship. Understanding this relationship as personal is problematic under a scientific scope. Personal and subjective knowledge does not allow a theory to be generalised or falsified (Popper 1994). In this chapter, we propose a theory for understanding the experience of playing video-games, or gaming experience, that can be used to assess and compare different experiences. This section introduces the approach taken towards understanding the gaming experience under the aforementioned perspective. It begins by presenting an E.H. Calvillo-Gámez (B) División de Nuevas Tecnologías de la Información, Universidad Politécnica de San Luis Potosí, San Luis Potosí, México e-mail: e.calvillo@upslp.edu.mx 47 R. Bernhaupt (ed.), Evaluating User Experience in Games, Human-Computer Interaction Series, DOI 10.1007/978-1-84882-963-3_4, C © Springer-Verlag London Limited 2010 48 E.H. Calvillo-Gámez et al. overview of video-games and user experience in order to familiarise the reader with such concepts. Last, the objective and overview of the whole chapter are presented. 4.1.",
"title": ""
},
{
"docid": "f4e6c9e4ed147a7864bd28d533b8ac38",
"text": "The Milky Way Galaxy contains an unknown number, N , of civilizations that emit electromagnetic radiation (of unknown wavelengths) over a finite lifetime, L. Here we are assuming that the radiation is not produced indefinitely, but within L as a result of some unknown limiting event. When a civilization stops emitting, the radiation continues traveling outward at the speed of light, c, but is confined within a shell wall having constant thickness, cL. We develop a simple model of the Galaxy that includes both the birthrate and detectable lifetime of civilizations to compute the possibility of a SETI detection at the Earth. Two cases emerge for radiation shells that are (1) thinner than or (2) thicker than the size of the Galaxy, corresponding to detectable lifetimes, L, less than or greater than the light-travel time, ∼ 100, 000 years, across the Milky Way, respectively. For case (1), each shell wall has a thickness smaller than the size of the Galaxy and intersects the galactic plane in a donut shape (annulus) that fills only a fraction of the Galaxy’s volume, inhibiting SETI detection. But the ensemble of such shell walls may still fill our Galaxy, and indeed may overlap locally, given a sufficiently high birthrate of detectable civilizations. In the second case, each radiation shell is thicker than the size of our Galaxy. Yet, the ensemble of walls may or may not yield a SETI detection depending on the civilization birthrate. We compare the number of different electromagnetic transmissions arriving at Earth to Drake’s N , the number of currently emitting civilizations, showing that they are equal to each other for both cases (1) and (2). However, for L < 100, 000 years, the transmissions arriving at Earth may come from distant civilizations long extinct, while civilizations still alive are sending signals yet to arrive.",
"title": ""
},
{
"docid": "c467fe65c242436822fd72113b99c033",
"text": "Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is a powerful technique for generating striking images of vector data. Based on local ltering of an input texture along a curved stream line segment in a vector eld, it is possible to depict directional information of the vector eld at pixel resolution. The methods suggested so far can handle structured grids only. Now we present an approach that works both on two-dimensional unstructured grids and directly on triangulated surfaces in three-dimensional space. Because unstructured meshes often occur in real applications, this feature makes LIC available for a number of new applications.",
"title": ""
},
{
"docid": "45ff2c8f796eb2853f75bedd711f3be4",
"text": "High-quality (<inline-formula> <tex-math notation=\"LaTeX\">$Q$ </tex-math></inline-formula>) oscillators are notorious for being extremely slow during startup. Their long startup time increases the average power consumption in duty-cycled systems. This paper presents a novel precisely timed energy injection technique to speed up the startup behavior of high-<inline-formula> <tex-math notation=\"LaTeX\">$Q$ </tex-math></inline-formula> oscillators. The proposed solution is also insensitive to the frequency variations of the injection signal over a wide enough range that makes it possible to employ an integrated oscillator to provide the injection signal. A theoretical analysis is carried out to calculate the optimal injection duration. As a proof-of-concept, the proposed technique is incorporated in the design of crystal oscillators and is realized in a TSMC 65-nm CMOS technology. To verify the robustness of our technique across resonator parameters and frequency variations, six crystal resonators from different manufacturers with different packagings and <inline-formula> <tex-math notation=\"LaTeX\">$Q$ </tex-math></inline-formula> factors were tested. The implemented IC includes multiple crystal oscillators at 1.84, 10, and 50 MHz frequencies, with measured startup times of 58, 10, and 2 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{s}$ </tex-math></inline-formula>, while consuming 6.7, 45.5, and 195 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{W}$ </tex-math></inline-formula> at steady state, respectively. To the authors’ best knowledge, this is the fastest, reported startup time in the literature, with >15<inline-formula> <tex-math notation=\"LaTeX\">$\\times$ </tex-math></inline-formula> improvement over prior art, while requiring the smallest startup energy (~12 nJ).",
"title": ""
}
] |
scidocsrr
|
30a3d0b1d1884e3b6dcfde192afab4af
|
Visual Sentiment Prediction with Deep Convolutional Neural Networks
|
[
{
"docid": "fcbfa224b2708839e39295f24f4405e1",
"text": "A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years brought increased interest in applying machine learning techniques to difficult \"real-world\" problems, many of which are characterized by imbalanced data. Additionally the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating performance of a classifier, might not be appropriate when the data is imbalanced andlor the costs of different errors vary markedly. In this Chapter, we discuss some of the sampling techniques used for balancing the datasets, and the performance measures more appropriate for mining imbalanced datasets.",
"title": ""
}
] |
[
{
"docid": "9948ebbd2253021e3af53534619c5094",
"text": "This paper presents a novel method to simultaneously estimate the clothed and naked 3D shapes of a person. The method needs only a single photograph of a person wearing clothing. Firstly, we learn a deformable model of human clothed body shapes from a database. Then, given an input image, the deformable model is initialized with a few user-specified 2D joints and contours of the person. And the correspondence between 3D shape and 2D contours is established automatically. Finally, we optimize the parameters of the deformable model in an iterative way, and then obtain the clothed and naked 3D shapes of the person simultaneously. The experimental results on real images demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "0d733d7f0782bfaf245bf344a46b58b8",
"text": "Smart Cities rely on the use of ICTs for a more efficient and intelligent use of resources, whilst improving citizens' quality of life and reducing the environmental footprint. As far as the livability of cities is concerned, traffic is one of the most frequent and complex factors directly affecting citizens. Particularly, drivers in search of a vacant parking spot are a non-negligible source of atmospheric and acoustic pollution. Although some cities have installed sensor-based vacant parking spot detectors in some neighbourhoods, the cost of this approach makes it unfeasible at large scale. As an approach to implement a sustainable solution to the vacant parking spot detection problem in urban environments, this work advocates fusing the information from small-scale sensor-based detectors with that obtained from exploiting the widely-deployed video surveillance camera networks. In particular, this paper focuses on how video analytics can be exploited as a prior step towards Smart City solutions based on data fusion. Through a set of experiments carefully planned to replicate a real-world scenario, the vacant parking spot detection success rate of the proposed system is evaluated through a critical comparison of local and global visual features (either alone or fused at feature level) and different classifier systems applied to the task. Furthermore, the system is tested under setup scenarios of different complexities, and experimental results show that while local features are best when training with small amounts of highly accurate on-site data, they are outperformed by their global counterparts when training with more samples from an external vehicle database.",
"title": ""
},
{
"docid": "31bd49d9287ceaead298c4543c5b3c53",
"text": "In this paper, an experimental self-teaching system capable of superimposing audio-visual information to support the process of learning to play the guitar is proposed. Different learning scenarios have been carefully designed according to diverse levels of experience and understanding and are presented in a simple way. Learners can select between representative numbers of scenarios and physically interact with the audio-visual information in a natural way. Audio-visual information can be placed anywhere on a physical space and multiple sound sources can be mixed to experiment with compositions and compilations. To assess the effectiveness of the system some initial evaluation is conducted. Finally conclusions and future work of the system are summarized. Categories: augmented reality, information visualisation, human-computer interaction, learning.",
"title": ""
},
{
"docid": "6f4d3ab2b3d027fdbae1b7381409265c",
"text": "BACKGROUND\nIn 1987 individual states in the USA were allowed to raise speed limits on rural freeways from 55 to 65 mph. Analyses of the impact of the increased speed limits on highway safety have produced conflicting results.\n\n\nOBJECTIVE\nTo determine if the 1987 speed limit increase on Washington State's rural freeways affected the incidence of fatal crashes or all crashes on rural freeways, or affected average vehicle speeds or speed variance.\n\n\nDESIGN\nAn ecological study of crashes and vehicle speeds on Washington State freeways from 1974 through 1994.\n\n\nRESULTS\nThe incidence of fatal crashes more than doubled after 1987, compared with what would have been expected if there had been no speed limit increase, rate ratio 2.1 (95% confidence interval (CI), 1.6-2.7). This resulted in an excess of 26.4 deaths per year on rural freeways in Washington State. The total crash rate did not change substantially, rate ratio 1.1 (95% CI, 1.0-1.3). Average vehicle speed increased by 5.5 mph. Speed variance was not affected by the speed limit increase.\n\n\nCONCLUSIONS\nThe speed limit increase was associated with a higher fatal crash rate and more deaths on freeways in Washington State.",
"title": ""
},
{
"docid": "d7573e7b3aac75b49132076ce9fc83e0",
"text": "The prevalent use of social media produces mountains of unlabeled, high-dimensional data. Feature selection has been shown effective in dealing with high-dimensional data for efficient data mining. Feature selection for unlabeled data remains a challenging task due to the absence of label information by which the feature relevance can be assessed. The unique characteristics of social media data further complicate the already challenging problem of unsupervised feature selection, (e.g., part of social media data is linked, which makes invalid the independent and identically distributed assumption), bringing about new challenges to traditional unsupervised feature selection algorithms. In this paper, we study the differences between social media data and traditional attribute-value data, investigate if the relations revealed in linked data can be used to help select relevant features, and propose a novel unsupervised feature selection framework, LUFS, for linked social media data. We perform experiments with real-world social media datasets to evaluate the effectiveness of the proposed framework and probe the working of its key components.",
"title": ""
},
{
"docid": "8bb0077bf14426f02a6339dd1be5b7f2",
"text": "Astrocytes are thought to play a variety of key roles in the adult brain, such as their participation in synaptic transmission, in wound healing upon brain injury, and adult neurogenesis. However, to elucidate these functions in vivo has been difficult because of the lack of astrocyte-specific gene targeting. Here we show that the inducible form of Cre (CreERT2) expressed in the locus of the astrocyte-specific glutamate transporter (GLAST) allows precisely timed gene deletion in adult astrocytes as well as radial glial cells at earlier developmental stages. Moreover, postnatal and adult neurogenesis can be targeted at different stages with high efficiency as it originates from astroglial cells. Taken together, this mouse line will allow dissecting the molecular pathways regulating the diverse functions of astrocytes as precursors, support cells, repair cells, and cells involved in neuronal information processing.",
"title": ""
},
{
"docid": "413d0b457cc1b96bf65d8a3e1c98ed41",
"text": "Peer-to-peer (P2P) lending is a fast growing financial technology (FinTech) trend that is displacing traditional retail banking. Studies on P2P lending have focused on predicting individual interest rates or default probabilities. However, the relationship between aggregated P2P interest rates and the general economy will be of interest to investors and borrowers as the P2P credit market matures. We show that the variation in P2P interest rates across grade types are determined by three macroeconomic latent factors formed by Canonical Correlation Analysis (CCA) — macro default, investor uncertainty, and the fundamental value of the market. However, the variation in P2P interest rates across term types cannot be explained by the general economy.",
"title": ""
},
{
"docid": "c2ed6ac38a6014db73ba81dd898edb97",
"text": "The ability of personality traits to predict important life outcomes has traditionally been questioned because of the putative small effects of personality. In this article, we compare the predictive validity of personality traits with that of socioeconomic status (SES) and cognitive ability to test the relative contribution of personality traits to predictions of three critical outcomes: mortality, divorce, and occupational attainment. Only evidence from prospective longitudinal studies was considered. In addition, an attempt was made to limit the review to studies that controlled for important background factors. Results showed that the magnitude of the effects of personality traits on mortality, divorce, and occupational attainment was indistinguishable from the effects of SES and cognitive ability on these outcomes. These results demonstrate the influence of personality traits on important life outcomes, highlight the need to more routinely incorporate measures of personality into quality of life surveys, and encourage further research about the developmental origins of personality traits and the processes by which these traits influence diverse life outcomes.",
"title": ""
},
{
"docid": "ddacac895fb99d57f2235f963f650e6c",
"text": "Web applications evolved in the last decades from simple scripts to multi-functional applications. Such complex web applications are prone to different types of security vulnerabilities that lead to data leakage or a compromise of the underlying web server. So called secondorder vulnerabilities occur when an attack payload is first stored by the application on the web server and then later on used in a security-critical operation. In this paper, we introduce the first automated static code analysis approach to detect second-order vulnerabilities and related multi-step exploits in web applications. By analyzing reads and writes to memory locations of the web server, we are able to identify unsanitized data flows by connecting input and output points of data in persistent data stores such as databases or session data. As a result, we identified 159 second-order vulnerabilities in six popular web applications such as the conference management systems HotCRP and OpenConf. Moreover, the analysis of web applications evaluated in related work revealed that we are able to detect several critical vulnerabilities previously missed.",
"title": ""
},
{
"docid": "498b9aef490e19842735f32410e809df",
"text": "Human activity recognition using wearable sensors is an area of interest for various domains like healthcare, surveillance etc. Various approaches have been used to solve the problem of activity recognition. Recently deep learning methods like RNNs and LSTMs have been used for this task. But these architectures are unable to capture long term dependencies in time series data. In this work, we propose to use the Temporal Convolutional Network architecture for recognizing the activities from the sensor data obtained from a smartphone. Due to the potential of the architecture to take variable length input sequences along with significantly better ability to capture long term dependencies, it performs better than other deep learning methods. The results of the proposed methods shows an improved performance over the existing methods.",
"title": ""
},
{
"docid": "b38adfeec4e495fdb0fd4cf98b7259a6",
"text": "Task switch cost (the deficit of performing a new task vs. a repeated task) has been partly attributed to priming of the repeated task, as well as to inappropriate preparation for the switched task. In the present study, we examined the nature of the priming effect by repeating stimulus-related processes, such as stimulus encoding or stimulus identification. We adopted a partial-overlap task-switching paradigm, in which only stimulus-related processes should be repeated or switched. The switch cost in this partial-overlap condition was smaller than the cost in the full-overlap condition, in which the task overlap involved more than stimulus processing, indicating that priming of a stimulus is a component of a switch cost. The switch cost in the partial-overlap condition, however, disappeared eventually with a long interval between two tasks, whereas the cost in the full-overlap condition remained significant. Moreover, the switch cost, in general, did not interact with foreknowledge, suggesting that preparation on the basis of foreknowledge may be related to processes beyond stimulus encoding. These results suggest that stimulus-related priming is automatic and short-lived and, therefore, is not a part of the persisting portion of switch cost.",
"title": ""
},
{
"docid": "6f0b8b18689afb9b4ac7466b7898a8e8",
"text": "BACKGROUND\nApproximately 60 million people in the United States live with one of four chronic conditions: heart disease, diabetes, chronic respiratory disease, and major depression. Anxiety and depression are very common comorbidities in COPD and have significant impact on patients, their families, society, and the course of the disease.\n\n\nMETHODS\nWe report the proceedings of a multidisciplinary workshop on anxiety and depression in COPD that aimed to shed light on the current understanding of these comorbidities, and outline unanswered questions and areas of future research needs.\n\n\nRESULTS\nEstimates of prevalence of anxiety and depression in COPD vary widely but are generally higher than those reported in some other advanced chronic diseases. Untreated and undetected anxiety and depressive symptoms may increase physical disability, morbidity, and health-care utilization. Several patient, physician, and system barriers contribute to the underdiagnosis of these disorders in patients with COPD. While few published studies demonstrate that these disorders associated with COPD respond well to appropriate pharmacologic and nonpharmacologic therapy, only a small proportion of COPD patients with these disorders receive effective treatment.\n\n\nCONCLUSION\nFuture research is needed to address the impact, early detection, and management of anxiety and depression in COPD.",
"title": ""
},
{
"docid": "4c711149abc3af05a8e55e52eefddd97",
"text": "Scanning a halftone image introduces halftone artifacts, known as Moire patterns, which significantly degrade the image quality. Printers that use amplitude modulation (AM) screening for halftone printing position dots in a periodic pattern. Therefore, frequencies relating half toning arc easily identifiable in the frequency domain. This paper proposes a method for de screening scanned color halftone images using a custom band reject filter designed to isolate and remove only the frequencies related to half toning while leaving image edges sharp without image segmentation or edge detection. To enable hardware acceleration, the image is processed in small overlapped windows. The windows arc filtered individually in the frequency domain, then pieced back together in a method that does not show blocking artifacts.",
"title": ""
},
{
"docid": "afaa988666cc6b2790696bbb0d69ff73",
"text": "Despite being one of the most popular tasks in lexical semantics, word similarity has often been limited to the English language. Other languages, even those that are widely spoken such as Spanish, do not have a reliable word similarity evaluation framework. We put forward robust methodologies for the extension of existing English datasets to other languages, both at monolingual and cross-lingual levels. We propose an automatic standardization for the construction of cross-lingual similarity datasets, and provide an evaluation, demonstrating its reliability and robustness. Based on our procedure and taking the RG-65 word similarity dataset as a reference, we release two high-quality Spanish and Farsi (Persian) monolingual datasets, and fifteen cross-lingual datasets for six languages: English, Spanish, French, German, Portuguese, and Farsi.",
"title": ""
},
{
"docid": "bb7511f4137f487b2b8bf2f6f3f73a6a",
"text": "There is extensive evidence indicating that new neurons are generated in the dentate gyrus of the adult mammalian hippocampus, a region of the brain that is important for learning and memory. However, it is not known whether these new neurons become functional, as the methods used to study adult neurogenesis are limited to fixed tissue. We use here a retroviral vector expressing green fluorescent protein that only labels dividing cells, and that can be visualized in live hippocampal slices. We report that newly generated cells in the adult mouse hippocampus have neuronal morphology and can display passive membrane properties, action potentials and functional synaptic inputs similar to those found in mature dentate granule cells. Our findings demonstrate that newly generated cells mature into functional neurons in the adult mammalian brain.",
"title": ""
},
{
"docid": "29d9137c5fdc7e96e140f19acd6dee80",
"text": "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures the \"proximity\" of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures.",
"title": ""
},
{
"docid": "24e78f149b2e42a5c98eb3443c023853",
"text": "Cone-beam CT system has become a hot issue in current CT technique. Compared with the traditional 2D CT, cone beam CT can greatly reduce the scanning time, improve the utilization ratio of X-ray, and enhance the spatial resolution. In the article, simulation data based on the 3D Shepp-Logan Model was obtained by tracing the X-ray and applying the radial attenuation theory. FDK (Feldkamp, Davis and Kress) reconstruction algorithm was then adopted to reconstruct the 3D Shepp-Logan Mode. The reconstruction results indicate that for the central image the spatial resolution can reach 8linepairs/mm. Reconstructed images truthfully reveal the archetype.",
"title": ""
},
{
"docid": "f3aaf555028a0c53bec688c0a8e7e95d",
"text": "ABSTRACT Translating natural language questions to semantic representations such as SPARQL is a core challenge in open-domain question answering over knowledge bases (KB-QA). Existing methods rely on a clear separation between an offline training phase, where a model is learned, and an online phase where this model is deployed. Two major shortcomings of such methods are that (i) they require access to a large annotated training set that is not always readily available and (ii) they fail on questions from before-unseen domains. To overcome these limitations, this paper presents NEQA, a continuous learning paradigm for KB-QA. Offline, NEQA automatically learns templates mapping syntactic structures to semantic ones from a small number of training question-answer pairs. Once deployed, continuous learning is triggered on cases where templates are insufficient. Using a semantic similarity function between questions and by judicious invocation of non-expert user feedback, NEQA learns new templates that capture previously-unseen syntactic structures. This way, NEQA gradually extends its template repository. NEQA periodically re-trains its underlying models, allowing it to adapt to the language used after deployment. Our experiments demonstrate NEQA’s viability, with steady improvement in answering quality over time, and the ability to answer questions from new domains.",
"title": ""
},
{
"docid": "506743f5b2c98d4a885b342584da8b69",
"text": "This thesis presents a general, trainable system for object detection in static images and video sequences. The core system nds a certain class of objects in static images of completely unconstrained, cluttered scenes without using motion, tracking, or handcrafted models and without making any assumptions on the scene structure or the number of objects in the scene. The system uses a set of training data of positive and negative example images as input, transforms the pixel images to a Haar wavelet representation, and uses a support vector machine classi er to learn the di erence between in-class and out-of-class patterns. To detect objects in out-of-sample images, we do a brute force search over all the subwindows in the image. This system is applied to face, people, and car detection with excellent results. For our extensions to video sequences, we augment the core static detection system in several ways { 1) extending the representation to ve frames, 2) implementing an approximation to a Kalman lter, and 3) modeling detections in an image as a density and propagating this density through time according to measured features. In addition, we present a real-time version of the system that is currently running in a DaimlerChrysler experimental vehicle. As part of this thesis, we also present a system that, instead of detecting full patterns, uses a component-based approach. We nd it to be more robust to occlusions, rotations in depth, and severe lighting conditions for people detection than the full body version. We also experiment with various other representations including pixels and principal components and show results that quantify how the number of features, color, and gray-level a ect performance. c Massachusetts Institute of Technology 2000",
"title": ""
}
] |
scidocsrr
|
903867c61520437eae8cc588e0312739
|
New Flexible Silicone-Based EEG Dry Sensor Material Compositions Exhibiting Improvements in Lifespan, Conductivity, and Reliability
|
[
{
"docid": "8415585161d51b500f99aa36650a67d9",
"text": "A brain-computer interface (BCI) is a communication system that can help users interact with the outside environment by translating brain signals into machine commands. The use of electroencephalographic (EEG) signals has become the most common approach for a BCI because of their usability and strong reliability. Many EEG-based BCI devices have been developed with traditional wet- or micro-electro-mechanical-system (MEMS)-type EEG sensors. However, those traditional sensors have uncomfortable disadvantage and require conductive gel and skin preparation on the part of the user. Therefore, acquiring the EEG signals in a comfortable and convenient manner is an important factor that should be incorporated into a novel BCI device. In the present study, a wearable, wireless and portable EEG-based BCI device with dry foam-based EEG sensors was developed and was demonstrated using a gaming control application. The dry EEG sensors operated without conductive gel; however, they were able to provide good conductivity and were able to acquire EEG signals effectively by adapting to irregular skin surfaces and by maintaining proper skin-sensor impedance on the forehead site. We have also demonstrated a real-time cognitive stage detection application of gaming control using the proposed portable device. The results of the present study indicate that using this portable EEG-based BCI device to conveniently and effectively control the outside world provides an approach for researching rehabilitation engineering.",
"title": ""
}
] |
[
{
"docid": "8acf348ea6019eac856b01b0f4012f9c",
"text": "Advanced high-voltage (10 kV-15 kV) silicon carbide (SiC) power MOSFETs described in this paper have the potential to significantly impact the system performance, size, weight, high-temperature reliability, and cost of next-generation energy conversion and transmission systems. In this paper, we report our recently developed 10 kV/20 A SiC MOSFETs with a chip size of 8.1 × 8.1 mm2 and a specific on-resistance (RON, SP) of 100 MΩ-cm2 at 25 °C. We also developed 15 kV/10 A SiC power MOSFETs with a chip size of 8 × 8 mm2 and a RON, SP of 204 mQ cm2 at 25 °C. To our knowledge, this 15 kV SiC MOSFET is the highest voltage rated unipolar power switch. Compared to the commercial 6.5 kV Silicon (Si) IGBTs, these 10 kV and 15 kV SiC MOSFETs exhibit extremely low switching losses even when they are switched at 2-3× higher voltage. The benefits of using these 10 kV and 15 kV SiC MOSFETs include simplifying from multilevel to two-level topology and removing the need for time-interleaving by improving the switching frequency from a few hundred Hz for Si based systems to ≥ 10 kHz for hard-switched SiC based systems.",
"title": ""
},
{
"docid": "becbcb6ca7ac87a3e43dbc65748b258a",
"text": "We present Mean Box Pooling, a novel visual representation that pools over CNN representations of a large number, highly overlapping object proposals. We show that such representation together with nCCA, a successful multimodal embedding technique, achieves state-of-the-art performance on the Visual Madlibs task. Moreover, inspired by the nCCA’s objective function, we extend classical CNN+LSTM approach to train the network by directly maximizing the similarity between the internal representation of the deep learning architecture and candidate answers. Again, such approach achieves a significant improvement over the prior work that also uses CNN+LSTM approach on Visual Madlibs.",
"title": ""
},
{
"docid": "c3b1ad57bab87d796562a771d469b18d",
"text": "The focus of this paper is on one diode photovoltaic cell model. The theory as well as the construction and working of photovoltaic cells using single diode method is also presented. Simulation studies are carried out with different temperatures. Based on this study a conclusion is drawn with comparison with ideal diode. General TermssIn recent years, significant photovoltaic (PV) deployment has occurred, particularly in Germany, Spain and Japan [1]. Also, PV energy is going to become an important source in coming years in Portugal, as it has highest source of sunshine radiation in Europe. Presently the tenth largest PV power plant in the world is in Moura, Portugal, which has an installed capacity of 46 MW and aims to reach 1500 MW of installed capacity by 2020, as stated by the Portuguese National Strategy ENE 2020, multiplying tenfold the existing capacity [2]. The solar cells are basically made of semiconductors which are manufactured using different process. These semiconductors [4]. The intrinsic properties and the incoming solar radiation are responsible for the type of electric energy produced [5]. The solar radiation is composed of photons of different energies, and some are absorbed at the p-n junction. Photons with energies lower than the bandgap of the solar cell are useless and generate no voltage or electric current. Photons with energy superior to the band gap generate electricity, but only the energy corresponding to the band gap is used. The remainder of energy is dissipated as heat in the body of the solar cell [6]. KeywordssPV cell, solar cell, one diode model",
"title": ""
},
{
"docid": "b137e24f41def95c5bb4776de48804ef",
"text": "Adequate sleep is essential for general healthy functioning. This paper reviews recent research on the effects of chronic sleep restriction on neurobehavioral and physiological functioning and discusses implications for health and lifestyle. Restricting sleep below an individual's optimal time in bed (TIB) can cause a range of neurobehavioral deficits, including lapses of attention, slowed working memory, reduced cognitive throughput, depressed mood, and perseveration of thought. Neurobehavioral deficits accumulate across days of partial sleep loss to levels equivalent to those found after 1 to 3 nights of total sleep loss. Recent experiments reveal that following days of chronic restriction of sleep duration below 7 hours per night, significant daytime cognitive dysfunction accumulates to levels comparable to that found after severe acute total sleep deprivation. Additionally, individual variability in neurobehavioral responses to sleep restriction appears to be stable, suggesting a trait-like (possibly genetic) differential vulnerability or compensatory changes in the neurobiological systems involved in cognition. A causal role for reduced sleep duration in adverse health outcomes remains unclear, but laboratory studies of healthy adults subjected to sleep restriction have found adverse effects on endocrine functions, metabolic and inflammatory responses, suggesting that sleep restriction produces physiological consequences that may be unhealthy.",
"title": ""
},
{
"docid": "21324c71d70ca79d2f2c7117c759c915",
"text": "The wide-spread of social media provides unprecedented sources of written language that can be used to model and infer online demographics. In this paper, we introduce a novel visual text analytics system, DemographicVis, to aid interactive analysis of such demographic information based on user-generated content. Our approach connects categorical data (demographic information) with textual data, allowing users to understand the characteristics of different demographic groups in a transparent and exploratory manner. The modeling and visualization are based on ground truth demographic information collected via a survey conducted on Reddit.com. Detailed user information is taken into our modeling process that connects the demographic groups with features that best describe the distinguishing characteristics of each group. Features including topical and linguistic are generated from the user-generated contents. Such features are then analyzed and ranked based on their ability to predict the users' demographic information. To enable interactive demographic analysis, we introduce a web-based visual interface that presents the relationship of the demographic groups, their topic interests, as well as the predictive power of various features. We present multiple case studies to showcase the utility of our visual analytics approach in exploring and understanding the interests of different demographic groups. We also report results from a comparative evaluation, showing that the DemographicVis is quantitatively superior or competitive and subjectively preferred when compared to a commercial text analysis tool.",
"title": ""
},
{
"docid": "feeb51ad0c491c86a6018e92e728c3f0",
"text": "This paper discusses why traditional reinforcement learning methods, and algorithms applied to those models, result in poor performance in situated domains characterized by multiple goals, noisy state, and inconsistent reinforcement. We propose a methodology for designing reinforcement functions that take advantage of implicit domain knowledge in order to accelerate learning in such domains. The methodology involves the use of heterogeneous reinforcement functions and progress estimators, and applies to learning in domains with a single agent or with multiple agents. The methodology is experimentally validated on a group of mobile robots learning a foraging task.",
"title": ""
},
{
"docid": "10abe464698cf38cce7df46718dfa83c",
"text": "We have developed an approach using Bayesian networks to predict protein-protein interactions genome-wide in yeast. Our method naturally weights and combines into reliable predictions genomic features only weakly associated with interaction (e.g., messenger RNAcoexpression, coessentiality, and colocalization). In addition to de novo predictions, it can integrate often noisy, experimental interaction data sets. We observe that at given levels of sensitivity, our predictions are more accurate than the existing high-throughput experimental data sets. We validate our predictions with TAP (tandem affinity purification) tagging experiments. Our analysis, which gives a comprehensive view of yeast interactions, is available at genecensus.org/intint.",
"title": ""
},
{
"docid": "5dddbc2b2c53436c9d2176045118dce5",
"text": "This work introduces a method to tune a sequence-based generative model for molecular de novo design that through augmented episodic likelihood can learn to generate structures with certain specified desirable properties. We demonstrate how this model can execute a range of tasks such as generating analogues to a query structure and generating compounds predicted to be active against a biological target. As a proof of principle, the model is first trained to generate molecules that do not contain sulphur. As a second example, the model is trained to generate analogues to the drug Celecoxib, a technique that could be used for scaffold hopping or library expansion starting from a single molecule. Finally, when tuning the model towards generating compounds predicted to be active against the dopamine receptor type 2, the model generates structures of which more than 95% are predicted to be active, including experimentally confirmed actives that have not been included in either the generative model nor the activity prediction model. Graphical abstract .",
"title": ""
},
{
"docid": "73b150681d7de50ada8e046a3027085f",
"text": "We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children’s Book Test, where it obtains competitive performance, reading the story in a single pass.",
"title": ""
},
{
"docid": "091eedcd69373f99419a745f2215e345",
"text": "Society is increasingly reliant upon complex and interconnected cyber systems to conduct daily life activities. From personal finance to managing defense capabilities to controlling a vast web of aircraft traffic, digitized information systems and software packages have become integrated at virtually all levels of individual and collective activity. While such integration has been met with immense increases in efficiency of service delivery, it has also been subject to a diverse body of threats from nefarious hackers, groups, and even state government bodies. Such cyber threats have shifted over time to affect various cyber functionalities, such as with Direct Denial of Service (DDoS), data theft, changes to data code, infection via computer virus, and many others.",
"title": ""
},
{
"docid": "923b4025d22bc146c53fb4c90f43ef72",
"text": "In this paper we describe preliminary approaches for contentbased recommendation of Pinterest boards to users. We describe our representation and features for Pinterest boards and users, together with a supervised recommendation model. We observe that features based on latent topics lead to better performance than features based on userassigned Pinterest categories. We also find that using social signals (repins, likes, etc.) can improve recommendation quality.",
"title": ""
},
{
"docid": "885764d7e71711b8f9a086d43c6e4f9a",
"text": "In Indian economy, Agriculture is the most important branch and 70 percentage of rural population livelihood depends on agricultural work. Farming is the one of the important part of Agriculture. Crop yield depends on environment’s factors like precipitation, temperature, evapotranspiration, etc. Generally farmers cultivate crop, based on previous experience. But nowadays, the uncertainty increased in environment. So, accurate analysis of historic data of environment parameters should be done for successful farming. To get more harvest, we should also do the analysis of previous cultivation data. The Prediction of crop yield can be done based on historic crop cultivation data and weather data using data mining methods. This paper describes the role of data mining in Agriculture and crop yield prediction. This paper also describes Groundnut crop yield prediction analysis and Naive Bayes Method.",
"title": ""
},
{
"docid": "49802c20c3912143ab371caca7b5c9d5",
"text": "Control theory has recently started to be applied to software engineering domain, mostly for managing the behavior of adaptive software systems under external disturbances. In general terms, the main advantage of control theory is that it can be formally proven that controllers achieve their goals (with certain characteristics), whereas the price to pay is that controllers and system-to-be-controlled have to be modeled by equations. The investigation of how suited are control theory techniques to address performance problems is, however, still at the beginning. In this paper we devise the main challenges behind the adoption of control theory in the context of Software Performance Engineering applied to adaptive software systems.",
"title": ""
},
{
"docid": "0b087e7e36bef7a6d92b8e44bd22047a",
"text": "We investigated whether the dynamics of head and facial movements apart from specific facial expressions communicate affect in infants. Age-appropriate tasks were used to elicit positive and negative affect in 28 ethnically diverse 12-month-old infants. 3D head and facial movements were tracked from 2D video. Strong effects were found for both head and facial movements. For head movement, angular velocity and angular acceleration of pitch, yaw, and roll were higher during negative relative to positive affect. For facial movement, displacement, velocity, and acceleration also increased during negative relative to positive affect. Our results suggest that the dynamics of head and facial movements communicate affect at ages as young as 12 months. These findings deepen our understanding of emotion communication and provide a basis for studying individual differences in emotion in socio-emotional development.",
"title": ""
},
{
"docid": "7b1f880c76d50f9bdec264ac589424c0",
"text": "In the software design, protecting a computer system from a plethora of software attacks or malware in the wild has been increasingly important. One branch of research to detect the existence of attacks or malware, there has been much work focused on modeling the runtime behavior of a program. Stemming from the seminal work of Forrest et al., one of the main tools to model program behavior is system call sequences. Unfortunately, however, since mimicry attacks were proposed, program behavior models based solely on system call sequences could no longer ensure the security of systems and require additional information that comes with its own drawbacks. In this paper, we report our preliminary findings in our research to build a mimicry resilient program behavior model that has lesser drawbacks. We employ branch sequences to harden our program behavior model against mimicry attacks while employing hardware features for efficient extraction of such branch information during program runtime. In order to handle the large scale of branch sequences, we also employ LSTM, the de facto standard in deep learning based sequence modeling and report our preliminary experiments on its interaction with program branch sequences.",
"title": ""
},
{
"docid": "99d57cef03e21531be9f9663ec023987",
"text": "Anton Schwartz Dept. of Computer Science Stanford University Stanford, CA 94305 Email: schwartz@cs.stanford.edu Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation. Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning.",
"title": ""
},
{
"docid": "62132ea78d0b5aa844ff25647159eedb",
"text": "Gate turn offs (GTOs) have an inherent minimum ON-OFF time, which is needed for their safe operation. For GTO-based three-level or neutral-point-clamped (NPC) inverters, this minimum ON-OFF pulsewidth limitation results in a distortion of the output voltage and current waveforms, especially in the low index modulation region. Some approaches have been previously proposed to compensate for the minimum ON pulse. However, these methods increase the inverter switching losses. Two new methods of pulsewidth-modulation (PWM) control based on: 1) adding a bias to the reference voltage of the inverter and 2) switching patterns are presented. The former method improves the output waveforms, but increases the switching losses; while the latter improves the output waveforms without increasing the switching losses. The fluctuations of the neutral-point voltage are also reduced using this method. The theoretical and practical aspects as well as the experimental results are presented in this paper.",
"title": ""
},
{
"docid": "ed34383cada585951e1dcc62445d08c2",
"text": "The increasing volume of e-mail and other technologically enabled communications are widely regarded as a growing source of stress in people’s lives. Yet research also suggests that new media afford people additional flexibility and control by enabling them to communicate from anywhere at any time. Using a combination of quantitative and qualitative data, this paper builds theory that unravels this apparent contradiction. As the literature would predict, we found that the more time people spent handling e-mail, the greater was their sense of being overloaded, and the more e-mail they processed, the greater their perceived ability to cope. Contrary to assumptions of prior studies, we found no evidence that time spent working mediates e-mail-related overload. Instead, e-mail’s material properties entwined with social norms and interpretations in a way that led informants to single out e-mail as a cultural symbol of the overload they experience in their lives. Moreover, by serving as a symbol, e-mail distracted people from recognizing other sources of overload in their work lives. Our study deepens our understanding of the impact of communication technologies on people’s lives and helps untangle those technologies’ seemingly contradictory influences.",
"title": ""
},
{
"docid": "ef39b902bb50be657b3b9626298da567",
"text": "We consider the problem of node positioning in ad hoc networks. We propose a distributed, infrastructure-free positioning algorithm that does not rely on GPS (Global Positioning System). Instead, the algorithm uses the distances between the nodes to build a relative coordinate system in which the node positions are computed in two dimensions. Despite the distance measurement errors and the motion of the nodes, the algorithm provides sufficient location information and accuracy to support basic network functions. Examples of applications where this algorithm can be used include Location Aided Routing [10] and Geodesic Packet Forwarding [2]. Another example are sensor networks, where mobility is less of a problem. The main contribution of this work is to define and compute relative positions of the nodes in an ad hoc network without using GPS. We further explain how the proposed approach can be applied to wide area ad hoc networks.",
"title": ""
}
] |
scidocsrr
|
73029a1266cec9efb2777e1f915c7c94
|
Predictive positioning and quality of service ridesharing for campus mobility on demand systems
|
[
{
"docid": "40f21a8702b9a0319410b716bda0a11e",
"text": "A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90's. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance. An important aspect of our study is the use of a variety of performance criteria to evaluate the learning methods.",
"title": ""
}
] |
[
{
"docid": "a75a8a6a149adf80f6ec65dea2b0ec0d",
"text": "This research addresses the role of lyrics in the music emotion recognition process. Our approach is based on several state of the art features complemented by novel stylistic, structural and semantic features. To evaluate our approach, we created a ground truth dataset containing 180 song lyrics, according to Russell's emotion model. We conduct four types of experiments: regression and classification by quadrant, arousal and valence categories. Comparing to the state of the art features (ngrams - baseline), adding other features, including novel features, improved the F-measure from 69.9, 82.7 and 85.6 percent to 80.1, 88.3 and 90 percent, respectively for the three classification experiments. To study the relation between features and emotions (quadrants) we performed experiments to identify the best features that allow to describe and discriminate each quadrant. To further validate these experiments, we built a validation set comprising 771 lyrics extracted from the AllMusic platform, having achieved 73.6 percent F-measure in the classification by quadrants. We also conducted experiments to identify interpretable rules that show the relation between features and emotions and the relation among features. Regarding regression, results show that, comparing to similar studies for audio, we achieve a similar performance for arousal and a much better performance for valence.",
"title": ""
},
{
"docid": "387e02e65ff994691ae8ae95b7c7f69c",
"text": "Real world data sets usually have many features, which increases the complexity of data mining task. Feature selection, as a preprocessing step to the data mining, has been shown very effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving comprehensibility. To find the optimal feature subsets is the aim of feature selection. Rough sets theory provides a mathematical approach to find optimal feature subset, but this approach is time consuming. In this paper, we propose a novel heuristic algorithm based on rough sets theory to find out the feature subset. This algorithm employs appearing frequency of attribute as heuristic information. Experiment results show in most times our algorithm can find out optimal feature subset quickly and efficiently.",
"title": ""
},
{
"docid": "209ff14abd0b16496af29c143b0fa274",
"text": "Automated text categorization is an important technique for many web applications, such as document indexing, document filtering, and cataloging web resources. Many different approaches have been proposed for the automated text categorization problem. Among them, centroid-based approaches have the advantages of short training time and testing time due to its computational efficiency. As a result, centroid-based classifiers have been widely used in many web applications. However, the accuracy of centroid-based classifiers is inferior to SVM, mainly because centroids found during construction are far from perfect locations.\n We design a fast Class-Feature-Centroid (CFC) classifier for multi-class, single-label text categorization. In CFC, a centroid is built from two important class distributions: inter-class term index and inner-class term index. CFC proposes a novel combination of these indices and employs a denormalized cosine measure to calculate the similarity score between a text vector and a centroid. Experiments on the Reuters-21578 corpus and 20-newsgroup email collection show that CFC consistently outperforms the state-of-the-art SVM classifiers on both micro-F1 and macro-F1 scores. Particularly, CFC is more effective and robust than SVM when data is sparse.",
"title": ""
},
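For context on the record above, a generic centroid-based text classifier with the "denormalized cosine" scoring it mentions can be sketched as follows; only the document vector is length-normalized, the centroid magnitudes are kept. The specific inter-class and inner-class term indices of CFC are defined in the paper and are not reproduced here; the plain class-mean centroid below is an assumption for illustration.

```python
import numpy as np

def build_centroids(X, y):
    """One un-normalized centroid per class; here simply the mean term-weight vector."""
    classes = sorted(set(y))
    return classes, np.stack([X[np.array(y) == c].mean(axis=0) for c in classes])

def predict(X, classes, centroids):
    """Denormalized cosine: normalize the document vectors only, keep centroid magnitudes."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    scores = (X / norms) @ centroids.T
    return [classes[i] for i in scores.argmax(axis=1)]

# Toy term-weight matrix (4 documents x 3 terms) and labels
X = np.array([[2.0, 0.1, 0.0], [1.5, 0.0, 0.2], [0.0, 1.8, 1.1], [0.1, 2.2, 0.9]])
y = ["sports", "sports", "politics", "politics"]
classes, centroids = build_centroids(X, y)
print(predict(X, classes, centroids))
```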
{
"docid": "d54ad1a912a0b174d1f565582c6caf1c",
"text": "This paper presents a new novel design of a smart walker for rehabilitation purpose by patients in hospitals and rehabilitation centers. The design features a full frame walker that provides secured and stable support while being foldable and compact. It also has smart features such as telecommunication and patient activity monitoring.",
"title": ""
},
{
"docid": "a8f86ab8e448fe7e69e988de67668b96",
"text": "Batch Normalization (BN) has proven to be an effective algorithm for deep neural network training by normalizing the input to each neuron and reducing the internal covariate shift. The space of weight vectors in the BN layer can be naturally interpreted as a Riemannian manifold, which is invariant to linear scaling of weights. Following the intrinsic geometry of this manifold provides a new learning rule that is more efficient and easier to analyze. We also propose intuitive and effective gradient clipping and regularization methods for the proposed algorithm by utilizing the geometry of the manifold. The resulting algorithm consistently outperforms the original BN on various types of network architectures and datasets.",
"title": ""
},
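For reference on the record above, the standard Batch Normalization transform and the scale invariance that motivates the manifold view can be written as follows (standard definitions, not taken from this dataset):

```latex
\hat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}},
\qquad
y_i = \gamma\,\hat{x}_i + \beta,
\qquad
\mathrm{BN}\!\left(\alpha\, w^{\top} x\right) = \mathrm{BN}\!\left(w^{\top} x\right) \quad \text{for any } \alpha > 0,
```

so only the direction of the incoming weight vector matters, which is why the weight space can be treated as a sphere-like Riemannian manifold rather than all of Euclidean space.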
{
"docid": "a7373d69f5ff9d894a630cc240350818",
"text": "The Capability Maturity Model for Software (CMM), developed by the Software Engineering Institute, and the ISO 9000 series of standards, developed by the International Standards Organization, share a common concern with quality and process management. The two are driven by similar concerns and intuitively correlated. The purpose of this report is to contrast the CMM and ISO 9001, showing both their differences and their similarities. The results of the analysis indicate that, although an ISO 9001-compliant organization would not necessarily satisfy all of the level 2 key process areas, it would satisfy most of the level 2 goals and many of the level 3 goals. Because there are practices in the CMM that are not addressed in ISO 9000, it is possible for a level 1 organization to receive ISO 9001 registration; similarly, there are areas addressed by ISO 9001 that are not addressed in the CMM. A level 3 organization would have little difficulty in obtaining ISO 9001 certification, and a level 2 organization would have significant advantages in obtaining certification.",
"title": ""
},
{
"docid": "b6aa2f8fcbddb651207b4207f676320d",
"text": "Test coverage prediction for board assemblies has an important function in, among others, test engineering, test cost modeling, test strategy definition and product quality estimation. Introducing a method that defines how this coverage is calculated can increase the value of such prediction across the electronics industry. There are three aspects to test coverage calculation: fault modeling, coverage-per-fault and total coverage. An abstraction level for fault categories is introduced, called MPS (material, placement, soldering) that enables us to compare coverage results using different fault models. Additionally, the rule-based fault coverage estimation and the weighted coverage calculation are discussed. This paper was submitted under the ITC Special Board and System Test Call-for-Papers that had an extended due-date. As such, the full text of the paper was not available in time for inclusion in the general volume of the 2003 ITC Proceedings. The full text is available in 2003 ITC Proceedings— Board and System Test Track. ITC INTERNATIONAL TEST CONFERENCE Proceedings of the International Test Conference 2003 (ITC’03) 1089-3539/03 $ 17.00 © 2003 IEEE",
"title": ""
},
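An illustrative aside: in its simplest reading, the weighted coverage calculation mentioned above is a weighted average of per-fault-category coverage over the MPS categories. The category weights in the sketch below are hypothetical, not values from the paper.

```python
def weighted_coverage(per_category_coverage, weights):
    """Weighted average of per-fault-category coverage figures (values in [0, 1])."""
    total_weight = sum(weights[c] for c in per_category_coverage)
    return sum(per_category_coverage[c] * weights[c] for c in per_category_coverage) / total_weight

# Hypothetical coverage and weights for the three MPS fault categories of one test step
coverage = {"material": 0.90, "placement": 0.75, "soldering": 0.60}
weights = {"material": 1.0, "placement": 2.0, "soldering": 3.0}   # assumed defect likelihoods
print(round(weighted_coverage(coverage, weights), 3))             # 0.7
```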
{
"docid": "ac1302f482309273d9e61fdf0f093e01",
"text": "Retinal vessel segmentation is an indispensable step for automatic detection of retinal diseases with fundoscopic images. Though many approaches have been proposed, existing methods tend to miss fine vessels or allow false positives at terminal branches. Let alone undersegmentation, over-segmentation is also problematic when quantitative studies need to measure the precise width of vessels. In this paper, we present a method that generates the precise map of retinal vessels using generative adversarial training. Our methods achieve dice coefficient of 0.829 on DRIVE dataset and 0.834 on STARE dataset which is the state-of-the-art performance on both datasets.",
"title": ""
},
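An illustrative aside on the metric quoted above: the Dice coefficient measures overlap between a predicted binary vessel mask and the ground-truth mask. Minimal numpy sketch on tiny made-up masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks given as numpy arrays of 0/1."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Tiny hypothetical masks
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(float(dice_coefficient(pred, target)), 3))   # 2*2/(3+3) = 0.667
```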
{
"docid": "f355ed837561186cff4e7492470d6ae7",
"text": "Notions of Bayesian analysis are reviewed, with emphasis on Bayesian modeling and Bayesian calculation. A general hierarchical model for time series analysis is then presented and discussed. Both discrete time and continuous time formulations are discussed. An brief overview of generalizations of the fundamental hierarchical time series model concludes the article. Much of the Bayesian viewpoint can be argued (as by Jeereys and Jaynes, for examples) as direct application of the theory of probability. In this article the suggested approach for the construction of Bayesian time series models relies on probability theory to provide decompositions of complex joint probability distributions. Speciically, I refer to the familiar factorization of a joint density into an appropriate product of conditionals. Let x and y represent two random variables. I will not diierentiate between random variables and their realizations. Also, I will use an increasingly popular generic notation for probability densities: x] represents the density of x, xjy] is the conditional density of x given y, and x; y] denotes the joint density of x and y. In this notation we can write \\Bayes's Theorem\" as yjx] = xjy]]y]=x]: (1) y",
"title": ""
},
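The bracket-notation identity in the record above (its equation (1)), together with the kind of hierarchical factorization such models rely on, can be written in LaTeX as follows; the symbols theta and phi below are generic placeholders for process and hyper-parameters, not notation taken from the paper:

```latex
[y \mid x] \;=\; \frac{[x \mid y]\,[y]}{[x]},
\qquad
[x, \theta, \phi] \;=\; [x \mid \theta]\,[\theta \mid \phi]\,[\phi],
```

where square brackets denote densities and a vertical bar denotes conditioning.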
{
"docid": "76262c43c175646d7a00e02a7a49ab81",
"text": "Self-compassion has been linked to higher levels of psychological well-being. The current study evaluated whether this effect also extends to a more adaptive food intake process. More specifically, this study investigated the relationship between self-compassion and intuitive eating among 322 college women. In order to further clarify the nature of this relationship this research additionally examined the indirect effects of self-compassion on intuitive eating through the pathways of distress tolerance and body image acceptance and action using both parametric and non-parametric bootstrap resampling analytic procedures. Results based on responses to the self-report measures of the constructs of interest indicated that individual differences in body image acceptance and action (β = .31, p < .001) but not distress tolerance (β = .00, p = .94) helped explain the relationship between self-compassion and intuitive eating. This effect was retained in a subsequent model adjusted for body mass index (BMI) and self-esteem (β = .19, p < .05). Results provide preliminary support for a complementary perspective on the role of acceptance in the context of intuitive eating to that of existing theory and research. The present findings also suggest the need for additional research as it relates to the development and fostering of self-compassion as well as the potential clinical implications of using acceptance-based interventions for college-aged women currently engaging in or who are at risk for disordered eating patterns.",
"title": ""
},
{
"docid": "e415deac22afd9221995385e681b7f63",
"text": "AIM & OBJECTIVES\nThe purpose of this in vitro study was to evaluate and compare the microleakage of pit and fissure sealants after using six different preparation techniques: (a) brush, (b) pumice slurry application, (c) bur, (d) air polishing, (e) air abrasion, and (f) longer etching time.\n\n\nMATERIAL & METHOD\nThe study was conducted on 60 caries-free first premolars extracted for orthodontic purpose. These teeth were randomly assigned to six groups of 10 teeth each. Teeth were prepared using one of six occlusal surface treatments prior to placement of Clinpro\" 3M ESPE light-cured sealant. The teeth were thermocycled for 500 cycles and stored in 0.9% normal saline. Teeth were sealed apically and coated with nail varnish 1 mm from the margin and stained in 1% methylene blue for 24 hours. Each tooth was divided buccolingually parallel to the long axis of the tooth, yielding two sections per tooth for analysis. The surfaces were scored from 0 to 2 for the extent of microleakage.\n\n\nSTATISTICAL ANALYSIS\nResults obtained for microleakage were analyzed by using t-tests at sectional level and chi-square test and analysis of variance (ANOVA) at the group level.\n\n\nRESULTS\nThe results of round bur group were significantly superior when compared to all other groups. The application of air polishing and air abrasion showed better results than pumice slurry, bristle brush, and longer etching time. Round bur group was the most successful cleaning and preparing technique. Air polishing and air abrasion produced significantly less microleakage than traditional pumice slurry, bristle brush, and longer etching time.",
"title": ""
},
{
"docid": "3c999f3104ac98b010a2147c7b8ddaa0",
"text": "Many Big Data technologies were built to enable the processing of human generated data, setting aside the enormous amount of data generated from Machine-to-Machine (M2M) interactions. M2M interactions create real-time data streams that are much more structured, often in the form of series of event occurrences. In this paper, we provide an overview on the main research issues confronted by existing Complex Event Processing (CEP) techniques, as a starting point for Big Data applications that enable the monitoring of complex event occurrences in M2M interactions.",
"title": ""
},
{
"docid": "77a156afb22bbecd37d0db073ef06492",
"text": "Rhonda Farrell University of Fairfax, Vienna, VA ABSTRACT While acknowledging the many benefits that cloud computing solutions bring to the world, it is important to note that recent research and studies of these technologies have identified a myriad of potential governance, risk, and compliance (GRC) issues. While industry clearly acknowledges their existence and seeks to them as much as possible, timing-wise it is still well before the legal framework has been put in place to adequately protect and adequately respond to these new and differing global challenges. This paper seeks to inform the potential cloud adopter, not only of the perceived great technological benefit, but to also bring to light the potential security, privacy, and related GRC issues which will need to be prioritized, managed, and mitigated before full implementation occurs.",
"title": ""
},
{
"docid": "8308358ee1d9040b3f62b646edcc8578",
"text": "The application of GaN on SiC technology to wideband power amplifier MMICs is explored. The unique characteristics of GaN on SiC applied to reactively matched and distributed wideband circuit topologies are discussed, including comparison to GaAs technology. A 2 – 18 GHz 11W power amplifier MMIC is presented as an example.",
"title": ""
},
{
"docid": "29495e389441ff61d5efad10ad38e995",
"text": "The natural world is infinitely diverse, yet this diversity arises from a relatively small set of coherent properties and rules, such as the laws of physics or chemistry. We conjecture that biological intelligent systems are able to survive within their diverse environments by discovering the regularities that arise from these rules primarily through unsupervised experiences, and representing this knowledge as abstract concepts. Such representations possess useful properties of compositionality and hierarchical organisation, which allow intelligent agents to recombine a finite set of conceptual building blocks into an exponentially large set of useful new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such concepts in the visual domain. We first use the previously published β-VAE (Higgins et al., 2017a) architecture to learn a disentangled representation of the latent structure of the visual world, before training SCAN to extract abstract concepts grounded in such disentangled visual primitives through fast symbol association. Our approach requires very few pairings between symbols and images and makes no assumptions about the choice of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of compositional visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to invent and learn novel visual concepts through recombination of the few learnt concepts.",
"title": ""
},
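For context, the β-VAE objective that SCAN builds on (Higgins et al., 2017a, as cited in the record above) is usually written as follows; this is the standard published form, not text from this dataset:

```latex
\mathcal{L}(\theta, \phi; x, \beta) \;=\;
\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
\;-\; \beta \, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right),
```

with β > 1 encouraging the disentangled visual primitives that SCAN then grounds symbols in.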
{
"docid": "12344e450dbfba01476353e38f83358f",
"text": "This paper explores four issues that have emerged from the research on social, cognitive and teaching presence in an online community of inquiry. The early research in the area of online communities of inquiry has raised several issues with regard to the creation and maintenance of social, cognitive and teaching presence that require further research and analysis. The other overarching issue is the methodological validity associated with the community of inquiry framework. The first issue is about shifting social presence from socio-emotional support to a focus on group cohesion (from personal to purposeful relationships). The second issue concerns the progressive development of cognitive presence (inquiry) from exploration to resolution. That is, moving discussion beyond the exploration phase. The third issue has to do with how we conceive of teaching presence (design, facilitation, direct instruction). More specifically, is there an important distinction between facilitation and direct instruction? Finally, the methodological issue concerns qualitative transcript analysis and the validity of the coding protocol.",
"title": ""
},
{
"docid": "9b96a97426917b18dab401423e777b92",
"text": "Anatomical and biophysical modeling of left atrium (LA) and proximal pulmonary veins (PPVs) is important for clinical management of several cardiac diseases. Magnetic resonance imaging (MRI) allows qualitative assessment of LA and PPVs through visualization. However, there is a strong need for an advanced image segmentation method to be applied to cardiac MRI for quantitative analysis of LA and PPVs. In this study, we address this unmet clinical need by exploring a new deep learning-based segmentation strategy for quantification of LA and PPVs with high accuracy and heightened efficiency. Our approach is based on a multi-view convolutional neural network (CNN) with an adaptive fusion strategy and a new loss function that allows fast and more accurate convergence of the backpropagation based optimization. After training our network from scratch by using more than 60K 2D MRI images (slices), we have evaluated our segmentation strategy to the STACOM 2013 cardiac segmentation challenge benchmark. Qualitative and quantitative evaluations, obtained from the segmentation challenge, indicate that the proposed method achieved the state-of-the-art sensitivity (90%), specificity (99%), precision (94%), and efficiency levels (10 seconds in GPU, and 7.5 minutes in CPU).",
"title": ""
},
{
"docid": "0b12d6a973130f7317956326320ded03",
"text": "We present simple and computationally efficient nonparametric estimators of Rényi entropy and mutual information based on an i.i.d. sample drawn from an unknown, absolutely continuous distribution over R. The estimators are calculated as the sum of p-th powers of the Euclidean lengths of the edges of the ‘generalized nearest-neighbor’ graph of the sample and the empirical copula of the sample respectively. For the first time, we prove the almost sure consistency of these estimators and upper bounds on their rates of convergence, the latter of which under the assumption that the density underlying the sample is Lipschitz continuous. Experiments demonstrate their usefulness in independent subspace analysis.",
"title": ""
},
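For reference, the quantity being estimated in the record above is the Rényi entropy of order α (α > 0, α ≠ 1) of a density f; this is the standard definition, while the specific graph-based estimator is given in the paper:

```latex
H_\alpha(f) \;=\; \frac{1}{1-\alpha}\,\log \int f^{\alpha}(x)\,\mathrm{d}x .
```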
{
"docid": "e9ff17015d40f5c6dd5091557f336f43",
"text": "Web sites that accept and display content such as wiki articles or comments typically filter the content to prevent injected script code from running in browsers that view the site. The diversity of browser rendering algorithms and the desire to allow rich content make filtering quite difficult, however, and attacks such as the Samy and Yamanner worms have exploited filtering weaknesses. This paper proposes a simple alternative mechanism for preventing script injection called Browser-Enforced Embedded Policies (BEEP). The idea is that a web site can embed a policy in its pages that specifies which scripts are allowed to run. The browser, which knows exactly when it will run a script, can enforce this policy perfectly. We have added BEEP support to several browsers, and built tools to simplify adding policies to web applications. We found that supporting BEEP in browsers requires only small and localized modifications, modifying web applications requires minimal effort, and enforcing policies is generally lightweight.",
"title": ""
}
] |
scidocsrr
|
5482469ec3f304c0e5052cf269e6e52e
|
Velocity and Acceleration Cones for Kinematic and Dynamic Constraints on Omni-Directional Mobile Robots
|
[
{
"docid": "b09dd4fee4d7cdce61c153a822eadb65",
"text": "A dynamic model is presented for omnidirectional wheeled mobile robots, including wheel/motion surface slip. We derive the dynamics model, experimentally measure friction coefficients, and measure the force to cause slip (to validate our friction model). Dynamic simulation examples are presented to demonstrate omnidirectional motion with slip. After developing an improved friction model, compared to our initial model, the simulation results agree well with experimentally-measured trajectory data with slip. Initially, we thought that only high robot velocity and acceleration governed the resulting slipping motion. However, we learned that the rigid material existing in the discontinuities between omnidirectional wheel rollers plays an equally important role in determining omnidirectional mobile robot dynamic slip motion, even at low rates and accelerations.",
"title": ""
}
] |
[
{
"docid": "62fa4f8712a4fcc1a3a2b6148bd3589b",
"text": "In this paper we discuss the development and application of a large formal ontology to the semantic web. The Suggested Upper Merged Ontology (SUMO) (Niles & Pease, 2001) (SUMO, 2002) is a “starter document” in the IEEE Standard Upper Ontology effort. This upper ontology is extremely broad in scope and can serve as a semantic foundation for search, interoperation, and communication on the semantic web.",
"title": ""
},
{
"docid": "c8a2ba8f47266d0a63281a5abb5fa47f",
"text": "Hair plays an important role in human appearance. However, hair segmentation is still a challenging problem partially due to the lack of an effective model to handle its arbitrary shape variations. In this paper, we present a part-based model robust to hair shape and environment variations. The key idea of our method is to identify local parts by promoting the effectiveness of the part-based model. To this end, we propose a measurable statistic, called Subspace Clustering Dependency (SC-Dependency), to estimate the co-occurrence probabilities between local shapes. SC-Dependency guarantees output reasonability and allows us to evaluate the effectiveness of part-wise constraints in an information-theoretic way. Then we formulate the part identification problem as an MRF that aims to optimize the effectiveness of the potential functions. Experiments are performed on a set of consumer images and show our algorithm's capability and robustness to handle hair shape variations and extreme environment conditions.",
"title": ""
},
{
"docid": "bfd834ddda77706264fa458302549325",
"text": "Deep learning has emerged as a new methodology with continuous interests in artificial intelligence, and it can be applied in various business fields for better performance. In fashion business, deep learning, especially Convolutional Neural Network (CNN), is used in classification of apparel image. However, apparel classification can be difficult due to various apparel categories and lack of labeled image data for each category. Therefore, we propose to pre-train the GoogLeNet architecture on ImageNet dataset and fine-tune on our fine-grained fashion dataset based on design attributes. This will complement the small size of dataset and reduce the training time. After 10-fold experiments, the average final test accuracy results 62%.",
"title": ""
},
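A minimal PyTorch-style sketch of the pre-train/fine-tune recipe described above, assuming torchvision's GoogLeNet weights, a hypothetical class count, and a hypothetical `ImageFolder` directory of fashion images; the study's actual data and hyperparameters are not reproduced here.

```python
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

# ImageNet-pretrained GoogLeNet with the classifier head replaced for our classes
num_classes = 15                                    # hypothetical number of design-attribute classes
model = models.googlenet(weights="IMAGENET1K_V1")   # older torchvision: models.googlenet(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Standard ImageNet preprocessing applied to the fashion images
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("fashion_dataset/train", transform=preprocess)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for images, labels in loader:                       # one pass shown; repeat for more epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```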
{
"docid": "317b7998eb27384c1655dd9f4dca1787",
"text": "Composite rhytidectomy added the repositioning of the orbicularis oculi muscle to the deep plane face lift to achieve a more harmonious appearance of the face by adding periorbital rejuvenation. By not separating the orbicularis oculi from the zygomaticus minor and by extending the dissection under medial portions of the zygomaticus major and minor muscles, a more significant improvement in composite rhytidectomy can now be achieved. A thin nonrestrictive mesentery between the deep plane face lift dissection and the zygorbicular dissection still allows vertical movement of the composite face lift flap without interrupting the intimate relationship between the platysma, cheek fat, and orbicularis oculi muscle. This modification eliminates the occasional prolonged edema and occasional temporary dystonia previously observed. It allows the continuation of the use of the arcus marginalis release, which has also been modified by resetting the septum orbitale over the orbital rim. These two modifications allow a more predictable and impressive result. They reinforce the concept of periorbital rejuvenation as an integral part of facial rejuvenation, which not only produces a more harmonious immediate result but prevents the possible unfavorable sequelae of conventional rhytidectomy and lower blepharoplasty.",
"title": ""
},
{
"docid": "9b37cc1d96d9a24e500c572fa2cb339a",
"text": "Site-based or topic-specific search engines work with mixed success because of the general difficulty of the information retrieval task, and the lack of good link information to allow authorities to be identified. We are advocating an open source approach to the problem due to its scope and need for software components. We have adopted a topic-based search engine because it represents the next generation of capability. This paper outlines our scalable system for site-based or topic-specific search, and demonstrates the developing system on a small 250,000 document collection of EU and UN web pages.",
"title": ""
},
{
"docid": "02d8c55750904b7f4794139bcfa51693",
"text": "BACKGROUND\nMore than one-third of deaths during the first five years of life are attributed to undernutrition, which are mostly preventable through economic development and public health measures. To alleviate this problem, it is necessary to determine the nature, magnitude and determinants of undernutrition. However, there is lack of evidence in agro-pastoralist communities like Bule Hora district. Therefore, this study assessed magnitude and factors associated with undernutrition in children who are 6-59 months of age in agro-pastoral community of Bule Hora District, South Ethiopia.\n\n\nMETHODS\nA community based cross-sectional study design was used to assess the magnitude and factors associated with undernutrition in children between 6-59 months. A structured questionnaire was used to collect data from 796 children paired with their mothers. Anthropometric measurements and determinant factors were collected. SPSS version 16.0 statistical software was used for analysis. Bivariate and multivariate logistic regression analyses were conducted to identify factors associated to nutritional status of the children Statistical association was declared significant if p-value was less than 0.05.\n\n\nRESULTS\nAmong study participants, 47.6%, 29.2% and 13.4% of them were stunted, underweight, and wasted respectively. Presence of diarrhea in the past two weeks, male sex, uneducated fathers and > 4 children ever born to a mother were significantly associated with being underweight. Presence of diarrhea in the past two weeks, male sex and pre-lacteal feeding were significantly associated with stunting. Similarly, presence of diarrhea in the past two weeks, age at complementary feed was started and not using family planning methods were associated to wasting.\n\n\nCONCLUSION\nUndernutrition is very common in under-five children of Bule Hora district. Factors associated to nutritional status of children in agro-pastoralist are similar to the agrarian community. Diarrheal morbidity was associated with all forms of Protein energy malnutrition. Family planning utilization decreases the risk of stunting and underweight. Feeding practices (pre-lacteal feeding and complementary feeding practice) were also related to undernutrition. Thus, nutritional intervention program in Bule Hora district in Ethiopia should focus on these factors.",
"title": ""
},
{
"docid": "dd92ee7d7f38cda187bfb26e9d4d258b",
"text": "Crowdsourcing” is a relatively recent concept that encompasses many practices. This diversity leads to the blurring of the limits of crowdsourcing that may be identified virtually with any type of Internet-based collaborative activity, such as co-creation or user innovation. Varying definitions of crowdsourcing exist and therefore, some authors present certain specific examples of crowdsourcing as paradigmatic, while others present the same examples as the opposite. In this paper, existing definitions of crowdsourcing are analyzed to extract common elements and to establish the basic characteristics of any crowdsourcing initiative. Based on these existing definitions, an exhaustive and consistent definition for crowdsourcing is presented and contrasted in eleven cases.",
"title": ""
},
{
"docid": "03a55678d5f25f710274323abf71f48c",
"text": "Ontologies are an explicit specification of a conceptualization, that is understood to be an abstract and simplified version of the world to be represented. In recent years, ontologies have been used in Ubiquitous Computing, especially for the development of context-aware applications. In this paper, we offer a taxonomy for classifying ontologies used in Ubiquitous Computing, in which two main categories are distinguished: Domain ontologies, created to represent and communicate agreed knowledge within some sub-domain of Ubiquitous Computing; and Ontologies as software artifacts, when ontologies play the role of an additional type of artifact in ubiquitous computing applications. The latter category is subdivided according with the moment in that ontologies are used: at development time or at run time. Also, we analyze and classify (based on this taxonomy) some recently published works.",
"title": ""
},
{
"docid": "72f3800a072c2844f6ec145788c0749e",
"text": "In Augmented Reality (AR), interfaces consist of a blend of both real and virtual content. In this paper we examine existing gaming styles played in the real world or on computers. We discuss the strengths and weaknesses of these mediums within an informal model of gaming experience split into four aspects; physical, mental, social and emotional. We find that their strengths are mostly complementary, and argue that games built in AR can blend them to enhance existing game styles and open up new ones. To illustrate these ideas, we present our work on AR Worms, a re-implementation of the classic computer game Worms using Augmented Reality. We discuss how AR has enabled us to start exploring interfaces for gaming, and present informal observations of players at several demonstrations. Finally, we present some ideas for AR games in the area of strategy and role playing games.",
"title": ""
},
{
"docid": "98b603ed5be37165cc22da7650023d7d",
"text": "One reason that word learning presents a challenge for children is because pairings between word forms and meanings are arbitrary conventions that children must learn via observation - e.g., the fact that \"shovel\" labels shovels. The present studies explore cases in which children might bypass observational learning and spontaneously infer new word meanings: By exploiting the fact that many words are flexible and systematically encode multiple, related meanings. For example, words like shovel and hammer are nouns for instruments, and verbs for activities involving those instruments. The present studies explored whether 3- to 5-year-old children possess semantic generalizations about lexical flexibility, and can use these generalizations to infer new word meanings: Upon learning that dax labels an activity involving an instrument, do children spontaneously infer that dax can also label the instrument itself? Across four studies, we show that at least by age four, children spontaneously generalize instrument-activity flexibility to new words. Together, our findings point to a powerful way in which children may build their vocabulary, by leveraging the fact that words are linked to multiple meanings in systematic ways.",
"title": ""
},
{
"docid": "d71040311b8753299377b02023ba5b4c",
"text": "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Ex-ploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.",
"title": ""
},
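An illustrative aside, not the paper's implementation: the left-right consistency idea above penalizes disagreement between the left disparity map and the right disparity map sampled at the disparity-shifted location. A simplified numpy sketch with nearest-neighbour sampling and an assumed sign convention (the network in the paper uses differentiable bilinear sampling instead):

```python
import numpy as np

def lr_consistency_loss(disp_left, disp_right):
    """Mean |d_L(x, y) - d_R(x - d_L(x, y), y)|; border pixels are simply clipped here."""
    h, w = disp_left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sample_x = np.clip(np.rint(xs - disp_left).astype(int), 0, w - 1)  # sign is an assumption
    return float(np.abs(disp_left - disp_right[ys, sample_x]).mean())

# Constant 1-pixel disparity is perfectly left-right consistent, so the loss is 0.0
disp_left = np.full((2, 3), 1.0)
disp_right = np.full((2, 3), 1.0)
print(lr_consistency_loss(disp_left, disp_right))
```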
{
"docid": "cc8b634daad1088aa9f4c43222fab279",
"text": "In this paper, a comparision between the conventional LSTM network and the one-dimensional grid LSTM network applied on single word speech recognition is conducted. The performance of the networks are measured in terms of accuracy and training time. The conventional LSTM model is the current state of the art method to model speech recognition. However, the grid LSTM architecture has proven to be successful in solving other emperical tasks such as translation and handwriting recognition. When implementing the two networks in the same training framework with the same training data of single word audio files, the conventional LSTM network yielded an accuracy rate of 64.8 % while the grid LSTM network yielded an accuracy rate of 65.2 %. Statistically, there was no difference in the accuracy rate between the models. In addition, the conventional LSTM network took 2 % longer to train. However, this difference in training time is considered to be of little significance when tralnslating it to absolute time. Thus, it can be concluded that the one-dimensional grid LSTM model performs just as well as the conventional one.",
"title": ""
},
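An aside on the claim above that 64.8% vs. 65.2% accuracy shows no statistically significant difference: one common way to check such a claim is a two-proportion z-test. The sketch below uses a hypothetical number of test utterances per network, since the record does not state the test-set size:

```python
from math import sqrt, erf

def two_proportion_z(correct1, n1, correct2, n2):
    """z statistic and two-sided p-value for the difference between two accuracy rates."""
    p1, p2 = correct1 / n1, correct2 / n2
    pooled = (correct1 + correct2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    return z, p_value

# Hypothetical test set of 1000 utterances per network
print(two_proportion_z(648, 1000, 652, 1000))   # small |z|, p-value near 0.85
```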
{
"docid": "157c084aa6622c74449f248f98314051",
"text": "A magnetically-tuned multi-mode VCO featuring an ultra-wide frequency tuning range is presented. By changing the magnetic coupling coefficient between the primary and secondary coils in the transformer tank, the frequency tuning range of a dual-band VCO is greatly increased to continuously cover the whole E-band. Fabricated in a 65-nm CMOS process, the presented VCO measures a tuning range of 44.2% from 57.5 to 90.1 GHz while consuming 7mA to 9mA at 1.2V supply. The measured phase noises at 10MHz offset from carrier frequencies of 72.2, 80.5 and 90.1 GHz are -111.8, -108.9 and -105 dBc/Hz, respectively, which corresponds to a FOMT between -192.2 and -184.2dBc/Hz.",
"title": ""
},
{
"docid": "14d9343bbe4ad2dd4c2c27cb5d6795cd",
"text": "In the paper a method of translation applied in a new system TGT is discussed. TGT translates texts written in Polish into corresponding utterances in the Polish sign language. Discussion is focused on text-into-text translation phase. Proper translation is done on the level of a predicative representation of the sentence. The representation is built on the basis of syntactic graph that depicts the composition and mutual connections of syntactic groups, which exist in the sentence and are identified at the syntactic analysis stage. An essential element of translation process is complementing the initial predicative graph with nodes, which correspond to lacking sentence members. The method acts for primitive sentences as well as for compound ones, with some limitations, however. A translation example is given which illustrates main transformations done on the linguistic level. It is complemented by samples of images generated by the animating part of the system.",
"title": ""
},
{
"docid": "2438a082eac9852d3dbcea22aa0402b2",
"text": "Importance\nDietary modification remains key to successful weight loss. Yet, no one dietary strategy is consistently superior to others for the general population. Previous research suggests genotype or insulin-glucose dynamics may modify the effects of diets.\n\n\nObjective\nTo determine the effect of a healthy low-fat (HLF) diet vs a healthy low-carbohydrate (HLC) diet on weight change and if genotype pattern or insulin secretion are related to the dietary effects on weight loss.\n\n\nDesign, Setting, and Participants\nThe Diet Intervention Examining The Factors Interacting with Treatment Success (DIETFITS) randomized clinical trial included 609 adults aged 18 to 50 years without diabetes with a body mass index between 28 and 40. The trial enrollment was from January 29, 2013, through April 14, 2015; the date of final follow-up was May 16, 2016. Participants were randomized to the 12-month HLF or HLC diet. The study also tested whether 3 single-nucleotide polymorphism multilocus genotype responsiveness patterns or insulin secretion (INS-30; blood concentration of insulin 30 minutes after a glucose challenge) were associated with weight loss.\n\n\nInterventions\nHealth educators delivered the behavior modification intervention to HLF (n = 305) and HLC (n = 304) participants via 22 diet-specific small group sessions administered over 12 months. The sessions focused on ways to achieve the lowest fat or carbohydrate intake that could be maintained long-term and emphasized diet quality.\n\n\nMain Outcomes and Measures\nPrimary outcome was 12-month weight change and determination of whether there were significant interactions among diet type and genotype pattern, diet and insulin secretion, and diet and weight loss.\n\n\nResults\nAmong 609 participants randomized (mean age, 40 [SD, 7] years; 57% women; mean body mass index, 33 [SD, 3]; 244 [40%] had a low-fat genotype; 180 [30%] had a low-carbohydrate genotype; mean baseline INS-30, 93 μIU/mL), 481 (79%) completed the trial. In the HLF vs HLC diets, respectively, the mean 12-month macronutrient distributions were 48% vs 30% for carbohydrates, 29% vs 45% for fat, and 21% vs 23% for protein. Weight change at 12 months was -5.3 kg for the HLF diet vs -6.0 kg for the HLC diet (mean between-group difference, 0.7 kg [95% CI, -0.2 to 1.6 kg]). There was no significant diet-genotype pattern interaction (P = .20) or diet-insulin secretion (INS-30) interaction (P = .47) with 12-month weight loss. There were 18 adverse events or serious adverse events that were evenly distributed across the 2 diet groups.\n\n\nConclusions and Relevance\nIn this 12-month weight loss diet study, there was no significant difference in weight change between a healthy low-fat diet vs a healthy low-carbohydrate diet, and neither genotype pattern nor baseline insulin secretion was associated with the dietary effects on weight loss. In the context of these 2 common weight loss diet approaches, neither of the 2 hypothesized predisposing factors was helpful in identifying which diet was better for whom.\n\n\nTrial Registration\nclinicaltrials.gov Identifier: NCT01826591.",
"title": ""
},
{
"docid": "bb43c98d05f3844354862d39f6fa1d2d",
"text": "There are always frustrations for drivers in finding parking spaces and being protected from auto theft. In this paper, to minimize the drivers' hassle and inconvenience, we propose a new intelligent secure privacy-preserving parking scheme through vehicular communications. The proposed scheme is characterized by employing parking lot RSUs to surveil and manage the whole parking lot and is enabled by communication between vehicles and the RSUs. Once vehicles that are equipped with wireless communication devices, which are also known as onboard units, enter the parking lot, the RSUs communicate with them and provide the drivers with real-time parking navigation service, secure intelligent antitheft protection, and friendly parking information dissemination. In addition, the drivers' privacy is not violated. Performance analysis through extensive simulations demonstrates the efficiency and practicality of the proposed scheme.",
"title": ""
},
{
"docid": "bee4d4ba947d87b86abc02852c39d2b3",
"text": "Aim\nThe study assessed the documentation of nursing care before, during and after the Standardized Nursing Language Continuing Education Programme (SNLCEP). It evaluates the differences in documentation of nursing care in different nursing specialty areas and assessed the influence of work experience on the quality of documentation of nursing care with a view to provide information on documentation of nursing care. The instrument used was an adapted scoring guide for nursing diagnosis, nursing intervention and nursing outcome (Q-DIO).\n\n\nDesign\nRetrospective record reviews design was used.\n\n\nMethods\nA total of 270 nursing process booklets formed the sample size. From each ward, 90 booklets were selected in this order: 30 booklets before the SNLCEP, 30 booklets during SNLCEP and 30 booklets after SNLCEP.\n\n\nResults\nOverall, the study concluded that the SNLCEP had a significant effect on the quality of documentation of nursing care using Standardized Nursing Languages.",
"title": ""
},
{
"docid": "938e44b4c03823584d9f9fb9209a9b1e",
"text": "The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent substantial improvement by others dates back 7 years (error rate 0.4%) . Recently we were able to significantly improve this result, using graphics cards to greatly speed up training of simple but deep MLPs, which achieved 0.35%, outperforming all the previous more complex methods. Here we report another substantial improvement: 0.31% obtained using a committee of MLPs.",
"title": ""
},
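An illustrative aside: a committee of networks in the sense above can be as simple as averaging the per-class output probabilities of several independently trained models before taking the argmax. Minimal numpy sketch with made-up outputs (three classes shown for brevity):

```python
import numpy as np

def committee_predict(prob_list):
    """Average the class-probability outputs of several networks, then pick the argmax."""
    avg = np.mean(np.stack(prob_list), axis=0)      # shape (n_samples, n_classes)
    return avg.argmax(axis=1)

# Hypothetical softmax outputs of three MLPs for two test digits
net1 = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
net2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
net3 = np.array([[0.5, 0.1, 0.4], [0.1, 0.2, 0.7]])
print(committee_predict([net1, net2, net3]))        # array([0, 2])
```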
{
"docid": "fe687739626916780ff22d95cf89f758",
"text": "In this paper, we address the problem of jointly summarizing large sets of Flickr images and YouTube videos. Starting from the intuition that the characteristics of the two media types are different yet complementary, we develop a fast and easily-parallelizable approach for creating not only high-quality video summaries but also novel structural summaries of online images as storyline graphs. The storyline graphs can illustrate various events or activities associated with the topic in a form of a branching network. The video summarization is achieved by diversity ranking on the similarity graphs between images and video frames. The reconstruction of storyline graphs is formulated as the inference of sparse time-varying directed graphs from a set of photo streams with assistance of videos. For evaluation, we collect the datasets of 20 outdoor activities, consisting of 2.7M Flickr images and 16K YouTube videos. Due to the large-scale nature of our problem, we evaluate our algorithm via crowdsourcing using Amazon Mechanical Turk. In our experiments, we demonstrate that the proposed joint summarization approach outperforms other baselines and our own methods using videos or images only.",
"title": ""
},
{
"docid": "0b61d0ffe709d29e133ead6d6211a003",
"text": "The hypothesis that Enterococcus faecalis resists common intracanal medications by forming biofilms was tested. E. faecalis colonization of 46 extracted, medicated roots was observed with scanning electron microscopy (SEM) and scanning confocal laser microscopy. SEM detected colonization of root canals medicated with calcium hydroxide points and the positive control within 2 days. SEM detected biofilms in canals medicated with calcium hydroxide paste in an average of 77 days. Scanning confocal laser microscopy analysis of two calcium hydroxide paste medicated roots showed viable colonies forming in a root canal infected for 86 days, whereas in a canal infected for 160 days, a mushroom-shape typical of a biofilm was observed. Analysis by sodium dodecyl sulfate polyacrylamide gel electrophoresis showed no differences between the protein profiles of bacteria in free-floating (planktonic) and inoculum cultures. Analysis of biofilm bacteria was inconclusive. These observations support potential E. faecalis biofilm formation in vivo in medicated root canals.",
"title": ""
}
] |
scidocsrr
|
39ec7fb96995c0800bc415c55d78a670
|
Variables associated with achievement in higher education: A systematic review of meta-analyses.
|
[
{
"docid": "4147b26531ca1ec165735688481d2684",
"text": "Problem-based approaches to learning have a long history of advocating experience-based education. Psychological research and theory suggests that by having students learn through the experience of solving problems, they can learn both content and thinking strategies. Problem-based learning (PBL) is an instructional method in which students learn through facilitated problem solving. In PBL, student learning centers on a complex problem that does not have a single correct answer. Students work in collaborative groups to identify what they need to learn in order to solve a problem. They engage in self-directed learning (SDL) and then apply their new knowledge to the problem and reflect on what they learned and the effectiveness of the strategies employed. The teacher acts to facilitate the learning process rather than to provide knowledge. The goals of PBL include helping students develop 1) flexible knowledge, 2) effective problem-solving skills, 3) SDL skills, 4) effective collaboration skills, and 5) intrinsic motivation. This article discusses the nature of learning in PBL and examines the empirical evidence supporting it. There is considerable research on the first 3 goals of PBL but little on the last 2. Moreover, minimal research has been conducted outside medical and gifted education. Understanding how these goals are achieved with less skilled learners is an important part of a research agenda for PBL. The evidence suggests that PBL is an instructional approach that offers the potential to help students develop flexible understanding and lifelong learning skills.",
"title": ""
},
{
"docid": "83e4ee7cf7a82fcb8cb77f7865d67aa8",
"text": "A meta-analysis of the relationship between class attendance in college and college grades reveals that attendance has strong relationships with both class grades (k = 69, N = 21,195, r = .44) and GPA (k = 33, N = 9,243, r = .41). These relationships make class attendance a better predictor of college grades than any other known predictor of academic performance, including scores on standardized admissions tests such as the SAT, high school GPA, study habits, and study skills. Results also show that class attendance explains large amounts of unique variance in college grades because of its relative independence from SAT scores and high school GPA and weak relationship with student characteristics such as conscientiousness and motivation. Mandatory attendance policies appear to have a small positive impact on average grades (k = 3, N = 1,421, d = .21). Implications for theoretical frameworks of student academic performance and educational policy are discussed. Many college instructors exhort their students to attend class as frequently as possible, arguing that high levels of class attendance are likely to increase learning and improve student grades. Such arguments may hold intuitive appeal and are supported by findings linking class attendance to both learning (e.g., Jenne, 1973) and better grades (e.g., Moore et al., 2003), but both students and some educational researchers appear to be somewhat skeptical of the importance of class attendance. This skepticism is reflected in high class absenteeism rates ranging from 18. This article aims to help resolve the debate regarding the importance of class attendance by providing a quantitative review of the literature investigating the relationship of class attendance with both college grades and student characteristics that may influence attendance. 273 At a theoretical level class attendance fits well into frameworks that emphasize the joint role of cognitive ability and motivation in determining learning and work performance (e.g., Kanfer & Ackerman, 1989). Specifically, cognitive ability and motivation influence academic outcomes via two largely distinct mechanisms— one mechanism related to information processing and the other mechanism being behavioral in nature. Cognitive ability influences the degree to which students are able to process, integrate, and remember material presented to them (Humphreys, 1979), a mechanism that explains the substantial predictive validity of SAT scores for college grades (e. & Ervin, 2000). Noncognitive attributes such as conscientiousness and achievement motivation are thought to influence grades via their influence on behaviors that facilitate the understanding and …",
"title": ""
}
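An aside on the meta-analytic correlations reported above (e.g., k = 69, N = 21,195, r = .44): a common way to pool correlations across studies is to average Fisher-z transformed values weighted by n - 3 and back-transform. Minimal sketch with hypothetical study-level data, assuming a fixed-effect model:

```python
from math import atanh, tanh

def pooled_correlation(studies):
    """Fixed-effect pooled r via Fisher z with weights n - 3; studies = [(r, n), ...]."""
    num = sum((n - 3) * atanh(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return tanh(num / den)

# Hypothetical attendance-grade correlations from three studies
print(round(pooled_correlation([(0.40, 120), (0.48, 300), (0.35, 80)]), 3))
```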
] |
[
{
"docid": "2d9d6dbe1d841b9a87284c6a736bcb0c",
"text": "The loosely defined terms hard fork and soft fork have established themselves as descriptors of different classes of upgrade mechanisms for the underlying consensus rules of (proof-of-work) blockchains. Recently, a novel approach termed velvet fork, which expands upon the concept of a soft fork, was outlined in [22]. Specifically, velvet forks intend to avoid the possibility of disagreement by a change of rules through rendering modifications to the protocol backward compatible and inclusive to legacy blocks. We present an overview and definitions of these different upgrade mechanisms and outline their relationships. Hereby, we expose examples where velvet forks or similar constructions are already actively employed in Bitcoin and other cryptocurrencies. Furthermore, we expand upon the concept of velvet forks by proposing possible applications and discuss potentially arising security implications.",
"title": ""
},
{
"docid": "fcbc3b91c6cd501ddbfed2f93e65e73d",
"text": "Question answering is an important and difficult task in the natural language processing domain, because many basic natural language processing tasks can be cast into a question answering task. Several deep neural network architectures have been developed recently, which employ memory and inference components to memorize and reason over text information, and generate answers to questions. However, a major drawback of many such models is that they are capable of only generating single-word answers. In addition, they require large amount of training data to generate accurate answers. In this paper, we introduce the LongTerm Memory Network (LTMN), which incorporates both an external memory module and a Long Short-Term Memory (LSTM) module to comprehend the input data and generate multi-word answers. The LTMN model can be trained end-to-end using back-propagation and requires minimal supervision. We test our model on two synthetic data sets (based on Facebook’s bAbI data set) and the real-world Stanford question answering data set, and show that it can achieve state-of-the-art performance.",
"title": ""
},
{
"docid": "098625ba59c97d704ae85aa2e6776919",
"text": "A CDTA-based quadrature oscillator circuit is proposed. The circuit employs two current-mode allpass sections in a loop, and provides high-frequency sinusoidal oscillations in quadrature at high impedance output terminals of the CDTAs. The circuit has no floating capacitors, which is advantageous from the integrated circuit manufacturing point of view. Moreover, the oscillation frequency of this configuration can be made adjustable by using voltage controlled elements (MOSFETs), since the resistors in the circuit are either grounded or virtually grounded.",
"title": ""
},
{
"docid": "30a5bfd8afce6ba1f8259a51773c8be7",
"text": "Objectives The aim of this audit was to monitor the outcome of composite restorations placed at an increased vertical dimension in patients with severe tooth wear.Methods This convenience sample of patients were treated by 11 specialist trainees in prosthodontics, and restored with direct composites. Exclusion criteria included bruxism, poor medical health and a preference for monitoring rather than intervention. The restorations were placed between 2012 and 2016 and were placed over more than one appointment and the outcome monitored for up to 14 months. Failure was assessed at a binary level, either success or failure (minor or major).Results A total of 35 patients with a mean age of 45 years (range 24–86), 27 of whom were male, received 251 restorations placed from November 2012 to November 2016. The patients had a mean of 11.51 (range 4 to 16) occluding pairs of teeth. There was a total of 40 restoration failures (17%) which was an 83% success rate based on the total number of restorations. For the patient-based data, 14 patients (39%) had no chips or bulk factures while 22 (61%) patients had failures, of which 60% were chips and 40% bulk fractures.Conclusion Restoration of worn teeth with composites is associated with a high incidence of fractures.Clinical significance The restoration of worn teeth with composite can involve regular maintenance following fractures and patients need to be aware of this when giving consent.",
"title": ""
},
{
"docid": "984dc75b97243e448696f2bf0ba3c2aa",
"text": "Background: Predicting credit card payment default is critical for the successful business model of a credit card company. An accurate predictive model can help the company identify customers who might default their payment in the future so that the company can get involved earlier to manage risk and reduce loss. It is even better if a model can assist the company on credit card application approval to minimize the risk at upfront. However, credit card default prediction is never an easy task. It is dynamic. A customer who paid his/her payment on time in the last few months may suddenly default his/her next payment. It is also unbalanced given the fact that default payment is rare compared to non-default payments. Unbalanced dataset will easily fail using most machine learning techniques if the dataset is not treated properly.",
"title": ""
},
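An illustrative aside on the class-imbalance point above: one common treatment is to reweight the rare default class during training, for example with scikit-learn's `class_weight='balanced'`. Minimal sketch on synthetic data, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic, imbalanced data: roughly 5-10% "defaults"
n = 2000
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=n) > 3.0).astype(int)

# 'balanced' reweights each class inversely to its frequency during fitting
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
print("default rate:", y.mean(), " predicted default rate:", clf.predict(X).mean())
```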
{
"docid": "55b88b38dbde4d57fddb18d487099fc6",
"text": "The evaluation of algorithms and techniques to implement intrusion detection systems heavily rely on the existence of well designed datasets. In the last years, a lot of efforts have been done toward building these datasets. Yet, there is still room to improve. In this paper, a comprehensive review of existing datasets is first done, making emphasis on their main shortcomings. Then, we present a new dataset that is built with real traffic and up-to-date attacks. The main advantage of this dataset over previous ones is its usefulness for evaluating IDSs that consider long-term evolution and traffic periodicity. Models that consider differences in daytime/nighttime or weekdays/weekends can also be trained and evaluated with it. We discuss all the requirements for a modern IDS evaluation dataset and analyze how the one presented here meets the different needs. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "626c274978a575cd06831370a6590722",
"text": "The honeypot has emerged as an effective tool to provide insights into new attacks and exploitation trends. However, a single honeypot or multiple independently operated honeypots only provide limited local views of network attacks. Coordinated deployment of honeypots in different network domains not only provides broader views, but also create opportunities of early network anomaly detection, attack correlation, and global network status inference. Unfortunately, coordinated honeypot operation require close collaboration and uniform security expertise across participating network domains. The conflict between distributed presence and uniform management poses a major challenge in honeypot deployment and operation. To address this challenge, we present Collapsar, a virtual machine-based architecture for network attack capture and detention. A Collapsar center hosts and manages a large number of high-interaction virtual honeypots in a local dedicated network. To attackers, these honeypots appear as real systems in their respective production networks. Decentralized logical presence of honeypots provides a wide diverse view of network attacks, while the centralized operation enables dedicated administration and convenient event correlation, eliminating the need for honeypot expertise in every production network domain. Collapsar realizes the traditional honeyfarm vision as well as our new reverse honeyfarm vision, where honeypots act as vulnerable clients exploited by real-world malicious servers. We present the design, implementation, and evaluation of a Collapsar prototype. Our experiments with a number of real-world attacks demonstrate the effectiveness and practicality of Collapsar. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "f835074be8ff74361f1ea700ae737ace",
"text": "Exploring community is fundamental for uncovering the connections between structure and function of complex networks and for practical applications in many disciplines such as biology and sociology. In this paper, we propose a TTR-LDA-Community model which combines the Latent Dirichlet Allocation model (LDA) and the Girvan-Newman community detection algorithm with an inference mechanism. The model is then applied to data from Delicious, a popular social tagging system, over the time period of 2005-2008. Our results show that 1) users in the same community tend to be interested in similar set of topics in all time periods; and 2) topics may divide into several sub-topics and scatter into different communities over time. We evaluate the effectiveness of our model and show that the TTR-LDA-Community model is meaningful for understanding communities and outperforms TTR-LDA and LDA models in tag prediction.",
"title": ""
},
{
"docid": "629c6c7ca3db9e7cad2572c319ec52f0",
"text": "Recent research on pornography suggests that perception of addiction predicts negative outcomes above and beyond pornography use. Research has also suggested that religious individuals are more likely to perceive themselves to be addicted to pornography, regardless of how often they are actually using pornography. Using a sample of 686 unmarried adults, this study reconciles and expands on previous research by testing perceived addiction to pornography as a mediator between religiosity and relationship anxiety surrounding pornography. Results revealed that pornography use and religiosity were weakly associated with higher relationship anxiety surrounding pornography use, whereas perception of pornography addiction was highly associated with relationship anxiety surrounding pornography use. However, when perception of pornography addiction was inserted as a mediator in a structural equation model, pornography use had a small indirect effect on relationship anxiety surrounding pornography use, and perception of pornography addiction partially mediated the association between religiosity and relationship anxiety surrounding pornography use. By understanding how pornography use, religiosity, and perceived pornography addiction connect to relationship anxiety surrounding pornography use in the early relationship formation stages, we hope to improve the chances of couples successfully addressing the subject of pornography and mitigate difficulties in romantic relationships.",
"title": ""
},
{
"docid": "00dbe58bcb7d4415c01a07255ab7f365",
"text": "The paper deals with a time varying vehicle-to-vehicle channel measurement in the 60 GHz millimeter wave (MMW) band using a unique time-domain channel sounder built from off-the-shelf components and standard measurement devices and employing Golay complementary sequences as the excitation signal. The aim of this work is to describe the sounder architecture, primary data processing technique, achievable system parameters, and preliminary measurement results. We measured the signal propagation between two passing vehicles and characterized the signal reflected by a car driving on a highway. The proper operation of the channel sounder is verified by a reference measurement performed with an MMW vector network analyzer in a rugged stationary office environment. The goal of the paper is to show the measurement capability of the sounder and its superior features like 8 GHz measuring bandwidth enabling high time resolution or good dynamic range allowing an analysis of weak multipath components.",
"title": ""
},
{
"docid": "da7058526e9b76988e20dae598124c53",
"text": "53BP1 is known as a mediator in DNA damage response and a regulator of DNA double-stranded breaks (DSBs) repair. 53BP1 was recently reported to be a centrosomal protein and a binding partner of mitotic polo-like kinase 1 (Plk1). The stability of 53BP1, in response to DSBs, is regulated by its phosphorylation, deubiquitination, and ubiquitination. During mitosis, 53BP1 is stabilized by phosphorylation at S380, a putative binding region with polo-box domain of Plk1, and deubiquitination by ubiquitin-specific protease 7 (USP7). In the absence of DSBs, 53BP1 is abundant in the nucleoplasm; DSB formation results in its rapid localization to the damaged chromatin. Mitotic 53BP1 is also localized at the centrosome and spindle pole. 53BP1 depletion induces mitotic defects such as disorientation of spindle poles attributed to extra centrosomes or mispositioning of centrosomes, leading to phenotypes similar to those in USP7-deficient cells. Here, we discuss how 53BP1 controls the centrosomal integrity through its interaction with USP7 and centromere protein F by regulation of its stability and its physiology in response to DNA damage.",
"title": ""
},
{
"docid": "225b834e820b616e0ccfed7259499fd6",
"text": "Introduction: Actinic cheilitis (AC) is a lesion potentially malignant that affects the lips after prolonged exposure to solar ultraviolet (UV) radiation. The present study aimed to assess and describe the proliferative cell activity, using silver-stained nucleolar organizer region (AgNOR) quantification proteins, and to investigate the potential associations between AgNORs and the clinical aspects of AC lesions. Materials and methods: Cases diagnosed with AC were selected and reviewed from Center of Histopathological Diagnosis of the Institute of Biological Sciences, Passo Fundo University, Brazil. Clinical data including clinical presentation of the patients affected with AC were collected. The AgNOR techniques were performed in all recovered cases. The different microscopic areas of interest were printed with magnification of *1000, and in each case, 200 epithelial cell nuclei were randomly selected. The mean quantity in each nucleus for NORs was recorded. One-way analysis of variance was used for statistical analysis. Results: A total of 22 cases of AC were diagnosed. The patients were aged between 46 and 75 years (mean age: 55 years). Most of the patients affected were males presenting asymptomatic white plaque lesions in the lower lip. The mean value quantified for AgNORs was 2.4 ± 0.63, ranging between 1.49 and 3.82. No statistically significant difference was observed associating the quantity of AgNORs with the clinical aspects collected from the patients (p > 0.05). Conclusion: The present study reports the lack of association between the proliferative cell activity and the clinical aspects observed in patients affected by AC through the quantification of AgNORs. Clinical significance: Knowing the potential relation between the clinical aspects of AC and the proliferative cell activity quantified by AgNORs could play a significant role toward the early diagnosis of malignant lesions in the clinical practice. Keywords: Actinic cheilitis, Proliferative cell activity, Silver-stained nucleolar organizer regions.",
"title": ""
},
{
"docid": "a41bc49e1207460facc5a43190849dca",
"text": "Date The final copy of this thesis has been examined by the signatories, and we find that both the content and the form meet acceptable presentation standards of scholarly work in the above mentioned discipline. Humans often describe their experiences through the event, temporal and causal structures they perceive. These structures are often expressed in textual forms, for example in timelines, where text is summarized by aligning events with the times at which they occurred. These same sorts of temporal-causal structures are also useful for a variety of computational tasks, like summarization and question answering. However, to reason over such structures they must first be extracted from their textual representations and organized into a machine readable form. This work demonstrates that various important parts of the event, temporal and causal structure of a text can be extracted automatically using machine learning methods. Events, which serve as the basic anchors of temporal and causal relations, can be extracted with F-measures in the 70s and 80s using a word-chunking approach. Temporal relations between adjacent events in some common syntactic constructions can be identified with almost 90% accuracy using pair-wise classification. Causal relations are much more difficult, but initial work suggests that even this task may become tractable to machine learning methods. Analyses of the various tasks lead to several conclusions about how best to approach the automatic extraction of temporal-causal structure. Tasks with little linguistic motivation had low agreement between humans and low machine learning model performance. Tasks with clear annotation guidelines based on known linguistic constructions had much higher inter-annotator agreement and much better model performance. Thus, future progress will depend on careful task selection guided by linguistic knowledge about how event, temporal and causal relations are expressed in text. Acknowledgements My deepest thanks to: My family, for a variety of emotional support, and for putting up with research terms that all too frequently slipped into otherwise pleasant conversations. Jim Martin, who has offered not just advice, but an opportunity to develop ideas together. I can't even count the times that I walked into his office with only a vague idea in my head, and walked out with a plan and several experiments to run. Matthew Woitaszek, for always being available for a random conversation about research problems, for serving as a great board to bounce ideas off of, and for being the thankless maintainer of the cluster on which many …",
"title": ""
},
{
"docid": "c590b5f84b08720b36622a0256565613",
"text": "Attempto Controlled English (ACE) allows domain specialists to interactively formulate requirements specifications in domain concepts. ACE can be accurately and efficiently processed by a computer, but is expressive enough to allow natural usage. The Attempto system translates specification texts in ACE into discourse representation structures and optionally into Prolog. Translated specification texts are incrementally added to a knowledge base. This knowledge base can be queried in ACE for verification, and it can be executed for simulation, prototyping and validation of the specification.",
"title": ""
},
{
"docid": "94ce7e37a8a1cdfb73b7f3b5b4a4bbdf",
"text": "Thermal protection limits are equally important as mechanical specifications when designing electric drivetrains. However, properties of motor drives like mass/length of copper winding or heat dissipation factor are not available in producers’ catalogs. The lack of this essential data prevents the effective selection of drivetrain components and makes it necessary to consult critical design decisions with equipment's suppliers. Therefore, in this paper, the popular loadability curves that are available in catalogs become a basis to formulate a method that allows to estimate temperature rise of motor drives. The current technique allows for evaluating a temperature rise of a motor drive for any overload magnitude, duty cycle, and ambient temperature, contrary to using a discrete set of permissible overload conditions that are provided by manufacturers. The proposed approach is based on industrially adopted practices, greatly improves flexibility of a design process, and facilitates communication in a supplier–customer dialog.",
"title": ""
},
{
"docid": "b15793d40986da868efde0074d5fbfc9",
"text": "Recently, cellular operators have started migrating to IPv6 in response to the increasing demand for IP addresses. With the introduction of IPv6, cellular middleboxes, such as firewalls for preventing malicious traffic from the Internet and stateful NAT64 boxes for providing backward compatibility with legacy IPv4 services, have become crucial to maintain stability of cellular networks. This paper presents security problems of the currently deployed IPv6 middleboxes of five major operators. To this end, we first investigate several key features of the current IPv6 deployment that can harm the safety of a cellular network as well as its customers. These features combined with the currently deployed IPv6 middlebox allow an adversary to launch six different attacks. First, firewalls in IPv6 cellular networks fail to block incoming packets properly. Thus, an adversary could fingerprint cellular devices with scanning, and further, she could launch denial-of-service or over-billing attacks. Second, vulnerabilities in the stateful NAT64 box, a middlebox that maps an IPv6 address to an IPv4 address (and vice versa), allow an adversary to launch three different attacks: 1) NAT overflow attack that allows an adversary to overflow the NAT resources, 2) NAT wiping attack that removes active NAT mappings by exploiting the lack of TCP sequence number verification of firewalls, and 3) NAT bricking attack that targets services adopting IP-based blacklisting by preventing the shared external IPv4 address from accessing the service. We confirmed the feasibility of these attacks with an empirical analysis. We also propose effective countermeasures for each attack.",
"title": ""
},
{
"docid": "168f2c2b4e8bc52debf81eb800860cae",
"text": "Optimal reconfigurable hardware implementations may require the use of arbitrary floating-point formats that do not necessarily conform to IEEE specified sizes. We present a variable precision floating-point library (VFloat) that supports general floating-point formats including IEEE standard formats. Most previously published floating-point formats for use with reconfigurable hardware are subsets of our format. Custom datapaths with optimal bitwidths for each operation can be built using the variable precision hardware modules in the VFloat library, enabling a higher level of parallelism. The VFloat library includes three types of hardware modules for format control, arithmetic operations, and conversions between fixed-point and floating-point formats. The format conversions allow for hybrid fixed- and floating-point operations in a single design. This gives the designer control over a large number of design possibilities including format as well as number range within the same application. In this article, we give an overview of the components in the VFloat library and demonstrate their use in an implementation of the K-means clustering algorithm applied to multispectral satellite images.",
"title": ""
},
{
"docid": "ae0d63126ff55961533dc817554bcb82",
"text": "This paper presents a novel bipedal robot concept and prototype that takes inspiration from humanoids but features fundamental differences that drastically improve its agility and stability while reducing its complexity and cost. This Non-Anthropomorphic Bipedal Robotic System (NABiRoS) modifies the traditional bipedal form by aligning the legs in the sagittal plane and adding a compliance to the feet. The platform is comparable in height to a human, but weighs much less because of its lightweight architecture and novel leg configuration. The inclusion of the compliant element showed immense improvements in the stability and robustness of walking gaits on the prototype, allowing the robot to remain stable during locomotion without any inertial feedback control. NABiRoS was able to achieve walking speeds of up to 0.75km/h (0.21m/s) using a simple pre-processed ZMP based gait and a positioning accuracy of +/- 0.04m with a preprocessed quasi-static algorithm.",
"title": ""
},
{
"docid": "529ee26c337908488a5912835cc966c3",
"text": "Nucleic acids have emerged as powerful biological and nanotechnological tools. In biological and nanotechnological experiments, methods of extracting and purifying nucleic acids from various types of cells and their storage are critical for obtaining reproducible experimental results. In nanotechnological experiments, methods for regulating the conformational polymorphism of nucleic acids and increasing sequence selectivity for base pairing of nucleic acids are important for developing nucleic acid-based nanomaterials. However, dearth of media that foster favourable behaviour of nucleic acids has been a bottleneck for promoting the biology and nanotechnology using the nucleic acids. Ionic liquids (ILs) are solvents that may be potentially used for controlling the properties of the nucleic acids. Here, we review researches regarding the behaviour of nucleic acids in ILs. The efficiency of extraction and purification of nucleic acids from biological samples is increased by IL addition. Moreover, nucleic acids in ILs show long-term stability, which maintains their structures and enhances nuclease resistance. Nucleic acids in ILs can be used directly in polymerase chain reaction and gene expression analysis with high efficiency. Moreover, the stabilities of the nucleic acids for duplex, triplex, and quadruplex (G-quadruplex and i-motif) structures change drastically with IL cation-nucleic acid interactions. Highly sensitive DNA sensors have been developed based on the unique changes in the stability of nucleic acids in ILs. The behaviours of nucleic acids in ILs detailed here should be useful in the design of nucleic acids to use as biological and nanotechnological tools.",
"title": ""
},
{
"docid": "8adf698c03f01dced7d021cc103d51a4",
"text": "Real world data, especially in the domain of robotics, is notoriously costly to collect. One way to circumvent this can be to leverage the power of simulation in order to produce large amounts of labelled data. However, training models on simulated images does not readily transfer to real-world ones. Using domain adaptation methods to cross this “reality gap” requires at best a large amount of unlabelled real-world data, whilst domain randomization alone can waste modeling power, rendering certain reinforcement learning (RL) methods unable to learn the task of interest. In this paper, we present Randomized-to-Canonical Adaptation Networks (RCANs), a novel approach to crossing the visual reality gap that uses no real-world data. Our method learns to translate randomized rendered images into their equivalent non-randomized, canonical versions. This in turn allows for real images to also be translated into canonical sim images. We demonstrate the effectiveness of this sim-to-real approach by training a vision-based closed-loop grasping reinforcement learning agent in simulation, and then transferring it to the real world to attain 70% zeroshot grasp success on unseen objects, a result that almost doubles the success of learning the same task directly on domain randomization alone. Additionally, by joint finetuning in the real-world with only 5,000 real-world grasps, our method achieves 91%, outperforming a state-of-the-art system trained with 580,000 real-world grasps, resulting in a reduction of real-world data by more than 99%.",
"title": ""
}
] |
scidocsrr
|
c57a6eba91b8a580c51507bdbde2f9c2
|
Attitude estimation and control of a quadrocopter
|
[
{
"docid": "adc9e237e2ca2467a85f54011b688378",
"text": "Quadrotors are rapidly emerging as a popular platform for unmanned aerial vehicle (UAV) research, due to the simplicity of their construction and maintenance, their ability to hover, and their vertical take off and landing (VTOL) capability. Current designs have often considered only nominal operating conditions for vehicle control design. This work seeks to address issues that arise when deviating significantly from the hover flight regime. Aided by well established research for helicopter flight control, four separate aerodynamic effects are investigated as they pertain to quadrotor flight. The effects result from either translational or vertical vehicular velocity components, and cause both moments that affect attitude control and thrust variation that affects altitude control. Where possible, a theoretical development is first presented, and is then validated through both thrust test stand measurements and vehicle flight tests using the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) quadrotor helicopter. The results have enabled improved controller tracking throughout the flight envelope, including at higher speeds and in gusting winds.",
"title": ""
}
] |
[
{
"docid": "27bd0bccf28931032558596dd4d8c2d3",
"text": "We address the problem of classification in partially labeled networks (a.k.a. within-network classification) where observed class labels are sparse. Techniques for statistical relational learning have been shown to perform well on network classification tasks by exploiting dependencies between class labels of neighboring nodes. However, relational classifiers can fail when unlabeled nodes have too few labeled neighbors to support learning (during training phase) and/or inference (during testing phase). This situation arises in real-world problems when observed labels are sparse.\n In this paper, we propose a novel approach to within-network classification that combines aspects of statistical relational learning and semi-supervised learning to improve classification performance in sparse networks. Our approach works by adding \"ghost edges\" to a network, which enable the flow of information from labeled to unlabeled nodes. Through experiments on real-world data sets, we demonstrate that our approach performs well across a range of conditions where existing approaches, such as collective classification and semi-supervised learning, fail. On all tasks, our approach improves area under the ROC curve (AUC) by up to 15 points over existing approaches. Furthermore, we demonstrate that our approach runs in time proportional to L • E, where L is the number of labeled nodes and E is the number of edges.",
"title": ""
},
{
"docid": "93a6c94a3ecb3fcaf363b07c077e5579",
"text": "The state-of-the-art advancement in wind turbine condition monitoring and fault diagnosis for the recent several years is reviewed. Since the existing surveys on wind turbine condition monitoring cover the literatures up to 2006, this review aims to report the most recent advances in the past three years, with primary focus on gearbox and bearing, rotor and blades, generator and power electronics, as well as system-wise turbine diagnosis. There are several major trends observed through the survey. Due to the variable-speed nature of wind turbine operation and the unsteady load involved, time-frequency analysis tools such as wavelets have been accepted as a key signal processing tool for such application. Acoustic emission has lately gained much more attention in order to detect incipient failures because of the low-speed operation for wind turbines. There has been an increasing trend of developing model based reasoning algorithms for fault detection and isolation as cost-effective approach for wind turbines as relatively complicated system. The impact of unsteady aerodynamic load on the robustness of diagnostic signatures has been notified. Decoupling the wind load from condition monitoring decision making will reduce the associated down-time cost.",
"title": ""
},
{
"docid": "f7a42937973a45ed4fb5d23e3be316a9",
"text": "Domain specific information retrieval process has been a prominent and ongoing research in the field of natural language processing. Many researchers have incorporated different techniques to overcome the technical and domain specificity and provide a mature model for various domains of interest. The main bottleneck in these studies is the heavy coupling of domain experts, that makes the entire process to be time consuming and cumbersome. In this study, we have developed three novel models which are compared against a golden standard generated via the on line repositories provided, specifically for the legal domain. The three different models incorporated vector space representations of the legal domain, where document vector generation was done in two different mechanisms and as an ensemble of the above two. This study contains the research being carried out in the process of representing legal case documents into different vector spaces, whilst incorporating semantic word measures and natural language processing techniques. The ensemble model built in this study, shows a significantly higher accuracy level, which indeed proves the need for incorporation of domain specific semantic similarity measures into the information retrieval process. This study also shows, the impact of varying distribution of the word similarity measures, against varying document vector dimensions, which can lead to improvements in the process of legal information retrieval. keywords: Document Embedding, Deep Learning, Information Retrieval",
"title": ""
},
{
"docid": "446c1bf541dbed56f8321b8024391b8c",
"text": "Tokenisation has been adopted by the payment industry as a method to prevent Personal Account Number (PAN) compromise in EMV (Europay MasterCard Visa) transactions. The current architecture specified in EMV tokenisation requires online connectivity during transactions. However, it is not always possible to have online connectivity. We identify three main scenarios where fully offline transaction capability is considered to be beneficial for both merchants and consumers. Scenarios include making purchases in locations without online connectivity, when a reliable connection is not guaranteed, and when it is cheaper to carry out offline transactions due to higher communication/payment processing costs involved in online approvals. In this study, an offline contactless mobile payment protocol based on EMV tokenisation is proposed. The aim of the protocol is to address the challenge of providing secure offline transaction capability when there is no online connectivity on either the mobile or the terminal. The solution also provides end-to-end encryption to provide additional security for transaction data other than the token. The protocol is analysed against protocol objectives and we discuss how the protocol can be extended to prevent token relay attacks. The proposed solution is subjected to mechanical formal analysis using Scyther. Finally, we implement the protocol and obtain performance measurements.",
"title": ""
},
{
"docid": "4bb2741e663c6cf85adf3bf77226ac92",
"text": "Fresh water and arable land are essential for agricultural production and food processing. However, managing conflicting demands over water and land can be challenging for business leaders, environmentalists and other stakeholders. This paper characterizes these challenges as wicked problems. Wicked problems are ill-formed, fuzzy, and messy, because they involve many clients and decisions makers with conflicting values. They are also not solvable, but rather must be managed. How can agribusiness leaders effectively manage wicked problems, especially if they have little practice in doing so? This paper argues that a Community of Practice (CoP) and its tripartite elements of domain, community and practice can be effective in helping businesses manage wicked problems by focusing on the positive links between environmental stewardship and economic performance. Empirically, the paper examines three agribusinesses to assess the extent in which CoP is used as a strategy for sustainable water management.",
"title": ""
},
{
"docid": "125c145b143579528279e76d23fa3054",
"text": "Social unrest is endemic in many societies, and recent news has drawn attention to happenings in Latin America, the Middle East, and Eastern Europe. Civilian populations mobilize, sometimes spontaneously and sometimes in an organized manner, to raise awareness of key issues or to demand changes in governing or other organizational structures. It is of key interest to social scientists and policy makers to forecast civil unrest using indicators observed on media such as Twitter, news, and blogs. We present an event forecasting model using a notion of activity cascades in Twitter (proposed by Gonzalez-Bailon et al., 2011) to predict the occurrence of protests in three countries of Latin America: Brazil, Mexico, and Venezuela. The basic assumption is that the emergence of a suitably detected activity cascade is a precursor or a surrogate to a real protest event that will happen \"on the ground.\" Our model supports the theoretical characterization of large cascades using spectral properties and uses properties of detected cascades to forecast events. Experimental results on many datasets, including the recent June 2013 protests in Brazil, demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "b7a3a7af3495d0a722040201f5fadd55",
"text": "During the last decade, biodegradable metallic stents have been developed and investigated as alternatives for the currently-used permanent cardiovascular stents. Degradable metallic materials could potentially replace corrosion-resistant metals currently used for stent application as it has been shown that the role of stenting is temporary and limited to a period of 6-12 months after implantation during which arterial remodeling and healing occur. Although corrosion is generally considered as a failure in metallurgy, the corrodibility of certain metals can be an advantage for their application as degradable implants. The candidate materials for such application should have mechanical properties ideally close to those of 316L stainless steel which is the gold standard material for stent application in order to provide mechanical support to diseased arteries. Non-toxicity of the metal itself and its degradation products is another requirement as the material is absorbed by blood and cells. Based on the mentioned requirements, iron-based and magnesium-based alloys have been the investigated candidates for biodegradable stents. This article reviews the recent developments in the design and evaluation of metallic materials for biodegradable stents. It also introduces the new metallurgical processes which could be applied for the production of metallic biodegradable stents and their effect on the properties of the produced metals.",
"title": ""
},
{
"docid": "f15f72e8b513b0a9b7ddb9b73a559571",
"text": "Teenagers are among the most prolific users of social network sites (SNS). Emerging studies find that youth spend a considerable portion of their daily life interacting through social media. Subsequently, questions and controversies emerge about the effects SNS have on adolescent development. This review outlines the theoretical frameworks researchers have used to understand adolescents and SNS. It brings together work from disparate fields that examine the relationship between SNS and social capital, privacy, youth safety, psychological well-being, and educational achievement.These research strands speak to high-profile concerns and controversies that surround youth participation in these online communities, and offer ripe areas for future research.",
"title": ""
},
{
"docid": "d4a51def80ebbb09cca88b98fbdcfdfb",
"text": "A central tenet underlying the use of plant preparations is that herbs contain many bioactive compounds. Cannabis contains tetrahydrocannabinols (THC) a primary metabolite with reported psychotropic effects. Therefore, the presence of THC makes controversial the use of Cannabis to treat diseases by which their uses and applications were limited. The question then is: is it possible to use the extracts from Cannabis to treat the diseases related with it use in folk medicine? More recently, the synergistic contributions of bioactive constituents have been scientifically demonstrated. We reviewed the literature concerning medical cannabis and its secondary metabolites, including fraction and total extracts. Scientific evidence shows that secondary metabolites in cannabis may enhance the positive effects of THC a primary metabolite. Other chemical components (cannabinoid and non-cannabinoid) in cannabis or its extracts may reduce THC-induced anxiety, cholinergic deficits, and immunosuppression; which could increase its therapeutic potential. Particular attention will be placed on noncannabinoid compounds interactions that could produce synergy with respect to treatment of pain, inflammation, epilepsy, fungal and bacterial infections. The evidence accessible herein pointed out for the possible synergism that might occur involving the main phytocompounds with each other or with other minor components.",
"title": ""
},
{
"docid": "81ef390009fb64bf235147bc0e186bab",
"text": "In this paper, we show how to calibrate a camera and to recover the geometry and the photometry (textures) of objects from a single image. The aim of this work is to make it possible walkthrough and augment reality in a 3D model reconstructed from a single image. The calibration step does not need any calibration target and makes only four assumptions: (1) the single image contains at least two vanishing points, (2) the length (in 3D space) of one line segment (for determining the translation vector) in the image is known, (3) the principle point is the center of the image, and (4) the aspect ratio is fixed by the user. Each vanishing point is determined from a set of parallel lines. These vanishing points help determine a 3D world coordinate system R o. After having computed the focal length, the rotation matrix and the translation vector are evaluated in turn for describing the rigid motion between R o and the camera coordinate system R c. Next, the reconstruction step consists in placing, rotating, scaling, and translating a rectangular 3D box that must fit at best with the potential objects within the scene as seen through the single image. With each face of a rectangular box, a texture that may contain holes due to invisible parts of certain objects is assigned. We show how the textures are extracted and how these holes are located and filled. Our method has been applied to various real images (pictures scanned from books, photographs) and synthetic images.",
"title": ""
},
{
"docid": "5fb640a9081f72fcf994b1691470d7bc",
"text": "Omnidirectional cameras are widely used in such areas as robotics and virtual reality as they provide a wide field of view. Their images are often processed with classical methods, which might unfortunately lead to non-optimal solutions as these methods are designed for planar images that have different geometrical properties than omnidirectional ones. In this paper we study image classification task by taking into account the specific geometry of omnidirectional cameras with graph-based representations. In particular, we extend deep learning architectures to data on graphs; we propose a principled way of graph construction such that convolutional filters respond similarly for the same pattern on different positions of the image regardless of lens distortions. Our experiments show that the proposed method outperforms current techniques for the omnidirectional image classification problem.",
"title": ""
},
{
"docid": "2803bbd080e761349cffd9ba5d5ec274",
"text": "BACKGROUND\nSeveral triage systems have been developed for use in the emergency department (ED), however they are not designed to detect deterioration in patients. Deteriorating patients may be at risk of going undetected during their ED stay and are therefore vulnerable to develop serious adverse events (SAEs). The national early warning score (NEWS) has a good ability to discriminate ward patients at risk of SAEs. The utility of NEWS had not yet been studied in an ED.\n\n\nOBJECTIVE\nTo explore the performance of the NEWS in an ED with regard to predicting adverse outcomes.\n\n\nDESIGN\nA prospective observational study. Patients Eligible patients were those presenting to the ED during the 6 week study period with an Emergency Severity Index (ESI) of 2 and 3 not triaged to the resuscitation room.\n\n\nINTERVENTION\nNEWS was documented at three time points: on arrival (T0), hour after arrival (T1) and at transfer to the general ward/ICU (T2). The outcomes of interest were: hospital admission, ICU admission, length of stay and 30 day mortality.\n\n\nRESULTS\nA total of 300 patients were assessed for eligibility. Complete data was able to be collected for 274 patients on arrival at the ED. NEWS was significantly correlated with patient outcomes, including 30 day mortality, hospital admission, and length of stay at all-time points.\n\n\nCONCLUSION\nThe NEWS measured at different time points was a good predictor of patient outcomes and can be of additional value in the ED to longitudinally monitor patients throughout their stay in the ED and in the hospital.",
"title": ""
},
{
"docid": "ea739d96ee0558fb23f0a5a020b92822",
"text": "Text and structural data mining of web and social media (WSM) provides a novel disease surveillance resource and can identify online communities for targeted public health communications (PHC) to assure wide dissemination of pertinent information. WSM that mention influenza are harvested over a 24-week period, 5 October 2008 to 21 March 2009. Link analysis reveals communities for targeted PHC. Text mining is shown to identify trends in flu posts that correlate to real-world influenza-like illness patient report data. We also bring to bear a graph-based data mining technique to detect anomalies among flu blogs connected by publisher type, links, and user-tags.",
"title": ""
},
{
"docid": "1ab4f605d67dabd3b2815a39b6123aa4",
"text": "This paper examines and provides the theoretical evidence of the feasibility of 60 GHz mmWave in wireless body area networks (WBANs), by analyzing its properties. It has been shown that 60 GHz based communication could better fit WBANs compared to traditional 2.4 GHz based communication because of its compact network coverage, miniaturized devices, superior frequency reuse, multi-gigabyte transmission rate and the therapeutic merits for human health. Since allowing coexistence among the WBANs can enhance the efficiency of the mmWave based WBANs, we formulated the coexistence problem as a non-cooperative distributed power control game. This paper proves the existence of Nash equilibrium (NE) and derives the best response move as a solution. The efficiency of the NE is also improved by modifying the utility function and introducing a pair of pricing factors. Our simulation results indicate that the proposed pricing policy significantly improves the efficiency in terms of Pareto optimality and social optimality.",
"title": ""
},
{
"docid": "68c7509ec0261b1ddccef7e3ad855629",
"text": "This research comprehensively illustrates the design, implementation and evaluation of a novel marker less environment tracking technology for an augmented reality based indoor navigation application, adapted to efficiently operate on a proprietary head-mounted display. Although the display device used, Google Glass, had certain pitfalls such as short battery life, slow processing speed, and lower quality visual display but the tracking technology was able to complement these limitations by rendering a very efficient, precise, and intuitive navigation experience. The performance assessments, conducted on the basis of efficiency and accuracy, substantiated the utility of the device for everyday navigation scenarios, whereas a later conducted subjective evaluation of handheld and wearable devices also corroborated the wearable as the preferred device for indoor navigation.",
"title": ""
},
{
"docid": "a26ca28fb8e67e8ce74cc8589a5116ca",
"text": "Recently, there has been a growing interest in using online technologies to design protocols for secure electronic voting. The main challenges include vote privacy and anonymity, ballot irrevocability and transparency throughout the vote counting process. The introduction of the blockchain as a basis for cryptocurrency protocols, provides for the exploitation of the immutability and transparency properties of these distributed ledgers.\n In this paper, we discuss possible uses of the blockchain technology to implement a secure and fair voting system. In particular, we introduce a secret share-based voting system on the blockchain, the so-called SHARVOT protocol1. Our solution uses Shamir's Secret Sharing to enable on-chain, i.e. within the transactions script, votes submission and winning candidate determination. The protocol is also using a shuffling technique, Circle Shuffle, to de-link voters from their submissions.",
"title": ""
},
{
"docid": "d40aa76e76c44da4c6237f654dcdab45",
"text": "The flipped classroom pedagogy has achieved significant mention in academic circles in recent years. \"Flipping\" involves the reinvention of a traditional course so that students engage with learning materials via recorded lectures and interactive exercises prior to attending class and then use class time for more interactive activities. Proper implementation of a flipped classroom is difficult to gauge, but combines successful techniques for distance education with constructivist learning theory in the classroom. While flipped classrooms are not a novel concept, technological advances and increased comfort with distance learning have made the tools to produce and consume course materials more pervasive. Flipped classroom experiments have had both positive and less-positive results and are generally measured by a significant improvement in learning outcomes. This study, however, analyzes the opinions of students in a flipped sophomore-level information technology course by using a combination of surveys and reflective statements. The author demonstrates that at the outset students are new - and somewhat receptive - to the concept of the flipped classroom. By the conclusion of the course satisfaction with the pedagogy is significant. Finally, student feedback is provided in an effort to inform instructors in the development of their own flipped classrooms.",
"title": ""
},
{
"docid": "6ed9425f8d5be786cce530b45f22cd00",
"text": "This paper presents a novel unsupervised method to transfer the style of an example image to a source image. The complex notion of image style is here considered as a local texture transfer, eventually coupled with a global color transfer. For the local texture transfer, we propose a new method based on an adaptive patch partition that captures the style of the example image and preserves the structure of the source image. More precisely, this example-based partition predicts how well a source patch matches an example patch. Results on various images show that our method outperforms the most recent techniques.",
"title": ""
},
{
"docid": "2831276f8c6141db0c1ef8f41e125efc",
"text": "Research on event detection in Twitter is often obstructed by the lack of publicly-available evaluation mechanisms such as test collections; this problem is more severe when considering the scarcity of them in languages other than English. In this paper, we present EveTAR, the first publicly-available test collection for event detection in Arabic tweets. The collection includes a crawl of 590M Arabic tweets posted in a month period and covers 66 significant events (in 8 different categories) for which more than 134k relevance judgments were gathered using crowdsourcing with high average inter-annotator agreement (Kappa value of 0.6). We demonstrate the usability of the collection by evaluating 3 state-of-the-art event detection algorithms. The collection is also designed to support other retrieval tasks, as we show in our experiments with ad-hoc search systems.",
"title": ""
},
{
"docid": "7a4bf293b22a405c4b3c41a914bc7f3f",
"text": "Sutton, Szepesvári and Maei (2009) recently introduced the first temporal-difference learning algorithm compatible with both linear function approximation and off-policy training, and whose complexity scales only linearly in the size of the function approximator. Although their gradient temporal difference (GTD) algorithm converges reliably, it can be very slow compared to conventional linear TD (on on-policy problems where TD is convergent), calling into question its practical utility. In this paper we introduce two new related algorithms with better convergence rates. The first algorithm, GTD2, is derived and proved convergent just as GTD was, but uses a different objective function and converges significantly faster (but still not as fast as conventional TD). The second new algorithm, linear TD with gradient correction, or TDC, uses the same update rule as conventional TD except for an additional term which is initially zero. In our experiments on small test problems and in a Computer Go application with a million features, the learning rate of this algorithm was comparable to that of conventional TD. This algorithm appears to extend linear TD to off-policy learning with no penalty in performance while only doubling computational requirements.",
"title": ""
}
] |
scidocsrr
|
a1a91a598d7b604d5f69f20319a077d0
|
Developing Supply Chains in Disaster Relief Operations through Cross-sector Socially Oriented Collaborations: A Theoretical Model
|
[
{
"docid": "978c1712bf6b469059218697ea552524",
"text": "Project-based cross-sector partnerships to address social issues (CSSPs) occur in four “arenas”: business-nonprofit, business-government, government-nonprofit, and trisector. Research on CSSPs is multidisciplinary, and different conceptual “platforms” are used: resource dependence, social issues, and societal sector platforms. This article consolidates recent literature on CSSPs to improve the potential for cross-disciplinary fertilization and especially to highlight developments in various disciplines for organizational researchers. A number of possible directions for future research on the theory, process, practice, method, and critique of CSSPs are highlighted. The societal sector platform is identified as a particularly promising framework for future research.",
"title": ""
},
{
"docid": "ee045772d55000b6f2d3f7469a4161b1",
"text": "Although prior research has addressed the influence of corporate social responsibility (CSR) on perceived customer responses, it is not clear whether CSR affects market value of the firm. This study develops and tests a conceptual framework, which predicts that (1) customer satisfaction partially mediates the relationship between CSR and firm market value (i.e., Tobin’s q and stock return), (2) corporate abilities (innovativeness capability and product quality) moderate the financial returns to CSR, and (3) these moderated relationships are mediated by customer satisfaction. Based on a large-scale secondary dataset, the results show support for this framework. Interestingly, it is found that in firms with low innovativeness capability, CSR actually reduces customer satisfaction levels and, through the lowered satisfaction, harms market value. The uncovered mediated and asymmetrically moderated results offer important implications for marketing theory and practice. In today’s competitive market environment, corporate social responsibility (CSR) represents a high-profile notion that has strategic importance to many companies. As many as 90% of the Fortune 500 companies now have explicit CSR initiatives (Kotler and Lee 2004; Lichtenstein et al. 2004). According to a recent special report by BusinessWeek (2005a, p.72), large companies disclosed substantial investments in CSR initiatives (i.e., Target’s donation of $107.8 million in CSR represents 3.6% of its pretax profits, with GM $51.2 million at 2.7%, General Mills $60.3 million at 3.2%, Merck $921million at 11.3%, HCA $926 million at 43.3%). By dedicating everincreasing amounts to cash donations, in-kind contributions, cause marketing, and employee volunteerism programs, companies are acting on the premise that CSR is not merely the “right thing to do,” but also “the smart thing to do” (Smith 2003). Importantly, along with increasing media coverage of CSR issues, companies themselves are also taking direct and visible steps to communicate their CSR initiatives to various stakeholders including consumers. A decade ago, Drumwright (1996) observed that advertising with a social dimension was on the rise. The trend seems to continue. Many companies, including the likes of Target and Walmart, have funded large national ad campaigns promoting their good works. The October 2005 issue of In Style magazine alone carried more than 25 “cause” ads. Indeed, consumers seem to be taking notice: whereas in 1993 only 26% of individuals surveyed by Cone Communications could name a company as a strong corporate citizen, by 2004, the percentage surged to as high as 80% (BusinessWeek 2005a). Motivated, in part, by this mounting importance of CSR in practice, several marketing studies have found that social responsibility programs have a significant influence on a number of customer-related outcomes (Bhattacharya and Sen 2004). More specifically, based on lab experiments, CSR is reported to directly or indirectly impact consumer product responses",
"title": ""
}
] |
[
{
"docid": "eff844ffdf2ef5408e23d98564d540f0",
"text": "The motions of wheeled mobile robots are largely governed by contact forces between the wheels and the terrain. Inasmuch as future wheel-terrain interactions are unpredictable and unobservable, high performance autonomous vehicles must ultimately learn the terrain by feel and extrapolate, just as humans do. We present an approach to the automatic calibration of dynamic models of arbitrary wheeled mobile robots on arbitrary terrain. Inputs beyond our control (disturbances) are assumed to be responsible for observed differences between what the vehicle was initially predicted to do and what it was subsequently observed to do. In departure from much previous work, and in order to directly support adaptive and predictive controllers, we concentrate on the problem of predicting candidate trajectories rather than measuring the current slip. The approach linearizes the nominal vehicle model and then calibrates the perturbative dynamics to explain the observed prediction residuals. Both systematic and stochastic disturbances are used, and we model these disturbances as functions over the terrain, the velocities, and the applied inertial and gravitational forces. In this way, we produce a model which can be used to predict behavior across all of state space for arbitrary terrain geometry. Results demonstrate that the approach converges quickly and produces marked improvements in the prediction of trajectories for multiple vehicle classes throughout the performance envelope of the platform, including during aggressive maneuvering.",
"title": ""
},
{
"docid": "43e39433013ca845703af053e5ef9e11",
"text": "This paper presents the proposed design of high power and high efficiency inverter for wireless power transfer systems operating at 13.56 MHz using multiphase resonant inverter and GaN HEMT devices. The high efficiency and the stable of inverter are the main targets of the design. The module design, the power loss analysis and the drive circuit design have been addressed. In experiment, a 3 kW inverter with the efficiency of 96.1% is achieved that significantly improves the efficiency of 13.56 MHz inverter. In near future, a 10 kW inverter with the efficiency of over 95% can be realizable by following this design concept.",
"title": ""
},
{
"docid": "4a3496a835d3948299173b4b2767d049",
"text": "We describe an augmented reality (AR) system that allows multiple participants to interact with 2D and 3D data using tangible user interfaces. The system features face-to-face communication, collaborative viewing and manipulation of 3D models, and seamless access to 2D desktop applications within the shared 3D space. All virtual content, including 3D models and 2D desktop windows, is attached to tracked physical objects in order to leverage the efficiencies of natural two-handed manipulation. The presence of 2D desktop space within 3D facilitates data exchange between the two realms, enables control of 3D information by 2D applications, and generally increases productivity by providing access to familiar tools. We present a general concept for a collaborative tangible AR system, including a comprehensive set of interaction techniques, a distributed hardware setup, and a component-based software architecture that can be flexibly configured using XML. We show the validity of our concept with an implementation of an application scenario from the automotive industry.",
"title": ""
},
{
"docid": "e86ad4e9b61df587d9e9e96ab4eb3978",
"text": "This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.",
"title": ""
},
{
"docid": "7161122eaa9c9766e9914ba0f2ee66ef",
"text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.",
"title": ""
},
{
"docid": "e30d6fd14f091e188e6a6b86b6286609",
"text": "Assessing the spatio-temporal variations of surface water quality is important for water environment management. In this study, surface water samples are collected from 2008 to 2015 at 17 stations in the Ying River basin in China. The two pollutants i.e. chemical oxygen demand (COD) and ammonia nitrogen (NH3-N) are analyzed to characterize the river water quality. Cluster analysis and the seasonal Kendall test are used to detect the seasonal and inter-annual variations in the dataset, while the Moran's index is utilized to understand the spatial autocorrelation of the variables. The influence of natural factors such as hydrological regime, water temperature and etc., and anthropogenic activities with respect to land use and pollutant load are considered as driving factors to understand the water quality evolution. The results of cluster analysis present three groups according to the similarity in seasonal pattern of water quality. The trend analysis indicates an improvement in water quality during the dry seasons at most of the stations. Further, the spatial autocorrelation of water quality shows great difference between the dry and wet seasons due to sluices and dams regulation and local nonpoint source pollution. The seasonal variation in water quality is found associated with the climatic factors (hydrological and biochemical processes) and flow regulation. The analysis of land use indicates a good explanation for spatial distribution and seasonality of COD at the sub-catchment scale. Our results suggest that an integrated water quality measures including city sewage treatment, agricultural diffuse pollution control as well as joint scientific operations of river projects is needed for an effective water quality management in the Ying River basin.",
"title": ""
},
{
"docid": "6e5e6b361d113fa68b2ca152fbf5b194",
"text": "Spectral learning algorithms have recently become popular in data-rich domains, driven in part by recent advances in large scale randomized SVD, and in spectral estimation of Hidden Markov Models. Extensions of these methods lead to statistical estimation algorithms which are not only fast, scalable, and useful on real data sets, but are also provably correct. Following this line of research, we propose four fast and scalable spectral algorithms for learning word embeddings – low dimensional real vectors (called Eigenwords) that capture the “meaning” of words from their context. All the proposed algorithms harness the multi-view nature of text data i.e. the left and right context of each word, are fast to train and have strong theoretical properties. Some of the variants also have lower sample complexity and hence higher statistical power for rare words. We provide theory which establishes relationships between these algorithms and optimality criteria for the estimates they provide. We also perform thorough qualitative and quantitative evaluation of Eigenwords showing that simple linear approaches give performance comparable to or superior than the state-of-the-art non-linear deep learning based methods.",
"title": ""
},
{
"docid": "c04ae9e3721f23b8b0a5b8306c25becb",
"text": "A transmission-line model is developed for predicting the response of a twisted-wire pair (TWP) circuit in the presence of a ground plane, illuminated by a plane-wave electromagnetic field. The twisted pair is modeled as an ideal bifilar helix, the total coupling is separated into differential- (DM) and common-mode (CM) contributions, and closed-form expressions are derived for the equivalent induced sources. Approximate upper bounds to the terminal response of electrically long lines are obtained, and a simplified low-frequency circuit model is used to explain the mechanism of field-to-wire coupling in a TWP above ground, as well as the role of load balancing on the DM and CM electromagnetic noise induced in the terminal loads.",
"title": ""
},
{
"docid": "1d9b1ce73d8d2421092bb5a70016a142",
"text": "Social networks have the surprising property of being \"searchable\": Ordinary people are capable of directing messages through their network of acquaintances to reach a specific but distant target person in only a few steps. We present a model that offers an explanation of social network searchability in terms of recognizable personal identities: sets of characteristics measured along a number of social dimensions. Our model defines a class of searchable networks and a method for searching them that may be applicable to many network search problems, including the location of data files in peer-to-peer networks, pages on the World Wide Web, and information in distributed databases.",
"title": ""
},
{
"docid": "6a23480588ca47b9e53de0fd4ff1ecb1",
"text": "We present the nested Chinese restaurant process (nCRP), a stochastic process that assigns probability distributions to ensembles of infinitely deep, infinitely branching trees. We show how this stochastic process can be used as a prior distribution in a Bayesian nonparametric model of document collections. Specifically, we present an application to information retrieval in which documents are modeled as paths down a random tree, and the preferential attachment dynamics of the nCRP leads to clustering of documents according to sharing of topics at multiple levels of abstraction. Given a corpus of documents, a posterior inference algorithm finds an approximation to a posterior distribution over trees, topics and allocations of words to levels of the tree. We demonstrate this algorithm on collections of scientific abstracts from several journals. This model exemplifies a recent trend in statistical machine learning—the use of Bayesian nonparametric methods to infer distributions on flexible data structures.",
"title": ""
},
{
"docid": "097da6ee2d13e0b4b2f84a26752574f4",
"text": "Objective A sound theoretical foundation to guide practice is enhanced by the ability of nurses to critique research. This article provides a structured route to questioning the methodology of nursing research. Primary Argument Nurses may find critiquing a research paper a particularly daunting experience when faced with their first paper. Knowing what questions the nurse should be asking is perhaps difficult to determine when there may be unfamiliar research terms to grasp. Nurses may benefit from a structured approach which helps them understand the sequence of the text and the subsequent value of a research paper. Conclusion A framework is provided within this article to assist in the analysis of a research paper in a systematic, logical order. The questions presented in the framework may lead the nurse to conclusions about the strengths and weaknesses of the research methods presented in a research article. The framework does not intend to separate quantitative or qualitative paradigms but to assist the nurse in making broad observations about the nature of the research.",
"title": ""
},
{
"docid": "be06fc67973751b98dd07599e29e4b01",
"text": "The contactless version of the air-filled substrate integrated waveguide (AF-SIW) is introduced for the first time. The conventional AF-SIW configuration requires a pure and flawless connection of the covering layers to the intermediate substrate. To operate efficiently at high frequencies, this requires a costly fabrication process. In the proposed configuration, the boundary condition on both sides around the AF guiding medium is modified to obtain artificial magnetic conductor (AMC) boundary conditions. The AMC surfaces on both sides of the waveguide substrate are realized by a single-periodic structure with the new type of unit cells. The PEC–AMC parallel plates prevent the leakage of the AF guiding region. The proposed contactless AF-SIW shows low-loss performance in comparison with the conventional AF-SIW at millimeter-wave frequencies when the layers of both waveguides are connected poorly.",
"title": ""
},
{
"docid": "4283c9b6b679913648f758abeba2ab93",
"text": "A significant goal of natural language processing (NLP) is to devise a system capable of machine understanding of text. A typical system can be tested on its ability to answer questions based on a given context document. One appropriate dataset for such a system is the Stanford Question Answering Dataset (SQuAD), a crowdsourced dataset of over 100k (question, context, answer) triplets. In this work, we focused on creating such a question answering system through a neural net architecture modeled after the attentive reader and sequence attention mix models.",
"title": ""
},
{
"docid": "285587e0e608d8bafa0962b5cf561205",
"text": "BACKGROUND\nGeneralized Additive Model (GAM) provides a flexible and effective technique for modelling nonlinear time-series in studies of the health effects of environmental factors. However, GAM assumes that errors are mutually independent, while time series can be correlated in adjacent time points. Here, a GAM with Autoregressive terms (GAMAR) is introduced to fill this gap.\n\n\nMETHODS\nParameters in GAMAR are estimated by maximum partial likelihood using modified Newton's method, and the difference between GAM and GAMAR is demonstrated using two simulation studies and a real data example. GAMM is also compared to GAMAR in simulation study 1.\n\n\nRESULTS\nIn the simulation studies, the bias of the mean estimates from GAM and GAMAR are similar but GAMAR has better coverage and smaller relative error. While the results from GAMM are similar to GAMAR, the estimation procedure of GAMM is much slower than GAMAR. In the case study, the Pearson residuals from the GAM are correlated, while those from GAMAR are quite close to white noise. In addition, the estimates of the temperature effects are different between GAM and GAMAR.\n\n\nCONCLUSIONS\nGAMAR incorporates both explanatory variables and AR terms so it can quantify the nonlinear impact of environmental factors on health outcome as well as the serial correlation between the observations. It can be a useful tool in environmental epidemiological studies.",
"title": ""
},
{
"docid": "17953a3e86d3a4396cbd8a911c477f07",
"text": "We introduce Deep Semantic Embedding (DSE), a supervised learning algorithm which computes semantic representation for text documents by respecting their similarity to a given query. Unlike other methods that use singlelayer learning machines, DSE maps word inputs into a lowdimensional semantic space with deep neural network, and achieves a highly nonlinear embedding to model the human perception of text semantics. Through discriminative finetuning of the deep neural network, DSE is able to encode the relative similarity between relevant/irrelevant document pairs in training data, and hence learn a reliable ranking score for a query-document pair. We present test results on datasets including scientific publications and user-generated knowledge base.",
"title": ""
},
{
"docid": "184d34ef560809aad938c0e08939a1bb",
"text": "Mechanical engineers apply principles of motion, energy, force, materials, and mathematics to design and analyze a wide variety of products and systems. The field requires an understanding of core concepts including mechanics, kinematics, thermodynamics, heat transfer, materials science and controls. Mechanical engineers use these core principles along with tools like computer-aided engineering and product life cycle management to design and analyze manufacturing plants, industrial equipment and machinery, heating and cooling systems, automotive systems, aircraft, robotics, medical devices, and more. Today, mechanical engineers are pursuing developments in such fields as composites, mechatronics, and nanotechnology, and are helping to create a more sustainable future.",
"title": ""
},
{
"docid": "69dea04dc13754f7f89a1e7b7d973659",
"text": "The nature of congestion feedback largely governs the behavior of congestion control. In datacenter networks, where RTTs are in hundreds of microseconds, accurate feedback is crucial to achieve both high utilization and low queueing delay. Proposals for datacenter congestion control predominantly leverage ECN or even explicit innetwork feedback (e.g., RCP-type feedback) to minimize the queuing delay. In this work we explore latency-based feedback as an alternative and show its advantages over ECN. Against the common belief that such implicit feedback is noisy and inaccurate, we demonstrate that latencybased implicit feedback is accurate enough to signal a single packet’s queuing delay in 10 Gbps networks. DX enables accurate queuing delay measurements whose error falls within 1.98 and 0.53 microseconds using software-based and hardware-based latency measurements, respectively. This enables us to design a new congestion control algorithm that performs fine-grained control to adjust the congestion window just enough to achieve very low queuing delay while attaining full utilization. Our extensive evaluation shows that 1) the latency measurement accurately reflects the one-way queuing delay in single packet level; 2) the latency feedback can be used to perform practical and fine-grained congestion control in high-speed datacenter networks; and 3) DX outperforms DCTCP with 5.33x smaller median queueing delay at 1 Gbps and 1.57x at 10 Gbps.",
"title": ""
},
{
"docid": "3d2060ef33910ef1c53b0130f3cc3ffc",
"text": "Recommender systems help users deal with information overload and enjoy a personalized experience on the Web. One of the main challenges in these systems is the item cold-start problem which is very common in practice since modern online platforms have thousands of new items published every day. Furthermore, in many real-world scenarios, the item recommendation tasks are based on users’ implicit preference feedback such as whether a user has interacted with an item. To address the above challenges, we propose a probabilistic modeling approach called Neural Semantic Personalized Ranking (NSPR) to unify the strengths of deep neural network and pairwise learning. Specifically, NSPR tightly couples a latent factor model with a deep neural network to learn a robust feature representation from both implicit feedback and item content, consequently allowing our model to generalize to unseen items. We demonstrate NSPR’s versatility to integrate various pairwise probability functions and propose two variants based on the Logistic and Probit functions. We conduct a comprehensive set of experiments on two real-world public datasets and demonstrate that NSPR significantly outperforms the state-of-the-art baselines.",
"title": ""
},
{
"docid": "836f0a9a843802dda2b9ca7b166ef5f8",
"text": "Article history: Available online xxxx",
"title": ""
}
] |
scidocsrr
|
f873879d7ab04fc97d9d16d9a84fbb4a
|
Excessive Long-Time Deflections of Prestressed Box Girders. I: Record-Span Bridge in Palau and Other Paradigms
|
[
{
"docid": "40533c0a32bd67ae4e63ddd5f0a92506",
"text": "Synopsis: The present paper presents in chapter 1 a model for the characterization of concrete creep and shrinkage in design of concrete structures (Model B3), which is simpler, agrees better with the experimental data and is better theoretically justified than the previous models. The model complies with the general guidelines recently formulated by RILEM TC-107ß1. Justifications of various aspects of the model and diverse refinements are given in Chapter 2, and many simple explanations are appended in the commentary at the end of Chapter 1 (these parts do not to be read by those who merely want to apply the model). The prediction model B3 is calibrated by a computerized data bank comprising practically all the relevant test data obtained in various laboratories throughout the world. The coefficients of variation of the deviations of the model from the data are distinctly smaller than those for the latest CEB model (1990), and much smaller than those for the previous model in ACI 209 (which was developed in the mid-1960’s). The model is simpler than the previous models (BP and BPKX) developed at Northwestern University, yet it has comparable accuracy and is more rational. The effect of concrete composition and design strength on the model parameters is the main source of error of the model. A method to reduce this error by updating one or two model parameters on the basis of short-time creep tests is given. The updating of model parameters is particularly important for high-strength concretes and other special concretes containing various admixtures, superplasticizers, water-reducing agents and pozzolanic materials. For the updating of shrinkage prediction, a new method in which the shrinkage half-time is calibrated by simultaneous measurements of water loss is presented. This approach circumvents the large sensitivity of the shrinkage extrapolation problem to small changes in the material parameters. The new model allows a more realistic assessment of the creep and shrinkage effects in concrete structures, which significantly affect the durability and long-time serviceability of civil engineering infrastructure.",
"title": ""
}
] |
[
{
"docid": "6858c559b78c6f2b5000c22e2fef892b",
"text": "Graph clustering is one of the key techniques for understanding the structures present in graphs. Besides cluster detection, identifying hubs and outliers is also a key task, since they have important roles to play in graph data mining. The structural clustering algorithm SCAN, proposed by Xu et al., is successfully used in many application because it not only detects densely connected nodes as clusters but also identifies sparsely connected nodes as hubs or outliers. However, it is difficult to apply SCAN to large-scale graphs due to its high time complexity. This is because it evaluates the density for all adjacent nodes included in the given graphs. In this paper, we propose a novel graph clustering algorithm named SCAN++. In order to reduce time complexity, we introduce new data structure of directly two-hop-away reachable node set (DTAR). DTAR is the set of two-hop-away nodes from a given node that are likely to be in the same cluster as the given node. SCAN++ employs two approaches for efficient clustering by using DTARs without sacrificing clustering quality. First, it reduces the number of the density evaluations by computing the density only for the adjacent nodes such as indicated by DTARs. Second, by sharing a part of the density evaluations for DTARs, it offers efficient density evaluations of adjacent nodes. As a result, SCAN++ detects exactly the same clusters, hubs, and outliers from large-scale graphs as SCAN with much shorter computation time. Extensive experiments on both real-world and synthetic graphs demonstrate the performance superiority of SCAN++ over existing approaches.",
"title": ""
},
{
"docid": "1643d808d96ac237a8e1d17704888f16",
"text": "Change is crucial for organizations in growing, highly competitive business environments. Theories of change describe the effectiveness with which organizations are able to modify their strategies, processes, and structures. The action research model, the positive model, and Lewin’s change model indicate the stages of organizational change. This study examined the three stages of Lewin’s model: unfreezing, movement, and refreezing. Although this model establishes general steps, additional information must be considered to adapt these steps to specific situations. This article presents a critical review of change theories for different stages of organizational change. In this critical review, change management offers a constructive framework for managing organizational change throughout different stages of the process. This review has theoretical and practical implications, which are discussed in this article. Immunity to change is also discussed. © 2016 Journal of Innovation & Knowledge. Published by Elsevier España, S.L.U. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Introduction and research questions The purpose of the study is to craft the relation between process model and change, this relation describes the ways of implementing change process by leader’s knowledge sharing, and this sharing identifies the stages of change process, and these stages delineate the functional significance between organizational change and change implementation. The organizational life has been made inevitable feature by global, technological and economic pace, and many models of organizational change have acknowledged the influence of implicit dimensions at one stage or more stages of organizational change process (Burke, 2008; Wilkins & Dyer, 1988), and these models imitate different granular levels affecting the process of organizational change, and each level of them identifies distinctive change implementation stages (By, 2005). A model of organizational change in Kurt Lewin’s three steps change process context was introduced in this study; which reflects momentous stages in change implementation process. Kurt Lewin’s model is the early fundamental planned change models explaining the striving forces to maintain the status quo and pushing for change (Lewin, 1947). To change the “quasi-stationary equilibrium” stage, ∗ Corresponding author. E-mail address: talib 14@yahoo.com (S.T. Hussain). one may increase the striving forces for change, or decrease the forces maintaining the status quo, or the combination of both forces for proactive and reactive organizational change through knowledge sharing of individual willingness with the help of stimulating change leadership style. The Lewin’s model was used from an ethnographic study assumed for the investigation of the Lewin’s model for change development, mediates implementation and leadership initiatives for change in complex organizations. The focus of this research on (i) how Lewin’s change model granulates change, (ii) how knowledge sharing affects the change implementation process, (iii) how employees involve in change and willingness to change, and (iv) how leadership style affects the organizational change process in organization. Model of organizational change",
"title": ""
},
{
"docid": "0e16b00e2d9059f3b50754fa8c07cc9d",
"text": "It is a combination of three components: 1) a collection of data structure types (the building blocks of any database that conforms to the model);\n 2) a collection of operators or inferencing rules, which can be applied to any valid instances of the data types listed in (1), to retrieve or derive data from any parts of those structures in any combinations desired;\n 3) a collection of general integrity rules, which implicitly or explicitly define the set of consistent database states or changes of state or both—these rules may sometimes be expressed as insert-update-delete rules.",
"title": ""
},
{
"docid": "bdf3417010f59745e4aaa1d47b71c70e",
"text": "Recent studies witness the success of Bag-of-Features (BoF) frameworks for video based human action recognition. The detection and description of local interest regions are two fundamental problems in BoF framework. In this paper, we propose a motion boundary based sampling strategy and spatialtemporal (3D) co-occurrence descriptors for action video representation and recognition. Our sampling strategy is partly inspired by the recent success of dense trajectory (DT) based features [1] for action recognition. Compared with DT, we densely sample spatial-temporal cuboids along motion boundary which can greatly reduce the number of valid trajectories while preserve the discriminative power. Moreover, we develop a set of 3D co-occurrence descriptors which take account of the spatial-temporal context within local cuboids and deliver rich information for recognition. Furthermore, we decompose each 3D co-occurrence descriptor at pixel level and bin level and integrate the decomposed components with a multi-channel framework, which can improve the performance significantly. To evaluate the proposed methods, we conduct extensive experiments on three benchmarks including KTH, YouTube and HMDB51. The results show that our sampling strategy significantly reduces the computational cost of point tracking without degrading performance. Meanwhile, we achieve superior performance than the state-ofthe-art methods. We report 95.6% on KTH, 87.6% on YouTube and 51.8% on HMDB51.",
"title": ""
},
{
"docid": "867d6a1aa9699ba7178695c45a10d23e",
"text": "A study of different on-line adaptive classifiers, using various feature types is presented. Motor imagery brain computer interface (BCI) experiments were carried out with 18 naive able-bodied subjects. Experiments were done with three two-class, cue-based, electroencephalogram (EEG)-based systems. Two continuously adaptive classifiers were tested: adaptive quadratic and linear discriminant analysis. Three feature types were analyzed, adaptive autoregressive parameters, logarithmic band power estimates and the concatenation of both. Results show that all systems are stable and that the concatenation of features with continuously adaptive linear discriminant analysis classifier is the best choice of all. Also, a comparison of the latter with a discontinuously updated linear discriminant analysis, carried out in on-line experiments with six subjects, showed that on-line adaptation performed significantly better than a discontinuous update. Finally a static subject-specific baseline was also provided and used to compare performance measurements of both types of adaptation",
"title": ""
},
{
"docid": "84a7592ccf4c79cb5cb4ed7dbbcc1af7",
"text": "AIM\nTo examine the relationships between workplace bullying, destructive leadership and team conflict, and physical health, strain, self-reported performance and intentions to quit among veterinarians in New Zealand, and how these relationships could be moderated by psychological capital and perceived organisational support.\n\n\nMETHODS\nData were collected by means of an online survey, distributed to members of the New Zealand Veterinary Association. Participation was voluntary and all responses were anonymous and confidential. Scores for the variables measured were based on responses to questions or statements with responses categorised on a linear scale. A series of regression analyses were used to assess mediation or moderation by intermediate variables on the relationships between predictor variables and dependent variables.\n\n\nRESULTS\nCompleted surveys were provided by 197 veterinarians, of which 32 (16.2%) had been bullied at work, i.e. they had experienced two or more negative acts at least weekly over the previous 6 months, and nine (4.6%) had experienced cyber-bullying. Mean scores for workplace bullying were higher for female than male respondents, and for non-managers than managers (p<0.01). Scores for workplace bullying were positively associated with scores for destructive leadership and team conflict, physical health, strain, and intentions to quit (p<0.001). Workplace bullying and team conflict mediated the relationship between destructive leadership and strain, physical health and intentions to quit. Perceived organisational support moderated the effects of workplace bullying on strain and self-reported job performance (p<0.05).\n\n\nCONCLUSIONS\nRelatively high rates of negative behaviour were reported by veterinarians in this study, with 16% of participants meeting an established criterion for having been bullied. The negative effects of destructive leadership on strain, physical health and intentions to quit were mediated by team conflict and workplace bullying. It should be noted that the findings of this study were based on a survey of self-selected participants and the findings may not represent the wider population of New Zealand veterinarians.",
"title": ""
},
{
"docid": "0368698acbd67accbb06e9a6d2559985",
"text": "Coreference resolution is one of the first stages in deep language understanding and its importance has been well recognized in the natural language processing community. In this paper, we propose a generative, unsupervised ranking model for entity coreference resolution by introducing resolution mode variables. Our unsupervised system achieves 58.44% F1 score of the CoNLL metric on the English data from the CoNLL-2012 shared task (Pradhan et al., 2012), outperforming the Stanford deterministic system (Lee et al., 2013) by 3.01%.",
"title": ""
},
{
"docid": "8aafa283b228bbaa7ff3e37e7ca0a861",
"text": "In order to meet the continuously increasing demands for high throughput in wireless networks, IEEE 802 LAN/MAN Standard Committee is developing IEEE 802.11ax: a new amendment for the Wi-Fi standard. This amendment provides various ways to improve the efficiency of Wi-Fi. The most revolutionary one is OFDMA. Apart from obvious advantages, such as decreasing overhead for short packet transmission at high rates and improving robustness to frequency selective interference, being used for uplink transmission, OFDMA can increase power spectral density and, consequently, user data rates. However, the gain of OFDMA mainly depends on the resource scheduling between users. The peculiarities of OFDMA implementation in Wi-Fi completely change properties of classic schedulers used in other OFDMA systems, e.g. LTE. In the paper, we consider the usage of OFDMA in Wi-Fi for uplink transmission. We study peculiarities of OFDMA in Wi-Fi, adapt classic schedulers to Wi-Fi, explaining why they do not perform well. Finally we develop a novel scheduler, MUTAX, and evaluate its performance with simulation.",
"title": ""
},
{
"docid": "4dd403bbecb8d03ebdd8de9923ee629b",
"text": "Phishing is a major problem on the Web. Despite the significant attention it has received over the years, there has been no definitive solution. While the state-of-the-art solutions have reasonably good performance, they require a large amount of training data and are not adept at detecting phishing attacks against new targets. In this paper, we begin with two core observations: (a) although phishers try to make a phishing webpage look similar to its target, they do not have unlimited freedom in structuring the phishing webpage, and (b) a webpage can be characterized by a small set of key terms, how these key terms are used in different parts of a webpage is different in the case of legitimate and phishing webpages. Based on these observations, we develop a phishing detection system with several notable properties: it requires very little training data, scales well to much larger test data, is language-independent, fast, resilient to adaptive attacks and implemented entirely on client-side. In addition, we developed a target identification component that can identify the target website that a phishing webpage is attempting to mimic. The target detection component is faster than previously reported systems and can help minimize false positives in our phishing detection system.",
"title": ""
},
{
"docid": "0bd7956dbee066a5b7daf4cbd5926f35",
"text": "Computer networks lack a general control paradigm, as traditional networks do not provide any networkwide management abstractions. As a result, each new function (such as routing) must provide its own state distribution, element discovery, and failure recovery mechanisms. We believe this lack of a common control platform has significantly hindered the development of flexible, reliable and feature-rich network control planes. To address this, we present Onix, a platform on top of which a network control plane can be implemented as a distributed system. Control planes written within Onix operate on a global view of the network, and use basic state distribution primitives provided by the platform. Thus Onix provides a general API for control plane implementations, while allowing them to make their own trade-offs among consistency, durability, and scalability.",
"title": ""
},
{
"docid": "6b73e2bf2c8de87e9ab749b1d72d3515",
"text": "We present a robust framework for estimating non-rigid 3D shape and motion in video sequences. Given an input video sequence, and a user-specified region to reconstruct, the algorithm automatically solves for the 3D time-varying shape and motion of the object, and estimates which pixels are outliers, while learning all system parameters, including a PDF over non-rigid deformations. There are no user-tuned parameters (other than initialization); all parameters are learned by maximizing the likelihood of the entire image stream. We apply our method to both rigid and non-rigid shape reconstruction, and demonstrate it in challenging cases of occlusion and variable illumination.",
"title": ""
},
{
"docid": "f47ff71a0fb0363c5c27d2579ee1961a",
"text": "The advent of 4G LTE has ushered in a growing demand for embedded antennas that can cover a wide range of frequency bands from 698 MHz to 2.69 GHz. A novel active antenna design is presented in this paper that is capable of covering a wide range of LTE bands while being constrained to a 1.8 cm3 volume. The antenna structure utilizes Ethertronics EtherChip 2.0 to add tunability to the antenna structure. The paper details the motivation behind developing the antenna and further discusses the fabrication of the active antenna architecture on an evaluation board and presents the measured results.",
"title": ""
},
{
"docid": "44f1016cb2dfebbb8500a35985dddac0",
"text": "Classification of entities based on the underlying network structure is an important problem. Networks encountered in practice are sparse and have many missing and noisy links. Statistical learning techniques have been used in intra-network classification; however, they typically exploit only the local neighborhood, so may not perform well. In this paper, we propose a novel structural neighborhood-based classifier learning using a random walk. For classifying a node, we take a random walk from the node and make a decision based on how nodes in the respective k^th-level neighborhood are labeled. We observe that random walks of short length are helpful in classification. Emphasizing role of longer random walks may cause the underlying Markov chain to converge to a stationary distribution. Considering this, we take a lazy random walk based approach with variable termination probability for each node, based on the node's structural properties including its degree. Our experimental study on real world datasets demonstrates the superiority of the proposed approach over the existing state-of-the-art approaches.",
"title": ""
},
{
"docid": "c62a2280367b4d7c6a715c92a9696bae",
"text": "OBJECTIVES\nPain assessment is essential to tailor intensive care of neonates. The present focus is on acute procedural pain; assessment of pain of longer duration remains a challenge. We therefore tested a modified version of the COMFORT-behavior scale-named COMFORTneo-for its psychometric qualities in the Neonatal Intensive Care Unit setting.\n\n\nMETHODS\nIn a clinical observational study, nurses assessed patients with COMFORTneo and Numeric Rating Scales (NRS) for pain and distress, respectively. Interrater reliability, concurrent validity, and sensitivity to change were calculated as well as sensitivity and specificity for different cut-off scores for subsets of patients.\n\n\nRESULTS\nInterrater reliability was good: median linearly weighted Cohen kappa 0.79. Almost 3600 triple ratings were obtained for 286 neonates. Internal consistency was good (Cronbach alpha 0.84 and 0.88). Concurrent validity was demonstrated by adequate and good correlations, respectively, with NRS-pain and NRS-distress: r=0.52 (95% confidence interval 0.44-0.59) and r=0.70 (95% confidence interval 0.64-0.75). COMFORTneo cut-off scores of 14 or higher (score range is 6 to 30) had good sensitivity and specificity (0.81 and 0.90, respectively) using NRS-pain or NRS-distress scores of 4 or higher as criterion.\n\n\nDISCUSSION\nThe COMFORTneo showed preliminary reliability. No major differences were found in cut-off values for low birth weight, small for gestational age, neurologic impairment risk levels, or sex. Multicenter studies should focus on establishing concurrent validity with other instruments in a patient group with a high probability of ongoing pain.",
"title": ""
},
{
"docid": "8f83c7efb262f996f67424412f6b2ddb",
"text": "Apache ZooKeeper is a distributed data storage that is highly concurrent and asynchronous due to network communication, testing such a system is very challenging. Our solution using the tool \"Modbat\" generates test cases for concurrent client sessions, and processes results from synchronous and asynchronous callbacks. We use an embedded model checker to compute the test oracle for non-deterministic outcomes, the oracle model evolves dynamically with each new test step. Our work has detected multiple previously unknown defects in ZooKeeper. Finally, a thorough coverage evaluation of the core classes show how code and branch coverage strongly relate to feature coverage in the model, and hence modeling effort.",
"title": ""
},
{
"docid": "a6dff88ee5b1bfa2c7a4db85cd052815",
"text": "OBJECTIVE\nTo determine the effectiveness of 3-dimensional therapy in the treatment of adolescent idiopathic scoliosis.\n\n\nMETHODS\nWe carried out this study with 50 patients whose average age was 14.15 +/-1.69 years at the Physical Therapy and Rehabilitation School, Hacettepe University, Ankara, Turkey, from 1999 to 2004. We treated them as outpatients, 5 days a week, in a 4-hour program for the first 6 weeks. After that, they continued with the same program at home. We evaluated the Cobb angle, vital capacity and muscle strength of the patients before treatment, and after 6 weeks, 6 months and one year, and compared all the results.\n\n\nRESULTS\nThe average Cobb angle, which was 26.10 degrees on average before treatment, was 23.45 degrees after 6 weeks, 19.25 degrees after 6 months and 17.85 degrees after one year (p<0.01). The vital capacities, which were on average 2795 ml before treatment, reached 2956 ml after 6 weeks, 3125 ml after 6 months and 3215 ml after one year (p<0.01). Similarly, according to the results of evaluations after 6 weeks, 6 months and one year, we observed an increase in muscle strength and recovery of the postural defects in all patients (p<0.01).\n\n\nCONCLUSION\nSchroth`s technique positively influenced the Cobb angle, vital capacity, strength and postural defects in outpatient adolescents.",
"title": ""
},
{
"docid": "43db0f06e3de405657996b46047fa369",
"text": "Given two or more objects of general topology, intermediate objects are constructed by a distance field metamorphosis. In the presented method the interpolation of the distance field is guided by a warp function controlled by a set of corresponding anchor points. Some rules for defining a smooth least-distorting warp function are given. To reduce the distortion of the intermediate shapes, the warp function is decomposed into a rigid rotational part and an elastic part. The distance field interpolation method is modified so that the interpolation is done in correlation with the warp function. The method provides the animator with a technique that can be used to create a set of models forming a smooth transition between pairs of a given sequence of keyframe models. The advantage of the new approach is that it is capable of morphing between objects having a different topological genus where no correspondence between the geometric primitives of the models needs to be established. The desired correspondence is defined by an animator in terms of a relatively small number of anchor points",
"title": ""
},
{
"docid": "9e2db834da4eb5d226afec4f8dd58c4c",
"text": "This paper introduces a new hand gesture recognition technique to recognize Arabic sign language alphabet and converts it into voice correspondences to enable Arabian deaf people to interact with normal people. The proposed technique captures a color image for the hand gesture and converts it into YCbCr color space that provides an efficient and accurate way to extract skin regions from colored images under various illumination changes. Prewitt edge detector is used to extract the edges of the segmented hand gesture. Principal Component Analysis algorithm is applied to the extracted edges to form the predefined feature vectors for signs and gestures library. The Euclidean distance is used to measure the similarity between the signs feature vectors. The nearest sign is selected and the corresponding sound clip is played. The proposed technique is used to recognize Arabic sign language alphabets and the most common Arabic gestures. Specifically, we applied the technique to more than 150 signs and gestures with accuracy near to 97% at real time test for three different signers. The detailed of the proposed technique and the experimental results are discussed in this paper.",
"title": ""
},
{
"docid": "67c74094c42c06d88401ae81b1429956",
"text": "Research, first published over a decade ago, has shown that every 10% increase in the number of registered nurses (RNs) educated with the Bachelor of Science in Nursing (BSN) in hospital staff is associated with a 4 % decrease in the risk of death for patients.' Nurse staffs with higher proportions of BSN and Master of Science in Nursing (MSN) prepared nurses demonstrate increased productivity and better patient outcomes.^-^''''^' ' Therefore, in 2008 the American Nurses Association (ANA) House of Delegates resolved to support initiatives that require new diploma and associate degree (AD) prepared RNs to complete the BSN within ten years after initial licensure, exempting those individuals who are already licensed or enrolled as students in diploma or AD programs when legislation is enacted.' The Ohio Nurses Association (ONA) adopted this resolution in 2009 and the Ohio State Nursing Students'Association (OSNA) has endorsed the BSN in Ten initiative.",
"title": ""
},
{
"docid": "db2ebec1eeec213a867b10fe9550bfc7",
"text": "Photovoltaic method is very popular for generating electrical power. Its energy production depends on solar radiation on that location and orientation. Shadow rapidly decreases performance of the Photovoltaic system. In this research, it is being investigated that how exactly real-time shadow can be detected. In principle, 3D city models containing roof structure, vegetation, thematically differentiated surface and texture, are suitable to simulate exact real-time shadow. An automated procedure to measure exact shadow effect from the 3D city models and a long-term simulation model to determine the produced energy from the photovoltaic system is being developed here. In this paper, a method for detecting shadow for direct radiation has been discussed with its result using a 3D city model to perform a solar energy potentiality analysis. Figure 1. Partial Shadow on PV array (Reisa 2011). Former military area Scharnhauser Park shown in figure 2 has been choosen as the case study area for this research. It is an urban conversion and development area of 150 hecta res in the community of Ostfildern on the southern border near Stuttgart with 7000 inhabitants. About 80% heating energy demand of the whole area is supplied by renewable energies and a small portion of electricity is delivered by existing roof top photovoltaic system (Tereci et al, 2009). This has been selected as the study area for this research because of availability CityGML and LIDAR data, building footprints and existing photovoltaic cells on roofs and façades. Land Survey Office Baden-Wüttemberg provides the laser scanning data with a density of 4 points per square meter at a high resolution of 0.2 meter. The paper has been organized with a brief introduction at the beginning explaining background of photovoltaic energy and motivation for this research in. Then the effect of shadow on photovoltaic cells and a methodology for detecting shadow from direct radiation. Then result has been shown applying the methodology and some brief idea about the future work of this research has been presented.",
"title": ""
}
] |
scidocsrr
|
60f63d99f7e8b5b0cbd892a65ccb2833
|
Fetus-in-fetu: a pediatric rarity
|
[
{
"docid": "d1be704e4d81ab1466482a4924f00474",
"text": "Fetus-in-fetu (FIF) is a rare congenital condition in which a fetiform mass is detected in the host abdomen and also in other sites such as the intracranium, thorax, head, and neck. This condition has been rarely reported in the literature. Herein, we report the case of a fetus presenting with abdominal cystic mass and ascites and prenatally diagnosed as meconium pseudocyst. Explorative laparotomy revealed an irregular fetiform mass in the retroperitoneum within a fluid-filled cyst. The mass contained intestinal tract, liver, pancreas, and finger. Fetal abdominal cystic mass has been identified in a broad spectrum of diseases. However, as in our case, FIF is often overlooked during differential diagnosis. FIF should also be differentiated from other conditions associated with fetal abdominal masses.",
"title": ""
},
{
"docid": "972288070e8950cdb38410c30758d708",
"text": "INTRODUCTION\nFetus in fetu is an extremely rare condition wherein a malformed fetus is found in the abdomen of its twin. This entity is differentiated from teratoma by its embryological origin, its unusual location in the retroperitoneal space, and the presence of vertebral organization with limb buds and well-developed organ systems. The literature cites less than 100 cases worldwide of twin fetus in fetu.\n\n\nCASE PRESENTATION\nA two-and-a-half-month-old Asian Indian baby boy had two malformed fetuses in his abdomen. The pre-operative diagnosis was made by performing an ultrasound and a 64-slice computer tomography scan of the baby's abdomen. Two fetoid-like masses were successfully excised from the retroperitoneal area of his abdomen. A macroscopic examination, an X-ray of the specimen after operation, and the histological features observed were suggestive of twin fetus in fetu.\n\n\nCONCLUSION\nFetus in fetu is an extremely rare condition. Before any operation is carried out on a patient, imaging studies should first be conducted to differentiate this condition from teratoma. Surgical excision is a curative procedure, and a macroscopic examination of the sac should be done after twin or multiple fetus in fetu are excised.",
"title": ""
}
] |
[
{
"docid": "e7ecd827a48414f1f533fb30de203a6a",
"text": "Followership has been an understudied topic in the academic literature and an underappreciated topic among practitioners. Although it has always been important, the study of followership has become even more crucial with the advent of the information age and dramatic changes in the workplace. This paper provides a fresh look at followership by providing a synthesis of the literature and presents a new model for matching followership styles to leadership styles. The model’s practical value lies in its usefulness for describing how leaders can best work with followers, and how followers can best work with leaders.",
"title": ""
},
{
"docid": "3a91fef8ea690b5027e70ae1051ad136",
"text": "We consider words as a network of interacting letters, and approximate the probability distribution of states taken on by this network. Despite the intuition that the rules of English spelling are highly combinatorial (and arbitrary), we find that maximum entropy models consistent with pairwise correlations among letters provide a surprisingly good approximation to the full statistics of four letter words, capturing ∼ 92% of the multi–information among letters and even ‘discovering’ real words that were not represented in the data from which the pairwise correlations were estimated. The maximum entropy model defines an energy landscape on the space of possible words, and local minima in this landscape account for nearly two–thirds of words used in written English.",
"title": ""
},
{
"docid": "d437d71047b70736f5a6cbf3724d62a9",
"text": "We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoderdecoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) “fool” pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.",
"title": ""
},
{
"docid": "13a23fe61319bc82b8b3e88ea895218c",
"text": "A new generation of robots is being designed for human occupied workspaces where safety is of great concern. This research demonstrates the use of a capacitive skin sensor for collision detection. Tests demonstrate that the sensor reduces impact forces and can detect and characterize collision events, providing information that may be used in the future for force reduction behaviors. Various parameters that affect collision severity, including interface friction, interface stiffness, end tip velocity and joint stiffness irrespective of controller bandwidth are also explored using the sensor to provide information about the contact force at the site of impact. Joint stiffness is made independent of controller bandwidth limitations using passive torsional springs of various stiffnesses. Results indicate a positive correlation between peak impact force and joint stiffness, skin friction and interface stiffness, with implications for future skin and robot link designs and post-collision behaviors.",
"title": ""
},
{
"docid": "d9791131cefcf0aa18befb25c12b65b2",
"text": "Medical record linkage is becoming increasingly important as clinical data is distributed across independent sources. To improve linkage accuracy we studied different name comparison methods that establish agreement or disagreement between corresponding names. In addition to exact raw name matching and exact phonetic name matching, we tested three approximate string comparators. The approximate comparators included the modified Jaro-Winkler method, the longest common substring, and the Levenshtein edit distance. We also calculated the combined root-mean square of all three. We tested each name comparison method using a deterministic record linkage algorithm. Results were consistent across both hospitals. At a threshold comparator score of 0.8, the Jaro-Winkler comparator achieved the highest linkage sensitivities of 97.4% and 97.7%. The combined root-mean square method achieved sensitivities higher than the Levenshtein edit distance or long-est common substring while sustaining high linkage specificity. Approximate string comparators increase deterministic linkage sensitivity by up to 10% compared to exact match comparisons and represent an accurate method of linking to vital statistics data.",
"title": ""
},
{
"docid": "4453c85d0fc1513e9657731d84896864",
"text": "A number of studies have looked at the prevalence rates of psychiatric disorders in the community in Pakistan over the last two decades. However, a very little information is available on psychiatric morbidity in primary health care. We therefore decided to measure prevalence of psychiatric disorders and their correlates among women from primary health care facilities in Lahore. We interviewed 650 women in primary health care settings in Lahore. We used a semi-structured interview and questionnaires to collect information during face-to-face interviews. Nearly two-third of the women (64.3%) in our study were diagnosed to have a psychiatric problem, while one-third (30.4%) suffered with Major Depressive Disorder. Stressful life events, verbal violence and battering were positively correlated with psychiatric morbidity and social support, using reasoning to resolve conflicts and education were negatively correlated with psychiatric morbidity. The prevalence of psychiatric disorders is in line with the prevalence figures found in community studies. Domestic violence is an important correlate which can be the focus of interventions.",
"title": ""
},
{
"docid": "367782d15691c3c1dfd25220643752f0",
"text": "Music streaming services increasingly incorporate additional music taxonomies (i.e., mood, activity, and genre) to provide users different ways to browse through music collections. However, these additional taxonomies can distract the user from reaching their music goal, and influence choice satisfaction. We conducted an online user study with an application called \"Tune-A-Find,\" where we measured participants' music taxonomy choice (mood, activity, and genre). Among 297 participants, we found that the chosen taxonomy is related to personality traits. We found that openness to experience increased the choice for browsing music by mood, while conscientiousness increased the choice for browsing music by activity. In addition, those high in neuroticism were most likely to browse for music by activity or genre. Our findings can support music streaming services to further personalize user interfaces. By knowing the user's personality, the user interface can adapt to the user's preferred way of music browsing.",
"title": ""
},
{
"docid": "2ee9ed8260e63721b8525724b0d65d5e",
"text": "Deep neural network classifiers are vulnerable to small input perturbations carefully generated by the adversaries. Injecting adversarial inputs during training, known as adversarial training, can improve robustness against one-step attacks, but not for unknown iterative attacks. To address this challenge, we propose to utilize embedding space for both classification and low-level (pixel-level) similarity learning to ignore unknown pixel level perturbation. During training, we inject adversarial images without replacing their corresponding clean images and penalize the distance between the two embeddings (clean and adversarial). This additional regularization encourages two similar images (clean and perturbed versions) to produce the same outputs, not necessarily the true labels, enhancing classifier’s robustness against pixel level perturbation. Next, we show iteratively generated adversarial images easily transfer between networks trained with the same strategy. Inspired by this observation, we also propose cascade adversarial training, which transfers the knowledge of the end results of adversarial training. We train a network from scratch by injecting iteratively generated adversarial images crafted from already defended networks in addition to one-step adversarial images from the network being trained. Experimental results show that cascade adversarial training together with our proposed low-level similarity learning efficiently enhance the robustness against iterative attacks, but at the expense of decreased robustness against one-step attacks. We show that combining those two techniques can also improve robustness under the worst case black box attack scenario.",
"title": ""
},
{
"docid": "3a5ef0db1fbbebd7c466a3b657e5e173",
"text": "Fully homomorphic encryption is faced with two problems now. One is candidate fully homomorphic encryption schemes are few. Another is that the efficiency of fully homomorphic encryption is a big question. In this paper, we propose a fully homomorphic encryption scheme based on LWE, which has better key size. Our main contributions are: (1) According to the binary-LWE recently, we choose secret key from binary set and modify the basic encryption scheme proposed in Linder and Peikert in 2010. We propose a fully homomorphic encryption scheme based on the new basic encryption scheme. We analyze the correctness and give the proof of the security of our scheme. The public key, evaluation keys and tensored ciphertext have better size in our scheme. (2) Estimating parameters for fully homomorphic encryption scheme is an important work. We estimate the concert parameters for our scheme. We compare these parameters between our scheme and Bra12 scheme. Our scheme have public key and private key that smaller by a factor of about logq than in Bra12 scheme. Tensored ciphertext in our scheme is smaller by a factor of about log2q than in Bra12 scheme. Key switching matrix in our scheme is smaller by a factor of about log3q than in Bra12 scheme.",
"title": ""
},
{
"docid": "c313450c7a72941060432d4e000d8ba0",
"text": "We propose an approach to generate geometric theorems from electronic images of diagrams automatically. The approach makes use of techniques of Hough transform to recognize geometric objects and their labels and of numeric verification to mine basic geometric relations. Candidate propositions are generated from the retrieved information by using six strategies and geometric theorems are obtained from the candidates via algebraic computation. Experiments with a preliminary implementation illustrate the effectiveness and efficiency of the proposed approach for generating nontrivial theorems from images of diagrams. This work demonstrates the feasibility of automated discovery of profound geometric knowledge from simple image data and has potential applications in geometric knowledge management and education.",
"title": ""
},
{
"docid": "b4002e27c1c656d71dc4277ea0cca9a9",
"text": "This paper proposes a distributionally robust approach to logistic regression. We use the Wasserstein distance to construct a ball in the space of probability distributions centered at the uniform distribution on the training samples. If the radius of this ball is chosen judiciously, we can guarantee that it contains the unknown datagenerating distribution with high confidence. We then formulate a distributionally robust logistic regression model that minimizes a worst-case expected logloss function, where the worst case is taken over all distributions in the Wasserstein ball. We prove that this optimization problem admits a tractable reformulation and encapsulates the classical as well as the popular regularized logistic regression problems as special cases. We further propose a distributionally robust approach based on Wasserstein balls to compute upper and lower confidence bounds on the misclassification probability of the resulting classifier. These bounds are given by the optimal values of two highly tractable linear programs. We validate our theoretical out-of-sample guarantees through simulated and empirical experiments.",
"title": ""
},
{
"docid": "eefb6ec5984b6641baedecc0bf3b44c4",
"text": "Gradient descent is prevalent for large-scale optimization problems in machine learning; especially it nowadays plays a major role in computing and correcting the connection strength of neural networks in deep learning. However, many gradient-based optimization methods contain more sensitive hyper-parameters which require endless ways of configuring. In this paper, we present a novel adaptive mechanism called adaptive exponential decay rate (AEDR). AEDR uses an adaptive exponential decay rate rather than a fixed and preconfigured one, and it can allow us to eliminate one otherwise tuning sensitive hyper-parameters. AEDR also can be used to calculate exponential decay rate adaptively by employing the moving average of both gradients and squared gradients over time. The mechanism is then applied to Adadelta and Adam; it reduces the number of hyper-parameters of Adadelta and Adam to only a single one to be turned. We use neural network of long short-term memory and LeNet to demonstrate how learning rate adapts dynamically. We show promising results compared with other state-of-the-art methods on four data sets, the IMDB (movie reviews), SemEval-2016 (sentiment analysis in twitter) (IMDB), CIFAR-10 and Pascal VOC-2012.",
"title": ""
},
{
"docid": "2a13609a94050c4477d94cf0d89cbdd3",
"text": "In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MATk learning on the classification calibration of the ATk loss and the error bounds of ATk-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets.",
"title": ""
},
{
"docid": "ea411e1666cf9f9e1220b0ec642d45de",
"text": "The night sky remains a largely unexplored frontier for biologists studying the behavior and physiology of free-ranging, nocturnal organisms. Conventional imaging tools and techniques such as night-vision scopes, infrared-reflectance cameras, flash cameras, and radar provide insufficient detail for the scale and resolution demanded by field researchers. A new tool is needed that is capable of imaging noninvasively in the dark at high-temporal and spatial resolution. Thermal infrared imaging represents the most promising such technology that is poised to revolutionize our ability to observe and document the behavior of free-ranging organisms in the dark. Herein we present several examples from our research on free-ranging bats that highlight the power and potential of thermal infrared imaging for the study of animal behavior, energetics and censusing of large colonies, among others. Using never-before-seen video footage and data, we have begun to answer questions that have puzzled biologists for decades, as well as to generate new hypotheses and insight. As we begin to appreciate the functional significance of the aerosphere as a dynamic environment that affects organisms at different spatial and temporal scales, thermal infrared imaging can be at the forefront of the effort to explore this next frontier.",
"title": ""
},
{
"docid": "bbe503ddce5f16bd968e4419d74e805b",
"text": "The financial industry has been strongly influenced by digitalization in the past few years reflected by the emergence of “FinTech,” which represents the marriage of “finance” and “information technology.” FinTech provides opportunities for the creation of new services and business models and poses challenges to traditional financial service providers. Therefore, FinTech has become a subject of debate among practitioners, investors, and researchers and is highly visible in the popular media. In this study, we unveil the drivers motivating the FinTech phenomenon perceived by the English and German popular press including the subjects discussed in the context of FinTech. This study is the first one to reflect the media perspective on the FinTech phenomenon in the research. In doing so, we extend the growing knowledge on FinTech and contribute to a common understanding in the financial and digital innovation literature. These study contributes to research in the areas of information systems, finance and interdisciplinary social sciences. Moreover, it brings value to practitioners (entrepreneurs, investors, regulators, etc.), who explore the field of FinTech.",
"title": ""
},
{
"docid": "d12d51010fcf4433c5a74a6fbead5cb5",
"text": "This paper introduces the power-density and temperature induced issues in the modern on-chip systems. In particular, the emerging Dark Silicon problem is discussed along with critical research challenges. Afterwards, an overview of key research efforts and concepts is presented that leverage dark silicon for performance and reliability optimization. In case temperature constraints are violated, an efficient dynamic thermal management technique is employed.",
"title": ""
},
{
"docid": "9d8b0a97eb195c972c1c0d989625a600",
"text": "Emerging millimeter-wave frequency applications require high performance, low-cost and compact devices and circuits. This is the reason why the Substrate Integrated Waveguide (SIW) technology, which combines some advantages of planar circuits and metallic waveguides, has focused a lot of attention in recent years. However, not all three-dimensional metallic waveguide devices and circuit are integrable in planar form. In its first section, this paper reviews recently proposed three-dimensional SIW devices that are taking advantages of the third-dimension to achieve either more compact or multidimensional circuits at millimeter wave frequencies. Also, in a second section, special interest is oriented to recent development of air-filled SIW based on low-cost multilayer printed circuit board (PCB) for high performance millimeter-wave substrate integrated circuits and systems.",
"title": ""
},
{
"docid": "6341ff36d4cdbc10f4bd864c95c89be2",
"text": "OBJECTIVE\nThe aim of this study was to evaluate the antibiotic resistance pattern of Psedomonas aeruginosa and its prevalence in patients with urinary tract infections (UTI) for effective treatment in a developing country like Pakistan.\n\n\nMETHODS\nThis is an observational study conducted for a period of ten months which ended on December 2013 at the Dr. Essa Laboratory and Diagnostic Centre in Karachi. A total of 4668 urine samples of UTI patients were collected and standard microbiological techniques were performed to identify the organisms in urine cultures. Antibiotic susceptibility testing was performed by Kirby-Bauer technique for twenty five commonly used antimicrobials and then analyzed on SPSS version 17.\n\n\nRESULTS\nP. aeruginosa was isolated in 254 cultures (5.4%). The most resistant drugs included Ceclor(100%) and Cefizox (100%) followed by Amoxil/Ampicillin (99.6%), Ceflixime (99.6%), Doxycycline (99.6%), Cefuroxime (99.2%), Cephradine (99.2%), Cotrimoxazole (99.2%), Nalidixic acid (98.8%), Pipemidic acid (98.6%) and Augmentin (97.6%).\n\n\nCONCLUSION\nEmerging resistant strains of Pseudomonas aeruginosa are potentially linked to injudicious use of drugs leading to ineffective empirical therapy and in turn, appearance of even more resistant strains of the bacterium. Therefore, we recommend culture and sensitivity testing to determine the presence of P.aeruginosa prior to specific antimicrobial therapy.",
"title": ""
},
{
"docid": "19f1f1156ca9464759169dd2d4005bf6",
"text": "We first consider the problem of partitioning the edges of a graph ~ into bipartite cliques such that the total order of the cliques is minimized, where the order of a clique is the number of vertices in it. It is shown that the problem is NP-complete. We then prove the existence of a partition of small total order in a sufficiently dense graph and devise an efilcient algorithm to compute such a partition. It turns out that our algorithm exhibits a trade-off between the total order of the partition and the running time. Next, we define the notion of a compression of a graph ~ and use the result on graph partitioning to efficiently compute an optimal compression for graphs of a given size. An interesting application of the graph compression result arises from the fact that several graph algorithms can be adapted to work with the compressed rep~esentation of the input graph, thereby improving the bound on their running times particularly on dense graphs. This makes use of the trade-off result we obtain from our partitioning algorithm. The algorithms analyzed include those for matchings, vertex connectivity, edge connectivity and shortest paths. In each case, we improve upon the running times of the best-known algorithms for these problems.",
"title": ""
},
{
"docid": "5d52830a1f24dfb74f9425dbc376728e",
"text": "In this paper, the performance of air-cored (ironless) stator axial flux permanent magnet machines with different types of concentrated-coil nonoverlapping windings is evaluated. The evaluation is based on theoretical analysis and is confirmed by finite-element analysis and measurements. It is shown that concentrated-coil winding machines can have a similar performance as that of normal overlapping winding machines using less copper.",
"title": ""
}
] |
scidocsrr
|
28d3b4dc16f47c32f28420a2dadd1e5e
|
Gorillas in our midst: sustained inattentional blindness for dynamic events.
|
[
{
"docid": "e997f8468d132f1e28e0d6a8801f6fb1",
"text": "Change-blindness, occurs when large changes are missed under natural viewing conditions because they occur simultaneously with a brief visual disruption, perhaps caused by an eye movement,, a flicker, a blink, or a camera cut in a film sequence. We have found that this can occur even when the disruption does not cover or obscure the changes. When a few small, high-contrast shapes are briefly spattered over a picture, like mudsplashes on a car windscreen, large changes can be made simultaneously in the scene without being noticed. This phenomenon is potentially important in driving, surveillance or navigation, as dangerous events occurring in full view can go unnoticed if they coincide with even very small, apparently innocuous, disturbances. It is also important for understanding how the brain represents the world.",
"title": ""
}
] |
[
{
"docid": "4aa1e87816ea5850339611d242edb1f4",
"text": "A scientific understanding of emotion experience requires information on the contexts in which the emotion is induced. Moreover, as one of the primary functions of music is to regulate the listener's mood, the individual's short-term music preference may reveal the emotional state of the individual. In light of these observations, this paper presents the first scientific study that exploits the online repository of social data to investigate the connections between a blogger's emotional state, user context manifested in the blog articles, and the content of the music titles the blogger attached to the post. A number of computational models are developed to evaluate the accuracy of different content or context cues in predicting emotional state, using 40,000 pieces of music listening records collected from the social blogging website LiveJournal. Our study shows that it is feasible to computationally model the latent structure underlying music listening and mood regulation. The average area under the receiver operating characteristic curve (AUC) for the content-based and context-based models attains 0.5462 and 0.6851, respectively. The association among user mood, music emotion, and individual's personality is also identified.",
"title": ""
},
{
"docid": "30ffdf90936f4b3c8feba45ae1449691",
"text": "Abstract Given a graph with node attributes, what neighborhoods1 are anomalous? To answer this question, one needs a quality score that utilizes both structure and attributes. Popular existing measures either quantify the structure only and ignore the attributes (e.g., conductance), or only consider the connectedness of the nodes inside the neighborhood and ignore the cross-edges at the boundary (e.g., density). In this work we propose normality, a new quality measure for attributed neighborhoods. Normality utilizes structure and attributes together to quantify both internal consistency and external separability. It exhibits two key advantages over other measures: (1) It allows many boundaryedges as long as they can be “exonerated”; i.e., either (i) are expected under a null model, and/or (ii) the boundary nodes do not exhibit the subset of attributes shared by the neighborhood members. Existing measures, in contrast, penalize boundary edges irrespectively. (2) Normality can be efficiently maximized to automatically infer the shared attribute subspace (and respective weights) that characterize a neighborhood. This efficient optimization allows us to process graphs with millions of attributes. We capitalize on our measure to present a novel approach for Anomaly Mining of Entity Neighborhoods (AMEN). Experiments on real-world attributed graphs illustrate the effectiveness of our measure at anomaly detection, outperforming popular approaches including conductance, density, OddBall, and SODA. In addition to anomaly detection, our qualitative analysis demonstrates the utility of normality as a powerful tool to contrast the correlation between structure and attributes across different graphs.",
"title": ""
},
{
"docid": "e79a335fb5dc6e2169484f8ac4130b35",
"text": "We obtained expressions for TE and TM modes of the planar hyperbolic secant (HS) waveguide. We found waveguide parameters for which the fundamental mode has minimal width. By FDTD-simulation we show propagation of TE-modes and periodical reconstruction of non-modal fields in bounded HS-waveguides. We show that truncated HS-waveguide focuses plane wave into spot with diameter 0.132 of wavelength.",
"title": ""
},
{
"docid": "1e5202850748b0f613807b0452eb89a2",
"text": "This paper introduces a hierarchical image merging scheme based on a multiresolution contrast decomposition (the ratio of low-pass pyramid). The composite images produced by this scheme preserve those details from the input images that are most relevant to visual perception. Some applications of the method are indicated.",
"title": ""
},
{
"docid": "eba769c6246b44d8ed7e5f08aac17731",
"text": "One hundred men, living in three villages in a remote region of the Eastern Highlands of Papua New Guinea were asked to judge the attractiveness of photographs of women who had undergone micrograft surgery to reduce their waist-to-hip ratios (WHRs). Micrograft surgery involves harvesting adipose tissue from the waist and reshaping the buttocks to produce a low WHR and an \"hourglass\" female figure. Men consistently chose postoperative photographs as being more attractive than preoperative photographs of the same women. Some women gained, and some lost weight, postoperatively, with resultant changes in body mass index (BMI). However, changes in BMI were not related to men's judgments of attractiveness. These results show that the hourglass female figure is rated as attractive by men living in a remote, indigenous community, and that when controlling for BMI, WHR plays a crucial role in their attractiveness judgments.",
"title": ""
},
{
"docid": "3b1addbef50c5020b88ae2e55c197085",
"text": "In this paper, we present a novel wide-band envelope detector comprising a fully-differential operational transconductance amplifier (OTA), a full-wave rectifier and a peak detector. To enhance the frequency performance of the envelop detector, we utilize a gyrator-C active inductor load in the OTA for wider bandwidth. Additionally, it is shown that the high-speed rectifier of the envelope detector requires high bias current instead of the sub-threshold bias condition. The experimental results show that the proposed envelope detector can work from 100-Hz to 1.6-GHz with an input dynamic range of 50-dB at 100-Hz and 40-dB at 1.6-GHz, respectively. The envelope detector was fabricated on the TSMC 0.18-um CMOS process with an active area of 0.652 mm2.",
"title": ""
},
{
"docid": "4162c6bbaac397ff24e337fa4af08abd",
"text": "We present a new model called LATTICERNN, which generalizes recurrent neural networks (RNNs) to process weighted lattices as input, instead of sequences. A LATTICERNN can encode the complete structure of a lattice into a dense representation, which makes it suitable to a variety of problems, including rescoring, classifying, parsing, or translating lattices using deep neural networks (DNNs). In this paper, we use LATTICERNNs for a classification task: each lattice represents the output from an automatic speech recognition (ASR) component of a spoken language understanding (SLU) system, and we classify the intent of the spoken utterance based on the lattice embedding computed by a LATTICERNN. We show that making decisions based on the full ASR output lattice, as opposed to 1-best or n-best hypotheses, makes SLU systems more robust to ASR errors. Our experiments yield improvements of 13% over a baseline RNN system trained on transcriptions and 10% over an nbest list rescoring system for intent classification.",
"title": ""
},
{
"docid": "ced98c32f887001d40e783ab7b294e1a",
"text": "This paper proposes a two-layer High Dynamic Range (HDR) coding scheme using a new tone mapping. Our tone mapping method transforms an HDR image onto a Low Dynamic Range (LDR) image by using a base map that is a smoothed version of the HDR luminance. In our scheme, the HDR image can be reconstructed from the tone mapped LDR image. Our method makes use of this property to realize a two-layer HDR coding by encoding both of the tone mapped LDR image and the base map. This paper validates its effectiveness of our approach through some experiments.",
"title": ""
},
{
"docid": "226392eec365706465eb9937b07f16b1",
"text": "Current evidence suggests that all of the major events in hominin evolution have occurred in East Africa. Over the last two decades, there has been intensive work undertaken to understand African palaeoclimate and tectonics in order to put together a coherent picture of how the environment of East Africa has varied in the past. The landscape of East Africa has altered dramatically over the last 10 million years. It has changed from a relatively flat, homogenous region covered with mixed tropical forest, to a varied and heterogeneous environment, with mountains over 4 km high and vegetation ranging from desert to cloud forest. The progressive rifting of East Africa has also generated numerous lake basins, which are highly sensitive to changes in the local precipitation-evaporation regime. There is now evidence that the presence of precession-driven, ephemeral deep-water lakes in East Africa were concurrent with major events in hominin evolution. It seems the unusual geology and climate of East Africa created periods of highly variable local climate, which, it has been suggested could have driven hominin speciation, encephalisation and dispersal out of Africa. One example is the significant hominin speciation and brain expansion event at ~1.8 Ma that seems to have been coeval with the occurrence of highly variable, extensive, deep-water lakes. This complex, climatically very variable setting inspired first the variability selection hypothesis, which was then the basis for the pulsed climate variability hypothesis. The newer of the two suggests that the long-term drying trend in East Africa was punctuated by episodes of short, alternating periods of extreme humidity and aridity. Both hypotheses, together with other key theories of climate-evolution linkages, are discussed in this paper. Though useful the actual evolution mechanisms, which led to early hominins are still unclear and continue to be debated. However, it is clear that an understanding of East African lakes and their palaeoclimate history is required to understand the context within which humans evolved and eventually left East Africa. © 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/3.0/).",
"title": ""
},
{
"docid": "46326a60018e55397ecdc23a67afdc01",
"text": "Human communication includes information, opinions and reactions. Reactions are often captured by the affective-messages in written as well as verbal communications. While there has been work in affect modeling and to some extent affective content generation, the area of affective word distributions is not well studied. Synsets and lexica capture semantic relationships across words. These models, however, lack in encoding affective or emotional word interpretations. Our proposed model, Aff2Vec, provides a method for enriched word embeddings that are representative of affective interpretations of words. Aff2Vec outperforms the state-of-the-art in intrinsic word-similarity tasks. Further, the use of Aff2Vec representations outperforms baseline embeddings in downstream natural language understanding tasks including sentiment analysis, personality detection, and frustration prediction.",
"title": ""
},
{
"docid": "c7a15659f2fe5f67da39b77a3eb19549",
"text": "Privacy breaches and their regulatory implications have attracted corporate attention in recent times. An often overlooked cause of privacy breaches is human error. In this study, we first apply a model based on the widely accepted GEMS error typology to analyze publicly reported privacy breach incidents within the U.S. Then, based on an examination of the causes of the reported privacy breach incidents, we propose a defense-in-depth solution strategy founded on error avoidance, error interception, and error correction. Finally, we illustrate the application of the proposed strategy to managing human error in the case of the two leading causes of privacy breach incidents. This study finds that mistakes in the information processing stage constitute the most cases of human errorrelated privacy breach incidents, clearly highlighting the need for effective policies and their enforcement in organizations. a 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e4227748f8fd9704aba160669dcdef52",
"text": "Broadly, artificial intelligence (AI) mainly entails technology constellations such as machine learning, natural language processing, perception, and reasoning since it is difficult to define [1]. Even though the field’s application and principles have undergone investigation for more than sixty-five years, modern improvements, attendant society excitement, and uses ensured its return to focus. The influence of the previous artificial intelligence systems is evident, introducing both opportunities and challenges, which enables the integration of future AI advances into the economic and social environments. It is apparent that most people today view AI as a robotics concept but it essentially incorporates broader technology ranges that are used widely [2]. From search engines to speech recognition, to learning/gaming structures and object detection, AI application has the potential to intensify in the human daily lives. The application is already experiencing use in the world of business as companies seek to study the needs of the consumers, as well as, other fields including healthcare and crime investigation. In this paper, I will discuss the perceptions of consumers regarding artificial intelligence and outline its impact in retail, healthcare, crime investigation, and employment.",
"title": ""
},
{
"docid": "4791e1e3ccde1260887d3a80ea4577b6",
"text": "The fabulous results of Deep Convolution Neural Networks in computer vision and image analysis have recently attracted considerable attention from researchers of other application domains as well. In this paper we present NgramCNN, a neural network architecture we designed for sentiment analysis of long text documents. It uses pretrained word embeddings for dense feature representation and a very simple single-layer classifier. The complexity is encapsulated in feature extraction and selection parts that benefit from the effectiveness of convolution and pooling layers. For evaluation we utilized different kinds of emotional text datasets and achieved an accuracy of 91.2 % accuracy on the popular IMDB movie reviews. NgramCNN is more accurate than similar shallow convolution networks or deeper recurrent networks that were used as baselines. In the future, we intent to generalize the architecture for state of the art results in sentiment analysis of variable-length texts.",
"title": ""
},
{
"docid": "8787335d8f5a459dc47b813fd385083b",
"text": "Human papillomavirus infection can cause a variety of benign or malignant oral lesions, and the various genotypes can cause distinct types of lesions. To our best knowledge, there has been no report of 2 different human papillomavirus-related oral lesions in different oral sites in the same patient before. This paper reported a patient with 2 different oral lesions which were clinically and histologically in accord with focal epithelial hyperplasia and oral papilloma, respectively. Using DNA extracted from these 2 different lesions, tissue blocks were tested for presence of human papillomavirus followed by specific polymerase chain reaction testing for 6, 11, 13, 16, 18, and 32 subtypes in order to confirm the clinical diagnosis. Finally, human papillomavirus-32-positive focal epithelial hyperplasia accompanying human papillomavirus-16-positive oral papilloma-like lesions were detected in different sites of the oral mucosa. Nucleotide sequence sequencing further confirmed the results. So in our clinical work, if the simultaneous occurrences of different human papillomavirus associated lesions are suspected, the multiple biopsies from different lesions and detection of human papillomavirus genotype are needed to confirm the diagnosis.",
"title": ""
},
{
"docid": "9abd7aedf336f32abed7640dd3f4d619",
"text": "BACKGROUND\nAlthough evidence-based and effective treatments are available for people with depression, a substantial number does not seek or receive help. Therefore, it is important to gain a better understanding of the reasons why people do or do not seek help. This study examined what predisposing and need factors are associated with help-seeking among people with major depression.\n\n\nMETHODS\nA cross-sectional study was conducted in 102 subjects with major depression. Respondents were recruited from the general population in collaboration with three Municipal Health Services (GGD) across different regions in the Netherlands. Inclusion criteria were: being aged 18 years or older, a high score on a screening instrument for depression (K10 > 20), and a diagnosis of major depression established through the Composite International Diagnostic Interview (CIDI 2.1).\n\n\nRESULTS\nOf the total sample, 65 % (n = 66) had received help in the past six months. Results showed that respondents with a longer duration of symptoms and those with lower personal stigma were more likely to seek help. Other determinants were not significantly related to help-seeking.\n\n\nCONCLUSIONS\nLonger duration of symptoms was found to be an important determinant of help-seeking among people with depression. It is concerning that stigma was related to less help-seeking. Knowledge and understanding of depression should be promoted in society, hopefully leading to reduced stigma and increased help-seeking.",
"title": ""
},
{
"docid": "c1672220aef9aa7a6257d8ff644ae378",
"text": "We present Component-Based Simplex Architecture (CBSA), a new framework for assuring the runtime safety of component-based cyber-physical systems (CPSs). CBSA integrates Assume-Guarantee (A-G) reasoning with the core principles of the Simplex control architecture to allow component-based CPSs to run advanced, uncertified controllers while still providing runtime assurance that A-G contracts and global properties are satisfied. In CBSA, multiple Simplex instances, which can be composed in a nested, serial or parallel manner, coordinate to assure system-wide properties. Combining A-G reasoning and the Simplex architecture is a challenging problem that yields significant benefits. By utilizing A-G contracts, we are able to compositionally determine the switching logic for CBSAs, thereby alleviating the state explosion encountered by other approaches. Another benefit is that we can use A-G proof rules to decompose the proof of system-wide safety assurance into sub-proofs corresponding to the component-based structure of the system architecture. We also introduce the notion of coordinated switching between Simplex instances, a key component of our compositional approach to reasoning about CBSA switching logic. We illustrate our framework with a component-based control system for a ground rover. We formally prove that the CBSA for this system guarantees energy safety (the rover never runs out of power), and collision freedom (the rover never collides with a stationary obstacle). We also consider a CBSA for the rover that guarantees mission completion: all target destinations visited within a prescribed amount of time.",
"title": ""
},
{
"docid": "84e8986eff7cb95808de8df9ac286e37",
"text": "The purpose of this thesis is to describe one-shot-learning gesture recognition systems developed on the ChaLearn Gesture Dataset [3]. We use RGB and depth images and combine appearance (Histograms of Oriented Gradients) and motion descriptors (Histogram of Optical Flow) for parallel temporal segmentation and recognition. The Quadratic-Chi distance family is used to measure differences between histograms to capture cross-bin relationships. We also propose a new algorithm for trimming videos — to remove all the unimportant frames from videos. Our two methods both outperform other published methods and help narrow down the gap between human performance and algorithms on this task. The code has been made publicly available in the MLOSS repository.",
"title": ""
},
{
"docid": "670ad989fb45d87b898aafe571bac3a9",
"text": "As an emerging technology to support scalable content-based image retrieval (CBIR), hashing has recently received great attention and became a very active research domain. In this study, we propose a novel unsupervised visual hashing approach called semantic-assisted visual hashing (SAVH). Distinguished from semi-supervised and supervised visual hashing, its core idea is to effectively extract the rich semantics latently embedded in auxiliary texts of images to boost the effectiveness of visual hashing without any explicit semantic labels. To achieve the target, a unified unsupervised framework is developed to learn hash codes by simultaneously preserving visual similarities of images, integrating the semantic assistance from auxiliary texts on modeling high-order relationships of inter-images, and characterizing the correlations between images and shared topics. Our performance study on three publicly available image collections: Wiki, MIR Flickr, and NUS-WIDE indicates that SAVH can achieve superior performance over several state-of-the-art techniques.",
"title": ""
},
{
"docid": "5325beaeca7307b20d18b0ce79a2819e",
"text": "It is becoming increasingly necessary for organizations to build a Cyber Threat Intelligence (CTI) platform to fight against sophisticated attacks. To reduce the risk of cyber attacks, security administrators and/or analysts can use a CTI platform to aggregate relevant threat information about adversaries, targets and vulnerabilities, analyze it and share key observations from the analysis with collaborators. In this paper, we introduce CyTIME (Cyber Threat Intelligence ManagEment framework) which is a framework for managing CTI data. CyTIME can periodically collect CTI data from external CTI data repositories via standard interfaces such as Trusted Automated Exchange of Indicator Information (TAXII). In addition, CyTIME is designed to automatically generate security rules without human intervention to mitigate discovered new cybersecurity threats in real time. To show the feasibility of CyTIME, we performed experiments to measure the time to complete the task of generating the security rule corresponding to a given CTI data. We used 1,000 different CTI files related to network attacks. Our experiment results demonstrate that CyTIME automatically generates security rules and store them into the internal database within 12.941 seconds on average (max = 13.952, standard deviation = 0.580).",
"title": ""
},
{
"docid": "0745755e5347c370cdfbeca44dc6d288",
"text": "For many decades correlation and power spectrum have been primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence; which is sufficient for complete statistical descriptions of Gaussian signals of known means. However, there are practical situations where one needs to look beyond autocorrelation of a signal to extract information regarding deviation from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in the signal. Most of the biomedical signals are non-linear, non-stationary and non-Gaussian in nature and therefore it can be more advantageous to analyze them with HOS compared to the use of second-order correlations and power spectra. In this paper we have discussed the application of HOS for different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal and applications to other signals are reviewed.",
"title": ""
}
] |
scidocsrr
|
7a0f62907aa81d85d6c10fea67548d64
|
Shared Embedding Based Neural Networks for Knowledge Graph Completion
|
[
{
"docid": "8093219e7e2b4a7067f8d96118a5ea93",
"text": "We model knowledge graphs for their completion by encoding each entity and relation into a numerical space. All previous work including Trans(E, H, R, and D) ignore the heterogeneity (some relations link many entity pairs and others do not) and the imbalance (the number of head entities and that of tail entities in a relation could be different) of knowledge graphs. In this paper, we propose a novel approach TranSparse to deal with the two issues. In TranSparse, transfer matrices are replaced by adaptive sparse matrices, whose sparse degrees are determined by the number of entities (or entity pairs) linked by relations. In experiments, we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages. We evaluate our approach on triplet classification and link prediction tasks. Experimental results show that TranSparse outperforms Trans(E, H, R, and D) significantly, and achieves state-ofthe-art performance.",
"title": ""
}
] |
[
{
"docid": "bd7664e9ff585a48adca12c0a8d9bf95",
"text": "Fueled by the widespread adoption of sensor-enabled smartphones, mobile crowdsourcing is an area of rapid innovation. Many crowd-powered sensor systems are now part of our daily life -- for example, providing highway congestion information. However, participation in these systems can easily expose users to a significant drain on already limited mobile battery resources. For instance, the energy burden of sampling certain sensors (such as WiFi or GPS) can quickly accumulate to levels users are unwilling to bear. Crowd system designers must minimize the negative energy side-effects of participation if they are to acquire and maintain large-scale user populations.\n To address this challenge, we propose Piggyback CrowdSensing (PCS), a system for collecting mobile sensor data from smartphones that lowers the energy overhead of user participation. Our approach is to collect sensor data by exploiting Smartphone App Opportunities -- that is, those times when smartphone users place phone calls or use applications. In these situations, the energy needed to sense is lowered because the phone need no longer be woken from an idle sleep state just to collect data. Similar savings are also possible when the phone either performs local sensor computation or uploads the data to the cloud. To efficiently use these sporadic opportunities, PCS builds a lightweight, user-specific prediction model of smartphone app usage. PCS uses this model to drive a decision engine that lets the smartphone locally decide which app opportunities to exploit based on expected energy/quality trade-offs.\n We evaluate PCS by analyzing a large-scale dataset (containing 1,320 smartphone users) and building an end-to-end crowdsourcing application that constructs an indoor WiFi localization database. Our findings show that PCS can effectively collect large-scale mobile sensor datasets (e.g., accelerometer, GPS, audio, image) from users while using less energy (up to 90% depending on the scenario) compared to a representative collection of existing approaches.",
"title": ""
},
{
"docid": "175fa180bc18a59dd6855d469aed91ec",
"text": "A new solution of the inverse kinematics task for a 3-DOF parallel manipulator with a R-P -S joint structure is obtained for a given position of end-effector in the form of simple position equations. Based on this the number of the inverse kinematics task solutions was investigated, in general, equal to four. We identify the size of the manipulator feasible area and simple relationships are found between the position and orientation of the platform. We prove a new theorem stating that, while the end-effector traces a circular horizontal path with its centre at the vertical z-axis, the norm of the joint coordinates vector remains constant.",
"title": ""
},
{
"docid": "c2c5f0f8b4647c651211b50411382561",
"text": "Obesity is a multifactorial disease that results from a combination of both physiological, genetic, and environmental inputs. Obesity is associated with adverse health consequences, including T2DM, cardiovascular disease, musculoskeletal disorders, obstructive sleep apnea, and many types of cancer. The probability of developing adverse health outcomes can be decreased with maintained weight loss of 5% to 10% of current body weight. Body mass index and waist circumference are 2 key measures of body fat. A wide variety of tools are available to assess obesity-related risk factors and guide management.",
"title": ""
},
{
"docid": "e003dd850e8ca294a45e2bec122945c3",
"text": "In this paper, we address the problem of determining optimal hyper-parameters for support vector machines (SVMs). The standard way for solving the model selection problem is to use grid search. Grid search constitutes an exhaustive search over a pre-defined discretized set of possible parameter values and evaluating the cross-validation error until the best is found. We developed a bi-level optimization approach to solve the model selection problem for linear and kernel SVMs, including the extension to learn several kernel parameters. Using this method, we can overcome the discretization of the parameter space using continuous optimization, and the complexity of the method only increases linearly with the number of parameters (instead of exponentially using grid search). In experiments, we determine optimal hyper-parameters based on different smooth estimates of the cross-validation error and find that only very few iterations of bi-level optimization yield good classification rates.",
"title": ""
},
{
"docid": "15e440bc952db5b0ad71617e509770b9",
"text": "The task of recommending relevant scientific literature for a draft academic paper has recently received significant interest. In our effort to ease the discovery of scientific literature and augment scientific writing, we aim to improve the relevance of results based on a shallow semantic analysis of the source document and the potential documents to recommend. We investigate the utility of automatic argumentative and rhetorical annotation of documents for this purpose. Specifically, we integrate automatic Core Scientific Concepts (CoreSC) classification into a prototype context-based citation recommendation system and investigate its usefulness to the task. We frame citation recommendation as an information retrieval task and we use the categories of the annotation schemes to apply different weights to the similarity formula. Our results show interesting and consistent correlations between the type of citation and the type of sentence containing the relevant information.",
"title": ""
},
{
"docid": "0b7f00dcdfdd1fe002b2363097914bba",
"text": "A new field of research, visual analytics, has been introduced. This has been defined as \"the science of analytical reasoning facilitated by interactive visual interfaces\" (Thomas and Cook, 2005). Visual analytic environments, therefore, support analytical reasoning using visual representations and interactions, with data representations and transformation capabilities, to support production, presentation, and dissemination. As researchers begin to develop visual analytic environments, it is advantageous to develop metrics and methodologies to help researchers measure the progress of their work and understand the impact their work has on the users who work in such environments. This paper presents five areas or aspects of visual analytic environments that should be considered as metrics and methodologies for evaluation are developed. Evaluation aspects need to include usability, but it is necessary to go beyond basic usability. The areas of situation awareness, collaboration, interaction, creativity, and utility are proposed as the five evaluation areas for initial consideration. The steps that need to be undertaken to develop systematic evaluation methodologies and metrics for visual analytic environments are outlined",
"title": ""
},
{
"docid": "1b4963cac3a0c3b0ae469f616b4295a8",
"text": "The volume of traveling websites is rapidly increasing. This makes relevant information extraction more challenging. Several fuzzy ontology-based systems have been proposed to decrease the manual work of a full-text query search engine and opinion mining. However, most search engines are keyword-based, and available full-text search engine systems are still imperfect at extracting precise information using different types of user queries. In opinion mining, travelers do not declare their hotel opinions entirely but express individual feature opinions in reviews. Hotel reviews have numerous uncertainties, and most featured opinions are based on complex linguistic wording (small, big, very good and very bad). Available ontology-based systems cannot extract blurred information from reviews to provide better solutions. To solve these problems, this paper proposes a new extraction and opinion mining system based on a type-2 fuzzy ontology called T2FOBOMIE. The system reformulates the user’s full-text query to extract the user requirement and convert it into the format of a proper classical full-text search engine query. The proposed system retrieves targeted hotel reviews and extracts feature opinions from reviews using a fuzzy domain ontology. The fuzzy domain ontology, user information and hotel information are integrated to form a type-2 fuzzy merged ontology for the retrieving of feature polarity and individual hotel polarity. The Protégé OWL-2 (Ontology Web Language) tool is used to develop the type-2 fuzzy ontology. A series of experiments were designed and demonstrated that T2FOBOMIE performance is highly productive for analyzing reviews and accurate opinion mining.",
"title": ""
},
{
"docid": "e900869aa26f7825878b394cbeb4bc92",
"text": "One of the central challenges of integrating game-based learning in school settings is helping learners make the connections between the knowledge learned in the game and the knowledge learned at school, while maintaining a high level of engagement with game narrative and gameplay. The current study evaluated the effect of supplementing a business simulation game with an external conceptual scaffold, which introduces formal knowledge representations, on learners’ ability to solve financial-mathematical word problems following the game, and on learners’ perceptions regarding learning, flow, and enjoyment in the game. Participants (Mage 1⁄4 10.10 years) were randomly assigned to three experimental conditions: a “study and play” condition that presented the scaffold first and then the game, a “play and study” condition, and a “play only” condition. Although no significant gains in problem-solving were found following the intervention, learners who studied with the external scaffold before the game performed significantly better in the post-game problem-solving assessment. Adding the external scaffold before the game reduced learners’ perceived learning. However, the scaffold did not have a negative impact on reported flow and enjoyment. Flow was found to significantly predict perceived learning and enjoyment. Yet, perceived learning and enjoyment did not predict problem-solving and flow directly predicted problem solving only in the “play and study” condition. We suggest that presenting the scaffold may have “problematized” learners’ understandings of the game by connecting them to disciplinary knowledge. Implications for the design of scaffolds for game-based learning are discussed. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "30f6e87625f9d293824e932b072aa95a",
"text": "This paper presents a method for combining domain knowledge and machine learning (CDKML) for classifier generation and online adaptation. The method exploits advantages in domain knowledge and machine learning as complementary information sources. While machine learning may discover patterns in interest domains that are too subtle for humans to detect, domain knowledge may contain information on a domain not present in the available domain dataset. CDKML has three steps. First, prior domain knowledge is enriched with relevant patterns obtained by machine learning to create an initial classifier. Second, genetic algorithms refine the classifier. Third, the classifier is adapted online based on user feedback using the Markov decision process. CDKML was applied in fall detection. Tests showed that the classifiers developed by CDKML have better performance than ML classifiers generated on a one-sided training dataset. The accuracy of the initial classifier was 10 percentage points higher than the best machine learning classifier and the refinement added 3 percentage points. The online adaptation improved the accuracy of the refined classifier by additional 15 percentage points.",
"title": ""
},
{
"docid": "25d63ac8bdd3bc3c6348566a63aef76c",
"text": "The mammalian intestine is home to a complex community of trillions of bacteria that are engaged in a dynamic interaction with the host immune system. Determining the principles that govern host–microbiota relationships is the focus of intense research. Here, we describe how the intestinal microbiota is able to influence the balance between pro-inflammatory and regulatory responses and shape the host's immune system. We suggest that improving our understanding of the intestinal microbiota has therapeutic implications, not only for intestinal immunopathologies but also for systemic immune diseases.",
"title": ""
},
{
"docid": "27a8a8313b8b5d9b69537a2f6b1cd18a",
"text": "Harmonic functions are solutions to Laplace's Equation. As noted in a previous paper, they can be used to advantage for potentialeld path planning, since they do not exhibit spurious local minima. In this paper, harmonic functions are shown to have a number of other properties (including completeness) which are essential to robotics applications. These properties strongly recommend harmonic functions as a mechanism for robot control.",
"title": ""
},
{
"docid": "8e8905e6ae4c4d6cd07afa157b253da9",
"text": "Blockchain technology enables the execution of collaborative business processes involving untrusted parties without requiring a central authority. Specifically, a process model comprising tasks performed by multiple parties can be coordinated via smart contracts operating on the blockchain. The consensus mechanism governing the blockchain thereby guarantees that the process model is followed by each party. However, the cost required for blockchain use is highly dependent on the volume of data recorded and the frequency of data updates by smart contracts. This paper proposes an optimized method for executing business processes on top of commodity blockchain technology. The paper presents a method for compiling a process model into a smart contract that encodes the preconditions for executing each task in the process using a space-optimized data structure. The method is empirically compared to a previously proposed baseline by replaying execution logs, including one from a real-life business process, and measuring resource consumption.",
"title": ""
},
{
"docid": "d2b6d875326b8147ffea279f1da26fc9",
"text": "This article discusses the psychology of cosmetic surgery. A review of the research on the psychological characteristics of individuals who seek cosmetic surgery yielded contradictory findings. Interview-based investigations revealed high levels of psychopathology in cosmetic surgery patients, whereas studies that used standardized measurements reported far less disturbance. It is difficult to fully resolve the discrepancy between these two sets of findings. We believe that investigating the construct of body image in cosmetic surgery patients will yield more useful findings. Thus, we propose a model of the relationship between body image dissatisfaction and cosmetic surgery and outline a research agenda based upon the model. Such research will generate information that is useful to the medical and mental health communities and, ultimately, the patients themselves.",
"title": ""
},
{
"docid": "316ea13d9bf9a64e71871e22e6073ef6",
"text": "Ride sharing allows to share costs of traveling by car, e.g., for fuel or highway tolls. Furthermore, it reduces congestion and emissions by making better use of vehicle capacities. Ride sharing is hence beneficial for drivers, riders, as well as society. While the concept has existed for decades, ubiquity of digital and mobile technology and user habituation to peer-to-peer services and electronic markets have resulted in particular growth in recent years. This paper explores the novel idea of multi-hop ride sharing and illustrates how Information Systems can leverage its potential. Based on empirical ride sharing data, we provide a quantitative analysis of the structure and the economics of electronic ride sharing markets. We explore the potential and competitiveness of multi-hop ride sharing and analyze its implications for platform operators. We find that multi-hop ride sharing proves competitive against other modes of transportation and has the potential to greatly increase ride availability and city connectedness, especially under high reliability requirements. To fully realize this potential, platform operators should implement multi-hop search, assume active control of pricing and booking processes, improve coordination of transfers, enhance data services, and try to expand their market share.",
"title": ""
},
{
"docid": "ca0d5a3f9571f288d244aee0b2c2f801",
"text": "This paper proposes, focusing on random forests, the increa singly used statistical method for classification and regre ssion problems introduced by Leo Breiman in 2001, to investigate two classi cal issues of variable selection. The first one is to find impor tant variables for interpretation and the second one is more rest rictive and try to design a good prediction model. The main co tribution is twofold: to provide some insights about the behavior of th e variable importance index based on random forests and to pr opose a strategy involving a ranking of explanatory variables usi ng the random forests score of importance and a stepwise asce nding variable introduction strategy.",
"title": ""
},
{
"docid": "46ea64a204ae93855676146d84063c1a",
"text": "PURPOSE\nThe present study examined the utility of 2 measures proposed as markers of specific language impairment (SLI) in identifying specific impairments in language or working memory in school-age children.\n\n\nMETHOD\nA group of 400 school-age children completed a 5-min screening consisting of nonword repetition and sentence recall. A subset of low (n = 52) and average (n = 38) scorers completed standardized tests of language, short-term and working memory, and nonverbal intelligence.\n\n\nRESULTS\nApproximately equal numbers of children were identified with specific impairments in either language or working memory. A group about twice as large had deficits in both language and working memory. Sensitivity of the screening measure for both SLI and specific working memory impairments was 84% or greater, although specificity was closer to 50%. Sentence recall performance below the 10th percentile was associated with sensitivity and specificity values above 80% for SLI.\n\n\nCONCLUSIONS\nDevelopmental deficits may be specific to language or working memory, or include impairments in both areas. Sentence recall is a useful clinical marker of SLI and combined language and working memory impairments.",
"title": ""
},
{
"docid": "a583c568e3c2184e5bda272422562a12",
"text": "Video games are primarily designed for the players. However, video game spectating is also a popular activity, boosted by the rise of online video sites and major gaming tournaments. In this paper, we focus on the spectator, who is emerging as an important stakeholder in video games. Our study focuses on Starcraft, a popular real-time strategy game with millions of spectators and high level tournament play. We have collected over a hundred stories of the Starcraft spectator from online sources, aiming for as diverse a group as possible. We make three contributions using this data: i) we find nine personas in the data that tell us who the spectators are and why they spectate; ii) we strive to understand how different stakeholders, like commentators, players, crowds, and game designers, affect the spectator experience; and iii) we infer from the spectators' expressions what makes the game entertaining to watch, forming a theory of distinct types of information asymmetry that create suspense for the spectator. One design implication derived from these findings is that, rather than presenting as much information to the spectator as possible, it is more important for the stakeholders to be able to decide how and when they uncover that information.",
"title": ""
},
{
"docid": "14c32ad3f68e38d4d1efb22ac32710e7",
"text": "It is known from clinical studies that some patients attempt to cope with the symptoms of post-traumatic stress disorder (PTSD) by using recreational drugs. This review presents a case report of a 19-year-old male patient with a spectrum of severe PTSD symptoms, such as intense flashbacks, panic attacks, and self-mutilation, who discovered that some of his major symptoms were dramatically reduced by smoking cannabis resin. The major part of this review is concerned with the clinical and preclinical neurobiological evidence in order to offer a potential explanation of these effects on symptom reduction in PTSD. This review shows that recent studies provided supporting evidence that PTSD patients may be able to cope with their symptoms by using cannabis products. Cannabis may dampen the strength or emotional impact of traumatic memories through synergistic mechanisms that might make it easier for people with PTSD to rest or sleep and to feel less anxious and less involved with flashback memories. The presence of endocannabinoid signalling systems within stress-sensitive nuclei of the hypothalamus, as well as upstream limbic structures (amygdala), point to the significance of this system for the regulation of neuroendocrine and behavioural responses to stress. Evidence is increasingly accumulating that cannabinoids might play a role in fear extinction and antidepressive effects. It is concluded that further studies are warranted in order to evaluate the therapeutic potential of cannabinoids in PTSD.",
"title": ""
},
{
"docid": "0604c1ed7ea5a57387d013a5f94f8c00",
"text": "Many current Internet services rely on inferences from models trained on user data. Commonly, both the training and inference tasks are carried out using cloud resources fed by personal data collected at scale from users. Holding and using such large collections of personal data in the cloud creates privacy risks to the data subjects, but is currently required for users to benefit from such services. We explore how to provide for model training and inference in a system where computation is pushed to the data in preference to moving data to the cloud, obviating many current privacy risks. Specifically, we take an initial model learnt from a small set of users and retrain it locally using data from a single user. We evaluate on two tasks: one supervised learning task, using a neural network to recognise users' current activity from accelerometer traces; and one unsupervised learning task, identifying topics in a large set of documents. In both cases the accuracy is improved. We also analyse the robustness of our approach against adversarial attacks, as well as its feasibility by presenting a performance evaluation on a representative resource-constrained device (a Raspberry Pi).",
"title": ""
}
] |
scidocsrr
|
b757e6effd0a6ac1b860669d62f0b730
|
Temporal Relational Ranking for Stock Prediction
|
[
{
"docid": "a13788dcda6ba9caa99e3b6b5dab73f9",
"text": "Our research examines a predictive machine learning approach for financial news articles analysis using several different textual representations: bag of words, noun phrases, and named entities. Through this approach, we investigated 9,211 financial news articles and 10,259,042 stock quotes covering the S&P 500 stocks during a five week period. We applied our analysis to estimate a discrete stock price twenty minutes after a news article was released. Using a support vector machine (SVM) derivative specially tailored for discrete numeric prediction and models containing different stock-specific variables, we show that the model containing both article terms and stock price at the time of article release had the best performance in closeness to the actual future stock price (MSE 0.04261), the same direction of price movement as the future price (57.1% directional accuracy) and the highest return using a simulated trading engine (2.06% return). We further investigated the different textual representations and found that a Proper Noun scheme performs better than the de facto standard of Bag of Words in all three metrics.",
"title": ""
},
{
"docid": "9fe198a6184a549ff63364e9782593d8",
"text": "Node embedding techniques have gained prominence since they produce continuous and low-dimensional features, which are effective for various tasks. Most existing approaches learn node embeddings by exploring the structure of networks and are mainly focused on static non-attributed graphs. However, many real-world applications, such as stock markets and public review websites, involve bipartite graphs with dynamic and attributed edges, called attributed interaction graphs. Different from conventional graph data, attributed interaction graphs involve two kinds of entities (e.g. investors/stocks and users/businesses) and edges of temporal interactions with attributes (e.g. transactions and reviews). In this paper, we study the problem of node embedding in attributed interaction graphs. Learning embeddings in interaction graphs is highly challenging due to the dynamics and heterogeneous attributes of edges. Different from conventional static graphs, in attributed interaction graphs, each edge can have totally different meanings when the interaction is at different times or associated with different attributes. We propose a deep node embedding method called IGE (Interaction Graph Embedding). IGE is composed of three neural networks: an encoding network is proposed to transform attributes into a fixed-length vector to deal with the heterogeneity of attributes; then encoded attribute vectors interact with nodes multiplicatively in two coupled prediction networks that investigate the temporal dependency by treating incident edges of a node as the analogy of a sentence in word embedding methods. The encoding network can be specifically designed for different datasets as long as it is differentiable, in which case it can be trained together with prediction networks by back-propagation. We evaluate our proposed method and various comparing methods on four real-world datasets. The experimental results prove the effectiveness of the learned embeddings by IGE on both node clustering and classification tasks.",
"title": ""
}
] |
[
{
"docid": "273153d0cf32162acb48ed989fa6d713",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "26095dbc82b68c32881ad9316256bc42",
"text": "BACKGROUND\nSchizophrenia causes great suffering for patients and families. Today, patients are treated with medications, but unfortunately many still have persistent symptoms and an impaired quality of life. During the last 20 years of research in cognitive behavioral therapy (CBT) for schizophrenia, evidence has been found that the treatment is good for patients but it is not satisfactory enough, and more studies are being carried out hopefully to achieve further improvement.\n\n\nPURPOSE\nClinical trials and meta-analyses are being used to try to prove the efficacy of CBT. In this article, we summarize recent research using the cognitive model for people with schizophrenia.\n\n\nMETHODS\nA systematic search was carried out in PubMed (Medline). Relevant articles were selected if they contained a description of cognitive models for schizophrenia or psychotic disorders.\n\n\nRESULTS\nThere is now evidence that positive and negative symptoms exist in a continuum, from normality (mild form and few symptoms) to fully developed disease (intensive form with many symptoms). Delusional patients have reasoning bias such as jumping to conclusions, and those with hallucination have impaired self-monitoring and experience their own thoughts as voices. Patients with negative symptoms have negative beliefs such as low expectations regarding pleasure and success. In the entire patient group, it is common to have low self-esteem.\n\n\nCONCLUSIONS\nThe cognitive model integrates very well with the aberrant salience model. It takes into account neurobiology, cognitive, emotional and social processes. The therapist uses this knowledge when he or she chooses techniques for treatment of patients.",
"title": ""
},
{
"docid": "ac740402c3e733af4d690e34e567fabe",
"text": "We address the problem of semantic segmentation: classifying each pixel in an image according to the semantic class it belongs to (e.g. dog, road, car). Most existing methods train from fully supervised images, where each pixel is annotated by a class label. To reduce the annotation effort, recently a few weakly supervised approaches emerged. These require only image labels indicating which classes are present. Although their performance reaches a satisfactory level, there is still a substantial gap between the accuracy of fully and weakly supervised methods. We address this gap with a novel active learning method specifically suited for this setting. We model the problem as a pairwise CRF and cast active learning as finding its most informative nodes. These nodes induce the largest expected change in the overall CRF state, after revealing their true label. Our criterion is equivalent to maximizing an upper-bound on accuracy gain. Experiments on two data-sets show that our method achieves 97% percent of the accuracy of the corresponding fully supervised model, while querying less than 17% of the (super-)pixel labels.",
"title": ""
},
{
"docid": "ef98936202fea16571be47ee629b0955",
"text": "Macro tree transducers are a combination of top-down tree transducers and macro grammars. They serve as a model for syntax-directed semantics in which context information can be handled. In this paper the formal model of macro tree transducers is studied by investigating typical automata theoretical topics like composition, decomposition, domains, and ranges of the induced translation classes. The extension with regular look-ahead is considered. 0 1985 Academic Press, Inc.",
"title": ""
},
{
"docid": "84499d49c5e2d7ed9f30b754329d5175",
"text": "The evolution of natural ecosystems is controled by a high level of biodiversity, In sharp contrast, intensive agricultural systems involve monocultures associated with high input of chemical fertilisers and pesticides. Intensive agricultural systems have clearly negative impacts on soil and water quality and on biodiversity conservation. Alternatively, cropping systems based on carefully designed species mixtures reveal many potential advantages under various conditions, both in temperate and tropical agriculture. This article reviews those potential advantages by addressing the reasons for mixing plant species; the concepts and tools required for understanding and designing cropping systems with mixed species; and the ways of simulating multispecies cropping systems with models. Multispecies systems are diverse and may include annual and perennial crops on a gradient of complexity from 2 to n species. A literature survey shows potential advantages such as (1) higher overall productivity, (2) better control of pests and diseases, (3) enhanced ecological services and (4) greater economic profitability. Agronomic and ecological conceptual frameworks are examined for a clearer understanding of cropping systems, including the concepts of competition and facilitation, above- and belowground interactions and the types of biological interactions between species that enable better pest management in the system. After a review of existing models, future directions in modelling plant mixtures are proposed. We conclude on the need to enhance agricultural research on these multispecies systems, combining both agronomic and ecological concepts and tools.",
"title": ""
},
{
"docid": "192e1bd5baa067b563edb739c05decfa",
"text": "This paper presents a simple and accurate design methodology for LLC resonant converters, based on a semi- empirical approach to model steady-state operation in the \"be- low-resonance\" region. This model is framed in a design strategy that aims to design a converter capable of operating with soft-switching in the specified input voltage range with a load ranging from zero up to the maximum specified level.",
"title": ""
},
{
"docid": "4fa13d98d3d4347b4759a334e9e6298e",
"text": "OBJECTIVE\nTo present estimates of the lifetime prevalence of DSM-IV mental disorders with and without severe impairment, their comorbidity across broad classes of disorder, and their sociodemographic correlates.\n\n\nMETHOD\nThe National Comorbidity Survey-Adolescent Supplement NCS-A is a nationally representative face-to-face survey of 10,123 adolescents aged 13 to 18 years in the continental United States. DSM-IV mental disorders were assessed using a modified version of the fully structured World Health Organization Composite International Diagnostic Interview.\n\n\nRESULTS\nAnxiety disorders were the most common condition (31.9%), followed by behavior disorders (19.1%), mood disorders (14.3%), and substance use disorders (11.4%), with approximately 40% of participants with one class of disorder also meeting criteria for another class of lifetime disorder. The overall prevalence of disorders with severe impairment and/or distress was 22.2% (11.2% with mood disorders, 8.3% with anxiety disorders, and 9.6% behavior disorders). The median age of onset for disorder classes was earliest for anxiety (6 years), followed by 11 years for behavior, 13 years for mood, and 15 years for substance use disorders.\n\n\nCONCLUSIONS\nThese findings provide the first prevalence data on a broad range of mental disorders in a nationally representative sample of U.S. adolescents. Approximately one in every four to five youth in the U.S. meets criteria for a mental disorder with severe impairment across their lifetime. The likelihood that common mental disorders in adults first emerge in childhood and adolescence highlights the need for a transition from the common focus on treatment of U.S. youth to that of prevention and early intervention.",
"title": ""
},
{
"docid": "5e24b62458331cf88e9e606ae0b381b6",
"text": "People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a \"second-order\" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. (PsycINFO Database Record",
"title": ""
},
{
"docid": "226f84ed038a4509d9f3931d7df8b977",
"text": "Physically Asynchronous/Logically Synchronous (PALS) is an architecture pattern that allows developers to design and verify a system as though all nodes executed synchronously. The correctness of PALS protocol was formally verified. However, the implementation of PALS adds additional code that is otherwise not needed. In our case, we have a middleware (PALSWare) that supports PALS systems. In this paper, we introduce a verification framework that shows how we can apply Software Model Checking (SMC) to verify a PALS system at the source code level. SMC is an automated and exhaustive source code checking technology. Compared to verifying (hardware or software) models, verifying the actual source code is more useful because it minimizes any chance of false interpretation and eliminates the possibility of missing software bugs that were absent in the model but introduced during implementation. In other words, SMC reduces the semantic gap between what is verified and what is executed. Our approach is compositional, i.e., the verification of PALSWare is done separately from applications. Since PALSWare is inherently concurrent, to verify it via SMC we must overcome the statespace explosion problem, which arises from concurrency and asynchrony. To this end, we develop novel simplification abstractions, prove their soundness, and then use these abstractions to reduce the verification of a system with many threads to verifying a system with a relatively small number of threads. When verifying an application, we leverage the (already verified) synchronicity guarantees provided by the PALSWare to reduce the verification complexity significantly. Thus, our approach uses both “abstraction” and “composition”, the two main techniques to reduce statespace explosion. This separation between verification of PALSWare and applications also provides better management against upgrades to either. We validate our approach by verifying the current PALSWare implementation, and several PALSWare-based distributed real time applications.",
"title": ""
},
{
"docid": "af9f4dc24ca90a884ca85e94daa2547e",
"text": "Congenital web neck is a deformity hardly ever reported in the English literature. It is usually associated to Ulrrich-Turner syndrome. There are several options to correct this deformity, but in severe cases complete correction of the web and the abnormal back hair is not always possible. We present our experience with a secondary case where previous butterfly method was employed, a combined procedure was used achieving a satisfactory result. We considered that this technique is useful and offers an important improvement of the contour.",
"title": ""
},
{
"docid": "8e6677e03f964984e87530afad29aef3",
"text": "University of Jyväskylä, Department of Computer Science and Information Systems, PO Box 35, FIN-40014, Finland; Agder University College, Department of Information Systems, PO Box 422, 4604, Kristiansand, Norway; University of Toronto, Faculty of Information Studies, 140 St. George Street, Toronto, ON M5S 3G6, Canada; University of Oulu, Department of Information Processing Science, University of Oulu, PO Box 3000, FIN-90014, Finland Abstract Innovations in network technologies in the 1990’s have provided new ways to store and organize information to be shared by people and various information systems. The term Enterprise Content Management (ECM) has been widely adopted by software product vendors and practitioners to refer to technologies used to manage the content of assets like documents, web sites, intranets, and extranets In organizational or inter-organizational contexts. Despite this practical interest ECM has received only little attention in the information systems research community. This editorial argues that ECM provides an important and complex subfield of Information Systems. It provides a framework to stimulate and guide future research, and outlines research issues specific to the field of ECM. European Journal of Information Systems (2006) 15, 627–634. doi:10.1057/palgrave.ejis.3000648",
"title": ""
},
{
"docid": "d49825f64cda7772717d6e1f9c40d002",
"text": "The huge variance of human pose and the misalignment of detected human images significantly increase the difficulty of person Re-Identification (Re-ID). Moreover, efficient Re-ID systems are required to cope with the massive visual data being produced by video surveillance systems. Targeting to solve these problems, this work proposes a Global-Local-Alignment Descriptor (GLAD) and an efficient indexing and retrieval framework, respectively. GLAD explicitly leverages the local and global cues in human body to generate a discriminative and robust representation. It consists of part extraction and descriptor learning modules, where several part regions are first detected and then deep neural networks are designed for representation learning on both the local and global regions. A hierarchical indexing and retrieval framework is designed to eliminate the huge redundancy in the gallery set, and accelerate the online Re-ID procedure. Extensive experimental results show GLAD achieves competitive accuracy compared to the state-of-the-art methods. Our retrieval framework significantly accelerates the online Re-ID procedure without loss of accuracy. Therefore, this work has potential to work better on person Re-ID tasks in real scenarios.",
"title": ""
},
{
"docid": "228a777c356591c4d1944e645c04a106",
"text": "Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, there is a lack of practical solutions for more complex deformations such as affine transformations because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.",
"title": ""
},
{
"docid": "f4be6b2bf1cd462ec758fe37b098eef1",
"text": "Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in nonstationary tasks. Here, we extend the idea in three directions, addressing proper minibatch parallelization, including reweighted updates for sparse or orthogonal gradients, improving robustness on non-smooth loss functions, in the process replacing the diagonal Hessian estimation procedure that may not always be available by a robust finite-difference approximation. The final algorithm integrates all these components, has linear complexity and is hyper-parameter free.",
"title": ""
},
{
"docid": "54d61b3720be1a6a4aa236a51af72e0d",
"text": "In 2008 Bitcoin was introduced as the first decentralized electronic cash system and it has seen widespread adoption since it became fully functional in 2009. This thesis describe the Bitcoin system, anonymity aspects for Bitcoin and how we can use cryptography to improve anonymity by a scheme called Zerocoin. The Bitcoin system will be described with focus on transactions and the blockchain where all transactions are recorded. We look more closely into anonymity in terms of address unlinkability and illustrate how the anonymity provided is insufficient by clustering addresses. Further we describe Zerocoin, a decentralized electronic cash scheme designed to cryptographically improve the anonymity guarantees in Bitcoin by breaking the link between individual Bitcoin transactions. We detail the construction of Zerocoin, provide security analysis and describe how it integrates into Bitcoin.",
"title": ""
},
{
"docid": "5640d9307fa3d1b611358d3f14d5fb4c",
"text": "An N-LDMOS ESD protection device with drain back and PESD optimization design is proposed. With PESD layer enclosing the N+ drain region, a parasitic SCR is created to achieve high ESD level. When PESD is close to gate, the turn-on efficiency can be further improved (Vt1: 11.2 V reduced to 7.2 V) by the punch-through path from N+/PESD to PW. The proposed ESD N-LDMOS can sustain over 8KV HBM with low trigger behavior without extra area cost.",
"title": ""
},
{
"docid": "9fdb04de801698a56ebb9acf80e15109",
"text": "To cope with the increasing difference between processor and main memory speeds, modern computer systems use deep memory hierarchies. In the presence of such hierarchies, the performance attained by an application is largely determined by its memory reference behavior—if most references hit in the cache, the performance is significantly higher than if most references have to go to main memory. Frequently, it is possible for the programmer to restructure the data or code to achieve better memory reference behavior. Unfortunately, most existing performance debugging tools do not assist the programmer in this component of the overall performance tuning task.\nThis paper describes MemSpy, a prototype tool that helps programmers identify and fix memory bottlenecks in both sequential and parallel programs. A key aspect of MemSpy is that it introduces the notion of data oriented, in addition to code oriented, performance tuning. Thus, for both source level code objects and data objects, MemSpy provides information such as cache miss rates, causes of cache misses, and in multiprocessors, information on cache invalidations and local versus remote memory misses. MemSpy also introduces a concise matrix presentation to allow programmers to view both code and data oriented statistics at the same time. This paper presents design and implementation issues for MemSpy, and gives a detailed case study using MemSpy to tune a parallel sparse matrix application. It shows how MemSpy helps pinpoint memory system bottlenecks, such as poor spatial locality and interference among data structures, and suggests paths for improvement.",
"title": ""
},
{
"docid": "9d04b10ebe8a65777aacf20fe37b55cb",
"text": "Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.",
"title": ""
},
{
"docid": "1a45d5e0ccc4816c0c64c7e25e7be4e3",
"text": "The interpolation of correspondences (EpicFlow) was widely used for optical flow estimation in most-recent works. It has the advantage of edge-preserving and efficiency. However, it is vulnerable to input matching noise, which is inevitable in modern matching techniques. In this paper, we present a Robust Interpolation method of Correspondences (called RicFlow) to overcome the weakness. First, the scene is over-segmented into superpixels to revitalize an early idea of piecewise flow model. Then, each model is estimated robustly from its support neighbors based on a graph constructed on superpixels. We propose a propagation mechanism among the pieces in the estimation of models. The propagation of models is significantly more efficient than the independent estimation of each model, yet retains the accuracy. Extensive experiments on three public datasets demonstrate that RicFlow is more robust than EpicFlow, and it outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "d8a6dd65e7b0af45466aba2d7dcff317",
"text": "The aim of this paper is to analyze advanced solar dynamic space power systems for electrical space power generation. Space-based solar power [1] (SBSP) is a system for the collection of solar power in space, to meet the ever increasing demand for energy on Earth. SBSP differs from the usual method of solar power collection in the Earth. At the earth based solar power collection, array of panels are placed in the ground facing the sun, which collects sun’s energy during the day-time alone. In SBSP huge solar panels are fitted in the large satellite which collects the entire solar energy present in orbit and beams it down to Earth. In space, the collection of Sun’s energy is unaffected by the day/night cycle, weather, seasonal changes and the filtering effect of Earth’s atmospheric gases. A major interest in SBSP stems from the fact that solar collection panels can consistently be exposed to a high amount of solar radiation. SBSP offers a complete displacement of fossil fuel, nuclear and biological sources of energy. It is the only energy technology that is clean, renewable, constant and capable of providing power to virtually any location on Earth. KeywordsSpace-based solar power (SBSP), Solar power satellite (SPS), Rectifying Antenna (Rectanna)",
"title": ""
}
] |
scidocsrr
|
008e4caf64e9d155ec29e8b7ce4f2aaf
|
Effective summarization method of text documents
|
[
{
"docid": "64fc1433249bb7aba59e0a9092aeee5e",
"text": "In this paper, we propose two generic text summarization methods that create text summaries by ranking and extracting sentences from the original documents. The first method uses standard IR methods to rank sentence relevances, while the second method uses the latent semantic analysis technique to identify semantically important sentences, for summary creations. Both methods strive to select sentences that are highly ranked and different from each other. This is an attempt to create a summary with a wider coverage of the document's main content and less redundancy. Performance evaluations on the two summarization methods are conducted by comparing their summarization outputs with the manual summaries generated by three independent human evaluators. The evaluations also study the influence of different VSM weighting schemes on the text summarization performances. Finally, the causes of the large disparities in the evaluators' manual summarization results are investigated, and discussions on human text summarization patterns are presented.",
"title": ""
}
] |
[
{
"docid": "f66609f826cae05b1b330f138c6e556a",
"text": "We describe pke, an open source python-based keyphrase extraction toolkit. It provides an end-to-end keyphrase extraction pipeline in which each component can be easily modified or extented to develop new approaches. pke also allows for easy benchmarking of state-of-the-art keyphrase extraction approaches, and ships with supervised models trained on the SemEval-2010 dataset (Kim et al., 2010).",
"title": ""
},
{
"docid": "c7daf28d656a9e51e5a738e70beeadcf",
"text": "We present a taxonomy for Information Visualization (IV) that characterizes it in terms of data, task, skill and context, as well as a number of dimensions that relate to the input and output hardware, the software tools, as well as user interactions and human perceptual abil ities. We il lustrate the utilit y of the taxonomy by focusing particularly on the information retrieval task and the importance of taking into account human perceptual capabiliti es and limitations. Although the relevance of Psychology to IV is often recognised, we have seen relatively littl e translation of psychological results and theory to practical IV applications. This paper targets the better development of information visualizations through the introduction of a framework delineating the major factors in interface development. We believe that higher quality visualizations will result from structured developments that take into account these considerations and that the framework will also serve to assist the development of effective evaluation and assessment processes.",
"title": ""
},
{
"docid": "c694936a9b8f13654d06b72c077ed8f4",
"text": "Druid is an open source data store designed for real-time exploratory analytics on large data sets. The system combines a column-oriented storage layout, a distributed, shared-nothing architecture, and an advanced indexing structure to allow for the arbitrary exploration of billion-row tables with sub-second latencies. In this paper, we describe Druid’s architecture, and detail how it supports fast aggregations, flexible filters, and low latency data ingestion.",
"title": ""
},
{
"docid": "5bbd4675eb1b408895f29340c3cd074a",
"text": "We performed underground real-time tests to obtain alpha particle-induced soft error rates (α-SER) with high accuracies for SRAMs with 180 nm – 90 nm technologies and studied the scaling trend of α-SERs. In order to estimate the maximum permissive rate of alpha emission from package resin, the α-SER was compared to the neutron-induced soft error rate (n-SER) obtained from accelerated tests. We found that as devices are scaled down, the α-SER increased while the n-SER slightly decreased, and that the α-SER could be greater than the n-SER in 90 nm technology even when the ultra-low-alpha (ULA) grade, with the alpha emission rate ≫ 1 × 10<sup>−3</sup> cm<sup>−2</sup>h<sup>−1</sup>, was used for package resin. We also performed computer simulations to estimate scaling trends of both α-SER and n-SER up to 45 nm technologies, and noticed that the α-SER decreased from 65 nm technology while the n-SER increased from 45 nm technology due to direct ionization from the protons generated in the n + Si nuclear reaction.",
"title": ""
},
{
"docid": "de38fa4dc01bd1ef779f377cfcbc52f7",
"text": "Like all software, mobile applications (\"apps\") must be adequately tested to gain confidence that they behave correctly. Therefore, in recent years, researchers and practitioners alike have begun to investigate ways to automate apps testing. In particular, because of Android's open source nature and its large share of the market, a great deal of research has been performed on input generation techniques for apps that run on the Android operating systems. At this point in time, there are in fact a number of such techniques in the literature, which differ in the way they generate inputs, the strategy they use to explore the behavior of the app under test, and the specific heuristics they use. To better understand the strengths and weaknesses of these existing approaches, and get general insight on ways they could be made more effective, in this paper we perform a thorough comparison of the main existing test input generation tools for Android. In our comparison, we evaluate the effectiveness of these tools, and their corresponding techniques, according to four metrics: ease of use, ability to work on multiple platforms, code coverage, and ability to detect faults. Our results provide a clear picture of the state of the art in input generation for Android apps and identify future research directions that, if suitably investigated, could lead to more effective and efficient testing tools for Android.",
"title": ""
},
{
"docid": "7240d65e0bc849a569d840a461157b2c",
"text": "Deep convolutional neutral networks have achieved great success on image recognition tasks. Yet, it is non-trivial to transfer the state-of-the-art image recognition networks to videos as per-frame evaluation is too slow and unaffordable. We present deep feature flow, a fast and accurate framework for video recognition. It runs the expensive convolutional sub-network only on sparse key frames and propagates their deep feature maps to other frames via a flow field. It achieves significant speedup as flow computation is relatively fast. The end-to-end training of the whole architecture significantly boosts the recognition accuracy. Deep feature flow is flexible and general. It is validated on two recent large scale video datasets. It makes a large step towards practical video recognition. Code would be released.",
"title": ""
},
{
"docid": "884c269755bb19bd92e1add39156914a",
"text": "Stress is a well-known risk factor in the development of addiction and in addiction relapse vulnerability. A series of population-based and epidemiological studies have identified specific stressors and individual-level variables that are predictive of substance use and abuse. Preclinical research also shows that stress exposure enhances drug self-administration and reinstates drug seeking in drug-experienced animals. The deleterious effects of early life stress, child maltreatment, and accumulated adversity on alterations in the corticotropin releasing factor and hypothalamic-pituitary-adrenal axis (CRF/HPA), the extrahypothalamic CRF, the autonomic arousal, and the central noradrenergic systems are also presented. The effects of these alterations on the corticostriatal-limbic motivational, learning, and adaptation systems that include mesolimbic dopamine, glutamate, and gamma-amino-butyric acid (GABA) pathways are discussed as the underlying pathophysiology associated with stress-related risk of addiction. The effects of regular and chronic drug use on alterations in these stress and motivational systems are also reviewed, with specific attention to the impact of these adaptations on stress regulation, impulse control, and perpetuation of compulsive drug seeking and relapse susceptibility. Finally, research gaps in furthering our understanding of the association between stress and addiction are presented, with the hope that addressing these unanswered questions will significantly influence new prevention and treatment strategies to address vulnerability to addiction.",
"title": ""
},
{
"docid": "5350af2d42f9321338e63666dcd42343",
"text": "Robot-aided physical therapy should encourage subject's voluntary participation to achieve rapid motor function recovery. In order to enhance subject's cooperation during training sessions, the robot should allow deviation in the prescribed path depending on the subject's modified limb motions subsequent to the disability. In the present work, an interactive training paradigm based on the impedance control was developed for a lightweight intrinsically compliant parallel ankle rehabilitation robot. The parallel ankle robot is powered by pneumatic muscle actuators (PMAs). The proposed training paradigm allows the patients to modify the robot imposed motions according to their own level of disability. The parallel robot was operated in four training modes namely position control, zero-impedance control, nonzero-impedance control with high compliance, and nonzero-impedance control with low compliance to evaluate the performance of proposed control scheme. The impedance control scheme was evaluated on 10 neurologically intact subjects. The experimental results show that an increase in robotic compliance encouraged subjects to participate more actively in the training process. This work advances the current state of the art in the compliant actuation of parallel ankle rehabilitation robots in the context of interactive training.",
"title": ""
},
{
"docid": "f99fe9c7aaf417a3893c264b2602a9f3",
"text": "A male infant was brought to hospital aged eight weeks. He was born at full term via normal vaginal home delivery without any complications. The delivery was conducted by a traditional birth attendant and Apgar scores at birth were unrecorded. One week after the birth, the parents noticed an increase in size of the baby’s breasts. In accordance with cultural practice, they massaged the breasts in order to express milk, hoping that by doing so the size of the breasts would return to normal. However, the size of the breasts increased. They also reported that milk was being discharged spontaneously through the nipples. There was no history of drug intake neither by the mother nor the baby. The infant appeared clinically well and showed no signs of irritability. On examination, bilateral breast enlargement was observed of approximate diameter 6 cm. No tenderness, purulent discharge or any sign of inflammation were observed (Figure 1). Systemic and genital examination were unremarkable. Routine blood investigations were normal. Firm advice was given not to massage the breasts of the baby.",
"title": ""
},
{
"docid": "82857fedec78e8317498e3c66268d965",
"text": "In this paper, we provide an improved evolutionary algorithm for bilevel optimization. It is an extension of a recently proposed Bilevel Evolutionary Algorithm based on Quadratic Approximations (BLEAQ). Bilevel optimization problems are known to be difficult and computationally demanding. The recently proposed BLEAQ approach has been able to bring down the computational expense significantly as compared to the contemporary approaches. The strategy proposed in this paper further improves the algorithm by incorporating archiving and local search. Archiving is used to store the feasible members produced during the course of the algorithm that provide a larger pool of members for better quadratic approximations of optimal lower level solutions. Frequent local searches at upper level supported by the quadratic approximations help in faster convergence of the algorithm. The improved results have been demonstrated on two different sets of test problems, and comparison results against the contemporary approaches are also provided.",
"title": ""
},
{
"docid": "18f877aff5ed5cc5711d92089e4c8d3e",
"text": "The purpose of this paper is twofold: ( i) we argue that the structure of commonsense knowledge must be discovered, rather than invented; and ( ii) we argue that natural language, which is the best known theory of our (shared) commo nsense knowledge, should itself be used as a guide to discovering the structure o f commonsense knowledge. In addition to suggesting a systematic method to the discovery of the structure of commonsense knowledge, the method we propose seems to also provide an explanation for a number of phenomena in natural language, such as me t phor, intensionality, and the semantics of nominal compounds. Admittedl y, our ultimate goal is quite ambitious, and it is no less than the systematic ‘dis overy’ of a well-typed ontology of commonsense knowledge, and the subsequent formulation of the longawaited goal of a meaning algebra.",
"title": ""
},
{
"docid": "d1ad10c873fd5a02d1ce072b4ffc788c",
"text": "Zero-shot learning for visual recognition, e.g., object and action recognition, has recently attracted a lot of attention. However, it still remains challenging in bridging the semantic gap between visual features and their underlying semantics and transferring knowledge to semantic categories unseen during learning. Unlike most of the existing zero-shot visual recognition methods, we propose a stagewise bidirectional latent embedding framework of two subsequent learning stages for zero-shot visual recognition. In the bottom–up stage, a latent embedding space is first created by exploring the topological and labeling information underlying training data of known classes via a proper supervised subspace learning algorithm and the latent embedding of training data are used to form landmarks that guide embedding semantics underlying unseen classes into this learned latent space. In the top–down stage, semantic representations of unseen-class labels in a given label vocabulary are then embedded to the same latent space to preserve the semantic relatedness between all different classes via our proposed semi-supervised Sammon mapping with the guidance of landmarks. Thus, the resultant latent embedding space allows for predicting the label of a test instance with a simple nearest-neighbor rule. To evaluate the effectiveness of the proposed framework, we have conducted extensive experiments on four benchmark datasets in object and action recognition, i.e., AwA, CUB-200-2011, UCF101 and HMDB51. The experimental results under comparative studies demonstrate that our proposed approach yields the state-of-the-art performance under inductive and transductive settings.",
"title": ""
},
{
"docid": "5b0530f94f476754034c92292e02b390",
"text": "Many seemingly simple questions that individual users face in their daily lives may actually require substantial number of computing resources to identify the right answers. For example, a user may want to determine the right thermostat settings for different rooms of a house based on a tolerance range such that the energy consumption and costs can be maximally reduced while still offering comfortable temperatures in the house. Such answers can be determined through simulations. However, some simulation models as in this example are stochastic, which require the execution of a large number of simulation tasks and aggregation of results to ascertain if the outcomes lie within specified confidence intervals. Some other simulation models, such as the study of traffic conditions using simulations may need multiple instances to be executed for a number of different parameters. Cloud computing has opened up new avenues for individuals and organizations Shashank Shekhar shashank.shekhar@vanderbilt.edu Hamzah Abdel-Aziz hamzah.abdelaziz@vanderbilt.edu Michael Walker michael.a.walker.1@vanderbilt.edu Faruk Caglar faruk.caglar@vanderbilt.edu Aniruddha Gokhale a.gokhale@vanderbilt.edu Xenofon Koutsoukos xenonfon.koutsoukos@vanderbilt.edu 1 Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA with limited resources to obtain answers to problems that hitherto required expensive and computationally-intensive resources. This paper presents SIMaaS, which is a cloudbased Simulation-as-a-Service to address these challenges. We demonstrate how lightweight solutions using Linux containers (e.g., Docker) are better suited to support such services instead of heavyweight hypervisor-based solutions, which are shown to incur substantial overhead in provisioning virtual machines on-demand. Empirical results validating our claims are presented in the context of two",
"title": ""
},
{
"docid": "aa2ddbfc3bb1aa854d1c576927dc2d30",
"text": "B-scan ultrasound provides a non-invasive low-cost imaging solution to primary care diagnostics. The inherent speckle noise in the images produced by this technique introduces uncertainty in the representation of their textural characteristics. To cope with the uncertainty, we propose a novel fuzzy feature extraction method to encode local texture. The proposed method extends the Local Binary Pattern (LBP) approach by incorporating fuzzy logic in the representation of local patterns of texture in ultrasound images. Fuzzification allows a Fuzzy Local Binary Pattern (FLBP) to contribute to more than a single bin in the distribution of the LBP values used as a feature vector. The proposed FLBP approach was experimentally evaluated for supervised classification of nodular and normal samples from thyroid ultrasound images. The results validate its effectiveness over LBP and other common feature extraction methods.",
"title": ""
},
{
"docid": "e6d5f3c9a58afcceae99ff522d6dfa81",
"text": "Strategic information systems planning (SISP) is a key concern facing top business and information systems executives. Observers have suggested that both too little and too much SISP can prove ineffective. Hypotheses examine the expected relationship between comprehensiveness and effectiveness in five SISP planning phases. They predict a nonlinear, inverted-U relationship thus suggesting the existence of an optimal level of comprehensiveness. A survey collected data from 161 US information systems executives. After an extensive validation of the constructs, the statistical analysis supported the hypothesis in a Strategy Implementation Planning phase, but not in terms of the other four SISP phases. Managers may benefit from the knowledge that both too much and too little implementation planning may hinder SISP success. Future researchers should investigate why the hypothesis was supported for that phase, but not the others. q 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4d1dfdfa04b60f1e649d5f234e8b417f",
"text": "One way hash functions are a major tool in cryptography. DES is the best known and most widely used encryption function in the commercial world today. Generating a one-way hash function which is secure if DES is a “good” block cipher would therefore be useful. We show three such functions which are secure if DES is a good random block cipher.",
"title": ""
},
{
"docid": "1eb415cae9b39655849537cdc007f51f",
"text": "Aesthetics has been the subject of long-standing debates by philosophers and psychologists alike. In psychology, it is generally agreed that aesthetic experience results from an interaction between perception, cognition, and emotion. By experimental means, this triad has been studied in the field of experimental aesthetics, which aims to gain a better understanding of how aesthetic experience relates to fundamental principles of human visual perception and brain processes. Recently, researchers in computer vision have also gained interest in the topic, giving rise to the field of computational aesthetics. With computing hardware and methodology developing at a high pace, the modeling of perceptually relevant aspect of aesthetic stimuli has a huge potential. In this review, we present an overview of recent developments in computational aesthetics and how they relate to experimental studies. In the first part, we cover topics such as the prediction of ratings, style and artist identification as well as computational methods in art history, such as the detection of influences among artists or forgeries. We also describe currently used computational algorithms, such as classifiers and deep neural networks. In the second part, we summarize results from the field of experimental aesthetics and cover several isolated image properties that are believed to have a effect on the aesthetic appeal of visual stimuli. Their relation to each other and to findings from computational aesthetics are discussed. Moreover, we compare the strategies in the two fields of research and suggest that both fields would greatly profit from a joined research effort. We hope to encourage researchers from both disciplines to work more closely together in order to understand visual aesthetics from an integrated point of view.",
"title": ""
},
{
"docid": "959ba9c0929e36a8ef4a22a455ed947a",
"text": "The discovery of causal relationships between a set of observed variables is a fundamental problem in science. For continuous-valued data linear acyclic causal models with additive noise are often used because these models are well understood and there are well-known methods to fit them to data. In reality, of course, many causal relationships are more or less nonlinear, raising some doubts as to the applicability and usefulness of purely linear methods. In this contribution we show that the basic linear framework can be generalized to nonlinear models. In this extended framework, nonlinearities in the data-generating process are in fact a blessing rather than a curse, as they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms to be identified. In addition to theoretical results we show simulations and some simple real data experiments illustrating the identification power provided by nonlinearities.",
"title": ""
},
{
"docid": "42e7083e287bebc0a8bde367e4d4b352",
"text": "This paper proposes a framework for security services using Software-Defined Networking (SDN) and Interface to Network Security Functions (I2NSF). It specifies requirements for such a framework for security services based on network virtualization. It describes two representative security systems, such as (i) centralized firewall system and (ii) DDoS-attack mitigation system. For each service, this paper discusses the limitations of existing systems and presents a possible SDN-based system to protect network resources by controlling suspicious and dangerous network traffic.",
"title": ""
}
] |
scidocsrr
|
e4c19c7df02bdc0ed409bdf36d5d8066
|
Self-Presentation and Deception Looks and Lies : The Role of Physical Attractiveness in Online Dating
|
[
{
"docid": "62c93d1c3033208a609e4fc14a42a493",
"text": "Evolutionary-related hypotheses about gender differences in mate selection preferences were derived from Triver's parental investment model, which contends that women are more likely than men to seek a mate who possesses nonphysical characteristics that maximize the survival or reproductive prospects of their offspring, and were examined in a meta-analysis of mate selection research (questionnaire studies, analyses of personal advertisements). As predicted, women accorded more weight than men to socioeconomic status, ambitiousness, character, and intelligence, and the largest gender differences were observed for cues to resource acquisition (status, ambitiousness). Also as predicted, gender differences were not found in preferences for characteristics unrelated to progeny survival (sense of humor, \"personality\"). Where valid comparisons could be made, the findings were generally invariant across generations, cultures, and research paradigms.",
"title": ""
},
{
"docid": "51a859f71bd2ec82188826af18204f02",
"text": "This study examines the accuracy of 54 online dating photographs posted by heterosexual daters. We report data on (a1) online daters’ self-reported accuracy, (b) independent judges’ perceptions of accuracy, and (c) inconsistencies in the profile photograph identified by trained coders. While online daters rated their photos as relatively accurate, independent judges rated approximately 1/3 of the photographs as not accurate. Female photographs were judged as less accurate than male photographs, and were more likely to be older, to be retouched or taken by a professional photographer, and to contain inconsistencies, including changes in hair style and skin quality. The findings are discussed in terms of the tensions experienced by online daters to (a) enhance their physical attractiveness and (b) present a photograph that would not be judged deceptive in subsequent face-to-face meetings. The paper extends the theoretical concept of selective self-presentation to online photographs, and discusses issues of self-deception and social desirability bias.",
"title": ""
},
{
"docid": "6210a0a93b97a12c2062ac78953f3bd1",
"text": "This article proposes a contextual-evolutionary theory of human mating strategies. Both men and women are hypothesized to have evolved distinct psychological mechanisms that underlie short-term and long-term strategies. Men and women confront different adaptive problems in short-term as opposed to long-term mating contexts. Consequently, different mate preferences become activated from their strategic repertoires. Nine key hypotheses and 22 predictions from Sexual Strategies Theory are outlined and tested empirically. Adaptive problems sensitive to context include sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment. Discussion summarizes 6 additional sources of behavioral data, outlines adaptive problems common to both sexes, and suggests additional contexts likely to cause shifts in mating strategy.",
"title": ""
},
{
"docid": "7440cb90073c8d8d58e28447a1774b2c",
"text": "Common maxims about beauty suggest that attractiveness is not important in life. In contrast, both fitness-related evolutionary theory and socialization theory suggest that attractiveness influences development and interaction. In 11 meta-analyses, the authors evaluate these contradictory claims, demonstrating that (a) raters agree about who is and is not attractive, both within and across cultures; (b) attractive children and adults are judged more positively than unattractive children and adults, even by those who know them; (c) attractive children and adults are treated more positively than unattractive children and adults, even by those who know them; and (d) attractive children and adults exhibit more positive behaviors and traits than unattractive children and adults. Results are used to evaluate social and fitness-related evolutionary theories and the veracity of maxims about beauty.",
"title": ""
}
] |
[
{
"docid": "affbc18a3ba30c43959e37504b25dbdc",
"text": "ion for Falsification Thomas Ball , Orna Kupferman , and Greta Yorsh 3 1 Microsoft Research, Redmond, WA, USA. Email: tball@microsoft.com, URL: research.microsoft.com/ ∼tball 2 Hebrew University, School of Eng. and Comp. Sci., Jerusalem 91904, Israel. Email: orna@cs.huji.ac.il, URL: www.cs.huji.ac.il/ ∼orna 3 Tel-Aviv University, School of Comp. Sci., Tel-Aviv 69978, Israel. Email:gretay@post.tau.ac.il, URL: www.math.tau.ac.il/ ∼gretay Microsoft Research Technical Report MSR-TR-2005-50 Abstract. Abstraction is traditionally used in the process of verification. There, an abstraction of a concrete system is sound if properties of the abstract system also hold in the conAbstraction is traditionally used in the process of verification. There, an abstraction of a concrete system is sound if properties of the abstract system also hold in the concrete system. Specifically, if an abstract state satisfies a property ψ thenall the concrete states that correspond to a satisfyψ too. Since the ideal goal of proving a system correct involves many obstacles, the primary use of formal methods nowadays is fal ification. There, as intesting, the goal is to detect errors, rather than to prove correctness. In the falsification setting, we can say that an abstraction is sound if errors of the abstract system exist also in the concrete system. Specifically, if an abstract state a violates a propertyψ, thenthere existsa concrete state that corresponds to a and violatesψ too. An abstraction that is sound for falsification need not be sound for verification. This suggests that existing frameworks for abstraction for verification may be too restrictive when used for falsification, and that a new framework is needed in order to take advantage of the weaker definition of soundness in the falsification setting. We present such a framework, show that it is indeed stronger (than other abstraction frameworks designed for verification), demonstrate that it can be made even stronger by parameterizing its transitions by predicates, and describe how it can be used for falsification of branching-time and linear-time temporal properties, as well as for generating testing goals for a concrete system by reasoning about its abstraction.",
"title": ""
},
{
"docid": "a5d16384d928da7bcce7eeac45f59e2e",
"text": "Innovative rechargeable batteries that can effectively store renewable energy, such as solar and wind power, urgently need to be developed to reduce greenhouse gas emissions. All-solid-state batteries with inorganic solid electrolytes and electrodes are promising power sources for a wide range of applications because of their safety, long-cycle lives and versatile geometries. Rechargeable sodium batteries are more suitable than lithium-ion batteries, because they use abundant and ubiquitous sodium sources. Solid electrolytes are critical for realizing all-solid-state sodium batteries. Here we show that stabilization of a high-temperature phase by crystallization from the glassy state dramatically enhances the Na(+) ion conductivity. An ambient temperature conductivity of over 10(-4) S cm(-1) was obtained in a glass-ceramic electrolyte, in which a cubic Na(3)PS(4) crystal with superionic conductivity was first realized. All-solid-state sodium batteries, with a powder-compressed Na(3)PS(4) electrolyte, functioned as a rechargeable battery at room temperature.",
"title": ""
},
{
"docid": "98d766b3756d1fe6634996fd91169c19",
"text": "Kratom (Mitragyna speciosa) is a widely abused herbal drug preparation in Southeast Asia. It is often consumed as a substitute for heroin, but imposing itself unknown harms and addictive burdens. Mitragynine is the major psychostimulant constituent of kratom that has recently been reported to induce morphine-like behavioural and cognitive effects in rodents. The effects of chronic consumption on non-drug related behaviours are still unclear. In the present study, we investigated the effects of chronic mitragynine treatment on spontaneous activity, reward-related behaviour and cognition in mice in an IntelliCage® system, and compared them with those of morphine and Δ-9-tetrahydrocannabinol (THC). We found that chronic mitragynine treatment significantly potentiated horizontal exploratory activity. It enhanced spontaneous sucrose preference and also its persistence when the preference had aversive consequences. Furthermore, mitragynine impaired place learning and its reversal. Thereby, mitragynine effects closely resembled that of morphine and THC sensitisation. These findings suggest that chronic mitragynine exposure enhances spontaneous locomotor activity and the preference for natural rewards, but impairs learning and memory. These findings confirm pleiotropic effects of mitragynine (kratom) on human lifestyle, but may also support the recognition of the drug's harm potential.",
"title": ""
},
{
"docid": "3ab85b8f58e60f4e59d6be49648ce290",
"text": "It is basically a solved problem for a server to authenticate itself to a client using standard methods of Public Key cryptography. The Public Key Infrastructure (PKI) supports the SSL protocol which in turn enables this functionality. The single-point-of-failure in PKI, and hence the focus of attacks, is the Certi cation Authority. However this entity is commonly o -line, well defended, and not easily got at. For a client to authenticate itself to the server is much more problematical. The simplest and most common mechanism is Username/Password. Although not at all satisfactory, the only onus on the client is to generate and remember a password and the reality is that we cannot expect a client to be su ciently sophisticated or well organised to protect larger secrets. However Username/Password as a mechanism is breaking down. So-called zero-day attacks on servers commonly recover les containing information related to passwords, and unless the passwords are of su ciently high entropy they will be found. The commonly applied patch is to insist that clients adopt long, complex, hard-to-remember passwords. This is essentially a second line of defence imposed on the client to protect them in the (increasingly likely) event that the authentication server will be successfully hacked. Note that in an ideal world a client should be able to use a low entropy password, as a server can limit the number of attempts the client can make to authenticate itself. The often proposed alternative is the adoption of multifactor authentication. In the simplest case the client must demonstrate possession of both a token and a password. The banks have been to the forefront of adopting such methods, but the token is invariably a physical device of some kind. Cryptography's embarrassing secret is that to date no completely satisfactory means has been discovered to implement two-factor authentication entirely in software. In this paper we propose such a scheme.",
"title": ""
},
{
"docid": "9f8e9c5e617db7f4281f0a20f5527c70",
"text": "We have developed a normally-off GaN-based transistor using conductivity modulation, which we call a gate injection transistor (GIT). This new device principle utilizes hole-injection from the p-AlGaN to the AlGaN/GaN heterojunction, which simultaneously increases the electron density in the channel, resulting in a dramatic increase of the drain current owing to the conductivity modulation. The fabricated GIT exhibits a threshold voltage of 1.0 V with a maximum drain current of 200 mA/mm, in which a forward gate voltage of up to 6 V can be applied. The obtained specific ON-state resistance (RON . A) and the OFF-state breakdown voltage (BV ds) are 2.6 mOmega . cm2 and 800 V, respectively. The developed GIT is advantageous for power switching applications.",
"title": ""
},
{
"docid": "00c19e68020aff7fd86aa7e514cc0668",
"text": "Network forensic techniques help in tracking different types of cyber attack by monitoring and inspecting network traffic. However, with the high speed and large sizes of current networks, and the sophisticated philosophy of attackers, in particular mimicking normal behaviour and/or erasing traces to avoid detection, investigating such crimes demands intelligent network forensic techniques. This paper suggests a real-time collaborative network Forensic scheme (RCNF) that can monitor and investigate cyber intrusions. The scheme includes three components of capturing and storing network data, selecting important network features using chi-square method and investigating abnormal events using a new technique called correntropy-variation. We provide a case study using the UNSW-NB15 dataset for evaluating the scheme, showing its high performance in terms of accuracy and false alarm rate compared with three recent state-of-the-art mechanisms.",
"title": ""
},
{
"docid": "b11331341448f108fb1b503ab8ecd7b8",
"text": "Repairing defects of the auricle requires an appreciation of the underlying 3-dimensional framework, the flexible properties of the cartilages, and the healing contractile tendencies of the surrounding soft tissue. In the analysis of auricular defects and planning of their reconstruction, it is helpful to divide the auricle into subunits for which different techniques may offer better functional and aesthetic outcomes. This article reviews many of the reconstructive options for defects of the various auricular subunits.",
"title": ""
},
{
"docid": "2f9de2e94c6af95e9c2e9eb294a7696c",
"text": "The rapid growth of Electronic Health Records (EHRs), as well as the accompanied opportunities in Data-Driven Healthcare (DDH), has been attracting widespread interests and attentions. Recent progress in the design and applications of deep learning methods has shown promising results and is forcing massive changes in healthcare academia and industry, but most of these methods rely on massive labeled data. In this work, we propose a general deep learning framework which is able to boost risk prediction performance with limited EHR data. Our model takes a modified generative adversarial network namely ehrGAN, which can provide plausible labeled EHR data by mimicking real patient records, to augment the training dataset in a semi-supervised learning manner. We use this generative model together with a convolutional neural network (CNN) based prediction model to improve the onset prediction performance. Experiments on two real healthcare datasets demonstrate that our proposed framework produces realistic data samples and achieves significant improvements on classification tasks with the generated data over several stat-of-the-art baselines.",
"title": ""
},
{
"docid": "e43242ed17a0b2fa9fca421179135ce1",
"text": "Direct digital synthesis (DDS) is a useful tool for generating periodic waveforms. In this two-part article, the basic idea of this synthesis technique is presented and then focused on the quality of the sinewave a DDS can create, introducing the SFDR quality parameter. Next effective methods to increase the SFDR are presented through sinewave approximations, hardware schemes such as dithering and noise shaping, and an extensive list of reference. When the desired output is a digital signal, the signal's characteristics can be accurately predicted using the formulas given in this article. When the desired output is an analog signal, the reader should keep in mind that the performance of the DDS is eventually limited by the performance of the digital-to-analog converter and the follow-on analog filter. Hoping that this article would incite engineers to use DDS either in integrated circuits DDS or software-implemented DDS. From the author's experience, this technique has proven valuable when frequency resolution is the challenge, particularly when using low-cost microcontrollers.",
"title": ""
},
{
"docid": "b7b2f1c59dfc00ab6776c6178aff929c",
"text": "Over the past four years, the Big Data and Exascale Computing (BDEC) project organized a series of five international workshops that aimed to explore the ways in which the new forms of data-centric discovery introduced by the ongoing revolution in high-end data analysis (HDA) might be integrated with the established, simulation-centric paradigm of the high-performance computing (HPC) community. Based on those meetings, we argue that the rapid proliferation of digital data generators, the unprecedented growth in the volume and diversity of the data they generate, and the intense evolution of the methods for analyzing and using that data are radically reshaping the landscape of scientific computing. The most critical problems involve the logistics of wide-area, multistage workflows that will move back and forth across the computing continuum, between the multitude of distributed sensors, instruments and other devices at the networks edge, and the centralized resources of commercial clouds and HPC centers. We suggest that the prospects for the future integration of technological infrastructures and research ecosystems need to be considered at three different levels. First, we discuss the convergence of research applications and workflows that establish a research paradigm that combines both HPC and HDA, where ongoing progress is already motivating efforts at the other two levels. Second, we offer an account of some of the problems involved with creating a converged infrastructure for peripheral environments, that is, a shared infrastructure that can be deployed throughout the network in a scalable manner to meet the highly diverse requirements for processing, communication, and buffering/storage of massive data workflows of many different scientific domains. Third, we focus on some opportunities for software ecosystem convergence in big, logically centralized facilities that execute large-scale simulations and models and/or perform large-scale data analytics. We close by offering some conclusions and recommendations for future investment and policy review.",
"title": ""
},
{
"docid": "18dd421bb233c1de8dd56674bacfe521",
"text": "The coordination of directional overcurrent relays (DOCR) is treated in this paper using particle swarm optimization (PSO), a recently proposed optimizer that utilizes the swarm behavior in searching for an optimum. PSO gained a lot of interest for its simplicity, robustness, and easy implementation. The problem of setting DOCR is a highly constrained optimization problem that has been stated and solved as a linear programming (LP) problem. To deal with such constraints a modification to the standard PSO algorithm is introduced. Three case studies are presented, and the results are compared to those of LP technique to demonstrate the effectiveness of the proposed methodology.",
"title": ""
},
{
"docid": "6b6099ee6f04f1b490b7e483de3087ff",
"text": "International Electrotechnical Commission (IEC) standard 61850 proposes the Ethernet-based communication networks for protection and automation within the power substation. Major manufacturers are currently developing products for the process bus in compliance with IEC 61850 part 9-2. For the successful implementation of the IEC 61850-9-2 process bus, it is important to analyze the performance of time-critical messages for the substation protection and control functions. This paper presents the performance evaluation of the IEC 61850-9-2 process bus for a typical 345 kV/230 kV substation by studying the time-critical sampled value messages delay and loss by using the OPNET simulation tool in the first part of this paper. In the second part, this paper presents a corrective measure to address the issues with the several sampled value messages lost and/or delayed by proposing the sampled value estimation algorithm for any digital substation relaying. Finally, the proposed sampled value estimation algorithm has been examined for various power system scenarios with the help of PSCAD/EMTDC and MATLAB simulation tools.",
"title": ""
},
{
"docid": "e4944af5f589107d1b42a661458fcab5",
"text": "This document summarizes the major milestones in mobile Augmented Reality between 1968 and 2014. Mobile Augmented Reality has largely evolved over the last decade, as well as the interpretation itself of what is Mobile Augmented Reality. The first instance of Mobile AR can certainly be associated with the development of wearable AR, in a sense of experiencing AR during locomotion (mobile as a motion). With the transformation and miniaturization of physical devices and displays, the concept of mobile AR evolved towards the notion of ”mobile device”, aka AR on a mobile device. In this history of mobile AR we considered both definitions and the evolution of the term over time. Major parts of the list were initially compiled by the member of the Christian Doppler Laboratory for Handheld Augmented Reality in 2009 (author list in alphabetical order) for the ISMAR society. More recent work was added in 2013 and during preparation of this report. Permission is granted to copy and modify. Please email the first author if you find any errors.",
"title": ""
},
{
"docid": "5af83f822ac3d9379c7b477ff1d32a97",
"text": "Sprout is an end-to-end transport protocol for interactive applications that desire high throughput and low delay. Sprout works well over cellular wireless networks, where link speeds change dramatically with time, and current protocols build up multi-second queues in network gateways. Sprout does not use TCP-style reactive congestion control; instead the receiver observes the packet arrival times to infer the uncertain dynamics of the network path. This inference is used to forecast how many bytes may be sent by the sender, while bounding the risk that packets will be delayed inside the network for too long. In evaluations on traces from four commercial LTE and 3G networks, Sprout, compared with Skype, reduced self-inflicted end-to-end delay by a factor of 7.9 and achieved 2.2× the transmitted bit rate on average. Compared with Google’s Hangout, Sprout reduced delay by a factor of 7.2 while achieving 4.4× the bit rate, and compared with Apple’s Facetime, Sprout reduced delay by a factor of 8.7 with 1.9× the bit rate. Although it is end-to-end, Sprout matched or outperformed TCP Cubic running over the CoDel active queue management algorithm, which requires changes to cellular carrier equipment to deploy. We also tested Sprout as a tunnel to carry competing interactive and bulk traffic (Skype and TCP Cubic), and found that Sprout was able to isolate client application flows from one another.",
"title": ""
},
{
"docid": "66423bc00bb724d1d0c616397d898dd0",
"text": "Background\nThere is a growing trend for patients to seek the least invasive treatments with less risk of complications and downtime for facial rejuvenation. Thread embedding acupuncture has become popular as a minimally invasive treatment. However, there is little clinical evidence in the literature regarding its effects.\n\n\nMethods\nThis single-arm, prospective, open-label study recruited participants who were women aged 40-59 years, with Glogau photoaging scale III-IV. Fourteen participants received thread embedding acupuncture one time and were measured before and after 1 week from the procedure. The primary outcome was a jowl to subnasale vertical distance. The secondary outcomes were facial wrinkle distances, global esthetic improvement scale, Alexiades-Armenakas laxity scale, and patient-oriented self-assessment scale.\n\n\nResults\nFourteen participants underwent thread embedding acupuncture alone, and 12 participants revisited for follow-up outcome measures. For the primary outcome measure, both jowls were elevated in vertical height by 1.87 mm (left) and 1.43 mm (right). Distances of both melolabial and nasolabial folds showed significant improvement. In the Alexiades-Armenakas laxity scale, each evaluator evaluated for four and nine participants by 0.5 grades improved. In the global aesthetic improvement scale, improvement was graded as 1 and 2 in nine and five cases, respectively. The most common adverse events were mild bruising, swelling, and pain. However, adverse events occurred, although mostly minor and of short duration.\n\n\nConclusion\nIn this study, thread embedding acupuncture showed clinical potential for facial wrinkles and laxity. However, further large-scale trials with a controlled design and objective measurements are needed.",
"title": ""
},
{
"docid": "5be35d2aa81cc1e15b857892f376fbf0",
"text": "This paper proposes a new method for fabric defect classification by incorporating the design of a wavelet frames based feature extractor with the design of a Euclidean distance based classifier. Channel variances at the outputs of the wavelet frame decomposition are used to characterize each nonoverlapping window of the fabric image. A feature extractor using linear transformation matrix is further employed to extract the classification-oriented features. With a Euclidean distance based classifier, each nonoverlapping window of the fabric image is then assigned to its corresponding category. Minimization of the classification error is achieved by incorporating the design of the feature extractor with the design of the classifier based on minimum classification error (MCE) training method. The proposed method has been evaluated on the classification of 329 defect samples containing nine classes of fabric defects, and 328 nondefect samples, where 93.1% classification accuracy has been achieved.",
"title": ""
},
{
"docid": "f93dac471e3d7fa79c740b35fbde0558",
"text": "In settings where only unlabeled speech data is available, speech technology needs to be developed without transcriptions, pronunciation dictionaries, or language modelling text. A similar problem is faced when modeling infant language acquisition. In these cases, categorical linguistic structure needs to be discovered directly from speech audio. We present a novel unsu-pervised Bayesian model that segments unlabeled speech and clusters the segments into hypothesized word groupings. The result is a complete unsupervised tokenization of the input speech in terms of discovered word types. In our approach, a potential word segment (of arbitrary length) is embedded in a fixed-dimensional acoustic vector space. The model, implemented as a Gibbs sampler, then builds a whole-word acoustic model in this space while jointly performing segmentation. We report word error rates in a small-vocabulary connected digit recognition task by mapping the unsupervised decoded output to ground truth transcriptions. The model achieves around 20% error rate, outperforming a previous HMM-based system by about 10% absolute. Moreover, in contrast to the baseline, our model does not require a pre-specified vocabulary size.",
"title": ""
},
{
"docid": "14dc7c8065adad3fc3c67f5a8e35298b",
"text": "This paper describes a method for maximum power point tracking (MPPT) control while searching for optimal parameters corresponding to weather conditions at that time. The conventional method has problems in that it is impossible to quickly acquire the generation power at the maximum power (MP) point in low solar radiation (irradiation) regions. It is found theoretically and experimentally that the maximum output power and the optimal current, which give this maximum, have a linear relation at a constant temperature. Furthermore, it is also shown that linearity exists between the short-circuit current and the optimal current. MPPT control rules are created based on the findings from solar arrays that can respond at high speeds to variations in irradiation. The proposed MPPT control method sets the output current track on the line that gives the relation between the MP and the optimal current so as to acquire the MP that can be generated at that time by dividing the power and current characteristics into two fields. The method is based on the generated power being a binary function of the output current. Considering the experimental fact that linearity is maintained only at low irradiation below half the maximum irradiation, the proportionality coefficient (voltage coefficient) is compensated for only in regions with more than half the rated optimal current, which correspond to the maximum irradiation. At high irradiation, the voltage coefficient needed to perform the proposed MPPT control is acquired through the hill-climbing method. The effectiveness of the proposed method is verified through experiments under various weather conditions",
"title": ""
},
{
"docid": "ed23845ded235d204914bd1140f034c3",
"text": "We propose a general framework to learn deep generative models via Variational Gradient Flow (VGrow) on probability spaces. The evolving distribution that asymptotically converges to the target distribution is governed by a vector field, which is the negative gradient of the first variation of the f -divergence between them. We prove that the evolving distribution coincides with the pushforward distribution through the infinitesimal time composition of residual maps that are perturbations of the identity map along the vector field. The vector field depends on the density ratio of the pushforward distribution and the target distribution, which can be consistently learned from a binary classification problem. Connections of our proposed VGrow method with other popular methods, such as VAE, GAN and flow-based methods, have been established in this framework, gaining new insights of deep generative learning. We also evaluated several commonly used divergences, including KullbackLeibler, Jensen-Shannon, Jeffrey divergences as well as our newly discovered “logD” divergence which serves as the objective function of the logD-trick GAN. Experimental results on benchmark datasets demonstrate that VGrow can generate high-fidelity images in a stable and efficient manner, achieving competitive performance with stateof-the-art GANs. ∗Yuling Jiao (yulingjiaomath@whu.edu.cn) †Can Yang (macyang@ust.hk) 1 ar X iv :1 90 1. 08 46 9v 2 [ cs .L G ] 7 F eb 2 01 9",
"title": ""
},
{
"docid": "503101a7b0f923f8fecb6dc9bb0bde37",
"text": "In-vehicle electronic equipment aims to increase safety, by detecting risk factors and taking/suggesting corrective actions. This paper presents a knowledge-based framework for assisting a driver via her PDA. Car data extracted under On Board Diagnostics (OBD-II) protocol, data acquired from PDA embedded micro-devices and information retrieved from the Web are properly combined: a simple data fusion algorithm has been devised to collect and semantically annotate relevant safety events. Finally, a logic-based matchmaking allows to infer potential risk factors, enabling the system to issue accurate and timely warnings. The proposed approach has been implemented in a prototypical application for the Apple iPhone platform, in order to provide experimental evaluation in real-world test drives for corroborating the approach. Keywords-Semantic Web; On Board Diagnostics; Ubiquitous Computing; Data Fusion; Intelligent Transportation Systems",
"title": ""
}
] |
scidocsrr
|
f28662555a0c4bea946168cb47ac0b27
|
High-Performance Neural Networks for Visual Object Classification
|
[
{
"docid": "27ad413fa5833094fb2e557308fa761d",
"text": "A common practice to gain invariant features in object recognition models is to aggregate multiple low-level features over a small neighborhood. However, the differences between those models makes a comparison of the properties of different aggregation functions hard. Our aim is to gain insight into different functions by directly comparing them on a fixed architecture for several common object recognition tasks. Empirical results show that a maximum pooling operation significantly outperforms subsampling operations. Despite their shift-invariant properties, overlapping pooling windows are no significant improvement over non-overlapping pooling windows. By applying this knowledge, we achieve state-of-the-art error rates of 4.57% on the NORB normalized-uniform dataset and 5.6% on the NORB jittered-cluttered dataset.",
"title": ""
},
{
"docid": "0a3f5ff37c49840ec8e59cbc56d31be2",
"text": "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. The GPU implementation is even faster and produces a 3.1X−4.1X speedup.",
"title": ""
}
] |
[
{
"docid": "fbb6c8566fbe79bf8f78af0dc2dedc7b",
"text": "Automatic essay evaluation (AEE) systems are designed to assist a teacher in the task of classroom assessment in order to alleviate the demands of manual subject evaluation. However, although numerous AEE systems are available, most of these systems do not use elaborate domain knowledge for evaluation, which limits their ability to give informative feedback to students and also their ability to constructively grade a student based on a particular domain of study. This paper is aimed at improving on the achievements of previous studies by providing a subject-focussed evaluation system that considers the domain knowledge while scoring and provides informative feedback to its user. The study employs a combination of techniques such as system design and modelling using Unified Modelling Language (UML), information extraction, ontology development, data management, and semantic matching in order to develop a prototype subject-focussed AEE system. The developed system was evaluated to determine its level of performance and usability. The result of the usability evaluation showed that the system has an overall mean rating of 4.17 out of maximum of 5, which indicates ‘good usability’. In terms of performance, the assessment done by the system was also found to have sufficiently high correlation with those done by domain experts, in addition to providing appropriate feedback to the user.",
"title": ""
},
{
"docid": "da1d1e9ddb5215041b9565044b9feecb",
"text": "As multiprocessors with large numbers of processors become more prevalent, we face the task of developing scheduling algorithms for the multiprogrammed use of such machines. The scheduling decisions must take into account the number of processors available, the overall system load, and the ability of each application awaiting activation to make use of a given number of processors.\nThe parallelism within an application can be characterized at a number of different levels of detail. At the highest level, it might be characterized by a single parameter (such as the proportion of the application that is sequential, or the average number of processors the application would use if an unlimited number of processors were available). At the lowest level, representing all the parallelism in the application requires the full data dependency graph (which is more information than is practically manageable).\nIn this paper, we examine the quality of processor allocation decisions under multiprogramming that can be made with several different high-level characterizations of application parallelism. We demonstrate that decisions based on parallelism characterizations with two to four parameters are superior to those based on single-parameter characterizations (such as fraction sequential or average parallelism). The results are based predominantly on simulation, with some guidance from a simple analytic model.",
"title": ""
},
{
"docid": "460238e247fc60b0ca300ba9caafdc97",
"text": "Time-resolved optical spectroscopy is widely used to study vibrational and electronic dynamics by monitoring transient changes in excited state populations on a femtosecond timescale. Yet the fundamental cause of electronic and vibrational dynamics—the coupling between the different energy levels involved—is usually inferred only indirectly. Two-dimensional femtosecond infrared spectroscopy based on the heterodyne detection of three-pulse photon echoes has recently allowed the direct mapping of vibrational couplings, yielding transient structural information. Here we extend the approach to the visible range and directly measure electronic couplings in a molecular complex, the Fenna–Matthews–Olson photosynthetic light-harvesting protein. As in all photosynthetic systems, the conversion of light into chemical energy is driven by electronic couplings that ensure the efficient transport of energy from light-capturing antenna pigments to the reaction centre. We monitor this process as a function of time and frequency and show that excitation energy does not simply cascade stepwise down the energy ladder. We find instead distinct energy transport pathways that depend sensitively on the detailed spatial properties of the delocalized excited-state wavefunctions of the whole pigment–protein complex.",
"title": ""
},
{
"docid": "486dae23f5a7b19cf8c20fab60de6b0f",
"text": "Histopathological alterations induced by paraquat in the digestive gland of the freshwater snail Lymnaea luteola were investigated. Samples were collected from the Kondakarla lake (Visakhapatnam, Andhra Pradesh, India), where agricultural activities are widespread. Acute toxicity of series of concentration of paraquat to Lymnaea luteola was determined by recording snail mortality of 24, 48, 72 and 96 hrs exposures. The Lc50 value based on probit analysis was found to be 0.073 ml/L for 96 hrs of exposure to the herbicide. Results obtained shown that there were no mortality of snail either in control and those exposed to 0.0196 ml/L paraquat throughout the 96 hrs 100% mortality was recorded with 48hrs on exposed to 0.790 ppm concentration of stock solution of paraquat. At various concentrations paraquat causes significant dose dependent histopathological changes in the digestive gland of L.luteola. The histopathological examinations revealed the following changes: amebocytes infiltrations, the lumen of digestive gland tubule was shrunken; degeneration of cells, secretory cells became irregular, necrosis of cells and atrophy in the connective tissue of digestive gland.",
"title": ""
},
{
"docid": "ab7663ef08505e37be080eab491d2607",
"text": "This paper has studied the fatigue and friction of big end bearing on an engine connecting rod by combining the multi-body dynamics and hydrodynamic lubrication model. First, the basic equations and the application on AVL-Excite software platform of multi-body dynamics have been described in detail. Then, introduce the hydrodynamic lubrication model, which is the extended Reynolds equation derived from the Navier-Stokes equation and the equation of continuity. After that, carry out the static calculation of connecting rod assembly. At the same time, multi-body dynamics analysis has been performed and stress history can be obtained by finite element data recovery. Next, execute the fatigue analysis combining the Static stress and dynamic stress, safety factor distribution of connecting rod will be obtained as result. At last, detailed friction analysis of the big-end bearing has been performed. And got a good agreement when contrast the simulation results to the Bearing wear in the experiment.",
"title": ""
},
{
"docid": "d390b0e5b1892297af37659fb92c03b5",
"text": "Encouraged by recent waves of successful applications of deep learning, some researchers have demonstrated the effectiveness of applying convolutional neural networks (CNN) to time series classification problems. However, CNN and other traditional methods require the input data to be of the same dimension which prevents its direct application on data of various lengths and multi-channel time series with different sampling rates across channels. Long short-term memory (LSTM), another tool in the deep learning arsenal and with its design nature, is more appropriate for problems involving time series such as speech recognition and language translation. In this paper, we propose a novel model incorporating a sequence-to-sequence model that consists two LSTMs, one encoder and one decoder. The encoder LSTM accepts input time series of arbitrary lengths, extracts information from the raw data and based on which the decoder LSTM constructs fixed length sequences that can be regarded as discriminatory features. For better utilization of the raw data, we also introduce the attention mechanism into our model so that the feature generation process can peek at the raw data and focus its attention on the part of the raw data that is most relevant to the feature under construction. We call our model S2SwA, as the short for Sequence-to-Sequence with Attention. We test S2SwA on both uni-channel and multi-channel time series datasets and show that our model is competitive with the state-of-the-art in real world tasks such as human activity recognition.",
"title": ""
},
{
"docid": "7374e16190e680669f76fc7972dc3975",
"text": "Open-plan office layout is commonly assumed to facilitate communication and interaction between co-workers, promoting workplace satisfaction and team-work effectiveness. On the other hand, open-plan layouts are widely acknowledged to be more disruptive due to uncontrollable noise and loss of privacy. Based on the occupant survey database from Center for the Built Environment (CBE), empirical analyses indicated that occupants assessed Indoor Environmental Quality (IEQ) issues in different ways depending on the spatial configuration (classified by the degree of enclosure) of their workspace. Enclosed private offices clearly outperformed open-plan layouts in most aspects of IEQ, particularly in acoustics, privacy and the proxemics issues. Benefits of enhanced ‘ease of interaction’ were smaller than the penalties of increased noise level and decreased privacy resulting from open-plan office configuration.",
"title": ""
},
{
"docid": "61309b5f8943f3728f714cd40f260731",
"text": "Article history: Received 4 January 2011 Received in revised form 1 August 2011 Accepted 13 August 2011 Available online 15 September 2011 Advertising media are a means of communication that creates different marketing and communication results among consumers. Over the years, newspaper, magazine, TV, and radio have provided a one-way media where information is broadcast and communicated. Due to the widespread application of the Internet, advertising has entered into an interactive communications mode. In the advent of 3G broadband mobile communication systems and smartphone devices, consumers' preferences can be pre-identified and advertising messages can therefore be delivered to consumers in a multimedia format at the right time and at the right place with the right message. In light of this new advertisement possibility, designing personalized mobile advertising to meet consumers' needs becomes an important issue. This research uses the fuzzy Delphi method to identify the key personalized attributes in a personalized mobile advertising message for different products. Results of the study identify six important design attributes for personalized advertisements: price, preference, promotion, interest, brand, and type of mobile device. As personalized mobile advertising becomes more integrated in people's daily activities, its pros and cons and social impact are also discussed. The research result can serve as a guideline for the key parties in mobile marketing industry to facilitate the development of the industry and ensure that advertising resources are properly used. © 2011 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "e4a1200b7f8143b1322c8a66d625d842",
"text": "This paper examines the spatial patterns of unemployment in Chicago between 1980 and 1990. We study unemployment clustering with respect to different social and economic distance metrics that reßect the structure of agents social networks. SpeciÞcally, we use physical distance, travel time, and differences in ethnic and occupational distribution between locations. Our goal is to determine whether our estimates of spatial dependence are consistent with models in which agents employment status is affected by information exchanged locally within their social networks. We present non-parametric estimates of correlation across Census tracts as a function of each distance metric as well as pairs of metrics, both for unemployment rate itself and after conditioning on a set of tract characteristics. Our results indicate that there is a strong positive and statistically signiÞcant degree of spatial dependence in the distribution of raw unemployment rates, for all our metrics. However, once we condition on a set of covariates, most of the spatial autocorrelation is eliminated, with the exception of physical and occupational distance. Racial and ethnic composition variables are the single most important factor in explaining the observed correlation patterns.",
"title": ""
},
{
"docid": "78e561cfb2578cc9d5634f008a4e6c7e",
"text": "The TCP transport layer protocol is designed for connections that traverse a single path between the sender and receiver. However, there are several environments in which multiple paths can be used by a connection simultaneously. In this paper we consider the problem of supporting striped connections that operate over multiple paths. We propose an end-to-end transport layer protocol called pTCP that allows connections to enjoy the aggregate bandwidths offered by the multiple paths, irrespective of the individual characteristics of the paths. We show that pTCP can have a varied range of applications through instantiations in three different environments: (a) bandwidth aggregation on multihomed mobile hosts, (b) service differentiation using purely end-to-end mechanisms, and (c) end-systems based network striping. In each of the applications we demonstrate the applicability of pTCP and how its efficacy compares with existing approaches through simulation results.",
"title": ""
},
{
"docid": "42d3adba03f835f120404cfe7571a532",
"text": "This study investigated the psychometric properties of the Arabic version of the SMAS. SMAS is a variant of IAT customized to measure addiction to social media instead of the Internet as a whole. Using a self-report instrument on a cross-sectional sample of undergraduate students, the results revealed the following. First, the exploratory factor analysis showed that a three-factor model fits the data well. Second, concurrent validity analysis showed the SMAS to be a valid measure of social media addiction. However, further studies and data should verify the hypothesized model. Finally, this study showed that the Arabic version of the SMAS is a valid and reliable instrument for use in measuring social media addiction in the Arab world.",
"title": ""
},
{
"docid": "16cac565c6163db83496c41ea98f61f9",
"text": "The rapid increase in multimedia data transmission over the Internet necessitates the multi-modal summarization (MMS) from collections of text, image, audio and video. In this work, we propose an extractive multi-modal summarization method that can automatically generate a textual summary given a set of documents, images, audios and videos related to a specific topic. The key idea is to bridge the semantic gaps between multi-modal content. For audio information, we design an approach to selectively use its transcription. For visual information, we learn the joint representations of text and images using a neural network. Finally, all of the multimodal aspects are considered to generate the textual summary by maximizing the salience, non-redundancy, readability and coverage through the budgeted optimization of submodular functions. We further introduce an MMS corpus in English and Chinese, which is released to the public1. The experimental results obtained on this dataset demonstrate that our method outperforms other competitive baseline methods.",
"title": ""
},
{
"docid": "c01fbc8bd278b06e0476c6fbffca0ad1",
"text": "Memristors can be optimally used to implement logic circuits. In this paper, a logic circuit based on Memristor Ratioed Logic (MRL) is proposed. Specifically, a hybrid CMOS-memristive logic family by a suitable combination of 4 memristor and a complementary inverter CMOS structure is presented. The proposed structure by having outputs of AND, OR and XOR gates of inputs at the same time, reducing the area and connections and fewer power consumption can be appropriate for implementation of more complex circuits. Circuit design of a single-bit Full Adder is considered as a case study. The Full Adder proposed is implemented using 10 memristors and 4 transistors comparing to 18 memristors and 8 transistors in the other related work.",
"title": ""
},
{
"docid": "b44d6d71650fc31c643ac00bd45772cd",
"text": "We give in this paper a complete description of the Knuth-Bendix completion algorithm. We prove its correctness in full, isolating carefully the essential abstract notions, so that the proof may be extended to other versions and extensions of the basic algorithm. We show that it defines a semidecision algorithm for the validity problem in the equational theories for which it applies, yielding a decision procedure whenever the algorithm terminates.",
"title": ""
},
{
"docid": "45faf47f5520a4f21719f5169334aabb",
"text": "Many dynamic-content online services are comprised of multiple interacting components and data partitions distributed across server clusters. Understanding the performance of these services is crucial for efficient system management. This paper presents a profile-driven performance model for cluster-based multi-component online services. Our offline constructed application profiles characterize component resource needs and inter-component communications. With a given component placement strategy, the application profile can be used to predict system throughput and average response time for the online service. Our model differentiates remote invocations from fast-path calls between co-located components and we measure the network delay caused by blocking inter-component communications. Validation with two J2EE-based online applications show that our model can predict application performance with small errors (less than 13% for throughput and less than 14% for the average response time). We also explore how this performance model can be used to assist system management functions for multi-component online services, with case examinations on optimized component placement, capacity planning, and cost-effectiveness analysis.",
"title": ""
},
{
"docid": "ea7acc555f2cb2de898a3706c31006db",
"text": "Securing the supply chain of integrated circuits is of utmost importance to computer security. In addition to counterfeit microelectronics, the theft or malicious modification of designs in the foundry can result in catastrophic damage to critical systems and large projects. In this letter, we describe a 3-D architecture that splits a design into two separate tiers: one tier that contains critical security functions is manufactured in a trusted foundry; another tier is manufactured in an unsecured foundry. We argue that a split manufacturing approach to hardware trust based on 3-D integration is viable and provides several advantages over other approaches.",
"title": ""
},
{
"docid": "103f4ff03cc1aef7c173b36ccc33e680",
"text": "Wireless environments are typically characterized by unpredictable and unreliable channel conditions. In such environments, fragmentation of network-bound data is a commonly adapted technique to improve the probability of successful data transmissions and reduce the energy overheads incurred due to re-transmissions. The overall latencies involved with fragmentation and consequent re-assembly of fragments are often neglected which bear significant effects on the real-time guarantees of the participating applications. This work studies the latencies introduced as a result of the fragmentation performed at the link layer (MAC layer in IEEE 802.11) of the source device and their effects on end-to-end delay constraints of mobile applications (e.g., media streaming). Based on the observed effects, this work proposes a feedback-based adaptive approach that chooses an optimal fragment size to (a) satisfy end-to-end delay requirements of the distributed application and (b) minimize the energy consumption of the source device by increasing the probability of successful transmissions, thereby reducing re-transmissions and their associated costs.",
"title": ""
},
{
"docid": "1cb5a2d9abde060ba4f004fac84ca9ca",
"text": "To reach a real-time stereo vision in embedded systems, we propose in this paper, the adaptation and optimization of the well-known Disparity Space Image (DSI) on a single FPGA(Field programmable gate Arrays) that is designed for high efficiency when realized in hardware. An initial disparity map was calculated using the DSI structure and then a median filter was applied to smooth the disparity map. Many methods reported in the literature are mainly restricted to implement the SAD algorithm (Sum of Absolute Differences) on an FPGA. An evaluation of our method is done by comparing the obtained results of our method with a very fast and well-known sum of absolute differences algorithm using hardware-based implementations.",
"title": ""
},
{
"docid": "5948f08c1ca41b7024a4f7c0b2a99e5b",
"text": "Nowadays, neural networks play an important role in the task of relation classification. By designing different neural architectures, researchers have improved the performance to a large extent, compared with traditional methods. However, existing neural networks for relation classification are usually of shallow architectures (e.g., one-layer convolution neural networks or recurrent networks). They may fail to explore the potential representation space in different abstraction levels. In this paper, we propose deep recurrent neural networks (DRNNs) to tackle this challenge. Further, we propose a data augmentation method by leveraging the directionality of relations. We evaluate our DRNNs on the SemEval-2010 Task 8, and achieve an F1score of 85.81%, outperforming state-of-theart recorded results.",
"title": ""
},
{
"docid": "698dfa061afb89ac4dc768ec7a68ff1a",
"text": "a r t i c l e i n f o Social network sites such as Facebook give off the impression that others are doing better than we are. As a result, the use of these sites may lead to negative social comparison (i.e., feeling like others are doing better than oneself). According to social comparison theory, such negative social comparisons are detrimental to perceptions about the self. The current study therefore investigated the indirect relationship between Facebook use and self-perceptions through negative social comparison. Because happier people process social information differently than unhappier people, we also investigated whether the relationship between Facebook use and social comparison and, as a result, self-perception, differs depending on the degree of happiness of the emerging adult. A survey among 231 emerging adults (age 18–25) showed that Facebook use was related to a greater degree of negative social comparison, which was in turn related negatively to self-perceived social competence and physical attractiveness. The indirect relationship between Facebook use and self-perception through negative social comparison was attenuated among happier individuals, as the relationship between Facebook use and negative social comparison was weaker among happier individuals. SNS use was thus negatively related to self-perception through negative social comparison, especially among unhappy individuals. Social network sites (SNSs), such as Facebook, are notorious for giving off the impression that other people are living better lives than we are (Chou & Edge, 2012). People generally present themselves and their lives positively on SNSs (Dorethy, Fiebert, & Warren, 2014) for example by posting pictures in which they look their best (Manago, Graham, Greenfield, & Salimkhan, 2008) and are having a good time with their friends (Zhao, Grasmuck, & Martin, 2008). The vast majority of time spent on SNSs consists of viewing these idealized SNS profiles, pictures, and status updates of others (Pempek, Yermolayeva, & Calvert, 2009). Such information about how others are doing may impact how people see themselves, that is, their self-perceptions because people base their self-perceptions at least partly on how they are doing in comparison to others (Festinger, 1954). These potential effects of SNS use on self-perceptions through social comparison are the focus of the current study. Previous research on the effects of SNSs on self-perceptions has focused predominantly on the implications of social interactions on these websites (e.g., feedback from others) (Valkenburg, Peter, & Schouten, 2006) or due to editing and viewing content about the self …",
"title": ""
}
] |
scidocsrr
|
d8dc9f9ee05822e70db35ed133a192d8
|
SUMMARIZATION USING AGGREGATE SIMILARITY
|
[
{
"docid": "7b755f9b49187e9a77efc4a2327c80ad",
"text": "In this paper, each document is represented by a weighted graph called a text relationship map. In the graph, each node represents a vector of nouns in a sentence, an undirected link connects two nodes if two sentences are semantically related, and a weight on the link is a value of the similarity between a pair of sentences. The vector similarity can be computed as the inner product between corresponding vector elements. The similarity is based on the word overlap between the corresponding sentences. The importance of a node on the map, called an aggregate similarity, is defined as the sum of weights on the links connecting it to other nodes on the map. In this paper, we present a Korean text summarization system using the aggregate similarity. To evaluate our system, we used two test collections: one collection (PAPER-InCon) consists of 100 papers in the domain of computer science; the other collection (NEWS) is composed of 105 articles in the newspapers. Under the compression rate of 20%, we achieved the recall of 46.6% (PAPER-InCon) and 30.5% (NEWS), and the precision of 76.9% (PAPER-InCon) and 42.3% (NEWS). Experiments show that our system outperforms two commercial systems.",
"title": ""
}
] |
[
{
"docid": "5b392df7f03046bb8c15c8bdaa5a811f",
"text": "The inefficiency of separable wavelets in representing smooth edges has led to a great interest in the study of new 2-D transformations. The most popular criterion for analyzing these transformations is the approximation power. Transformations with near-optimal approximation power are useful in many applications such as denoising and enhancement. However, they are not necessarily good for compression. Therefore, most of the nearly optimal transformations such as curvelets and contourlets have not found any application in image compression yet. One of the most promising schemes for image compression is the elegant idea of directional wavelets (DIWs). While these algorithms outperform the state-of-the-art image coders in practice, our theoretical understanding of them is very limited. In this paper, we adopt the notion of rate-distortion and calculate the performance of the DIW on a class of edge-like images. Our theoretical analysis shows that if the edges are not “sharp,” the DIW will compress them more efficiently than the separable wavelets. It also demonstrates the inefficiency of the quadtree partitioning that is often used with the DIW. To solve this issue, we propose a new partitioning scheme called megaquad partitioning. Our simulation results on real-world images confirm the benefits of the proposed partitioning algorithm, promised by our theoretical analysis.",
"title": ""
},
{
"docid": "8d98529cd3fc92eba091e09ea223df4e",
"text": "Exploring small connected and induced subgraph patterns (CIS patterns, or graphlets) has recently attracted considerable attention. Despite recent efforts on computing the number of instances a specific graphlet appears in a large graph (i.e., the total number of CISes isomorphic to the graphlet), little attention has been paid to characterizing a node’s graphlet degree, i.e., the number of CISes isomorphic to the graphlet that include the node, which is an important metric for analyzing complex networks such as social and biological networks. Similar to global graphlet counting, it is challenging to compute node graphlet degrees for a large graph due to the combinatorial nature of the problem. Unfortunately, previous methods of computing global graphlet counts are not suited to solve this problem. In this paper we propose sampling methods to estimate node graphlet degrees for undirected and directed graphs, and analyze the error of our estimates. To the best of our knowledge, we are the first to study this problem and give a fast scalable solution. We conduct experiments on a variety of real-word datasets that demonstrate that our methods accurately and efficiently estimate node graphlet degrees for graphs with millions of edges.",
"title": ""
},
{
"docid": "d0690dcac9bf28f1fe6e2153035f898c",
"text": "The estimation of the homography between two views is a key step in many applications involving multiple view geometry. The homography exists between two views between projections of points on a 3D plane. A homography exists also between projections of all points if the cameras have purely rotational motion. A number of algorithms have been proposed for the estimation of the homography relation between two images of a planar scene. They use features or primitives ranging from simple points to a complex ones like non-parametric curves. Different algorithms make different assumptions on the imaging setup and what is known about them. This article surveys several homography estimation techniques from the literature. The essential theory behind each method is presented briefly and compared with the others. Experiments aimed at providing a representative analysis and comparison of the methods discussed are also presented in the paper.",
"title": ""
},
{
"docid": "910a416dc736ec3566583c57123ac87c",
"text": "Internet of Things (IoT) is one of the greatest technology revolutions in the history. Due to IoT potential, daily objects will be consciously worked in harmony with optimized performances. However, today, technology is not ready to fully bring its power to our daily life because of huge data analysis requirements in instant time. On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make the revolution in our life. However, the traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far away from standardization. Therefore, to meet the expectations of users, the traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because of considering neither heterogeneous servers nor dynamic scheduling approach for different priority requests. Our objective is to propose Husnu S. Narman husnu@ou.edu 1 Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, 29634, USA 2 Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Zahir Raihan Rd, Dhaka, 1000, Bangladesh 3 School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering priorities of requests. Results show that the proposed scheduling algorithm improves throughput up to 40 % in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments byconsidering systemperformancemetrics, such as drop rate, throughput, and utilization in IoT.",
"title": ""
},
{
"docid": "39208755abbd92af643d0e30029f6cc0",
"text": "The biomedical community makes extensive use of text mining technology. In the past several years, enormous progress has been made in developing tools and methods, and the community has been witness to some exciting developments. Although the state of the community is regularly reviewed, the sheer volume of work related to biomedical text mining and the rapid pace in which progress continues to be made make this a worthwhile, if not necessary, endeavor. This chapter provides a brief overview of the current state of text mining in the biomedical domain. Emphasis is placed on the resources and tools available to biomedical researchers and practitioners, as well as the major text mining tasks of interest to the community. These tasks include the recognition of explicit facts from biomedical literature, the discovery of previously unknown or implicit facts, document summarization, and question answering. For each topic, its basic challenges and methods are outlined and recent and influential work is reviewed.",
"title": ""
},
{
"docid": "d3281adf2e84a5bab8b03ab9ee8a2977",
"text": "The concept of Learning Health Systems (LHS) is gaining momentum as more and more electronic healthcare data becomes increasingly accessible. The core idea is to enable learning from the collective experience of a care delivery network as recorded in the observational data, to iteratively improve care quality as care is being provided in a real world setting. In line with this vision, much recent research effort has been devoted to exploring machine learning, data mining and data visualization methodologies that can be used to derive real world evidence from diverse sources of healthcare data to provide personalized decision support for care delivery and care management. In this chapter, we will give an overview of a wide range of analytics and visualization components we have developed, examples of clinical insights reached from these components, and some new directions we are taking.",
"title": ""
},
{
"docid": "2aade03834c6db2ecc2912996fd97501",
"text": "User contributions in the form of posts, comments, and votes are essential to the success of online communities. However, allowing user participation also invites undesirable behavior such as trolling. In this paper, we characterize antisocial behavior in three large online discussion communities by analyzing users who were banned from these communities. We find that such users tend to concentrate their efforts in a small number of threads, are more likely to post irrelevantly, and are more successful at garnering responses from other users. Studying the evolution of these users from the moment they join a community up to when they get banned, we find that not only do they write worse than other users over time, but they also become increasingly less tolerated by the community. Further, we discover that antisocial behavior is exacerbated when community feedback is overly harsh. Our analysis also reveals distinct groups of users with different levels of antisocial behavior that can change over time. We use these insights to identify antisocial users early on, a task of high practical importance to community maintainers.",
"title": ""
},
{
"docid": "273a959e67ada56252f62b3c921b5d52",
"text": "Metric learning for music is an important problem for many music information retrieval (MIR) applications such as music generation, analysis, retrieval, classification and recommendation. Traditional music metrics are mostly defined on linear transformations of handcrafted audio features, and may be improper in many situations given the large variety of music styles and instrumentations. In this paper, we propose a deep neural network named Triplet MatchNet to learn metrics directly from raw audio signals of triplets of music excerpts with human-annotated relative similarity in a supervised fashion. It has the advantage of learning highly nonlinear feature representations and metrics in this end-to-end architecture. Experiments on a widely used music similarity measure dataset show that our method significantly outperforms three state-of-the-art music metric learning methods. Experiments also show that the learned features better preserve the partial orders of the relative similarity than handcrafted features.",
"title": ""
},
{
"docid": "08844c98f9d6b92f84d272516af64281",
"text": "This paper describes the synthesis of Dynamic Differential Logic to increase the resistance of FPGA implementations against Differential Power Analysis. The synthesis procedure is developed and a detailed description is given of how EDA tools should be used appropriately to implement a secure digital design flow. Compared with an existing technique to implement Dynamic Differential Logic on FPGA, the technique saves a factor 2 in slice utilization. Experimental results also indicate that a secure version of the AES encryption algorithm can now be implemented with a mere 50% increase in time delay and 90% increase in slice utilization when compared with a normal non-secure single ended implementation.",
"title": ""
},
{
"docid": "5bff5809ff470084497011a1860148e0",
"text": "A statistical meta-analysis of the technology acceptance model (TAM) as applied in various fields was conducted using 88 published studies that provided sufficient data to be credible. The results show TAM to be a valid and robust model that has been widely used, but which potentially has wider applicability. A moderator analysis involving user types and usage types was performed to investigate conditions under which TAM may have different effects. The study confirmed the value of using students as surrogates for professionals in some TAM studies, and perhaps more generally. It also revealed the power of meta-analysis as a rigorous alternative to qualitative and narrative literature review methods. # 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "10f4398671e3dab3d8414554535511dd",
"text": "As mobile devices become more and more popular, mobile gaming has emerged as a promising market with billion-dollar revenues. A variety of mobile game platforms and services have been developed around the world. A critical challenge for these platforms and services is to understand the churn behavior in mobile games, which usually involves churn at micro level (between an app and a specific user) and macro level (between an app and all its users). Accurate micro-level churn prediction and macro-level churn ranking will benefit many stakeholders such as game developers, advertisers, and platform operators. In this paper, we present the first large-scale churn analysis for mobile games that supports both micro-level churn prediction and macrolevel churn ranking. For micro-level churn prediction, in view of the common limitations of the state-of-the-art methods built upon traditional machine learning models, we devise a novel semi-supervised and inductive embedding model that jointly learns the prediction function and the embedding function for user-app relationships. We model these two functions by deep neural networks with a unique edge embedding technique that is able to capture both contextual information and relationship dynamics. We also design a novel attributed random walk technique that takes into consideration both topological adjacency and attribute similarities. To address macro-level churn ranking, we propose to construct a relationship graph with estimated micro-level churn probabilities as edge weights and adapt link analysis algorithms on the graph. We devise a simple algorithm SimSum and adapt two more advanced algorithms PageRank and HITS. The performance of our solutions for the two-level churn analysis problems is evaluated on real-world data collected from the Samsung Game Launcher platform. The data includes tens of thousands of mobile games and hundreds of millions of user∗ This work was done during the authors’ internships at Samsung Research America Received xxx Revised xxx Accepted xxx ar X iv :1 90 1. 06 24 7v 1 [ cs .L G ] 1 4 Ja n 20 19",
"title": ""
},
{
"docid": "424a0f5f4a725b85fabb8c7ee19c6e3c",
"text": "The data on dental variability in natural populations of sibling species of common voles (“arvalis” group, genus Microtus) from European and Asian parts of the species’ ranges are summarized using a morphotype-based approach to analysis of dentition. Frequency distributions of the first lower (m1) and the third upper (M3) molar morphotypes are analyzed in about 65 samples of M. rossiaemeridionalis and M. arvalis represented by arvalis and obscurus karyotypic forms. Because of extreme similarity of morphotype dental patterns in the taxa studied, it is impossible to use molar morphotype frequencies for species identification. However, a morphotype-based approach to analysis of dental variability does allow analysis of inter-species comparisons from an evolutionary standpoint. Three patterns of dental complexity are established in the taxa studied: simple, basic (the most typical within the ranges of both species), and complex. In M. rossiaemeridionalis and in M. arvalis obscurus only the basic pattern of dentition occurs. In M. arvalis arvalis, both simple and basic dental patterns are found. Analysis of association of morphotype dental patterns with geographical and environmental variables reveals an increase in the number of complex molars with longitude and latitude: in M. arvalis the pattern of molar complication is more strongly related to longitude, and in M. rossiaemeridionalis—to latitude. Significant decrease in incidence of simple molars with climate continentality and increasing aridity is found in M. arvalis. The simple pattern of dentition is found in M. arvalis arvalis in Spain, along the Atlantic coast of France and on islands thereabout, in northeastern Germany and Kirov region in European Russia. Hypotheses to explain the distribution of populations with different dental patterns within the range of M. arvalis sensu stricto are discussed.",
"title": ""
},
{
"docid": "e22378cc4ae64e9c3abbd4b308198fb6",
"text": "Knowledge about the argumentative structure of scientific articles can, amongst other things, be used to improve automatic abstracts. We argue that the argumentative structure of scientific discourse can be automatically detected because reasordng about problems, research tasks and solutions follows predictable patterns. Certain phrases explicitly mark the rhetorical status (communicative function) of sentences with respect to the global argumentative goal. Examples for such meta-diacaurse markers are \"in this paper, we have p r e s e n t e d . . . \" or \"however, their method fails to\". We report on work in progress about recognizing such meta-comments automatically in research articles from two disciplines: computational linguistics and medicine (cardiology). 1 M o t i v a t i o n We are interested in a formal description of the document s t ructure of scientific articles from different disciplines. Such a description could be of practical use for many applications in document management; our specific mot ivat ion for detecting document structure is qual i ty improvement in automatic abstracting. Researchem in the field of automatic abstracting largely agree that it is currently not technically feasible to create automatic abstracts based on full text unders tanding (Sparck Jones 1994). As a result, many researchers have turned to sentence extraction (Kupiec, Pedersen, & Chen 1995; Brandow, Mitze, & Rau 1995; Hovy & Lin 1997). Sentence extraction, which does not involve any deep analysis, has the huge advantage of being robust with respect to individual writing style, discipline and text type (genre). Instead of producing a b s t r a c t , this results produces only extracts: documen t surrogates consisting of a number of sentences selected verbat im from the original text. We consider a concrete document retrieval (DR) scenario in which a researcher wants to select one or more scientific articles from a large scientific database (or even f rom the Internet) for further inspection. The ma in task for the searcher is relevance decision for each paper: she needs to decide whether or not to spend more t ime on a paper (read or skim-read it), depending on how useful it presumably is to her current information needs. Traditional sentence extracts can be used as rough-and-ready relevance indicators for this task, but they are not doing a great job at representing the contents of the original document: searchers often get the wrong idea about what the text is about. Much of this has to do with the fact that extracts are typically incoherent texts, consisting of potential ly unrelated sentences which have been taken out of their context. Crucially, extracts have no handle at revealing the text 's logical and semantic organisation. More sophisticated, user-tailored abstracts could help the searcher make a fast, informed relevance decision by taking factors like the searcher's expertise and current information need into account. If the searcher is dealing with research she knows well, her information needs might be quite concrete: during the process of writing her own paper she might want to find research which supports her own claims, find out if there are contradictory results to hers in the literature, or compare her results to those of researchers using a similar methodology. 
A different information need arises when she wants to gain an overview of a new research area as an only \"partially informed user\" in this field (Kircz 1991) she will need to find out about specific research goals, the names of the researchers who have contributed the main research ideas in a given time period, along with information of methodology and results in this research field. There are new functions these abstracts could fulfil. In order to make an informed relevance decision, the searcher needs to judge differences and similarities between papers, e.g. how a given paper relates to similar papers with respect to research goals or methodology, so that she can place the research described in a given paper in the larger picture of the field, a function we call navigation between research articles. A similar operation is navigation within a paper, which supports searchers in non-linear reading and allows them to find relevant information faster, e.g. numerical results. We believe that a document surrogate that aims at supporting such functions should characterize research articles in terms of the problems, research tasks and",
"title": ""
},
{
"docid": "59c757aa28dcb770ecf5b01dc26ba087",
"text": "Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters and the constructed knowledge graphs were evaluated and validated, with permission, against Google’s manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).",
"title": ""
},
{
"docid": "fc779c615e0661c6247998532fee55cc",
"text": "This paper presents a challenge to the community: given a large corpus of written text aligned to its normalized spoken form, train an RNN to learn the correct normalization function. We present a data set of general text where the normalizations were generated using an existing text normalization component of a text-to-speech system. This data set will be released open-source in the near future. We also present our own experiments with this data set with a variety of different RNN architectures. While some of the architectures do in fact produce very good results when measured in terms of overall accuracy, the errors that are produced are problematic, since they would convey completely the wrong message if such a system were deployed in a speech application. On the other hand, we show that a simple FST-based filter can mitigate those errors, and achieve a level of accuracy not achievable by the RNN alone. Though our conclusions are largely negative on this point, we are actually not arguing that the text normalization problem is intractable using an pure RNN approach, merely that it is not going to be something that can be solved merely by having huge amounts of annotated text data and feeding that to a general RNN model. Andwhenwe open-source our data, we will be providing a novel data set for sequenceto-sequence modeling in the hopes that the the community can find better solutions.",
"title": ""
},
{
"docid": "b8c683c194792a399f9c12fdf7e9f0cd",
"text": "The rise of Social Media services in the last years has created huge streams of information that can be very valuable in a variety of scenarios. What precisely these scenarios are and how the data streams can efficiently be analyzed for each scenario is still largely unclear at this point in time and has therefore created significant interest in industry and academia. In this paper, we describe a novel algorithm for geo-spatial event detection on Social Media streams. We monitor all posts on Twitter issued in a given geographic region and identify places that show a high amount of activity. In a second processing step, we analyze the resulting spatio-temporal clusters of posts with a Machine Learning component in order to detect whether they constitute real-world events or not. We show that this can be done with high precision and recall. The detected events are finally displayed to a user on a map, at the location where they happen and while they happen.",
"title": ""
},
{
"docid": "a41c9650da7ca29a51d310cb4a3c814d",
"text": "The analysis of resonant-type antennas based on the fundamental infinite wavelength supported by certain periodic structures is presented. Since the phase shift is zero for a unit-cell that supports an infinite wavelength, the physical size of the antenna can be arbitrary; the antenna's size is independent of the resonance phenomenon. The antenna's operational frequency depends only on its unit-cell and the antenna's physical size depends on the number of unit-cells. In particular, the unit-cell is based on the composite right/left-handed (CRLH) metamaterial transmission line (TL). It is shown that the CRLH TL is a general model for the required unit-cell, which includes a nonessential series capacitance for the generation of an infinite wavelength. The analysis and design of the required unit-cell is discussed based upon field distributions and dispersion diagrams. It is also shown that the supported infinite wavelength can be used to generate a monopolar radiation pattern. Infinite wavelength resonant antennas are realized with different number of unit-cells to demonstrate the infinite wavelength resonance",
"title": ""
},
{
"docid": "fbfbb339657f2a0a97f8a65dfb99ffbc",
"text": "This work describes a novel technique of designing a high gain low noise CMOS instrumentation amplifier for biomedical applications like ECG signal processing. A three opamp instrumentation amplifier have been designed by using two simple op-amps at the two input stages and a folded cascode opamp at the output stage. Both op-amps at the input and output are 2-stage. Most of the previous or earlier designed op-amp in literature uses same type of op-amp at the input and output stages of instrumentation amplifier. By using folded cascode op-amp at the output, we had achieved significant improvement in gain and CMRR. Transistors sizing plays a major role in achieving high gain and CMRR. To achieve a desirable common mode rejection ratio (CMRR), Gain and other performance metrics, selection of most appropriable op-amp circuit topologies & optimum transistor sizing was the main criteria for designing of instrumentation amplifier for biomedical applications. The complete instrumentation amplifier design is simulated using Cadence Spectre tool and layout is designed and simulated in Cadence Layout editor at 0.18μm CMOS technology. Each of the input two stage op-amp provides a gain and CMRR of 45dB and 72dB respectively. The output two stage folded cascode amplifier provides a CMRR of 92dB and a gain of 82dB. The design achieves an overall CMRR and gain of 92dB and 67db respectively. The overall power consumed by instrumentation amplifier is 263μW which is suitable for biomedical signal processing applications.",
"title": ""
},
{
"docid": "5932b3f1f0523f07190855e51abc04b9",
"text": "This paper proposes an optimization algorithm based on how human fight and learn from each duelist. Since this algorithm is based on population, the proposed algorithm starts with an initial set of duelists. The duel is to determine the winner and loser. The loser learns from the winner, while the winner try their new skill or technique that may improve their fighting capabilities. A few duelists with highest fighting capabilities are called as champion. The champion train a new duelists such as their capabilities. The new duelist will join the tournament as a representative of each champion. All duelist are re-evaluated, and the duelists with worst fighting capabilities is eliminated to maintain the amount of duelists. Two optimization problem is applied for the proposed algorithm, together with genetic algorithm, particle swarm optimization and imperialist competitive algorithm. The results show that the proposed algorithm is able to find the better global optimum and faster iteration. Keywords—Optimization; global, algorithm; duelist; fighting",
"title": ""
},
{
"docid": "bfa87a59940f6848d8d5b53b89c16735",
"text": "The over-segmentation of images into atomic regions has become a standard and powerful tool in Vision. Traditional superpixel methods, that operate at the pixel level, cannot directly capture the geometric information disseminated into the images. We propose an alternative to these methods by operating at the level of geometric shapes. Our algorithm partitions images into convex polygons. It presents several interesting properties in terms of geometric guarantees, region compactness and scalability. The overall strategy consists in building a Voronoi diagram that conforms to preliminarily detected line-segments, before homogenizing the partition by spatial point process distributed over the image gradient. Our method is particularly adapted to images with strong geometric signatures, typically man-made objects and environments. We show the potential of our approach with experiments on large-scale images and comparisons with state-of-the-art superpixel methods.",
"title": ""
}
] |
scidocsrr
|
e8aa3724c0874026b8a2e1e6b929e8e0
|
The Structure and Performance of Efficient Interpreters
|
[
{
"docid": "e43814f288e1c5a84fb9d26b46fc7e37",
"text": "Achieving good performance in bytecoded language interpreters is difficult without sacrificing both simplicity and portability. This is due to the complexity of dynamic translation (\"just-in-time compilation\") of bytecodes into native code, which is the mechanism employed universally by high-performance interpreters.We demonstrate that a few simple techniques make it possible to create highly-portable dynamic translators that can attain as much as 70% the performance of optimized C for certain numerical computations. Translators based on such techniques can offer respectable performance without sacrificing either the simplicity or portability of much slower \"pure\" bytecode interpreters.",
"title": ""
}
] |
[
{
"docid": "d35515299b37b5eb936986d33aca66e1",
"text": "This paper describes an Ada framework called Cheddar which provides tools to check if a real time application meets its temporal constraints. The framework is based on the real time scheduling theory and is mostly written for educational purposes. With Cheddar, an application is defined by a set of processors, tasks, buffers, shared resources and messages. Cheddar provides feasibility tests in the cases of monoprocessor, multiprocessor and distributed systems. It also provides a flexible simulation engine which allows the designer to describe and run simulations of specific systems. The framework is open and has been designed to be easily connected to CASE tools such as editors, design tools, simulators, ...",
"title": ""
},
{
"docid": "65c9ce95eb92ad4be2caf4b4a6a0bdd8",
"text": "The electricity industry is now at the verge of a new era-an era that promises, through the evolution of the existing electrical grids to smart grids, more efficient and effective power management, better reliability, reduced production costs, and more environmentally friendly energy generation. Numerous initiatives across the globe, led by both industry and academia, reflect the mounting interest around not only the enormous benefits but also the great risks introduced by this evolution. This paper focuses on issues related to the security of the smart grid and the smart home, which we present as an integral part of the smart grid. Based on several scenarios, we aim to present some of the most representative threats to the smart home/smart grid environment. The threats detected are categorized according to specific security goals set for the smart home/smart grid environment, and their impact on the overall system security is evaluated. A review of contemporary literature is then conducted with the aim of presenting promising security countermeasures with respect to the identified specific security goals for each presented scenario. An effort to shed light on open issues and future research directions concludes this paper.",
"title": ""
},
{
"docid": "beea84b0d96da0f4b29eabf3b242a55c",
"text": "Recent years have seen a growing interest in creating virtual agents to populate the cast of characters for interactive narrative. A key challenge posed by interactive characters for narrative environments is devising expressive dialogue generators. To be effective, character dialogue generators must be able to simultaneously take into account multiple sources of information that bear on dialogue, including character attributes, plot development, and communicative goals. Building on the narrative theory of character archetypes, we propose an archetype-driven character dialogue generator that uses a probabilistic unification framework to generate dialogue motivated by character personality and narrative history to achieve communicative goals. The generator’s behavior is illustrated with character dialogue generation in a narrative-centered learning environment, CRYSTAL ISLAND.",
"title": ""
},
{
"docid": "5611107338100a2d202f7dbde5fd39ac",
"text": "This experiment investigated the ability of rats with dorsal striatal or fornix damage to learn the location of a visible platform in a water maze. We also assessed the animals' ability to find the platform when it was hidden (submerged). Rats with neurotoxic damage to the dorsal striatum acquired both the visible and hidden platform versions of the task, but when required to choose between the spatial location they had learned and the visible platform in a new location they swam first to the old spatial location. Rats with radio-frequency damage to the fornix acquired the visible platform version of the water maze task but failed to learn about the platform's location in space. When the visible platform was moved to a new location they swam directly to it. Normal rats acquired both the visible and hidden platform versions of the task. These findings suggest that in the absence of a functional neural system that includes dorsal striatum, spatial information predominantly controlled behavior even in the presence of a cue that the animals had previously been reinforced for approaching. In the absence of a functional hippocampal system behavior was not affected by spatial information and responding to local reinforced cues was enhanced. The results support the idea that different neural substrates in the mammalian nervous system acquire different types of information simultaneously and in parallel.",
"title": ""
},
{
"docid": "39bf990d140eb98fa7597de1b6165d49",
"text": "The Internet of Things (IoT) is expected to substantially support sustainable development of future smart cities. This article identifies the main issues that may prevent IoT from playing this crucial role, such as the heterogeneity among connected objects and the unreliable nature of associated services. To solve these issues, a cognitive management framework for IoT is proposed, in which dynamically changing real-world objects are represented in a virtualized environment, and where cognition and proximity are used to select the most relevant objects for the purpose of an application in an intelligent and autonomic way. Part of the framework is instantiated in terms of building blocks and demonstrated through a smart city scenario that horizontally spans several application domains. This preliminary proof of concept reveals the high potential that self-reconfigurable IoT can achieve in the context of smart cities.",
"title": ""
},
{
"docid": "fb6494dcf01a927597ff784a3323e8c2",
"text": "Detection of defects in induction machine rotor bars for unassembled motors is required to evaluate machines considered for repair as well as fulfilling incremental quality assurance checks in the manufacture of new machines. Detection of rotor bar defects prior to motor assembly are critical in increasing repair efficiency and assuring the quality of newly manufactured machines. Many methods of detecting rotor bar defects in unassembled motors lack the sensitivity to find both major and minor defects in both cast and fabricated rotors along with additional deficiencies in quantifiable test results and arc-flash safety hazards. A process of direct magnetic field analysis can examine measurements from induced currents in a rotor separated from its stator yielding a high-resolution fingerprint of a rotor's magnetic field. This process identifies both major and minor rotor bar defects in a repeatable and quantifiable manner appropriate for numerical evaluation without arc-flash safety hazards.",
"title": ""
},
{
"docid": "aa5d8162801abcc81ac542f7f2a423e5",
"text": "Prediction of popularity has profound impact for social media, since it offers opportunities to reveal individual preference and public attention from evolutionary social systems. Previous research, although achieves promising results, neglects one distinctive characteristic of social data, i.e., sequentiality. For example, the popularity of online content is generated over time with sequential post streams of social media. To investigate the sequential prediction of popularity, we propose a novel prediction framework called Deep Temporal Context Networks (DTCN) by incorporating both temporal context and temporal attention into account. Our DTCN contains three main components, from embedding, learning to predicting. With a joint embedding network, we obtain a unified deep representation of multi-modal user-post data in a common embedding space. Then, based on the embedded data sequence over time, temporal context learning attempts to recurrently learn two adaptive temporal contexts for sequential popularity. Finally, a novel temporal attention is designed to predict new popularity (the popularity of a new userpost pair) with temporal coherence across multiple time-scales. Experiments on our released image dataset with about 600K Flickr photos demonstrate that DTCN outperforms state-of-the-art deep prediction algorithms, with an average of 21.51% relative performance improvement in the popularity prediction (Spearman Ranking Correlation).",
"title": ""
},
{
"docid": "0d11074054a2921c90d028c54010193b",
"text": "Aggressively scaling the supply voltage of SRAMs greatly minimizes their active and leakage power, a dominating portion of the total power in modern ICs. Hence, energy constrained applications, where performance requirements are secondary, benefit significantly from an SRAM that offers read and write functionality at the lowest possible voltage. However, bit-cells and architectures achieving very high density conventionally fail to operate at low voltages. This paper describes a high density SRAM in 65 nm CMOS that uses an 8T bit-cell to achieve a minimum operating voltage of 350 mV. Buffered read is used to ensure read stability, and peripheral control of both the bit-cell supply voltage and the read-buffer's foot voltage enable sub-T4 write and read without degrading the bit-cell's density. The plaguing area-offset tradeoff in modern sense-amplifiers is alleviated using redundancy, which reduces read errors by a factor of five compared to device up-sizing. At its lowest operating voltage, the entire 256 kb SRAM consumes 2.2 muW in leakage power.",
"title": ""
},
{
"docid": "300028d1aa1eda913737c1e7ba6b61f7",
"text": "We consider the task of detecting regulatory elements in the human genome directly from raw DNA. Past work has focused on small snippets of DNA, making it difficult to model long-distance dependencies that arise from DNA’s 3-dimensional conformation. In order to study long-distance dependencies, we develop and release a novel dataset for a larger-context modeling task. Using this new data set we model long-distance interactions using dilated convolutional neural networks, and compare them to standard convolutions and recurrent neural networks. We show that dilated convolutions are effective at modeling the locations of regulatory markers in the human genome, such as transcription factor binding sites, histone modifications, and DNAse hypersensitivity sites.",
"title": ""
},
{
"docid": "cacf4a2d7004bccecb0e8965de695e69",
"text": "The WebNLG challenge consists in mapping sets of RDF triples to text. It provides a common benchmark on which to train, evaluate and compare “microplanners”, i.e. generation systems that verbalise a given content by making a range of complex interacting choices including referring expression generation, aggregation, lexicalisation, surface realisation and sentence segmentation. In this paper, we introduce the microplanning task, describe data preparation, introduce our evaluation methodology, analyse participant results and provide a brief description of the participating systems.",
"title": ""
},
{
"docid": "861f76c061b9eb52ed5033bdeb9a3ce5",
"text": "2007S. Robson Walton Chair in Accounting, University of Arkansas 2007-2014; 2015-2016 Accounting Department Chair, University of Arkansas 2014Distinguished Professor, University of Arkansas 2005-2014 Professor, University of Arkansas 2005-2008 Ralph L. McQueen Chair in Accounting, University of Arkansas 2002-2005 Associate Professor, University of Kansas 1997-2002 Assistant Professor, University of Kansas",
"title": ""
},
{
"docid": "403dc89a0b74e68dda095dde756d44f0",
"text": "The prefrontal cortex subserves executive control--that is, the ability to select actions or thoughts in relation to internal goals. Here, we propose a theory that draws upon concepts from information theory to describe the architecture of executive control in the lateral prefrontal cortex. Supported by evidence from brain imaging in human subjects, the model proposes that action selection is guided by hierarchically ordered control signals, processed in a network of brain regions organized along the anterior-posterior axis of the lateral prefrontal cortex. The theory clarifies how executive control can operate as a unitary function, despite the requirement that information be integrated across multiple distinct, functionally specialized prefrontal regions.",
"title": ""
},
{
"docid": "b44d6d71650fc31c643ac00bd45772cd",
"text": "We give in this paper a complete description of the Knuth-Bendix completion algorithm. We prove its correctness in full, isolating carefully the essential abstract notions, so that the proof may be extended to other versions and extensions of the basic algorithm. We show that it defines a semidecision algorithm for the validity problem in the equational theories for which it applies, yielding a decision procedure whenever the algorithm terminates.",
"title": ""
},
{
"docid": "b15bb888a11444f614b4e45317550830",
"text": "Transactional Memory (TM) is emerging as a promising technology to simplify parallel programming. While several TM systems have been proposed in the research literature, we are still missing the tools and workloads necessary to analyze and compare the proposals. Most TM systems have been evaluated using microbenchmarks, which may not be representative of any real-world behavior, or individual applications, which do not stress a wide range of execution scenarios. We introduce the Stanford Transactional Application for Multi-Processing (STAMP), a comprehensive benchmark suite for evaluating TM systems. STAMP includes eight applications and thirty variants of input parameters and data sets in order to represent several application domains and cover a wide range of transactional execution cases (frequent or rare use of transactions, large or small transactions, high or low contention, etc.). Moreover, STAMP is portable across many types of TM systems, including hardware, software, and hybrid systems. In this paper, we provide descriptions and a detailed characterization of the applications in STAMP. We also use the suite to evaluate six different TM systems, identify their shortcomings, and motivate further research on their performance characteristics.",
"title": ""
},
{
"docid": "74fcade8e5f5f93f3ffa27c4d9130b9f",
"text": "Resampling is an important signature of manipulated images. In this paper, we propose two methods to detect and localize image manipulations based on a combination of resampling features and deep learning. In the first method, the Radon transform of resampling features are computed on overlapping image patches. Deep learning classifiers and a Gaussian conditional random field model are then used to create a heatmap. Tampered regions are located using a Random Walker segmentation method. In the second method, resampling features computed on overlapping image patches are passed through a Long short-term memory (LSTM) based network for classification and localization. We compare the performance of detection/localization of both these methods. Our experimental results show that both techniques are effective in detecting and localizing digital image forgeries.",
"title": ""
},
{
"docid": "1b812ef6c607790a0dbcf5e050871fc2",
"text": "This paper introduces Adaptive Music for Affect Improvement (AMAI), a music generation and playback system whose goal is to steer the listener towards a state of more positive affect. AMAI utilizes techniques from game music in order to adjust elements of the music being heard; such adjustments are made adaptively in response to the valence levels of the listener as measured via facial expression and emotion detection. A user study involving AMAI was conducted, with N=19 participants across three groups, one for each strategy of Discharge, Diversion, and Discharge→ Diversion. Significant differences in valence levels between music-related stages of the study were found between the three groups, with Discharge → Diversion exhibiting the greatest increase in valence, followed by Diversion and finally Discharge. Significant differences in positive affect between groups were also found in one before-music and after-music pair of self-reported affect surveys, with Discharge→ Diversion exhibiting the greatest decrease in positive affect, followed by Diversion and finally Discharge; the resulting differences in facial expression valence and self-reported affect offer contrasting con-",
"title": ""
},
{
"docid": "9292f1925de5d6df9eb89b2157842e5c",
"text": "According to Breast Cancer Institute (BCI), Breast Cancer is one of the most dangerous type of diseases that is very effective for women in the world. As per clinical expert detecting this cancer in its first stage helps in saving lives. As per cancer.net offers individualized guides for more than 120 types of cancer and related hereditary syndromes. For detecting breast cancer mostly machine learning techniques are used. In this paper we proposed adaptive ensemble voting method for diagnosed breast cancer using Wisconsin Breast Cancer database. The aim of this work is to compare and explain how ANN and logistic algorithm provide better solution when its work with ensemble machine learning algorithms for diagnosing breast cancer even the variables are reduced. In this paper we used the Wisconsin Diagnosis Breast Cancer dataset. When compared to related work from the literature. It is shown that the ANN approach with logistic algorithm is achieved 98.50% accuracy from another machine learning algorithm.",
"title": ""
},
{
"docid": "5fc8afbe7d55af3274d849d1576d3b13",
"text": "It is a difficult task to classify images with multiple class labels using only a small number of labeled examples, especially when the label (class) distribution is imbalanced. Emotion classification is such an example of imbalanced label distribution, because some classes of emotions like disgusted are relatively rare comparing to other labels like happy or sad. In this paper, we propose a data augmentation method using generative adversarial networks (GAN). It can complement and complete the data manifold and find better margins between neighboring classes. Specifically, we design a framework using a CNN model as the classifier and a cycle-consistent adversarial networks (CycleGAN) as the generator. In order to avoid gradient vanishing problem, we employ the least-squared loss as adversarial loss. We also propose several evaluation methods on three benchmark datasets to validate GAN’s performance. Empirical results show that we can obtain 5%∼10% increase in the classification accuracy after employing the GAN-based data augmentation techniques.",
"title": ""
},
{
"docid": "0d20f5ae084c6ca4e7a834e1eee1e84c",
"text": "Gantry-tilted helical multi-slice computed tomography (CT) refers to the helical scanning CT system equipped with multi-row detector operating at some gantry tilting angle. Its purpose is to avoid the area which is vulnerable to the X-ray radiation. The local tomography is to reduce the total radiation dose by only scanning the region of interest for image reconstruction. In this paper we consider the scanning scheme, and incorporate the local tomography technique with the gantry-tilted helical multi-slice CT. The image degradation problem caused by gantry tilting is studied, and a new error correction method is proposed to deal with this problem in the local CT. Computer simulation shows that the proposed method can enhance the local imaging performance in terms of image sharpness and artifacts reduction",
"title": ""
},
{
"docid": "a2eee3cd0e8ee3e97af54f11b8a29fc9",
"text": "Internet Service Providers (ISPs) are responsible for transmitting and delivering their customers’ data requests, ranging from requests for data from websites, to that from filesharing applications, to that from participants in Voice over Internet Protocol (VoIP) chat sessions. Using contemporary packet inspection and capture technologies, ISPs can investigate and record the content of unencrypted digital communications data packets. This paper explains the structure of these packets, and then proceeds to describe the packet inspection technologies that monitor their movement and extract information from the packets as they flow across ISP networks. After discussing the potency of contemporary deep packet inspection devices, in relation to their earlier packet inspection predecessors, and their potential uses in improving network operators’ network management systems, I argue that they should be identified as surveillance technologies that can potentially be incredibly invasive. Drawing on Canadian examples, I argue that Canadian ISPs are using DPI technologies to implicitly ‘teach’ their customers norms about what are ‘inappropriate’ data transfer programs, and the appropriate levels of ISP manipulation of customer data traffic. Version 1.2 :: January 10, 2008. * Doctoral student in the University of Victoria’s Political Science department. Thanks to Colin Bennett, Andrew Clement, Fenwick Mckelvey and Joyce Parsons for comments.",
"title": ""
}
] |
scidocsrr
|
9024ad2b909493bd511fc45ef0308be2
|
An image-warping VR-architecture: design, implementation and applications
|
[
{
"docid": "8745e21073db143341e376bad1f0afd7",
"text": "The Virtual Reality (VR) user interface style allows natural hand and body motions to manipulate virtual objects in 3D environments using one or more 3D input devices. This style is best suited to application areas where traditional two-dimensional styles fall short, such as scienti c visualization, architectural visualization, and remote manipulation. Currently, the programming e ort required to produce a VR application is too large, and many pitfalls must be avoided in the creation of successful VR programs. In this paper we describe the Decoupled Simulation Model for creating successful VR applications, and a software system that embodies this model. The MR Toolkit simpli es the development of VR applications by providing standard facilities required by a wide range of VR user interfaces. These facilities include support for distributed computing, head-mounted displays, room geometry management, performance monitoring, hand input devices, and sound feedback. The MR Toolkit encourages programmers to structure their applications to take advantage of the distributed computing capabilities of workstation networks improving the application's performance. In this paper, the motivations and the architecture of the toolkit are outlined, the programmer's view is described, and a simple application is brie y described. CR",
"title": ""
}
] |
[
{
"docid": "f8d0929721ba18b2412ca516ac356004",
"text": "Because of the fact that vehicle crash tests are complex and complicated experiments it is advisable to establish their mathematical models. This paper contains an overview of the kinematic and dynamic relationships of a vehicle in a collision. There is also presented basic mathematical model representing a collision together with its analysis. The main part of this paper is devoted to methods of establishing parameters of the vehicle crash model and to real crash data investigation i.e. – creation of a Kelvin model for a real experiment, its analysis and validation. After model’s parameters extraction a quick assessment of an occupant crash severity is done. Key-Words: Modeling, vehicle crash, Kelvin model, data processing.",
"title": ""
},
{
"docid": "6e07a006d4e34f35330c74116762a611",
"text": "Human replicas may elicit unintended cold, eerie feelings in viewers, an effect known as the uncanny valley. Masahiro Mori, who proposed the effect in 1970, attributed it to inconsistencies in the replica's realism with some of its features perceived as human and others as nonhuman. This study aims to determine whether reducing realism consistency in visual features increases the uncanny valley effect. In three rounds of experiments, 548 participants categorized and rated humans, animals, and objects that varied from computer animated to real. Two sets of features were manipulated to reduce realism consistency. (For humans, the sets were eyes-eyelashes-mouth and skin-nose-eyebrows.) Reducing realism consistency caused humans and animals, but not objects, to appear eerier and colder. However, the predictions of a competing theory, proposed by Ernst Jentsch in 1906, were not supported: The most ambiguous representations-those eliciting the greatest category uncertainty-were neither the eeriest nor the coldest.",
"title": ""
},
{
"docid": "a5ac7aa3606ebb683d4d9de5dcd89856",
"text": "Advanced persistent threats (APTs) pose a significant risk to nearly every infrastructure. Due to the sophistication of these attacks, they are able to bypass existing security systems and largely infiltrate the target network. The prevention and detection of APT campaigns is also challenging, because of the fact that the attackers constantly change and evolve their advanced techniques and methods to stay undetected. In this paper we analyze 22 different APT reports and give an overview of the used techniques and methods. The analysis is focused on the three main phases of APT campaigns that allow to identify the relevant characteristics of such attacks. For each phase we describe the most commonly used techniques and methods. Through this analysis we could reveal different relevant characteristics of APT campaigns, for example that the usage of 0-day exploit is not common for APT attacks. Furthermore, the analysis shows that the dumping of credentials is a relevant step in the lateral movement phase for most APT campaigns. Based on the identified characteristics, we also propose concrete prevention and detection approaches that make it possible to identify crucial malicious activities that are performed during APT campaigns.",
"title": ""
},
{
"docid": "27ef8bac566dbba418870036ed555b1a",
"text": "Seemingly unrelated regression (SUR) models are useful in studying the interactions among different variables. In a high dimensional setting or when applied to large panel of time series, these models require a large number of parameters to be estimated and suffer of inferential problems. To avoid overparametrization and overfitting issues, we propose a hierarchical Dirichlet process prior for SUR models, which allows shrinkage of SUR coefficients toward multiple locations and identification of group of coefficients. We propose a two-stage hierarchical prior distribution, where the first stage of the hierarchy consists in a Lasso conditionally independent prior distribution of the NormalGamma family for the SUR coefficients. The second stage is given by a random mixture distribution for the Normal-Gamma hyperparameters, which allows for parameter parsimony through two components: the first one is a random Dirac point-mass distribution, which induces sparsity in the SUR coefficients; the second is a Dirichlet process prior, which allows for clustering of the SUR coefficients. Our sparse SUR model with multiple locations, scales and shapes includes the Vector autoregressive models (VAR) and dynamic panel models as special cases. We consider an international business cycle applications to show the effectiveness of our model and inference approach. Our new multiple shrinkage prior model allows us to better understand shock transmission phenomena, to extract coloured networks and to classify the linkages strenght. The empirical results represent a different point of view on international business cycles providing interesting new findings in the relationship between core and pheriphery countries.",
"title": ""
},
{
"docid": "5d40cae84395cc94d68bd4352383d66b",
"text": "Scalable High Efficiency Video Coding (SHVC) is the extension of the High Efficiency Video Coding (HEVC). This standard is developed to ameliorate the coding efficiency for the spatial and quality scalability. In this paper, we investigate a survey for SHVC extension. We describe also its types and explain the different additional coding tools that further improve the Enhancement Layer (EL) coding efficiency. Furthermore, we assess through experimental results the performance of the SHVC for different coding configurations. The effectiveness of the SHVC was demonstrated, using two layers, by comparing its coding adequacy compared to simulcast configuration and HEVC for enhancement layer using HM16 for several test sequences and coding conditions.",
"title": ""
},
{
"docid": "a5f9b7b7b25ccc397acde105c39c3d9d",
"text": "Processors with multiple cores and complex cache coherence protocols are widely employed to improve the overall performance. It is a major challenge to verify the correctness of a cache coherence protocol since the number of reachable states grows exponentially with the number of cores. In this paper, we propose an efficient test generation technique, which can be used to achieve full state and transition coverage in simulation based verification for a wide variety of cache coherence protocols. Based on effective analysis of the state space structure, our method can generate more efficient test sequences (50% shorter) compared with tests generated by breadth first search. Moreover, our proposed approach can generate tests on-the-fly due to its space efficient design.",
"title": ""
},
{
"docid": "590ad5ce089e824d5e9ec43c54fa3098",
"text": "The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by definingcausal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that arecausally related. Because causal memory isweakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory.",
"title": ""
},
{
"docid": "e46943cc1c73a56093d4194330d52d52",
"text": "This paper deals with the compact modeling of an emerging technology: the carbon nanotube field-effect transistor (CNTFET). The paper proposed two design-oriented compact models, the first one for CNTFET with a classical behavior (MOSFET-like CNTFET), and the second one for CNTFET with an ambipolar behavior (Schottky-barrier CNTFET). Both models have been compared with exact numerical simulations and then implemented in VHDL-AMS",
"title": ""
},
{
"docid": "30e15e8a3e6eaf424b2f994d2631ac37",
"text": "This paper presents a volumetric stereo and silhouette fusion algorithm for acquiring high quality models from multiple calibrated photographs. Our method is based on computing and merging depth maps. Different from previous methods of this category, the silhouette information is also applied in our algorithm to recover the shape information on the textureless and occluded areas. The proposed algorithm starts by computing visual hull using a volumetric method in which a novel projection test method is proposed for visual hull octree construction. Then, the depth map of each image is estimated by an expansion-based approach that returns a 3D point cloud with outliers and redundant information. After generating an oriented point cloud from stereo by rejecting outlier, reducing scale, and estimating surface normal for the depth maps, another oriented point cloud from silhouette is added by carving the visual hull octree structure using the point cloud from stereo to restore the textureless and occluded surfaces. Finally, Poisson Surface Reconstruction approach is applied to convert the oriented point cloud both from stereo and silhouette into a complete and accurate triangulated mesh model. The proposed approach has been implemented and the performance of the approach is demonstrated on several real data sets, along with qualitative comparisons with the state-of-the-art image-based modeling techniques according to the Middlebury benchmark.",
"title": ""
},
{
"docid": "1fcd6f0c91522a91fa05b0d969f8eec1",
"text": "Nonnegative matrix factorization (NMF) is a popular method for multivariate analysis of nonnegative data, the goal of which is to decompose a data matrix into a product of two factor matrices with all entries in factor matrices restricted to be nonnegative. NMF was shown to be useful in a task of clustering (especially document clustering), but in some cases NMF produces the results inappropriate to the clustering problems. In this paper, we present an algorithm for orthogonal nonnegative matrix factorization, where an orthogonality constraint is imposed on the nonnegative decomposition of a term-document matrix. The result of orthogonal NMF can be clearly interpreted for the clustering problems, and also the performance of clustering is usually better than that of the NMF. We develop multiplicative updates directly from true gradient on Stiefel manifold, whereas existing algorithms consider additive orthogonality constraints. Experiments on several different document data sets show our orthogonal NMF algorithms perform better in a task of clustering, compared to the standard NMF and an existing orthogonal NMF.",
"title": ""
},
{
"docid": "e0f0ccb0e1c2f006c5932f6b373fb081",
"text": "This paper proposes a methodology to be used in the segmentation of infrared thermography images for the detection of bearing faults in induction motors. The proposed methodology can be a helpful tool for preventive and predictive maintenance of the induction motor. This methodology is based on manual threshold image processing to obtain a segmentation of an infrared thermal image, which is used for the detection of critical points known as hot spots on the system under test. From these hot spots, the parameters of interest that describe the thermal behavior of the induction motor were obtained. With the segmented image, it is possible to compare and analyze the thermal conditions of the system.",
"title": ""
},
{
"docid": "4f296caa2ee4621a8e0858bfba701a3b",
"text": "This paper considers the problem of assessing visual aesthetic quality with semantic information. We cast the assessment problem as the main task among a multi-task deep model, and argue that semantic recognition offers the key to addressing this problem. Based on convolutional neural networks, we propose a general multi-task framework with four different structures. In each structure, aesthetic quality assessment task and semantic recognition task are leveraged, and different features are explored to improve the quality assessment. Moreover, an effective strategy of keeping a balanced effect between the semantic task and aesthetic task is developed to optimize the parameters of our framework. The correlation analysis among the tasks validates the importance of the semantic recognition in aesthetic quality assessment. Extensive experiments verify the effectiveness of the proposed multi-task framework, and further corroborate the",
"title": ""
},
{
"docid": "bf85db5489a61b5fca8d121de198be97",
"text": "In this paper, we propose a novel recursive recurrent neural network (R2NN) to model the end-to-end decoding process for statistical machine translation. R2NN is a combination of recursive neural network and recurrent neural network, and in turn integrates their respective capabilities: (1) new information can be used to generate the next hidden state, like recurrent neural networks, so that language model and translation model can be integrated naturally; (2) a tree structure can be built, as recursive neural networks, so as to generate the translation candidates in a bottom up manner. A semi-supervised training approach is proposed to train the parameters, and the phrase pair embedding is explored to model translation confidence directly. Experiments on a Chinese to English translation task show that our proposed R2NN can outperform the stateof-the-art baseline by about 1.5 points in BLEU.",
"title": ""
},
{
"docid": "8af844944f6edee4c271d73a552dc073",
"text": "Many important email-related tasks, such as email classification or search, highly rely on building quality document representations (e.g., bag-of-words or key phrases) to assist matching and understanding. Despite prior success on representing textual messages, creating quality user representations from emails was overlooked. In this paper, we propose to represent users using embeddings that are trained to reflect the email communication network. Our experiments on Enron dataset suggest that the resulting embeddings capture the semantic distance between users. To assess the quality of embeddings in a real-world application, we carry out auto-foldering task where the lexical representation of an email is enriched with user embedding features. Our results show that folder prediction accuracy is improved when embedding features are present across multiple settings.",
"title": ""
},
{
"docid": "3194a0dd979b668bb25afb10260c30d2",
"text": "An octa-band antenna for 5.7-in mobile phones with the size of 80 mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times6$ </tex-math></inline-formula> mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times5.8$ </tex-math></inline-formula> mm is proposed and studied. The proposed antenna is composed of a coupled line, a monopole branch, and a ground branch. By using the 0.25-, 0.5-, and 0.75-wavelength modes, the lower band (704–960 MHz) and the higher band (1710–2690 MHz) are covered. The working mechanism is analyzed based on the S-parameters and the surface current distributions. The attractive merits of the proposed antenna are that the nonground portion height is only 6 mm and any lumped element is not used. A prototype of the proposed antenna is fabricated and measured. The measured −6 dB impedance bandwidths are 350 MHz (0.67–1.02 GHz) and 1.27 GHz (1.65–2.92 GHz) at the lower and higher bands, respectively, which can cover the LTE700, GSM850, GSM900, GSM1800, GSM1900, UMTS, LTE2300, and LTE2500 bands. The measured patterns, gains, and efficiencies are presented.",
"title": ""
},
{
"docid": "38f6aaf5844ddb6e4ed0665559b7f813",
"text": "A novel dual-broadband multiple-input-multiple-output (MIMO) antenna system is developed. The MIMO antenna system consists of two dual-broadband antenna elements, each of which comprises two opened loops: an outer loop and an inner loop. The opened outer loop acts as a half-wave dipole and is excited by electromagnetic coupling from the inner loop, leading to a broadband performance for the lower band. The opened inner loop serves as two monopoles. A combination of the two monopoles and the higher modes from the outer loop results in a broadband performance for the upper band. The bandwidths (return loss >;10 dB) achieved for the dual-broadband antenna element are 1.5-2.8 GHz (~ 60%) for the lower band and 4.7-8.5 GHz (~ 58\\%) for the upper band. Two U-shaped slots are introduced to reduce the coupling between the two dual-broadband antenna elements. The isolation achieved is higher than 15 dB in the lower band and 20 dB in the upper band, leading to an envelope correlation coefficient of less than 0.01. The dual-broadband MIMO antenna system has a compact volume of 50×17×0.8 mm3, suitable for GSM/UMTS/LTE and WLAN communication handsets.",
"title": ""
},
{
"docid": "5dec9852efc32d0a9b93cd173573abf0",
"text": "Magnitudes and timings of kinematic variables have often been used to investigate technique. Where large inter-participant differences exist, as in basketball, analysis of intra-participant variability may provide an alternative indicator of good technique. The aim of the present study was to investigate the joint kinematics and coordination-variability between missed and successful (swishes) free throw attempts. Collegiate level basketball players performed 20 free throws, during which ball release parameters and player kinematics were recorded. For each participant, three misses and three swishes were randomly selected and analysed. Margins of error were calculated based on the optimal-minimum-speed principle. Differences in outcome were distinguished by ball release speeds statistically lower than the optimal speed (misses -0.12 +/- 0.10m s(-1); swishes -0.02 +/- 0.07m s(-1); P < 0.05). No differences in wrist linear velocity were detected, but as the elbow influences the wrist through velocity-dependent-torques, elbow-wrist angle-angle coordination-variability was quantified using vector-coding and found to increase in misses during the last 0.01 s before ball release (P < 0.05). As the margin of error on release parameters is small, the coordination-variability is small, but the increased coordination-variability just before ball release for misses is proposed to arise from players perceiving the technique to be inappropriate and trying to correct the shot. The synergy or coupling relationship between the elbow and wrist angles to generate the appropriate ball speed is proposed as the mechanism determining success of free-throw shots in experienced players.",
"title": ""
},
{
"docid": "dd5c0dc27c0b195b1b8f2c6e6a5cea88",
"text": "The increasing dependence on information networks for business operations has focused managerial attention on managing risks posed by failure of these networks. In this paper, we develop models to assess the risk of failure on the availability of an information network due to attacks that exploit software vulnerabilities. Software vulnerabilities arise from software installed on the nodes of the network. When the same software stack is installed on multiple nodes on the network, software vulnerabilities are shared among them. These shared vulnerabilities can result in correlated failure of multiple nodes resulting in longer repair times and greater loss of availability of the network. Considering positive network effects (e.g., compatibility) alone without taking the risks of correlated failure and the resulting downtime into account would lead to overinvestment in homogeneous software deployment. Exploiting characteristics unique to information networks, we present a queuing model that allows us to quantify downtime loss faced by a rm as a function of (1) investment in security technologies to avert attacks, (2) software diversification to limit the risk of correlated failure under attacks, and (3) investment in IT resources to repair failures due to attacks. The novelty of this method is that we endogenize the failure distribution and the node correlation distribution, and show how the diversification strategy and other security measures/investments may impact these two distributions, which in turn determine the security loss faced by the firm. We analyze and discuss the effectiveness of diversification strategy under different operating conditions and in the presence of changing vulnerabilities. We also take into account the benefits and costs of a diversification strategy. Our analysis provides conditions under which diversification strategy is advantageous.",
"title": ""
},
{
"docid": "af5a2ad28ab61015c0344bf2e29fe6a7",
"text": "Recent years have shown that more than ever governments and intelligence agencies try to control and bypass the cryptographic means used for the protection of data. Backdooring encryption algorithms is considered as the best way to enforce cryptographic control. Until now, only implementation backdoors (at the protocol/implementation/management level) are generally considered. In this paper we propose to address the most critical issue of backdoors: mathematical backdoors or by-design backdoors, which are put directly at the mathematical design of the encryption algorithm. While the algorithm may be totally public, proving that there is a backdoor, identifying it and exploiting it, may be an intractable problem. We intend to explain that it is probably possible to design and put such backdoors. Considering a particular family (among all the possible ones), we present BEA-1, a block cipher algorithm which is similar to the AES and which contains a mathematical backdoor enabling an operational and effective cryptanalysis. The BEA-1 algorithm (80-bit block size, 120-bit key, 11 rounds) is designed to resist to linear and differential cryptanalyses. A challenge will be proposed to the cryptography community soon. Its aim is to assess whether our backdoor is easily detectable and exploitable or not.",
"title": ""
}
] |
scidocsrr
|
41b4bd5410ae9034056f7a4453a51680
|
Amulet: An Energy-Efficient, Multi-Application Wearable Platform
|
[
{
"docid": "1f95cc7adafe07ad9254359ab405a980",
"text": "Event-driven programming is a popular model for writing programs for tiny embedded systems and sensor network nodes. While event-driven programming can keep the memory overhead down, it enforces a state machine programming style which makes many programs difficult to write, maintain, and debug. We present a novel programming abstraction called protothreads that makes it possible to write event-driven programs in a thread-like style, with a memory overhead of only two bytes per protothread. We show that protothreads significantly reduce the complexity of a number of widely used programs previously written with event-driven state machines. For the examined programs the majority of the state machines could be entirely removed. In the other cases the number of states and transitions was drastically decreased. With protothreads the number of lines of code was reduced by one third. The execution time overhead of protothreads is on the order of a few processor cycles.",
"title": ""
},
{
"docid": "5fd6462e402e3a3ab1e390243d80f737",
"text": "We present TinyOS, a flexible, application-specific operating system for sensor networks. Sensor networks consist of (potentially) thousands of tiny, low-power nodes, each of which execute concurrent, reactive programs that must operate with severe memory and power constraints. The sensor network challenges of limited resources, event-centric concurrent applications, and low-power operation drive the design of TinyOS. Our solution combines flexible, fine-grain components with an execution model that supports complex yet safe concurrent operations. TinyOS meets these challenges well and has become the platform of choice for sensor network research; it is in use by over a hundred groups worldwide, and supports a broad range of applications and research topics. We provide a qualitative and quantitative evaluation of the system, showing that it supports complex, concurrent programs with very low memory requirements (many applications fit within 16KB of memory, and the core OS is 400 bytes) and efficient, low-power operation. We present our experiences with TinyOS as a platform for sensor network innovation and applications.",
"title": ""
},
{
"docid": "9bcc81095c32ea39de23217983d33ddc",
"text": "The Internet of Things (IoT) is characterized by heterogeneous devices. They range from very lightweight sensors powered by 8-bit microcontrollers (MCUs) to devices equipped with more powerful, but energy-efficient 32-bit processors. Neither a traditional operating system (OS) currently running on Internet hosts, nor typical OS for sensor networks are capable to fulfill the diverse requirements of such a wide range of devices. To leverage the IoT, redundant development should be avoided and maintenance costs should be reduced. In this paper we revisit the requirements for an OS in the IoT. We introduce RIOT OS, an OS that explicitly considers devices with minimal resources but eases development across a wide range of devices. RIOT OS allows for standard C and C++ programming, provides multi-threading as well as real-time capabilities, and needs only a minimum of 1.5 kB of RAM.",
"title": ""
}
] |
[
{
"docid": "7e1e475f5447894a6c246e7d47586c4b",
"text": "Between 1983 and 2003 forty accidental autoerotic deaths (all males, 13-79 years old) have been investigated at the Institute of Legal Medicine in Hamburg. Three cases with a rather unusual scenery are described in detail: (1) a 28-year-old fireworker was found hanging under a bridge in a peculiar bound belt system. The autopsy and the reconstruction revealed signs of asphyxiation, feminine underwear, and several layers of plastic clothing. (2) A 16-year-old pupil dressed with feminine plastic and rubber utensils fixed and strangulated himself with an electric wire. (3) A 28-year-old handicapped man suffered from progressive muscular dystrophy and was nearly unable to move. His bizarre sexual fantasies were exaggerating: he induced a nurse to draw plastic bags over his body, close his mouth with plastic strips, and put him in a rubbish container where he died from suffocation.",
"title": ""
},
{
"docid": "77f3dfeba56c3731fda1870ce48e1aca",
"text": "The organicist view of society is updated by incorporating concepts from cybernetics, evolutionary theory, and complex adaptive systems. Global society can be seen as an autopoietic network of self-producing components, and therefore as a living system or ‘superorganism’. Miller's living systems theory suggests a list of functional components for society's metabolism and nervous system. Powers' perceptual control theory suggests a model for a distributed control system implemented through the market mechanism. An analysis of the evolution of complex, networked systems points to the general trends of increasing efficiency, differentiation and integration. In society these trends are realized as increasing productivity, decreasing friction, increasing division of labor and outsourcing, and increasing cooperativity, transnational mergers and global institutions. This is accompanied by increasing functional autonomy of individuals and organisations and the decline of hierarchies. The increasing complexity of interactions and instability of certain processes caused by reduced friction necessitate a strengthening of society's capacity for information processing and control, i.e. its nervous system. This is realized by the creation of an intelligent global computer network, capable of sensing, interpreting, learning, thinking, deciding and initiating actions: the ‘global brain’. Individuals are being integrated ever more tightly into this collective intelligence. Although this image may raise worries about a totalitarian system that restricts individual initiaSocial Evolution & History / March 2007 58 tive, the superorganism model points in the opposite direction, towards increasing freedom and diversity. The model further suggests some specific futurological predictions for the coming decades, such as the emergence of an automated distribution network, a computer immune system, and a global consensus about values and standards.",
"title": ""
},
{
"docid": "43bab96fad8afab1ea350e327a8f7aec",
"text": "The traditional databases are not capable of handling unstructured data and high volumes of real-time datasets. Diverse datasets are unstructured lead to big data, and it is laborious to store, manage, process, analyze, visualize, and extract the useful insights from these datasets using traditional database approaches. However, many technical aspects exist in refining large heterogeneous datasets in the trend of big data. This paper aims to present a generalized view of complete big data system which includes several stages and key components of each stage in processing the big data. In particular, we compare and contrast various distributed file systems and MapReduce-supported NoSQL databases concerning certain parameters in data management process. Further, we present distinct distributed/cloud-based machine learning (ML) tools that play a key role to design, develop and deploy data models. The paper investigates case studies on distributed ML tools such as Mahout, Spark MLlib, and FlinkML. Further, we classify analytics based on the type of data, domain, and application. We distinguish various visualization tools pertaining three parameters: functionality, analysis capabilities, and supported development environment. Furthermore, we systematically investigate big data tools and technologies (Hadoop 3.0, Spark 2.3) including distributed/cloud-based stream processing tools in a comparative approach. Moreover, we discuss functionalities of several SQL Query tools on Hadoop based on 10 parameters. Finally, We present some critical points relevant to research directions and opportunities according to the current trend of big data. Investigating infrastructure tools for big data with recent developments provides a better understanding that how different tools and technologies apply to solve real-life applications.",
"title": ""
},
{
"docid": "c6aa0e5f93d02fdd07e55dfa62aac6bc",
"text": "While CNNs naturally lend themselves to densely sampled data, and sophisticated implementations are available, they lack the ability to efficiently process sparse data. In this work we introduce a suite of tools that exploit sparsity in both the feature maps and the filter weights, and thereby allow for significantly lower memory footprints and computation times than the conventional dense framework when processing data with a high degree of sparsity. Our scheme provides (i) an efficient GPU implementation of a convolution layer based on direct, sparse convolution; (ii) a filter step within the convolution layer, which we call attention, that prevents fill-in, i.e., the tendency of convolution to rapidly decrease sparsity, and guarantees an upper bound on the computational resources; and (iii) an adaptation of the backpropagation algorithm, which makes it possible to combine our approach with standard learning frameworks, while still exploiting sparsity in the data and the model.",
"title": ""
},
{
"docid": "894e945c9bb27f5464d1b8f119139afc",
"text": "Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), which is a hybrid network consisting of an autoencoder that learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p = .0012) for our model C=0.75 (95% CI: 0.70 - 0.79) than the human benchmark of C=0.59 (95% CI: 0.53 - 0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.",
"title": ""
},
{
"docid": "0e6bdfbfb3d47042a3a4f38c0260180c",
"text": "Named Entity Recognition is an important task but is still relatively new for Vietnamese. It is partly due to the lack of a large annotated corpus. In this paper, we present a systematic approach in building a named entity annotated corpus while at the same time building rules to recognize Vietnamese named entities. The resulting open source system achieves an F-measure of 83%, which is better compared to existing Vietnamese NER systems. © 2010 Springer-Verlag Berlin Heidelberg. Index",
"title": ""
},
{
"docid": "c898f6186ff15dff41dcb7b3376b975d",
"text": "The future grid is evolving into a smart distribution network that integrates multiple distributed energy resources ensuring at the same time reliable operation and increased power quality. In recent years, many research papers have addressed the voltage violation problems that arise from the high penetration of distributed generation. In view of the transition to active network management and the increase in the quantity of collected data, distributed control schemes have been proposed that use pervasive communications to deal with the complexity of smart grid. This paper reviews the recent publications on distributed and decentralized voltage control of smart distribution networks, summarizes their control models, and classifies the solution methodologies. Moreover, it comments on issues that should be addressed in the future and the perspectives of industry applications.",
"title": ""
},
{
"docid": "caae1bbaf151f876f102a1e3e6bd5266",
"text": "It is well-known that information and communication technologies enable many tasks in the context of precision agriculture. In fact, more and more farmers and food and agriculture companies are using precision agriculture-based systems to enhance not only their products themselves, but also their means of production. Consequently, problems arising from large amounts of data management and processing are arising. It would be very useful to have an infrastructure that allows information and agricultural tasks to be efficiently shared and handled. The cloud computing paradigm offers a solution. In this study, a cloud-based software architecture is proposed with the aim of enabling a complete crop management system to be deployed and validated. Such architecture includes modules developed by using Google App Engine, which allows the information to be easily retrieved and processed and agricultural tasks to be properly defined and planned. Additionally, Google’s Datastore (which ensures a high scalability degree), hosts both information that describes such agricultural tasks and agronomic data. The architecture has been validated in a system that comprises a wireless sensor network with fixed nodes and a mobile node on an unmanned aerial vehicle (UAV), deployed in an agricultural farm in the Region of Murcia (Spain). Such a network allows soil water and plant status to be monitored. The UAV (capable of executing missions defined by an administrator) is useful for acquiring visual information in an autonomous manner (under operator supervision, if needed). The system performance has been analysed and results that demonstrate the benefits of using the proposed architecture are detailed.",
"title": ""
},
{
"docid": "414f3647551a4cadeb05143d30230dec",
"text": "Future cellular networks are faced with the challenge of coping with significant traffic growth without increasing operating costs. Network virtualization and Software Defined Networking (SDN) are emerging solutions for fine-grained control and management of networks. In this article, we present a new dynamic tunnel switching technique for SDN-based cellular core networks. The technique introduces a virtualized Evolved Packet Core (EPC) gateway with the capability to select and dynamically switch the user plane processing element for each user. Dynamic GPRS Tunneling Protocol (GTP) termination enables switching the mobility anchor of an active session between a cloud environment, where general purpose hardware is in use, and a fast path implemented with dedicated hardware. We describe a prototype implementation of the technique based on an OpenStack cloud, an OpenFlow controller with GTP tunnel switching, and a dedicated fast path element.",
"title": ""
},
{
"docid": "cec9f586803ffc8dc5868f6950967a1f",
"text": "This report aims to summarize the field of technological forecasting (TF), its techniques and applications by considering the following questions: • What are the purposes of TF? • Which techniques are used for TF? • What are the strengths and weaknesses of these techniques / how do we evaluate their quality? • Do we need different TF techniques for different purposes/technologies? We also present a brief analysis of how TF is used in practice. We analyze how corporate decisions, such as investing millions of dollars to a new technology like solar energy, are being made and we explore if funding allocation decisions are based on “objective, repeatable, and quantifiable” decision parameters. Throughout the analysis, we compare the bibliometric and semantic-enabled approach of the MIT/MIST Collaborative research project “Technological Forecasting using Data Mining and Semantics” (TFDMS) with the existing studies / practices of TF and where TFDMS fits in and how it will contribute to the general TF field.",
"title": ""
},
{
"docid": "033d7d924481a9429c03bb4bcc7b12fc",
"text": "BACKGROUND\nThis study investigates the variations of Heart Rate Variability (HRV) due to a real-life stressor and proposes a classifier based on nonlinear features of HRV for automatic stress detection.\n\n\nMETHODS\n42 students volunteered to participate to the study about HRV and stress. For each student, two recordings were performed: one during an on-going university examination, assumed as a real-life stressor, and one after holidays. Nonlinear analysis of HRV was performed by using Poincaré Plot, Approximate Entropy, Correlation dimension, Detrended Fluctuation Analysis, Recurrence Plot. For statistical comparison, we adopted the Wilcoxon Signed Rank test and for development of a classifier we adopted the Linear Discriminant Analysis (LDA).\n\n\nRESULTS\nAlmost all HRV features measuring heart rate complexity were significantly decreased in the stress session. LDA generated a simple classifier based on the two Poincaré Plot parameters and Approximate Entropy, which enables stress detection with a total classification accuracy, a sensitivity and a specificity rate of 90%, 86%, and 95% respectively.\n\n\nCONCLUSIONS\nThe results of the current study suggest that nonlinear HRV analysis using short term ECG recording could be effective in automatically detecting real-life stress condition, such as a university examination.",
"title": ""
},
{
"docid": "e1a1faf5d2121a3d5cd993d0f9c257a5",
"text": "This paper is the product of an area-exam study. It intends to explain the concept of ontology in the context of knowledge engineering research, which is a sub-area of artiicial intelligence research. It introduces the state of the art on methodologies and tools for building ontologies. It also tries to point out some possible future directions for ontology research.",
"title": ""
},
{
"docid": "ec97d6daf87e79dfc059a022d38e4ff2",
"text": "There are numerous passive contrast sensing autofocus algorithms that are well documented in literature, but some aspects of their comparative performance have not been widely researched. This study explores the relative merits of a set of autofocus algorithms via examining them against a variety of scene conditions. We create a statistics engine that considers a scene taken through a range of focal values and then computes the best focal position using each autofocus algorithm. The process is repeated across a survey of test scenes containing different representative conditions. The results are assessed against focal positions which are determined by manually focusing the scenes. Through examining these results, we then derive conclusions about the relative merits of each autofocus algorithm with respect to the criteria accuracy and unimodality. Our study concludes that the basic 2D spatial gradient measurement approaches yield the best autofocus results in terms of accuracy and unimodality.",
"title": ""
},
{
"docid": "c63ce594f3e940783ae24494a6cb1aa9",
"text": "In this paper, a new deep reinforcement learning based augmented general sequence tagging system is proposed. The new system contains two parts: a deep neural network (DNN) based sequence tagging model and a deep reinforcement learning (DRL) based augmented tagger. The augmented tagger helps improve system performance by modeling the data with minority tags. The new system is evaluated on SLU and NLU sequence tagging tasks using ATIS and CoNLL2003 benchmark datasets, to demonstrate the new system’s outstanding performance on general tagging tasks. Evaluated by F1 scores, it shows that the new system outperforms the current state-of-the-art model on ATIS dataset by 1.9 % and that on CoNLL-2003 dataset by 1.4 %.",
"title": ""
},
{
"docid": "64c06bffe4aeff54fbae9d87370e552c",
"text": "Social networking sites occupy increasing fields of daily life and act as important communication channels today. But recent research also discusses the dark side of these sites, which expresses in form of stress, envy, addiction or even depression. Nevertheless, there must be a reason why people use social networking sites, even though they face related risks. One reason is human curiosity that tempts users to behave like this. The research on hand presents the impact of curiosity on user acceptance of social networking sites, which is theorized and empirically evaluated by using the technology acceptance model and a quantitative study among Facebook users. It further reveals that especially two types of human curiosity, epistemic and interpersonal curiosity, influence perceived usefulness and perceived enjoyment, and with it technology acceptance.",
"title": ""
},
{
"docid": "5846c9761ec90040feaf71656401d6dd",
"text": "Internet of Things (IoT) is an emergent technology that provides a promising opportunity to improve industrial systems by the smartly use of physical objects, systems, platforms and applications that contain embedded technology to communicate and share intelligence with each other. In recent years, a great range of industrial IoT applications have been developed and deployed. Among these applications, the Water and Oil & Gas Distribution System is tremendously important considering the huge amount of fluid loss caused by leakages and other possible hydraulic failures. Accordingly, to design an accurate Fluid Distribution Monitoring System (FDMS) represents a critical task that imposes a serious study and an adequate planning. This paper reviews the current state-of-the-art of IoT, major IoT applications in industries and focus more on the Industrial IoT FDMS (IIoT FDMS).",
"title": ""
},
{
"docid": "5edc36b296a14950b366e0b3c4ba570c",
"text": "e ecient management of data is an important prerequisite for realising the potential of the Internet of ings (IoT). Two issues given the large volume of structured time-series IoT data are, addressing the diculties of data integration between heterogeneous ings and improving ingestion and query performance across databases on both resource-constrained ings and in the cloud. In this paper, we examine the structure of public IoT data and discover that the majority exhibit unique at, wide and numerical characteristics with a mix of evenly and unevenly-spaced time-series. We investigate the advances in time-series databases for telemetry data and combine these ndings with microbenchmarks to determine the best compression techniques and storage data structures to inform the design of a novel solution optimised for IoT data. A query translation method with low overhead even on resource-constrained ings allows us to utilise rich data models like the Resource Description Framework (RDF) for interoperability and data integration on top of the optimised storage. Our solution, TritanDB, shows an order of magnitude performance improvement across both ings and cloud hardware on many state-of-the-art databases within IoT scenarios. Finally, we describe how TritanDB supports various analyses of IoT time-series data like forecasting.",
"title": ""
},
{
"docid": "1d3318884ffe201e50312b68bf51956a",
"text": "This paper explores alternate algorithms, reward functions and feature sets for performing multi-document summarization using reinforcement learning with a high focus on reproducibility. We show that ROUGE results can be improved using a unigram and bigram similarity metric when training a learner to select sentences for summarization. Learners are trained to summarize document clusters based on various algorithms and reward functions and then evaluated using ROUGE. Our experiments show a statistically significant improvement of 1.33%, 1.58%, and 2.25% for ROUGE-1, ROUGE-2 and ROUGEL scores, respectively, when compared with the performance of the state of the art in automatic summarization with reinforcement learning on the DUC2004 dataset. Furthermore query focused extensions of our approach show an improvement of 1.37% and 2.31% for ROUGE-2 and ROUGE-SU4 respectively over query focused extensions of the state of the art with reinforcement learning on the DUC2006 dataset.",
"title": ""
},
{
"docid": "bc42c1e0bc130ea41af09db0d3ec0c8d",
"text": "In Western societies, the population grows old, and we must think about solutions to help them to stay at home in a secure environment. By providing a specific analysis of people behavior, computer vision offers a good solution for healthcare systems, and particularly for fall detection. This demo will show the results of a new method to detect falls using a monocular camera. The main characteristic of this method is the use of head 3D trajectories for fall detection.",
"title": ""
},
{
"docid": "ec37e61fcac2639fa6e605b362f2a08d",
"text": "Keyphrases that efficiently summarize a document’s content are used in various document processing and retrieval tasks. Current state-of-the-art techniques for keyphrase extraction operate at a phrase-level and involve scoring candidate phrases based on features of their component words. In this paper, we learn keyphrase taggers for research papers using token-based features incorporating linguistic, surfaceform, and document-structure information through sequence labeling. We experimentally illustrate that using withindocument features alone, our tagger trained with Conditional Random Fields performs on-par with existing state-of-the-art systems that rely on information from Wikipedia and citation networks. In addition, we are also able to harness recent work on feature labeling to seamlessly incorporate expert knowledge and predictions from existing systems to enhance the extraction performance further. We highlight the modeling advantages of our keyphrase taggers and show significant performance improvements on two recently-compiled datasets of keyphrases from Computer Science research papers.",
"title": ""
}
] |
scidocsrr
|
670089b7b19ec3fd4d3c5a3551b9e38d
|
A culturally and linguistically responsive vocabulary approach for young Latino dual language learners.
|
[
{
"docid": "e9477e72249764e28945e4bc3a7e6b1e",
"text": "English language learners (ELLs) who experience slow vocabulary development are less able to comprehend text at grade level than their English-only peers. Such students are likely to perform poorly on assessments in these areas and are at risk of being diagnosed as learning disabled. In this article, we review the research on methods to develop the vocabulary knowledge of ELLs and present lessons learned from the research concerning effective instructional practices for ELLs. The review suggests that several strategies are especially valuable for ELLs, including taking advantage of students’ first language if the language shares cognates with English; ensuring that ELLs know the meaning of basic words, and providing sufficient review and reinforcement. Finally, we discuss challenges in designing effective vocabulary instruction for ELLs. Important issues are determining which words to teach, taking into account the large deficits in second-language vocabulary of ELLs, and working with the limited time that is typically available for direct instruction in vocabulary.",
"title": ""
}
] |
[
{
"docid": "cb4f78047b92b773bc30509ca80438a4",
"text": "In this article, we exploit the problem of annotating a large-scale image corpus by label propagation over noisily tagged web images. To annotate the images more accurately, we propose a novel kNN-sparse graph-based semi-supervised learning approach for harnessing the labeled and unlabeled data simultaneously. The sparse graph constructed by datum-wise one-vs-kNN sparse reconstructions of all samples can remove most of the semantically unrelated links among the data, and thus it is more robust and discriminative than the conventional graphs. Meanwhile, we apply the approximate k nearest neighbors to accelerate the sparse graph construction without loosing its effectiveness. More importantly, we propose an effective training label refinement strategy within this graph-based learning framework to handle the noise in the training labels, by bringing in a dual regularization for both the quantity and sparsity of the noise. We conduct extensive experiments on a real-world image database consisting of 55,615 Flickr images and noisily tagged training labels. The results demonstrate both the effectiveness and efficiency of the proposed approach and its capability to deal with the noise in the training labels.",
"title": ""
},
{
"docid": "8c63ce71aaa0409372efeb3ea392394f",
"text": "This paper describes the application of evolutionary fuzzy systems for subgroup discovery to a medical problem, the study on the type of patients who tend to visit the psychiatric emergency department in a given period of time of the day. In this problem, the objective is to characterise subgroups of patients according to their time of arrival at the emergency department. To solve this problem, several subgroup discovery algorithms have been applied to determine which of them obtains better results. The multiobjective evolutionary algorithm MESDIF for the extraction of fuzzy rules obtains better results and so it has been used to extract interesting information regarding the rate of admission to the psychiatric emergency department.",
"title": ""
},
{
"docid": "c07a0053f43d9e1f98bb15d4af92a659",
"text": "We present a zero-shot learning approach for text classification, predicting which natural language understanding domain can handle a given utterance. Our approach can predict domains at runtime that did not exist at training time. We achieve this extensibility by learning to project utterances and domains into the same embedding space while generating each domain-specific embedding from a set of attributes that characterize the domain. Our model is a neural network trained via ranking loss. We evaluate the performance of this zero-shot approach on a subset of a virtual assistant’s third-party domains and show the effectiveness of the technique on new domains not observed during training. We compare to generative baselines and show that our approach requires less storage and performs better on new domains.",
"title": ""
},
{
"docid": "40ba65504518383b4ca2a6fabff261fe",
"text": "Fig. 1. Noirot and Quennedey's original classification of insect exocrine glands, based on a rhinotermitid sternal gland. The asterisk indicates a subcuticular space. Abbreviations: C, cuticle; D, duct cells; G1, secretory cells class 1; G2, secretory cells class 2; G3, secretory cells class 3; S, campaniform sensilla (modified after Noirot and Quennedey, 1974). ‘Describe the differences between endocrine and exocrine glands’, it sounds a typical exam question from a general biology course during our time at high school. Because of their secretory products being released to the outside world, exocrine glands definitely add flavour to our lives. Everybody is familiar with their secretions, from the salty and perhaps unpleasantly smelling secretions from mammalian sweat glands to the sweet exudates of the honey glands used by some caterpillars to attract ants, from the most painful venoms of bullet ants and scorpions to the precious wax that honeybees use to make their nest combs. Besides these functions, exocrine glands are especially known for the elaboration of a broad spectrum of pheromonal substances, and can also be involved in the production of antibiotics, lubricants, and digestive enzymes. Modern research in insect exocrinology started with the classical works of Charles Janet, who introduced a histological approach to the insect world (Billen and Wilson, 2007). The French school of insect anatomy remained strong since then, and the commonly used classification of insect exocrine glands generally follows the pioneer paper of Charles Noirot and Andr e Quennedey (1974). These authors were leading termite researchers using their extraordinary knowledge on termite glands to understand related phenomena, such as foraging and reproductive behaviour. They distinguish between class 1 with secretory cells adjoining directly to the cuticle, and class 3 with bicellular units made up of a large secretory cell and its accompanying duct cell that carries the secretion to the exterior (Fig. 1). The original classification included also class 2 secretory cells, but these are very rare and are only found in sternal and tergal glands of a cockroach and many termites (and also in the novel nasus gland described in this issue!). This classification became universally used, with the rather strange consequence that the vast majority of insect glands is illogically made up of class 1 and class 3 cells. In a follow-up paper, the uncommon class 2 cells were re-considered as oenocyte homologues (Noirot and Quennedey, 1991). Irrespectively of these objections, their 1974 pioneer paper is a cornerstone of modern works dealing with insect exocrine glands, as is also obvious in the majority of the papers in this special issue. This paper already received 545 citations at Web of Science and 588 at Google Scholar (both on 24 Aug 2015), so one can easily say that all researchers working on insect glands consider this work truly fundamental. Exocrine glands are organs of cardinal importance in all insects. The more common ones include mandibular and labial",
"title": ""
},
{
"docid": "74ea9bde4e265dba15cf9911fce51ece",
"text": "We consider a system aimed at improving the resolution of a conventional airborne radar, looking in the forward direction, by forming an end-fire synthetic array along the airplane line of flight. The system is designed to operate even in slant (non-horizontal) flight trajectories, and it allows imaging along the line of flight. By using the array theory, we analyze system geometry and ambiguity problems, and analytically evaluate the achievable resolution and the required pulse repetition frequency. Processing computational burden is also analyzed, and finally some simulation results are provided.",
"title": ""
},
{
"docid": "7fbc78aead9d65201d921c828b6396cd",
"text": "In developing a humanoid robot, there are two major objectives. One is developing a physical robot having body, hands, and feet resembling those of human beings and being able to similarly control them. The other is to develop a control system that works similarly to our brain, to feel, think, act, and learn like ours. In this article, an architecture of a control systemwith a brain-oriented logical structure for the second objective is proposed. The proposed system autonomously adapts to the environment and implements a clearly defined “consciousness” function, through which both habitual behavior and goaldirected behavior are realized. Consciousness is regarded as a function for effective adaptation at the system-level, based on matching and organizing the individual results of the underlying parallel-processing units. This consciousness is assumed to correspond to how our mind is “aware” when making our moment to moment decisions in our daily life. The binding problem and the basic causes of delay in Libet’s experiment are also explained by capturing awareness in this manner. The goal is set as an image in the system, and efficient actions toward achieving this goal are selected in the goaldirected behavior process. The system is designed as an artificial neural network and aims at achieving consistent and efficient system behavior, through the interaction of highly independent neural nodes. The proposed architecture is based on a two-level design. The first level, which we call the “basic-system,” is an artificial neural network system that realizes consciousness, habitual behavior and explains the binding problem. The second level, which we call the “extended-system,” is an artificial neural network system that realizes goal-directed behavior.",
"title": ""
},
{
"docid": "290b56471b64e150e40211f7a51c1237",
"text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.",
"title": ""
},
{
"docid": "1c60ddeb7e940992094cb8f3913e811a",
"text": "In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. we make the code and trained models publicly available at https://github.com/junfu1115/DANet",
"title": ""
},
{
"docid": "31d2e56c01f53c25c6c9bfcabe21fcbe",
"text": "In this paper, we propose a novel computer vision-based fall detection system for monitoring an elderly person in a home care, assistive living application. Initially, a single camera covering the full view of the room environment is used for the video recording of an elderly person's daily activities for a certain time period. The recorded video is then manually segmented into short video clips containing normal postures, which are used to compose the normal dataset. We use the codebook background subtraction technique to extract the human body silhouettes from the video clips in the normal dataset and information from ellipse fitting and shape description, together with position information, is used to provide features to describe the extracted posture silhouettes. The features are collected and an online one class support vector machine (OCSVM) method is applied to find the region in feature space to distinguish normal daily postures and abnormal postures such as falls. The resultant OCSVM model can also be updated by using the online scheme to adapt to new emerging normal postures and certain rules are added to reduce false alarm rate and thereby improve fall detection performance. From the comprehensive experimental evaluations on datasets for 12 people, we confirm that our proposed person-specific fall detection system can achieve excellent fall detection performance with 100% fall detection rate and only 3% false detection rate with the optimally tuned parameters. This work is a semiunsupervised fall detection system from a system perspective because although an unsupervised-type algorithm (OCSVM) is applied, human intervention is needed for segmenting and selecting of video clips containing normal postures. As such, our research represents a step toward a complete unsupervised fall detection system.",
"title": ""
},
{
"docid": "78744205cf17be3ee5a61d12e6a44180",
"text": "Modeling of photovoltaic (PV) systems is essential for the designers of solar generation plants to do a yield analysis that accurately predicts the expected power output under changing environmental conditions. This paper presents a comparative analysis of PV module modeling methods based on the single-diode model with series and shunt resistances. Parameter estimation techniques within a modeling method are used to estimate the five unknown parameters in the single diode model. Two sets of estimated parameters were used to plot the I-V characteristics of two PV modules, i.e., SQ80 and KC200GT, for the different sets of modeling equations, which are classified into models 1 to 5 in this study. Each model is based on the different combinations of diode saturation current and photogenerated current plotted under varying irradiance and temperature. Modeling was done using MATLAB/Simulink software, and the results from each model were first verified for correctness against the results produced by their respective authors. Then, a comparison was made among the different models (models 1 to 5) with respect to experimentally measured and datasheet I-V curves. The resultant plots were used to draw conclusions on which combination of parameter estimation technique and modeling method best emulates the manufacturer specified characteristics.",
"title": ""
},
{
"docid": "519ca18e1450581eb3a7387568dce7cf",
"text": "This paper illustrates the design of a process compensated bias for asynchronous CML dividers for a low power, high performance LO divide chain operating at 4Ghz of input RF frequency. The divider chain provides division by 4,8,12,16,20, and 24. It provides a differential CML level signal for the in-loop modulated transmitter, and 25% duty cycle non-overlapping rail to rail waveforms for I/Q receiver for driving passive mixer. Asynchronous dividers have been used to realize divide by 3 and 5 with 50% duty cycle, quadrature outputs. All the CML dividers use a process compensated bias to compensate for load resistor variation and tail current variation using dual analog feedback loops. Frabricated in 180nm CMOS technology, the divider chain operate over industrial temperature range (−40 to 90°C), and provide outputs in 138–960Mhz range, consuming 2.2mA from 1.8V regulated supply at the highest output frequency.",
"title": ""
},
{
"docid": "36b232e486ee4c9885a51a1aefc8f12b",
"text": "Graphics processing units (GPUs) are a powerful platform for building high-speed network traffic processing applications using low-cost hardware. Existing systems tap the massively parallel architecture of GPUs to speed up certain computationally intensive tasks, such as cryptographic operations and pattern matching. However, they still suffer from significant overheads due to criticalpath operations that are still being carried out on the CPU, and redundant inter-device data transfers. In this paper we present GASPP, a programmable network traffic processing framework tailored to modern graphics processors. GASPP integrates optimized GPUbased implementations of a broad range of operations commonly used in network traffic processing applications, including the first purely GPU-based implementation of network flow tracking and TCP stream reassembly. GASPP also employs novel mechanisms for tackling control flow irregularities across SIMT threads, and sharing memory context between the network interface and the GPU. Our evaluation shows that GASPP can achieve multi-gigabit traffic forwarding rates even for computationally intensive and complex network operations such as stateful traffic classification, intrusion detection, and packet encryption. Especially when consolidating multiple network applications on the same device, GASPP achieves up to 16.2× speedup compared to standalone GPU-based implementations of the same applications.",
"title": ""
},
{
"docid": "12d564ad22b33ee38078f18a95ed670f",
"text": "Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER.",
"title": ""
},
{
"docid": "256376e1867ee923ff72d3376c3be918",
"text": "Driven by recent vision and graphics applications such as image segmentation and object recognition, computing pixel-accurate saliency values to uniformly highlight foreground objects becomes increasingly important. In this paper, we propose a unified framework called pixelwise image saliency aggregating (PISA) various bottom-up cues and priors. It generates spatially coherent yet detail-preserving, pixel-accurate, and fine-grained saliency, and overcomes the limitations of previous methods, which use homogeneous superpixel based and color only treatment. PISA aggregates multiple saliency cues in a global context, such as complementary color and structure contrast measures, with their spatial priors in the image domain. The saliency confidence is further jointly modeled with a neighborhood consistence constraint into an energy minimization formulation, in which each pixel will be evaluated with multiple hypothetical saliency levels. Instead of using global discrete optimization methods, we employ the cost-volume filtering technique to solve our formulation, assigning the saliency levels smoothly while preserving the edge-aware structure details. In addition, a faster version of PISA is developed using a gradient-driven image subsampling strategy to greatly improve the runtime efficiency while keeping comparable detection accuracy. Extensive experiments on a number of public data sets suggest that PISA convincingly outperforms other state-of-the-art approaches. In addition, with this work, we also create a new data set containing 800 commodity images for evaluating saliency detection.",
"title": ""
},
{
"docid": "9e359f0d7df4e35c934ce01bf5619622",
"text": "This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.",
"title": ""
},
{
"docid": "67ba6914f8d1a50b7da5024567bc5936",
"text": "Abstract—Braille alphabet is an important tool that enables visually impaired individuals to have a comfortable life like those who have normal vision. For this reason, new applications related to the Braille alphabet are being developed. In this study, a new Refreshable Braille Display was developed to help visually impaired individuals learn the Braille alphabet easier. By means of this system, any text downloaded on a computer can be read by the visually impaired individual at that moment by feeling it by his/her hands. Through this electronic device, it was aimed to make learning the Braille alphabet easier for visually impaired individuals with whom the necessary tests were conducted.",
"title": ""
},
{
"docid": "ae5bf888ce9a61981be60b9db6fc2d9c",
"text": "Inverting the hash values by performing brute force computation is one of the latest security threats on password based authentication technique. New technologies are being developed for brute force computation and these increase the success rate of inversion attack. Honeyword base authentication protocol can successfully mitigate this threat by making password cracking detectable. However, the existing schemes have several limitations like Multiple System Vulnerability, Weak DoS Resistivity, Storage Overhead, etc. In this paper we have proposed a new honeyword generation approach, identified as Paired Distance Protocol (PDP) which overcomes almost all the drawbacks of previously proposed honeyword generation approaches. The comprehensive analysis shows that PDP not only attains a high detection rate of 97.23% but also reduces the storage cost to a great extent.",
"title": ""
},
{
"docid": "03aec14861b2b1b4e6f091dc77913a5b",
"text": "Taxonomy is indispensable in understanding natural language. A variety of large scale, usage-based, data-driven lexical taxonomies have been constructed in recent years. Hypernym-hyponym relationship, which is considered as the backbone of lexical taxonomies can not only be used to categorize the data but also enables generalization. In particular, we focus on one of the most prominent properties of the hypernym-hyponym relationship, namely, transitivity, which has a significant implication for many applications. We show that, unlike human crafted ontologies and taxonomies, transitivity does not always hold in data-driven lexical taxonomies. We introduce a supervised approach to detect whether transitivity holds for any given pair of hypernym-hyponym relationships. Besides solving the inferencing problem, we also use the transitivity to derive new hypernym-hyponym relationships for data-driven lexical taxonomies. We conduct extensive experiments to show the effectiveness of our approach.",
"title": ""
},
{
"docid": "d284fff9eed5e5a332bb3cfc612a081a",
"text": "This paper describes the NILC USP system that participated in SemEval-2013 Task 2: Sentiment Analysis in Twitter. Our system adopts a hybrid classification process that uses three classification approaches: rulebased, lexicon-based and machine learning approaches. We suggest a pipeline architecture that extracts the best characteristics from each classifier. Our system achieved an Fscore of 56.31% in the Twitter message-level subtask.",
"title": ""
},
{
"docid": "3ff58e78ac9fe623e53743ad05248a30",
"text": "Clock gating is an effective technique for minimizing dynamic power in sequential circuits. Applying clock-gating at gate-level not only saves time compared to implementing clock-gating in the RTL code but also saves power and can easily be automated in the synthesis process. This paper presents simulation results on various types of clock-gating at different hierarchical levels on a serial peripheral interface (SPI) design. In general power savings of about 30% and 36% reduction on toggle rate can be seen with different complex clock- gating methods with respect to no clock-gating in the design.",
"title": ""
}
] |
scidocsrr
|
fed9694336c6085ed06a590e0c821402
|
New Simple-Structured AC Solid-State Circuit Breaker
|
[
{
"docid": "6af7f70f0c9b752d3dbbe701cb9ede2a",
"text": "This paper addresses real and reactive power management strategies of electronically interfaced distributed generation (DG) units in the context of a multiple-DG microgrid system. The emphasis is primarily on electronically interfaced DG (EI-DG) units. DG controls and power management strategies are based on locally measured signals without communications. Based on the reactive power controls adopted, three power management strategies are identified and investigated. These strategies are based on 1) voltage-droop characteristic, 2) voltage regulation, and 3) load reactive power compensation. The real power of each DG unit is controlled based on a frequency-droop characteristic and a complimentary frequency restoration strategy. A systematic approach to develop a small-signal dynamic model of a multiple-DG microgrid, including real and reactive power management strategies, is also presented. The microgrid eigen structure, based on the developed model, is used to 1) investigate the microgrid dynamic behavior, 2) select control parameters of DG units, and 3) incorporate power management strategies in the DG controllers. The model is also used to investigate sensitivity of the design to changes of parameters and operating point and to optimize performance of the microgrid system. The results are used to discuss applications of the proposed power management strategies under various microgrid operating conditions",
"title": ""
},
{
"docid": "d8255047dc2e28707d711f6d6ff19e30",
"text": "This paper discusses the design of a 10 kV and 200 A hybrid dc circuit breaker suitable for the protection of the dc power systems in electric ships. The proposed hybrid dc circuit breaker employs a Thompson coil based ultrafast mechanical switch (MS) with the assistance of two additional solid-state power devices. A low-voltage (80 V) metal–oxide–semiconductor field-effect transistors (MOSFETs)-based commutating switch (CS) is series connected with the MS to realize the zero current turn-OFF of the MS. In this way, the arcing issue with the MS is avoided. A 15 kV SiC emitter turn-OFF thyristor-based main breaker (MB) is parallel connected with the MS and CS branch to interrupt the fault current. A stack of MOVs parallel with the MB are used to clamp the voltage across the hybrid dc circuit breaker during interruption. This paper focuses on the electronic parts of the hybrid dc circuit breaker, and a companion paper will elucidate the principle and operation of the fast acting MS and the overall operation of the hybrid dc circuit breaker. The selection and design of both the high-voltage and low-voltage electronic components in the hybrid dc circuit breaker are presented in this paper. The turn-OFF capability of the MB with and without snubber circuit is experimentally tested, validating its suitability for the hybrid dc circuit breaker application. The CSs’ conduction performances are tested up to 200 A, and its current commutating during fault current interruption is also analyzed. Finally, the hybrid dc circuit breaker demonstrated a fast current interruption within 2 ms at 7 kV and 100 A.",
"title": ""
}
] |
[
{
"docid": "d38f9ef3248bf54b7a073beaa186ad42",
"text": "Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments and other perturbations. Existing tracking-by-detection methods either ignore this problem, or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be downweighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On the OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3:8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets.",
"title": ""
},
{
"docid": "99ba1fd6c96dad6d165c4149ac2ce27a",
"text": "In order to solve unsupervised domain adaptation problem, recent methods focus on the use of adversarial learning to learn the common representation among domains. Although many designs are proposed, they seem to ignore the negative influence of domain-specific characteristics in transferring process. Besides, they also tend to obliterate these characteristics when extracted, although they are useful for other tasks and somehow help preserve the data. Take into account these issues, in this paper, we want to design a novel domainadaptation architecture which disentangles learned features into multiple parts to answer the questions: what features to transfer across domains and what to preserve within domains for other tasks. Towards this, besides jointly matching domain distributions in both image-level and feature-level, we offer new idea on feature exchange across domains combining with a novel feed-back loss and a semantic consistency loss to not only enhance the transferability of learned common feature but also preserve data and semantic information during exchange process. By performing domain adaptation on two standard digit datasets – MNIST and USPS, we show that our architecture can solve not only the full transfer problem but also partial transfer problem efficiently. The translated image results also demonstrate the potential of our architecture in image style transfer application.",
"title": ""
},
{
"docid": "04d7b3e3584d89d5a3bc5c22c3fd1438",
"text": "With the widespread use of information technologies, information networks are becoming increasingly popular to capture complex relationships across various disciplines, such as social networks, citation networks, telecommunication networks, and biological networks. Analyzing these networks sheds light on different aspects of social life such as the structure of societies, information diffusion, and communication patterns. In reality, however, the large scale of information networks often makes network analytic tasks computationally expensive or intractable. Network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information. This facilitates the original network to be easily handled in the new vector space for further analysis. In this survey, we perform a comprehensive review of the current literature on network representation learning in the data mining and machine learning field. We propose new taxonomies to categorize and summarize the state-of-the-art network representation learning techniques according to the underlying learning mechanisms, the network information intended to preserve, as well as the algorithmic designs and methodologies. We summarize evaluation protocols used for validating network representation learning including published benchmark datasets, evaluation methods, and open source algorithms. We also perform empirical studies to compare the performance of representative algorithms on common datasets, and analyze their computational complexity. Finally, we suggest promising research directions to facilitate future study.",
"title": ""
},
{
"docid": "0742314b8099dce0eadaa12f96579209",
"text": "Smart utility network (SUN) communications are an essential part of the smart grid. Major vendors realized the importance of universal standards and participated in the IEEE802.15.4g standardization effort. Due to the fact that many vendors already have proprietary solutions deployed in the field, the standardization effort was a challenge, but after three years of hard work, the IEEE802.15.4g standard published on April 28th, 2012. The publication of this standard is a first step towards establishing common and consistent communication specifications for utilities deploying smart grid technologies. This paper summaries the technical essence of the standard and how it can be used in smart utility networks.",
"title": ""
},
{
"docid": "38d7107de35f3907c0e42b111883613e",
"text": "On-line social networks have become a massive communication and information channel for users world-wide. In particular, the microblogging platform Twitter, is characterized by short-text message exchanges at extremely high rates. In this type of scenario, the detection of emerging topics in text streams becomes an important research area, essential for identifying relevant new conversation topics, such as breaking news and trends. Although emerging topic detection in text is a well established research area, its application to large volumes of streaming text data is quite novel. Making scalability, efficiency and rapidness, the key aspects for any emerging topic detection algorithm in this type of environment.\n Our research addresses the aforementioned problem by focusing on detecting significant and unusual bursts in keyword arrival rates or bursty keywords. We propose a scalable and fast on-line method that uses normalized individual frequency signals per term and a windowing variation technique. This method reports keyword bursts which can be composed of single or multiple terms, ranked according to their importance. The average complexity of our method is O(n log n), where n is the number of messages in the time window. This complexity allows our approach to be scalable for large streaming datasets. If bursts are only detected and not ranked, the algorithm remains with lineal complexity O(n), making it the fastest in comparison to the current state-of-the-art. We validate our approach by comparing our performance to similar systems using the TREC Tweet 2011 Challenge tweets, obtaining 91% of matches with LDA, an off-line gold standard used in similar evaluations. In addition, we study Twitter messages related to the SuperBowl football events in 2011 and 2013.",
"title": ""
},
{
"docid": "c69d15a44bcb779394df5776e391ec23",
"text": "Ankylosing spondylitis (AS) is a chronic and inflammatory rheumatic disease, characterized by pain and structural and functional impairments, such as reduced mobility and axial deformity, which lead to diminished quality of life. Its treatment includes not only drugs, but also nonpharmacological therapy. Exercise appears to be a promising modality. The aim of this study is to review the current evidence and evaluate the role of exercise either on land or in water for the management of patients with AS in the biological era. Systematic review of the literature published until November 2016 in Medline, Embase, Cochrane Library, Web of Science and Scopus databases. Thirty-five studies were included for further analysis (30 concerning land exercise and 5 concerning water exercise; combined or not with biological drugs), comprising a total of 2515 patients. Most studies showed a positive effect of exercise on Bath Ankylosing Spondylitis Disease Activity Index, Bath Ankylosing Spondylitis Functional Index, pain, mobility, function and quality of life. The benefit was statistically significant in randomized controlled trials. Results support a multimodal approach, including educational sessions and maintaining home-based program. This study highlights the important role of exercise in management of AS, therefore it should be encouraged and individually prescribed. More studies with good methodological quality are needed to strengthen the results and to define the specific characteristics of exercise programs that determine better results.",
"title": ""
},
{
"docid": "699836a5b2caf6acde02c4bad16c2795",
"text": "Drilling end-effector is a key unit in autonomous drilling robot. The perpendicularity of the hole has an important influence on the quality of airplane assembly. Aiming at the robot drilling perpendicularity, a micro-adjusting attitude mechanism and a surface normal measurement algorithm are proposed in this paper. In the mechanism, two rounded eccentric discs are used and the small one is embedded in the big one, which makes the drill’s point static when adjusting the drill’s attitude. Thus, removal of drill’s point position after adjusting the drill attitude can be avoided. Before the micro-adjusting progress, four non-coplanar points in space are used to determine a unique sphere. The normal at the drilling point is measured by four laser ranging sensors. The adjusting angles at which the motors should be rotated to adjust attitude can be calculated by using the deviation between the normal and the drill axis. Finally, the motors will drive the two eccentric discs to achieve micro-adjusting progress. Experiments on drilling robot system and the results demonstrate that the adjusting mechanism and the algorithm for surface normal measurement are effective with high accuracy and efficiency. (1)设计一种微型姿态调整机构, 实现对钻头姿态进行调整, 使其沿制孔点法线进行制孔, 提高孔的垂直度. 使得钻头调整前后, 钻头顶点保持不变, 提高制孔效率. (2)利用4个激光测距传感器, 根据空间不共面四点确定唯一球, 测得制孔点处的法线向量, 为钻头的姿态调整做准备.",
"title": ""
},
{
"docid": "a05a953097e5081670f26e85c4b8e397",
"text": "In European science and technology policy, various styles have been developed and institutionalised to govern the ethical challenges of science and technology innovations. In this paper, we give an account of the most dominant styles of the past 30 years, particularly in Europe, seeking to show their specific merits and problems. We focus on three styles of governance: a technocratic style, an applied ethics style, and a public participation style. We discuss their merits and deficits, and use this analysis to assess the potential of the recently established governance approach of 'Responsible Research and Innovation' (RRI). Based on this analysis, we reflect on the current shaping of RRI in terms of 'doing governance'.",
"title": ""
},
{
"docid": "80666930dbabe1cd9d65af762cc4b150",
"text": "Accurate electronic health records are important for clinical care and research as well as ensuring patient safety. It is crucial for misspelled words to be corrected in order to ensure that medical records are interpreted correctly. This paper describes the development of a spelling correction system for medical text. Our spell checker is based on Shannon's noisy channel model, and uses an extensive dictionary compiled from many sources. We also use named entity recognition, so that names are not wrongly corrected as misspellings. We apply our spell checker to three different types of free-text data: clinical notes, allergy entries, and medication orders; and evaluate its performance on both misspelling detection and correction. Our spell checker achieves detection performance of up to 94.4% and correction accuracy of up to 88.2%. We show that high-performance spelling correction is possible on a variety of clinical documents.",
"title": ""
},
{
"docid": "78bc13c6b86ea9a8fda75b66f665c39f",
"text": "We propose a stochastic answer network (SAN) to explore multi-step inference strategies in Natural Language Inference. Rather than directly predicting the results given the inputs, the model maintains a state and iteratively refines its predictions. Our experiments show that SAN achieves the state-of-the-art results on three benchmarks: Stanford Natural Language Inference (SNLI) dataset, MultiGenre Natural Language Inference (MultiNLI) dataset and Quora Question Pairs dataset.",
"title": ""
},
{
"docid": "53ae229e708297bf73cf3a33b32e42da",
"text": "Signal-dependent phase variation, AM/PM, along with amplitude variation, AM/AM, are known to determine nonlinear distortion characteristics of current-mode PAs. However, these distortion effects have been treated separately, putting more weight on the amplitude distortion, while the AM/PM generation mechanisms are yet to be fully understood. Hence, the aim of this work is to present a large-signal physical model that can describe both the AM/AM and AM/PM PA nonlinear distortion characteristics and their internal relationship.",
"title": ""
},
{
"docid": "c6d25017a6cba404922933672a18d08a",
"text": "The Internet of Things (IoT) makes smart objects the ultimate building blocks in the development of cyber-physical smart pervasive frameworks. The IoT has a variety of application domains, including health care. The IoT revolution is redesigning modern health care with promising technological, economic, and social prospects. This paper surveys advances in IoT-based health care technologies and reviews the state-of-the-art network architectures/platforms, applications, and industrial trends in IoT-based health care solutions. In addition, this paper analyzes distinct IoT security and privacy features, including security requirements, threat models, and attack taxonomies from the health care perspective. Further, this paper proposes an intelligent collaborative security model to minimize security risk; discusses how different innovations such as big data, ambient intelligence, and wearables can be leveraged in a health care context; addresses various IoT and eHealth policies and regulations across the world to determine how they can facilitate economies and societies in terms of sustainable development; and provides some avenues for future research on IoT-based health care based on a set of open issues and challenges.",
"title": ""
},
{
"docid": "e33fd686860657a93a0e47807b4cbe24",
"text": "Planning optimal paths for large numbers of robots is computationally expensive. In this thesis, we present a new framework for multirobot path planning called subdimensional expansion, which initially plans for each robot individually, and then coordinates motion among the robots as needed. More specifically, subdimensional expansion initially creates a one-dimensional search space embedded in the joint configuration space of the multirobot system. When the search space is found to be blocked during planning by a robot-robot collision, the dimensionality of the search space is locally increased to ensure that an alternative path can be found. As a result, robots are only coordinated when necessary, which reduces the computational cost of finding a path. Subdimensional expansion is a flexible framework that can be used with multiple planning algorithms. For discrete planning problems, subdimensional expansion can be combined with A* to produce the M* algorithm, a complete and optimal multirobot path planning problem. When the configuration space of individual robots is too large to be explored effectively with A*, subdimensional expansion can be combined with probabilistic planning algorithms to produce sRRT and sPRM. M* is then extended to solve variants of the multirobot path planning algorithm. We present the Constraint Manifold Subsearch (CMS) algorithm to solve problems where robots must dynamically form and dissolve teams with other robots to perform cooperative tasks. Uncertainty M* (UM*) is a variant of M* that handles systems with probabilistic dynamics. Finally, we apply M* to multirobot sequential composition. Results are validated with extensive simulations and experiments on multiple physical robots.",
"title": ""
},
{
"docid": "73d31d63cfaeba5fa7c2d2acc4044ca0",
"text": "Plastics in the marine environment have become a major concern because of their persistence at sea, and adverse consequences to marine life and potentially human health. Implementing mitigation strategies requires an understanding and quantification of marine plastic sources, taking spatial and temporal variability into account. Here we present a global model of plastic inputs from rivers into oceans based on waste management, population density and hydrological information. Our model is calibrated against measurements available in the literature. We estimate that between 1.15 and 2.41 million tonnes of plastic waste currently enters the ocean every year from rivers, with over 74% of emissions occurring between May and October. The top 20 polluting rivers, mostly located in Asia, account for 67% of the global total. The findings of this study provide baseline data for ocean plastic mass balance exercises, and assist in prioritizing future plastic debris monitoring and mitigation strategies.",
"title": ""
},
{
"docid": "e3853e259c3ae6739dcae3143e2074a8",
"text": "A new reference collection of patent documents for training and testing automated categorization systems is established and described in detail. This collection is tailored for automating the attribution of international patent classification codes to patent applications and is made publicly available for future research work. We report the results of applying a variety of machine learning algorithms to the automated categorization of English-language patent documents. This procedure involves a complex hierarchical taxonomy, within which we classify documents into 114 classes and 451 subclasses. Several measures of categorization success are described and evaluated. We investigate how best to resolve the training problems related to the attribution of multiple classification codes to each patent document.",
"title": ""
},
{
"docid": "f160dd844c54dafc8c5265ff0e4d4a05",
"text": "The increasing number of smart phones presents a significant opportunity for the development of m-payment services. Despite the predicted success of m-payment, the market remains immature in most countries. This can be explained by the lack of agreement on standards and business models for all stakeholders in m-payment ecosystem. In this paper, the STOF business model framework is employed to analyze m-payment services from the point of view of one of the key players in the ecosystem i.e., banks. We apply Analytic Hierarchy Process (AHP) method to analyze the critical design issues for four domains of the STOF model. The results of the analysis show that service domain is the most important, followed by technology, organization and finance domains. Security related issues are found to be the most important by bank representatives. The future research can be extended to the m-payment ecosystem by collecting data from different actors from the ecosystem.",
"title": ""
},
{
"docid": "f3d0ae1db485b95b8b6931f8c6f2ea40",
"text": "Spoken language understanding (SLU) is a core component of a spoken dialogue system. In the traditional architecture of dialogue systems, the SLU component treats each utterance independent of each other, and then the following components aggregate the multi-turn information in the separate phases. However, there are two challenges: 1) errors from previous turns may be propagated and then degrade the performance of the current turn; 2) knowledge mentioned in the long history may not be carried into the current turn. This paper addresses the above issues by proposing an architecture using end-to-end memory networks to model knowledge carryover in multi-turn conversations, where utterances encoded with intents and slots can be stored as embeddings in the memory and the decoding phase applies an attention model to leverage previously stored semantics for intent prediction and slot tagging simultaneously. The experiments on Microsoft Cortana conversational data show that the proposed memory network architecture can effectively extract salient semantics for modeling knowledge carryover in the multi-turn conversations and outperform the results using the state-of-the-art recurrent neural network framework (RNN) designed for single-turn SLU.",
"title": ""
},
{
"docid": "b2283fb23a199dbfec42b76dec31ac69",
"text": "High accurate indoor localization and tracking of smart phones is critical to pervasive applications. Most radio-based solutions either exploit some error prone power-distance models or require some labor-intensive process of site survey to construct RSS fingerprint database. This study offers a new perspective to exploit RSS readings by their contrast relationship rather than absolute values, leading to three observations and functions called turn verifying, room distinguishing and entrance discovering. On this basis, we design WaP (WiFi-Assisted Particle filter), an indoor localization and tracking system exploiting particle filters to combine dead reckoning, RSS-based analyzing and knowledge of floor plan together. All the prerequisites of WaP are the floor plan and the coarse locations on which room the APs reside. WaP prototype is realized on off-the-shelf smartphones with limited particle number typically 400, and validated in a college building covering 1362m2. Experiment results show that WaP can achieve average localization error of 0.71m for 100 trajectories by 8 pedestrians.",
"title": ""
},
{
"docid": "10634117fd51d94f9b12b9f0ed034f65",
"text": "Our corpus of descriptive text contains a significant number of long-distance pronominal references (8.4% of the total). In order to account for how these pronouns are interpreted, we re-examine Grosz and Sidner’s theory of the attentional state, and in particular the use of the global focus to supplement centering theory. Our corpus evidence concerning these long-distance pronominal references, as well as studies of the use of descriptions, proper names and ambiguous uses of pronouns, lead us to conclude that a discourse focus stack mechanism of the type proposed by Sidner is essential to account for the use of these referring expressions. We suggest revising the Grosz & Sidner framework by allowing for the possibility that an entity in a focus space may have special status.",
"title": ""
},
{
"docid": "1840d879044662bfb1e6b2ea3ee9c2c8",
"text": "Working memory (WM) training has been reported to benefit abilities as diverse as fluid intelligence (Jaeggi et al., Proceedings of the National Academy of Sciences of the United States of America, 105:6829-6833, 2008) and reading comprehension (Chein & Morrison, Psychonomic Bulletin & Review, 17:193-199, 2010), but transfer is not always observed (for reviews, see Morrison & Chein, Psychonomics Bulletin & Review, 18:46-60, 2011; Shipstead et al., Psychological Bulletin, 138:628-654, 2012). In contrast, recent WM training studies have consistently reported improvement on the trained tasks. The basis for these training benefits has received little attention, however, and it is not known which WM components and/or processes are being improved. Therefore, the goal of the present study was to investigate five possible mechanisms underlying the effects of adaptive dual n-back training on working memory (i.e., improvements in executive attention, updating, and focus switching, as well as increases in the capacity of the focus of attention and short-term memory). In addition to a no-contact control group, the present study also included an active control group whose members received nonadaptive training on the same task. All three groups showed significant improvements on the n-back task from pretest to posttest, but adaptive training produced larger improvements than did nonadaptive training, which in turn produced larger improvements than simply retesting. Adaptive, but not nonadaptive, training also resulted in improvements on an untrained running span task that measured the capacity of the focus of attention. No other differential improvements were observed, suggesting that increases in the capacity of the focus of attention underlie the benefits of adaptive dual n-back training.",
"title": ""
}
] |
scidocsrr
|
bd404c364c2400990168678acf70ae6f
|
Change-Point Detection in Time-Series Data Based on Subspace Identification
|
[
{
"docid": "dca74df16e3a90726d51b3222483ac94",
"text": "We are concerned with the issue of detecting outliers and change points from time series. In the area of data mining, there have been increased interest in these issues since outlier detection is related to fraud detection, rare event discovery, etc., while change-point detection is related to event/trend change detection, activity monitoring, etc. Although, in most previous work, outlier detection and change point detection have not been related explicitly, this paper presents a unifying framework for dealing with both of them. In this framework, a probabilistic model of time series is incrementally learned using an online discounting learning algorithm, which can track a drifting data source adaptively by forgetting out-of-date statistics gradually. A score for any given data is calculated in terms of its deviation from the learned model, with a higher score indicating a high possibility of being an outlier. By taking an average of the scores over a window of a fixed length and sliding the window, we may obtain a new time series consisting of moving-averaged scores. Change point detection is then reduced to the issue of detecting outliers in that time series. We compare the performance of our framework with those of conventional methods to demonstrate its validity through simulation and experimental applications to incidents detection in network security.",
"title": ""
},
{
"docid": "0d41a6d4cf8c42ccf58bccd232a46543",
"text": "Novelty detection is the ident ification of new or unknown data or signal that a machine learning system is not aware of during training. In this paper we focus on neural network based approaches for novelty detection. Statistical approaches are covered in part-I paper.",
"title": ""
}
] |
[
{
"docid": "3dcb93232121be1ff8a2d96ecb25bbdd",
"text": "We describe the approach that won the preliminary phase of the German traffic sign recognition benchmark with a better-than-human recognition rate of 98.98%.We obtain an even better recognition rate of 99.15% by further training the nets. Our fast, fully parameterizable GPU implementation of a Convolutional Neural Network does not require careful design of pre-wired feature extractors, which are rather learned in a supervised way. A CNN/MLP committee further boosts recognition performance.",
"title": ""
},
{
"docid": "c8ba829a6b0e158d1945bbb0ed68045b",
"text": "Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better. Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval.",
"title": ""
},
{
"docid": "391cce3ac9ab87e31203637d89a8a082",
"text": "MicroRNAs (miRNAs) are small conserved non-coding RNA molecules that post-transcriptionally regulate gene expression by targeting the 3' untranslated region (UTR) of specific messenger RNAs (mRNAs) for degradation or translational repression. miRNA-mediated gene regulation is critical for normal cellular functions such as the cell cycle, differentiation, and apoptosis, and as much as one-third of human mRNAs may be miRNA targets. Emerging evidence has demonstrated that miRNAs play a vital role in the regulation of immunological functions and the prevention of autoimmunity. Here we review the many newly discovered roles of miRNA regulation in immune functions and in the development of autoimmunity and autoimmune disease. Specifically, we discuss the involvement of miRNA regulation in innate and adaptive immune responses, immune cell development, T regulatory cell stability and function, and differential miRNA expression in rheumatoid arthritis and systemic lupus erythematosus.",
"title": ""
},
{
"docid": "802d66fda1701252d1addbd6d23f6b4c",
"text": "Powered wheelchair users often struggle to drive safely and effectively and, in more critical cases, can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists users as and when they require help. The system uses a multiple-hypothesis method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but also, perhaps more importantly, characterize the user performance in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions while driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely but also they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "8cd701723c72b16dfe7d321cb657ee31",
"text": "A coupled-inductor double-boost inverter (CIDBI) is proposed for microinverter photovoltaic (PV) module system, and the control strategy applied to it is analyzed. Also, the operation principle of the proposed inverter is discussed and the gain from dc to ac is deduced in detail. The main attribute of the CIDBI topology is the fact that it generates an ac output voltage larger than the dc input one, depending on the instantaneous duty cycle and turns ratio of the coupled inductor as well. This paper points out that the gain is proportional to the duty cycle approximately when the duty cycle is around 0.5 and the synchronized pulsewidth modulation can be applicable to this novel inverter. Finally, the proposed inverter servers as a grid inverter in the grid-connected PV system and the experimental results show that the CIDBI can implement the single-stage PV-grid-connected power generation competently and be of small volume and high efficiency by leaving out the transformer or the additional dc-dc converter.",
"title": ""
},
{
"docid": "f3811a34b2abd34d20e24e90ab9fe046",
"text": "Recently, the development of neural machine translation (NMT) has significantly improved the translation quality of automatic machine translation. While most sentences are more accurate and fluent than translations by statistical machine translation (SMT)-based systems, in some cases, the NMT system produces translations that have a completely different meaning. This is especially the case when rare words occur. When using statistical machine translation, it has already been shown that significant gains can be achieved by simplifying the input in a preprocessing step. A commonly used example is the pre-reordering approach. In this work, we used phrase-based machine translation to pre-translate the input into the target language. Then a neural machine translation system generates the final hypothesis using the pre-translation. Thereby, we use either only the output of the phrase-based machine translation (PBMT) system or a combination of the PBMT output and the source sentence. We evaluate the technique on the English to German translation task. Using this approach we are able to outperform the PBMT system as well as the baseline neural MT system by up to 2 BLEU points. We analyzed the influence of the quality of the initial system on the final result.",
"title": ""
},
{
"docid": "6485211d35cef2766675d78311864ff0",
"text": "In this paper, we investigate architectural and practical issues related to the setup of a broadband home network solution. Our experience led us to the consideration of a hybrid, wireless and wired, Mesh-Network to enable high data rate service delivery everywhere in the home. We demonstrate the effectiveness of our proposal using a real experimental testbed. This latter consists of a multi-hop mesh network composed of a home gateway and \"extenders\" supporting several types of physical connectivity including PLC, WiFi, and Ethernet. The solution also includes a layer 2 implementation of the OLSR protocol for path selection. We developed an extension of this protocol for QoS assurance and to enable the proper execution of existing services. We have also implemented a fast WiFi handover algorithm to ensure service continuity in case of user mobility among the extenders inside the home.",
"title": ""
},
{
"docid": "4ab3db4b0c338dbe8d5bb9e1f49f2a5c",
"text": "BACKGROUND\nSub-Saharan African (SSA) countries are currently experiencing one of the most rapid epidemiological transitions characterized by increasing urbanization and changing lifestyle factors. This has resulted in an increase in the incidence of non-communicable diseases, especially cardiovascular disease (CVD). This double burden of communicable and chronic non-communicable diseases has long-term public health impact as it undermines healthcare systems.\n\n\nPURPOSE\nThe purpose of this paper is to explore the socio-cultural context of CVD risk prevention and treatment in sub-Saharan Africa. We discuss risk factors specific to the SSA context, including poverty, urbanization, developing healthcare systems, traditional healing, lifestyle and socio-cultural factors.\n\n\nMETHODOLOGY\nWe conducted a search on African Journals On-Line, Medline, PubMed, and PsycINFO databases using combinations of the key country/geographic terms, disease and risk factor specific terms such as \"diabetes and Congo\" and \"hypertension and Nigeria\". Research articles on clinical trials were excluded from this overview. Contrarily, articles that reported prevalence and incidence data on CVD risk and/or articles that report on CVD risk-related beliefs and behaviors were included. Both qualitative and quantitative articles were included.\n\n\nRESULTS\nThe epidemic of CVD in SSA is driven by multiple factors working collectively. Lifestyle factors such as diet, exercise and smoking contribute to the increasing rates of CVD in SSA. Some lifestyle factors are considered gendered in that some are salient for women and others for men. For instance, obesity is a predominant risk factor for women compared to men, but smoking still remains mostly a risk factor for men. Additionally, structural and system level issues such as lack of infrastructure for healthcare, urbanization, poverty and lack of government programs also drive this epidemic and hampers proper prevention, surveillance and treatment efforts.\n\n\nCONCLUSION\nUsing an African-centered cultural framework, the PEN3 model, we explore future directions and efforts to address the epidemic of CVD risk in SSA.",
"title": ""
},
{
"docid": "10b0ab2570a7bba1ac1f575a0555eb4a",
"text": "It is well known that ozone concentration depends on air/oxygen input flow rate and power consumed by the ozone chamber. For every chamber, there exists a unique optimum flow rate that results in maximum ozone concentration. If the flow rate is increased (beyond) or decreased (below) from this optimum value, the ozone concentration drops. This paper proposes a technique whereby the concentration can be maintained even if the flow rate increases. The idea is to connect n number of ozone chambers in parallel, with each chamber designed to operate at its optimum point. Aside from delivering high ozone concentration at high flow rate, the proposed system requires only one power supply to drive all these (multiple) chambers simultaneously. In addition, due to its modularity, the system is very flexible, i.e., the number of chambers can be added or removed as demanded by the (output) ozone requirements. This paper outlines the chamber design using mica as dielectric and the determination of its parameters. To verify the concept, three chambers are connected in parallel and driven by a single transformer-less LCL resonant power supply. Moreover, a closed-loop feedback controller is implemented to ensure that the voltage gain remains at the designated value even if the number of chambers is changed or there is a variation in the components. It is shown that the flow rate can be increased linearly with the number of chambers while maintaining a constant ozone concentration.",
"title": ""
},
{
"docid": "e0382c9d739281b4bc78f4a69827ac37",
"text": "Of numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, both LBR and Super-Parent TAN have demonstrated remarkable error performance. However, both techniques obtain this outcome at a considerable computational cost. We present a new approach to weakening the attribute independence assumption by averaging all of a constrained class of classifiers. In extensive experiments this technique delivers comparable prediction accuracy to LBR and Super-Parent TAN with substantially improved computational efficiency at test time relative to the former and at training time relative to the latter. The new algorithm is shown to have low variance and is suited to incremental learning.",
"title": ""
},
{
"docid": "81d07b747f12f10066571c784e212991",
"text": "This work presents a bi-arm rolled monopole for ultrawide-band (UWB) applications. The roll monopole is constructed by wrapping a planar monopole. The impedance and radiation characteristics of the proposed roll monopole are experimentally compared with a rectangular planar monopole and strip monopole. Furthermore, the transfer responses of transmit-receive antenna systems comprising two identical monopoles are examined across the UWB band. The characteristics of the monopoles are investigated in both time and frequency domains for UWB single-band and multiple-band schemes. The study shows that the proposed bi-arm rolled monopole is capable of achieving broadband and omnidirectional radiation characteristics within 3.1-10.6 GHz for UWB wireless communications.",
"title": ""
},
{
"docid": "a2d851b76d6abcb3d9377c566b8bf6d9",
"text": "Many fabrication processes for polymeric objects include melt extrusion, in which the molten polymer is conveyed by a ram or a screw and the melt is then forced through a shaping die in continuous processing or into a mold for the manufacture of discrete molded parts. The properties of the fabricated solid object, including morphology developed during cooling and solidification, depend in part on the stresses and orientation induced during the melt shaping. Most polymers used for commercial processing are of sufficiently high molecular weight that the polymer chains are highly entangled in the melt, resulting in flow behavior that differs qualitatively from that of low-molecular-weight liquids. Obvious manifestations of the differences from classical Newtonian fluids are a strongly shear-dependent viscosity and finite stresses normal to the direction of shear in rectilinear flow, transients of the order of seconds for the buildup or relaxation of stresses following a change in shear rate, a finite phase angle between stress and shear rate in oscillatory shear, ratios of extensional to shear viscosities that are considerably greater than 3, and substantial extrudate swell on extrusion from a capillary or slit. These rheological characteristics of molten polymers have been reviewed in textbooks (e.g. Larson 1999, Macosko 1994); the recent research emphasis in rheology has been to establish meaningful constitutive models that incorporate chain behavior at a molecular level. All polymer melts and concentrated solutions exhibit instabilities during extrusion when the stresses to which they are subjected become sufficiently high. The first manifestation of extrusion instability is usually the appearance of distortions on the extrudate surface, sometimes accompanied by oscillating flow. Gross distortion of the extrudate usually follows. The sequence of extrudate distortions",
"title": ""
},
{
"docid": "5a0cf2582fab28fe07d215435632b610",
"text": "5G radio access networks are expected to provide very high capacity, ultra-reliability and low latency, seamless mobility, and ubiquitous end-user experience anywhere and anytime. Driven by such stringent service requirements coupled with the expected dense deployments and diverse use case scenarios, the architecture of 5G New Radio (NR) wireless access has further evolved from the traditionally cell-centric radio access to a more flexible beam-based user-centric radio access. This article provides an overview of the NR system multi-beam operation in terms of initial access procedures and mechanisms associated with synchronization, system information, and random access. We further discuss inter-cell mobility handling in NR and its reliance on new downlink-based measurements to compensate for a lack of always-on reference signals in NR. Furthermore, we describe some of the user-centric coordinated transmission mechanisms envisioned in NR in order to realize seamless intra/inter-cell handover between physical transmission and reception points and reduce the interference levels across the network.",
"title": ""
},
{
"docid": "5e840c5649492d5e93ddef2b94432d5f",
"text": "Commercially available laser lithography systems have been available for several years. One such system manufactured by Heidelberg Instruments can be used to produce masks for lithography or to directly pattern photoresist using either a 3 micron or 1 micron beam. These systems are designed to operate using computer aided design (CAD) mask files, but also have the capability of using images. In image mode, the power of the exposure is based on the intensity of each pixel in the image. This results in individual pixels that are the size of the beam, which establishes the smallest feature that can be patterned. When developed, this produces a range of heights within the photoresist which can then be transferred to the material beneath and used for a variety of applications. Previous research efforts have demonstrated that this process works well overall, but is limited in resolution and feature size due to the pixel approach of the exposure. However, if we modify the method used, much smaller features can be resolved, without the pixilation. This is achieved by utilizing multiple exposures of slightly different CAD type files in sequence. While the smallest beam width is approximately 1 micron, the beam positioning accuracy is much smaller, with 40 nm step changes in beam position based on the machine's servo gearing and optical design. When exposing in CAD mode, the beam travels along lines at constant power, so by automating multiple files in succession, and employing multiple smaller exposures of lower intensity, a similar result can be achieved. With this line exposure approach, pixilation can be greatly reduced. Due to the beam positioning accuracy of this mode, the effective resolution between lines is on the order of 40 nm steps, resulting in unexposed features of much smaller size and higher resolution.",
"title": ""
},
{
"docid": "01ee1036caeb4a64477aa19d0f8a6429",
"text": "In recent years, Twitter has become one of the most important microblogging services of the Web 2.0. Among the possible uses it allows, it can be employed for communicating and broadcasting information in real time. The goal of this research is to analyze the task of automatic tweet generation from a text summarization perspective in the context of the journalism genre. To achieve this, different state-of-the-art summarizers are selected and employed for producing multi-lingual tweets in two languages (English and Spanish). A wide experimental framework is proposed, comprising the creation of a new corpus, the generation of the automatic tweets, and their assessment through a quantitative and a qualitative evaluation, where informativeness, indicativeness and interest are key criteria that should be ensured in the proposed context. From the results obtained, it was observed that although the original tweets were considered as model tweets with respect to their informativeness, they were not among the most interesting ones from a human viewpoint. Therefore, relying only on these tweets may not be the ideal way to communicate news through Twitter, especially if a more personalized and catchy way of reporting news wants to be performed. In contrast, we showed that recent text summarization techniques may be more appropriate, reflecting a balance between indicativeness and interest, even if their content was different from the tweets delivered by the news providers.",
"title": ""
},
{
"docid": "7d860b431f44d42572fc0787bf452575",
"text": "Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.",
"title": ""
},
{
"docid": "6392a6c384613f8ed9630c8676f0cad8",
"text": "References D. Bruckner, J. Rosen, and E. R. Sparks. deepviz: Visualizing convolutional neural networks for image classification. 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research,9(2579-2605):85, 2008. Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hods Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer vision–ECCV 2014, pages 818–833. Springer, 2014. Network visualization of ReVACNN",
"title": ""
},
{
"docid": "9e638e09b77463e8c232c7960d49a544",
"text": "Force feedback coupled with visual display allows people to interact intuitively with complex virtual environments. For this synergy of haptics and graphics to flourish, however, haptic systems must be capable of modeling environments with the same richness, complexity and interactivity that can be found in existing graphic systems. To help meet this challenge, we have developed a haptic rendering system that allows f r the efficient tactile display of graphical information. The system uses a common high-level framework to model contact constraints, surface shading, friction and tex ture. The multilevel control system also helps ensure that the haptic device will remain stable even as the limits of the renderer’s capabilities are reached. CR",
"title": ""
}
] |
scidocsrr
|
7f2303e532cd188758f34799820759d4
|
RUN: Residual U-Net for Computer-Aided Detection of Pulmonary Nodules without Candidate Selection
|
[
{
"docid": "bf85db5489a61b5fca8d121de198be97",
"text": "In this paper, we propose a novel recursive recurrent neural network (R2NN) to model the end-to-end decoding process for statistical machine translation. R2NN is a combination of recursive neural network and recurrent neural network, and in turn integrates their respective capabilities: (1) new information can be used to generate the next hidden state, like recurrent neural networks, so that language model and translation model can be integrated naturally; (2) a tree structure can be built, as recursive neural networks, so as to generate the translation candidates in a bottom up manner. A semi-supervised training approach is proposed to train the parameters, and the phrase pair embedding is explored to model translation confidence directly. Experiments on a Chinese to English translation task show that our proposed R2NN can outperform the stateof-the-art baseline by about 1.5 points in BLEU.",
"title": ""
}
] |
[
{
"docid": "0ab6ee50661e92fe7935ddd2c447f793",
"text": "In this paper, a high-performance single-phase transformerless online uninterruptible power supply (UPS) is proposed. The proposed UPS is composed of a four-leg-type converter, which operates as a rectifier, a battery charger/discharger, and an inverter. The rectifier has the capability of power-factor collection and regulates a constant dc-link voltage. The battery charger/discharger eliminates the need for the transformer and the increase of the number of battery and supplies the power demanded by the load to the dc-link capacitor in the event of the input-power failure or abrupt decrease of the input voltage. The inverter provides a regulated sinusoidal output voltage to the load and limits the output current under an impulsive load. The control of the dc-link voltage enhances the transient response of the output voltage and the utilization of the input power. By utilizing the battery charger/discharger, the overall efficiency of the system is improved, and the size, weight, and cost of the system are significantly reduced. Experimental results obtained with a 3-kVA prototype show a normal efficiency of over 95.6% and an input power factor of over 99.7%.",
"title": ""
},
{
"docid": "84d2e697b2f2107d34516909f22768c6",
"text": "PURPOSE\nSchema therapy was first applied to individuals with borderline personality disorder (BPD) over 20 years ago, and more recent work has suggested efficacy across a range of disorders. The present review aimed to systematically synthesize evidence for the efficacy and effectiveness of schema therapy in reducing early maladaptive schema (EMS) and improving symptoms as applied to a range of mental health disorders in adults including BPD, other personality disorders, eating disorders, anxiety disorders, and post-traumatic stress disorder.\n\n\nMETHODS\nStudies were identified through electronic searches (EMBASE, PsycINFO, MEDLINE from 1990 to January 2016).\n\n\nRESULTS\nThe search produced 835 titles, of which 12 studies were found to meet inclusion criteria. A significant number of studies of schema therapy treatment were excluded as they failed to include a measure of schema change. The Clinical Trial Assessment Measure was used to rate the methodological quality of studies. Schema change and disorder-specific symptom change was found in 11 of the 12 studies.\n\n\nCONCLUSIONS\nSchema therapy has demonstrated initial significant results in terms of reducing EMS and improving symptoms for personality disorders, but formal mediation analytical studies are lacking and rigorous evidence for other mental health disorders is currently sparse.\n\n\nPRACTITIONER POINTS\nFirst review to investigate whether schema therapy leads to reduced maladaptive schemas and symptoms across mental health disorders. Limited evidence for schema change with schema therapy in borderline personality disorder (BPD), with only three studies conducting correlational analyses. Evidence for schema and symptom change in other mental health disorders is sparse, and so use of schema therapy for disorders other than BPD should be based on service user/patient preference and clinical expertise and/or that the theoretical underpinnings of schema therapy justify the use of it therapeutically. Further work is needed to develop the evidence base for schema therapy for other disorders.",
"title": ""
},
{
"docid": "b93919bbb2dab3a687cccb71ee515793",
"text": "The processing and analysis of colour images has become an important area of study and application. The representation of the RGB colour space in 3D-polar coordinates (hue, saturation and brightness) can sometimes simplify this task by revealing characteristics not visible in the rectangular coordinate representation. The literature describes many such spaces (HLS, HSV, etc.), but many of them, having been developed for computer graphics applications, are unsuited to image processing and analysis tasks. We describe the flaws present in these colour spaces, and present three prerequisites for 3D-polar coordinate colour spaces well-suited to image processing and analysis. We then derive 3D-polar coordinate representations which satisfy the prerequisites, namely a space based on the norm which has efficient linear transform functions to and from the RGB space; and an improved HLS (IHLS) space. The most important property of this latter space is a “well-behaved” saturation coordinate which, in contrast to commonly used ones, always has a small numerical value for near-achromatic colours, and is completely independent of the brightness function. Three applications taking advantage of the good properties of the IHLS space are described: the calculation of a saturation-weighted hue mean and of saturation-weighted hue histograms, and feature extraction using mathematical morphology. 1Updated July 16, 2003. 2Jean Serra is with the Centre de Morphologie Mathématique, Ecole des Mines de Paris, 35 rue Saint-Honoré, 77305 Fontainebleau cedex, France.",
"title": ""
},
{
"docid": "3d25100e6a9410c6c08fae14135043d0",
"text": "We propose to learn semantic spatio-temporal embeddings for videos to support high-level video analysis. The first step of the proposed embedding employs a deep architecture consisting of two channels of convolutional neural networks (capturing appearance and local motion) followed by their corresponding Gated Recurrent Unit encoders for capturing longer-term temporal structure of the CNN features. The resultant spatio-temporal representation (a vector) is used to learn a mapping via a multilayer perceptron to the word2vec semantic embedding space, leading to a semantic interpretation of the video vector that supports high-level analysis. We demonstrate the usefulness and effectiveness of this new video representation by experiments on action recognition, zero-shot video classification, and “word-to-video” retrieval, using the UCF-101 dataset.",
"title": ""
},
{
"docid": "a4d7596cfcd4a9133c5677a481c88cf0",
"text": "The understanding of where humans look in a scene is a problem of great interest in visual perception and computer vision. When eye-tracking devices are not a viable option, models of human attention can be used to predict fixations. In this paper we give two contribution. First, we show a model of visual attention that is simply based on deep convolutional neural networks trained for object classification tasks. A method for visualizing saliency maps is defined which is evaluated in a saliency prediction task. Second, we integrate the information of these maps with a bottom-up differential model of eye-movements to simulate visual attention scanpaths. Results on saliency prediction and scores of similarity with human scanpaths demonstrate the effectiveness of this model.",
"title": ""
},
{
"docid": "37e65ab2fc4d0a9ed5b8802f41a1a2a2",
"text": "This paper is based on a panel discussion held at the Artificial Intelligence in Medicine Europe (AIME) conference in Amsterdam, The Netherlands, in July 2007. It had been more than 15 years since Edward Shortliffe gave a talk at AIME in which he characterized artificial intelligence (AI) in medicine as being in its \"adolescence\" (Shortliffe EH. The adolescence of AI in medicine: will the field come of age in the '90s? Artificial Intelligence in Medicine 1993;5:93-106). In this article, the discussants reflect on medical AI research during the subsequent years and characterize the maturity and influence that has been achieved to date. Participants focus on their personal areas of expertise, ranging from clinical decision-making, reasoning under uncertainty, and knowledge representation to systems integration, translational bioinformatics, and cognitive issues in both the modeling of expertise and the creation of acceptable systems.",
"title": ""
},
{
"docid": "7263e768247914490f3b91c916587614",
"text": "Activity Recognition is an emerging field of research, born from the larger fields of ubiquitous computing, context-aware computing and multimedia. Recently, recognizing everyday life activities becomes one of the challenges for pervasive computing. In our work, we developed a novel wearable system easy to use and comfortable to bring. Our wearable system is based on a new set of 20 computationally efficient features and the Random Forest classifier. We obtain very encouraging results with classification accuracy of human activities recognition of up",
"title": ""
},
{
"docid": "de3789fe0dccb53fe8555e039fde1bc6",
"text": "Estimating consumer surplus is challenging because it requires identification of the entire demand curve. We rely on Uber’s “surge” pricing algorithm and the richness of its individual level data to first estimate demand elasticities at several points along the demand curve. We then use these elasticity estimates to estimate consumer surplus. Using almost 50 million individuallevel observations and a regression discontinuity design, we estimate that in 2015 the UberX service generated about $2.9 billion in consumer surplus in the four U.S. cities included in our analysis. For each dollar spent by consumers, about $1.60 of consumer surplus is generated. Back-of-the-envelope calculations suggest that the overall consumer surplus generated by the UberX service in the United States in 2015 was $6.8 billion.",
"title": ""
},
{
"docid": "47e9515f703c840c38ab0c3095f48a3a",
"text": "Hnefatafl is an ancient Norse game - an ancestor of chess. In this paper, we report on the development of computer players for this game. In the spirit of Blondie24, we evolve neural networks as board evaluation functions for different versions of the game. An unusual aspect of this game is that there is no general agreement on the rules: it is no longer much played, and game historians attempt to infer the rules from scraps of historical texts, with ambiguities often resolved on gut feeling as to what the rules must have been in order to achieve a balanced game. We offer the evolutionary method as a means by which to judge the merits of alternative rule sets",
"title": ""
},
{
"docid": "da63c4d9cc2f3278126490de54c34ce5",
"text": "The growth of Web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences. Our research looks particularly at assigning trust in Web-based social networks and investigates how trust information can be mined and integrated into applications. This article introduces a definition of trust suitable for use in Web-based social networks with a discussion of the properties that will influence its use in computation. We then present two algorithms for inferring trust relationships between individuals that are not directly connected in the network. Both algorithms are shown theoretically and through simulation to produce calculated trust values that are highly accurate.. We then present TrustMail, a prototype email client that uses variations on these algorithms to score email messages in the user's inbox based on the user's participation and ratings in a trust network.",
"title": ""
},
{
"docid": "ac56eb533e3ae40b8300d4269fd2c08f",
"text": "We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.",
"title": ""
},
{
"docid": "7f47434e413230faf04849cf43a845fa",
"text": "Although surgical resection remains the gold standard for treatment of liver cancer, there is a growing need for alternative therapies. Microwave ablation (MWA) is an experimental procedure that has shown great promise for the treatment of unresectable tumors and exhibits many advantages over other alternatives to resection, such as radiofrequency ablation and cryoablation. However, the antennas used to deliver microwave power largely govern the effectiveness of MWA. Research has focused on coaxial-based interstitial antennas that can be classified as one of three types (dipole, slot, or monopole). Choked versions of these antennas have also been developed, which can produce localized power deposition in tissue and are ideal for the treatment of deepseated hepatic tumors.",
"title": ""
},
{
"docid": "9e439c83f4c29b870b1716ceae5aa1f3",
"text": "Suspension system plays an imperative role in retaining the continuous road wheel contact for better road holding. In this paper, fuzzy self-tuning of PID controller is designed to control of active suspension system for quarter car model. A fuzzy self-tuning is used to develop the optimal control gain for PID controller (proportional, integral, and derivative gains) to minimize suspension working space of the sprung mass and its change rate to achieve the best comfort of the driver. The results of active suspension system with fuzzy self-tuning PID controller are presented graphically and comparisons with the PID and passive system. It is found that, the effectiveness of using fuzzy self-tuning appears in the ability to tune the gain parameters of PID controller",
"title": ""
},
{
"docid": "e25b5b0f51f9c00515a849f5fd05d39b",
"text": "These are exciting times for research into the psychological processes underlying second language acquisition (SLA). In the 1970s, SLA emerged as a field of inquiry in its own right (Brown 1980), and in the 1980s, a number of different approaches to central questions in the field began to develop in parallel and in relative isolation (McLaughlin and Harrington 1990). In the 1990s, however, these different approaches began to confront one another directly. Now we are entering a period reminiscent, in many ways, of the intellectually turbulent times following the Chomskyan revolution (Chomsky 1957; 1965). Now, as then, researchers are debating basic premises of a science of mind, language, and learning. Some might complain, not entirely without reason, that we are still debating the same issues after 30-40 years. However, there are now new conceptual and research tools available to test hypotheses in ways previously thought impossible. Because of this, many psychologists believe there will soon be significant advancement on some SLA issues that have resisted closure for decades. We outline some of these developments and explore where the field may be heading. More than ever, it appears possible that psychological theory and SLA theory are converging on solutions to common issues.",
"title": ""
},
{
"docid": "f57bcea5431a11cc431f76727ba81a26",
"text": "We develop a Bayesian procedure for estimation and inference for spatial models of roll call voting. This approach is extremely flexible, applicable to any legislative setting, irrespective of size, the extremism of the legislators’ voting histories, or the number of roll calls available for analysis. The model is easily extended to let other sources of information inform the analysis of roll call data, such as the number and nature of the underlying dimensions, the presence of party whipping, the determinants of legislator preferences, and the evolution of the legislative agenda; this is especially helpful since generally it is inappropriate to use estimates of extant methods (usually generated under assumptions of sincere voting) to test models embodying alternate assumptions (e.g., log-rolling, party discipline). A Bayesian approach also provides a coherent framework for estimation and inference with roll call data that eludes extant methods; moreover, via Bayesian simulation methods, it is straightforward to generate uncertainty assessments or hypothesis tests concerning any auxiliary quantity of interest or to formally compare models. In a series of examples we show how our method is easily extended to accommodate theoretically interesting models of legislative behavior. Our goal is to provide a statistical framework for combining the measurement of legislative preferences with tests of models of legislative behavior.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "f2346fffa0297554440145a3165e921e",
"text": "The proliferation of knowledge-sharing communities like Wikipedia and the advances in automated information extraction from Web pages enable the construction of large knowledge bases with facts about entities and their relationships. The facts can be represented in the RDF data model, as so-called subject-property-object triples, and can thus be queried by structured query languages like SPARQL. In principle, this allows precise querying in the database spirit. However, RDF data may be highly diverse and queries may return way too many results, so that ranking by informativeness measures is crucial to avoid overwhelming users. Moreover, as facts are extracted from textual contexts or have community-provided annotations, it can be beneficial to consider also keywords for formulating search requests. This paper gives an overview of recent and ongoing work on ranked retrieval of RDF data with keyword-augmented structured queries. The ranking method is based on statistical language models, the state-of-the-art paradigm in information retrieval. The paper develops a novel form of language models for the structured, but schema-less setting of RDF triples and extended SPARQL queries. 1 Motivation and Background Entity-Relationship graphs are receiving great attention for information management outside of mainstream database engines. In particular, the Semantic-Web data model RDF (Resource Description Format) is gaining popularity for applications on scientific data such as biological networks [14], social Web2.0 applications [4], large-scale knowledge bases such as DBpedia [2] or YAGO [13], and more generally, as a light-weight representation for the “Web of data” [5]. An RDF data collection consists of a set of subject-property-object triples, SPO triples for short. In ER terminology, an SPO triple corresponds to a pair of entities connected by a named relationship or to an entity connected to the value of a named attribute. As the object of a triple can in turn be the subject of other triples, we can also view the RDF data as a graph of typed nodes and typed edges where nodes correspond to entities and edges to relationships (viewing attributes as relations as well). Some of the existing RDF collections contain more than a billion triples. As a simple example that we will use throughout the paper, consider a Web portal on movies. Table 1 shows a few sample triples. The example illustrates a number of specific requirements that RDF data poses for querying: Copyright 0000 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering",
"title": ""
},
{
"docid": "e9c4877bca5f1bfe51f97818cc4714fa",
"text": "INTRODUCTION Gamification refers to the application of game dynamics, mechanics, and frameworks into non-game settings. Many educators have attempted, with varying degrees of success, to effectively utilize game dynamics to increase student motivation and achievement in the classroom. In an effort to better understand how gamification can effectively be utilized to this end, presented here is a review of existing literature on the subject as well as a case study on three different applications of gamification in the post-secondary setting. This analysis reveals that the underlying dynamics that make games engaging are largely already recognized and utilized in modern pedagogical practices, although under different designations. This provides some legitimacy to a practice that is sometimes dismissed as superficial, and also provides a way of formulating useful guidelines for those wishing to utilize the power of games to motivate student achievement. RELATED WORK The first step of this study was to review literature related to the use of gamification in education. This was undertaken in order to inform the subsequent case studies. Several works were reviewed with the intention of finding specific game dynamics that were met with a certain degree of success across a number of circumstances. To begin, Jill Laster [10] provides a brief summary of the early findings of Lee Sheldon, an assistant professor at Indiana University at Bloomington and the author of The Multiplayer Classroom: Designing Coursework as a Game [16]. Here, Sheldon reports that the gamification of his class on multiplayer game design at Indiana University at Bloomington in 2010 was a success, with the average grade jumping a full letter grade from the previous year [10]. Sheldon gamified his class by renaming the performance of presentations as 'completing quests', taking tests as 'fighting monsters', writing papers as 'crafting', and receiving letter grades as 'gaining experience points'. In particular, he notes that changing the language around grades celebrates getting things right rather than punishing getting things wrong [10]. Although this is plausible, this example is included here first because it points to the common conception of what gamifying a classroom means: implementing game components by simply trading out the parlance of pedagogy for that of gaming culture. Although its intentions are good, it is this reduction of game design to its surface characteristics that Elizabeth Lawley warns is detrimental to the successful gamification of a classroom [5]. Lawley, a professor of interactive games and media at the Rochester Institute of Technology (RIT), notes that when implemented properly, \"gamification can help enrich educational experiences in a way that students will recognize and respond to\" [5]. However, she warns that reducing the complexity of well designed games to their surface elements (i.e. badges and experience points) falls short of engaging students. She continues further, suggesting that beyond failing to engage, limiting the implementation of game dynamics to just the surface characteristics can actually damage existing interest and engagement [5]. Lawley is not suggesting that game elements should be avoided, but rather she is stressing the importance of allowing them to surface as part of a deeper implementation that includes the underlying foundations of good game design. 
Upon reviewing the available literature, certain underlying dynamics and concepts found in game design are shown to be more consistently successful than others when applied to learning environments, these are: o Freedom to Fail o Rapid Feedback o Progression o Storytelling Freedom to Fail Game design often encourages players to experiment without fear of causing irreversible damage by giving them multiple lives, or allowing them to start again at the most recent 'checkpoint'. Incorporating this 'freedom to fail' into classroom design is noted to be an effective dynamic in increasing student engagement [7,9,11,15]. If students are encouraged to take risks and experiment, the focus is taken away from final results and re-centered on the process of learning instead. The effectiveness of this change in focus is recognized in modern pedagogy as shown in the increased use of formative assessment. Like the game dynamic of having the 'freedom to fail', formative assessment focuses on the process of learning rather than the end result by using assessment to inform subsequent lessons and separating assessment from grades whenever possible [17]. This can mean that the student is using ongoing self assessment, or that the teacher is using",
"title": ""
},
{
"docid": "019375c14bc0377acbf259ef423fa46f",
"text": "Original approval signatures are on file with the University of Oregon Graduate School.",
"title": ""
},
{
"docid": "78ced4f3e99c5abc1a3f5e81fbc63106",
"text": "This paper presents a high performance vision-based system with a single static camera for traffic surveillance, for moving vehicle detection with occlusion handling, tracking, counting, and One Class Support Vector Machine (OC-SVM) classification. In this approach, moving objects are first segmented from the background using the adaptive Gaussian Mixture Model (GMM). After that, several geometric features are extracted, such as vehicle area, height, width, centroid, and bounding box. As occlusion is present, an algorithm was implemented to reduce it. The tracking is performed with adaptive Kalman filter. Finally, the selected geometric features: estimated area, height, and width are used by different classifiers in order to sort vehicles into three classes: small, midsize, and large. Extensive experimental results in eight real traffic videos with more than 4000 ground truth vehicles have shown that the improved system can run in real time under an occlusion index of 0.312 and classify vehicles with a global detection rate or recall, precision, and F-measure of up to 98.190%, and an F-measure of up to 99.051% for midsize vehicles.",
"title": ""
}
] |
scidocsrr
|
3fa91b18b304566a526737057d5b115b
|
Attentional convolutional neural networks for object tracking
|
[
{
"docid": "d349cf385434027b4532080819d5745f",
"text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.",
"title": ""
},
{
"docid": "dacebd3415ec50ca6c74e28048fe6fc8",
"text": "The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object’s appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks.",
"title": ""
},
{
"docid": "83f1830c3a9a92eb3492f9157adaa504",
"text": "We propose a novel tracking framework called visual tracker sampler that tracks a target robustly by searching for the appropriate trackers in each frame. Since the real-world tracking environment varies severely over time, the trackers should be adapted or newly constructed depending on the current situation. To do this, our method obtains several samples of not only the states of the target but also the trackers themselves during the sampling process. The trackers are efficiently sampled using the Markov Chain Monte Carlo method from the predefined tracker space by proposing new appearance models, motion models, state representation types, and observation types, which are the basic important components of visual trackers. Then, the sampled trackers run in parallel and interact with each other while covering various target variations efficiently. The experiment demonstrates that our method tracks targets accurately and robustly in the real-world tracking environments and outperforms the state-of-the-art tracking methods.",
"title": ""
}
] |
[
{
"docid": "17ac85242f7ee4bc4991e54403e827c4",
"text": "Over the last two decades, an impressive progress has been made in the identification of novel factors in the translocation machineries of the mitochondrial protein import and their possible roles. The role of lipids and possible protein-lipids interactions remains a relatively unexplored territory. Investigating the role of potential lipid-binding regions in the sub-units of the mitochondrial motor might help to shed some more light in our understanding of protein-lipid interactions mechanistically. Bioinformatics results seem to indicate multiple potential lipid-binding regions in each of the sub-units. The subsequent characterization of some of those regions in silico provides insight into the mechanistic functioning of this intriguing and essential part of the protein translocation machinery. Details about the way the regions interact with phospholipids were found by the use of Monte Carlo simulations. For example, Pam18 contains one possible transmembrane region and two tilted surface bound conformations upon interaction with phospholipids. The results demonstrate that the presented bioinformatics approach might be useful in an attempt to expand the knowledge of the possible role of protein-lipid interactions in the mitochondrial protein translocation process.",
"title": ""
},
{
"docid": "4f069eeff7cf99679fb6f31e2eea55f0",
"text": "The present study aims to design, develop, operate and evaluate a social media GIS (Geographical Information Systems) specially tailored to mash-up the information that local residents and governments provide to support information utilization from normal times to disaster outbreak times in order to promote disaster reduction. The conclusions of the present study are summarized in the following three points. (1) Social media GIS, an information system which integrates a Web-GIS, an SNS and Twitter in addition to an information classification function, a button function and a ranking function into a single system, was developed. This made it propose an information utilization system based on the assumption of disaster outbreak times when information overload happens as well as normal times. (2) The social media GIS was operated for fifty local residents who are more than 18 years old for ten weeks in Mitaka City of Tokyo metropolis. Although about 32% of the users were in their forties, about 30% were aged fifties, and more than 10% of the users were in their twenties, thirties and sixties or more. (3) The access survey showed that 260 pieces of disaster information were distributed throughout the whole city of Mitaka. Among the disaster information, danger-related information occupied 20%, safety-related information occupied 68%, and other information occupied 12%. Keywords—Social Media GIS; Web-GIS; SNS; Twitter; Disaster Information; Disaster Reduction; Support for Information Utilization",
"title": ""
},
{
"docid": "8fffe94d662d46b977e0312dc790f4a4",
"text": "Airline companies have increasingly employed electronic commerce (eCommerce) for strategic purposes, most notably in order to achieve long-term competitive advantage and global competitiveness by enhancing customer satisfaction as well as marketing efficacy and managerial efficiency. eCommerce has now emerged as possibly the most representative distribution channel in the airline industry. In this study, we describe an extended technology acceptance model (TAM), which integrates subjective norms and electronic trust (eTrust) into the model, in order to determine their relevance to the acceptance of airline business-to-customer (B2C) eCommerce websites (AB2CEWS). The proposed research model was tested empirically using data collected from a survey of customers who had utilized B2C eCommerce websites of two representative airline companies in South Korea (i.e., KAL and ASIANA) for the purpose of purchasing air tickets. Path analysis was employed in order to assess the significance and strength of the hypothesized causal relationships between subjective norms, eTrust, perceived ease of use, perceived usefulness, attitude toward use, and intention to reuse. Our results provide general support for an extended TAM, and also confirmed its robustness in predicting customers’ intention to reuse AB2CEWS. Valuable information was found from our results regarding the management of AB2CEWS in the formulation of airlines’ Internet marketing strategies. 2008 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "3dd8c177ae928f7ccad2aa980bd8c747",
"text": "The quality and nature of knowledge that can be found by an automated knowledge-extraction system depends on its inputs. For systems that learn by reading text, the Web offers a breadth of topics and currency, but it also presents the problems of dealing with casual, unedited writing, non-textual inputs, and the mingling of languages. The results of extraction using the KNEXT system on two Web corpora – Wikipedia and a collection of weblog entries – indicate that, with automatic filtering of the output, even ungrammatical writing on arbitrary topics can yield an extensive knowledge base, which human judges find to be of good quality, with propositions receiving an average score across both corpora of 2.34 (where the range is 1 to 5 and lower is better) versus 3.00 for unfiltered output from the same sources.",
"title": ""
},
{
"docid": "ab5e3f7ad73d8143ae4dc4db40ebfade",
"text": "Knowledge is an essential organizational resource that provides a sustainable competitive advantage in a highly competitive and dynamic economy. SMEs must therefore consider how to promote the sharing of knowledge and expertise between experts who possess it and novices who need to know. Thus, they need to emphaisze and more effectively exploit knowledge-based resources that already exist within the firm. A key issue for the failure of any KM initiative to facilitate knowledge sharing is the lack of consideration of how the organizational and interpersonal context as well as individual characteristics influence knowledge sharing behaviors. Due to the potential benefits that could be realized from knowledge sharing, this study focused on knowledge sharing as one fundamental knowledge-centered activity. Based on the review of previous literature regarding knowledge sharing within and across firms, this study infer that knowledge sharing in a workplace can be influenced by the organizational, individuallevel and technological factors. This study proposes a conceptual model of knowledge sharing within a broad KM framework as an indispensable tool for SMEs internationalization. The model was assessed by using data gathered from employees and managers of twenty-five (25) different SMEs in Norway. The proposed model of knowledge sharing argues that knowledge sharing is influenced by the organizational, individual-level and technological factors. The study also found mediated effect between the organizational factors as well as between the technological factor and knowledge sharing behavior (i.e., being mediated by the individual-level factors). The test results were statistically significant. The organizational factors were acknowledged to have a highly significant role in ensuring that knowledge sharing takes place in the workplace, although the remaining factors play a critical in the knowledge sharing process. For instance, the technological factor may effectively help in creating, storing and distributing explicit knowledge in an accessible and expeditious manner. The implications of the empirical findings are also provided in this study.",
"title": ""
},
{
"docid": "bcf0156fdc95f431c550e0554cddbcbc",
"text": "This paper deals with incremental classification and its particular application to invoice classification. An improved version of an already existant incremental neural network called IGNG (incremental growing neural gas) is used for this purpose. This neural network tries to cover the space of data by adding or deleting neurons as data is fed to the system. The improved version of the IGNG, called I2GNG used local thresholds in order to create or delete neurons. Applied on invoice documents represented with graphs, I2GNG shows a recognition rate of 97.63%.",
"title": ""
},
{
"docid": "363381fbd6a5a19242a432ca80051bba",
"text": "Multimedia data on social websites contain rich semantics and are often accompanied with user-defined tags. To enhance Web media semantic concept retrieval, the fusion of tag-based and content-based models can be used, though it is very challenging. In this article, a novel semantic concept retrieval framework that incorporates tag removal and model fusion is proposed to tackle such a challenge. Tags with useful information can facilitate media search, but they are often imprecise, which makes it important to apply noisy tag removal (by deleting uncorrelated tags) to improve the performance of semantic concept retrieval. Therefore, a multiple correspondence analysis (MCA)-based tag removal algorithm is proposed, which utilizes MCA's ability to capture the relationships among nominal features and identify representative and discriminative tags holding strong correlations with the target semantic concepts. To further improve the retrieval performance, a novel model fusion method is also proposed to combine ranking scores from both tag-based and content-based models, where the adjustment of ranking scores, the reliability of models, and the correlations between the intervals divided on the ranking scores and the semantic concepts are all considered. Comparative results with extensive experiments on the NUS-WIDE-LITE as well as the NUS-WIDE-270K benchmark datasets with 81 semantic concepts show that the proposed framework outperforms baseline results and the other comparison methods with each component being evaluated separately.",
"title": ""
},
{
"docid": "b59e90e5d1fa3f58014dedeea9d5b6e4",
"text": "The results of vitrectomy in 240 consecutive cases of ocular trauma were reviewed. Of these cases, 71.2% were war injuries. Intraocular foreign bodies were present in 155 eyes, of which 74.8% were metallic and 61.9% ferromagnetic. Multivariate analysis identified the prognostic factors predictive of poor visual outcome, which included: (1) presence of an afferent pupillary defect; (2) double perforating injuries; and (3) presence of intraocular foreign bodies. Association of vitreous hemorrhage with intraocular foreign bodies was predictive of a poor prognosis. Eyes with foreign bodies retained in the anterior segment and vitreous had a better prognosis than those with foreign bodies embedded in the retina. Timing of vitrectomy and type of trauma had no significant effect on the final visual results. Prophylactic scleral buckling reduced the incidence of retinal detachment after surgery. Injuries confined to the cornea had a better prognosis than scleral injuries.",
"title": ""
},
{
"docid": "83cc283967bf6bc7f04729a5e08660e2",
"text": "Logicians have, by and large, engaged in the convenient fiction that sentences of natural languages (at least declarative sentences) are either true or false or, at worst, lack a truth value, or have a third value often interpreted as 'nonsense'. And most contemporary linguists who have thought seriously about semantics, especially formal semantics, have largely shared this fiction, primarily for lack of a sensible alternative. Yet students o f language, especially psychologists and linguistic philosophers, have long been attuned to the fact that natural language concepts have vague boundaries and fuzzy edges and that, consequently, natural language sentences will very often be neither true, nor false, nor nonsensical, but rather true to a certain extent and false to a certain extent, true in certain respects and false in other respects. It is common for logicians to give truth conditions for predicates in terms of classical set theory. 'John is tall' (or 'TALL(j) ' ) is defined to be true just in case the individual denoted by 'John' (or ' j ') is in the set of tall men. Putting aside the problem that tallness is really a relative concept (tallness for a pygmy and tallness for a basketball player are obviously different) 1, suppose we fix a population relative to which we want to define tallness. In contemporary America, how tall do you have to be to be tall? 5'8\"? 5'9\"? 5'10\"? 5'11\"? 6'? 6'2\"? Obviously there is no single fixed answer. How old do you have to be to be middle-aged? 35? 37? 39? 40? 42? 45? 50? Again the concept is fuzzy. Clearly any attempt to limit truth conditions for natural language sentences to true, false and \"nonsense' will distort the natural language concepts by portraying them as having sharply defined rather than fuzzily defined boundaries. Work dealing with such questions has been done in psychology. To take a recent example, Eleanor Rosch Heider (1971) took up the question of whether people perceive category membership as a clearcut issue or a matter of degree. For example, do people think of members of a given",
"title": ""
},
{
"docid": "f1efe8868f19ccbb4cf2ab5c08961cdb",
"text": "High peak-to-average power ratio (PAPR) has been one of the major drawbacks of orthogonal frequency division multiplexing (OFDM) systems. In this letter, we propose a novel PAPR reduction scheme, known as PAPR reducing network (PRNet), based on the autoencoder architecture of deep learning. In the PRNet, the constellation mapping and demapping of symbols on each subcarrier is determined adaptively through a deep learning technique, such that both the bit error rate (BER) and the PAPR of the OFDM system are jointly minimized. We used simulations to show that the proposed scheme outperforms conventional schemes in terms of BER and PAPR.",
"title": ""
},
{
"docid": "d88b845296811f881e46ed04e6caca31",
"text": "OBJECTIVES\nThis study evaluated how patient characteristics and duplex ultrasound findings influence management decisions of physicians with specific expertise in the field of chronic venous disease.\n\n\nMETHODS\nWorldwide, 346 physicians with a known interest and experience in phlebology were invited to participate in an online survey about management strategies in patients with great saphenous vein (GSV) reflux and refluxing tributaries. The survey included two basic vignettes representing a 47 year old healthy male with GSV reflux above the knee and a 27 year old healthy female with a short segment refluxing GSV (CEAP classification C2sEpAs2,5Pr in both cases). Participants could choose one or more treatment options. Subsequently, the basic vignettes were modified according to different patient characteristics (e.g. older age, morbid obesity, anticoagulant treatment, peripheral arterial disease), clinical class (C4, C6), and duplex ultrasound findings (e.g. competent terminal valve, larger or smaller GSV diameter, presence of focal dilatation). The authors recorded the distribution of chosen management strategies; adjustment of strategies according to characteristics; and follow up strategies.\n\n\nRESULTS\nA total of 211 physicians (68% surgeons, 12% dermatologists, 12% angiologists, and 8% phlebologists) from 36 different countries completed the survey. In the basic case vignettes 1 and 2, respectively, 55% and 40% of participants proposed to perform endovenous thermal ablation, either with or without concomitant phlebectomies (p < .001). Looking at the modified case vignettes, between 20% and 64% of participants proposed to adapt their management strategy, opting for either a more or a less invasive treatment, depending on the modification introduced. The distribution of chosen management strategies changed significantly for all modified vignettes (p < .05).\n\n\nCONCLUSIONS\nThis study illustrates the worldwide variety in management preferences for treating patients with varicose veins (C2-C6). In clinical practice, patient related and duplex ultrasound related factors clearly influence therapeutic options.",
"title": ""
},
{
"docid": "c1c044c7ede9cfde42878ea162d1f457",
"text": "When designing the rotor of a radial flux permanent magnet synchronous machine (PMSM), one key part is the sizing of the permanent magnets (PM) in the rotor to produce the required air-gap flux density. This paper focuses on the effect that different coefficients have on the air-gap flux density of four radial flux PMSM rotor topologies. A direct connection is shown between magnet volume and flux producing magnet area with the aid of static finite element model simulations of the four rotor topologies. With this knowledge, the calculation of the flux producing magnet area can be done with ease once the minimum magnet volume has been determined. This technique can also be applied in the design of line-start PMSM rotors where the rotor area is limited.",
"title": ""
},
{
"docid": "082f19bb94536f61a7c9e4edd9a9c829",
"text": "Phytoplankton abundance and composition and the cyanotoxin, microcystin, were examined relative to environmental parameters in western Lake Erie during late-summer (2003–2005). Spatially explicit distributions of phytoplankton occurred on an annual basis, with the greatest chlorophyll (Chl) a concentrations occurring in waters impacted by Maumee River inflows and in Sandusky Bay. Chlorophytes, bacillariophytes, and cyanobacteria contributed the majority of phylogenetic-group Chl a basin-wide in 2003, 2004, and 2005, respectively. Water clarity, pH, and specific conductance delineated patterns of group Chl a, signifying that water mass movements and mixing were primary determinants of phytoplankton accumulations and distributions. Water temperature, irradiance, and phosphorus availability delineated patterns of cyanobacterial biovolumes, suggesting that biotic processes (most likely, resource-based competition) controlled cyanobacterial abundance and composition. Intracellular microcystin concentrations corresponded to Microcystis abundance and environmental parameters indicative of conditions coincident with biomass accumulations. It appears that environmental parameters regulate microcystin indirectly, via control of cyanobacterial abundance and distribution.",
"title": ""
},
{
"docid": "43269c32b765b0f5d5d0772e0b1c5906",
"text": "Silver nanoparticles (AgNPs) have been synthesized by Lantana camara leaf extract through simple green route and evaluated their antibacterial and catalytic activities. The leaf extract (LE) itself acts as both reducing and stabilizing agent at once for desired nanoparticle synthesis. The colorless reaction mixture turns to yellowish brown attesting the AgNPs formation and displayed UV-Vis absorption spectra. Structural analysis confirms the crystalline nature and formation of fcc structured metallic silver with majority (111) facets. Morphological studies elicit the formation of almost spherical shaped nanoparticles and as AgNO3 concentration is increased, there is an increment in the particle size. The FTIR analysis evidences the presence of various functional groups of biomolecules of LE is responsible for stabilization of AgNPs. Zeta potential measurement attests the higher stability of synthesized AgNPs. The synthesized AgNPs exhibited good antibacterial activity when tested against Escherichia coli, Pseudomonas spp., Bacillus spp. and Staphylococcus spp. using standard Kirby-Bauer disc diffusion assay. Furthermore, they showed good catalytic activity on the reduction of methylene blue by L. camara extract which is monitored and confirmed by the UV-Vis spectrophotometer.",
"title": ""
},
{
"docid": "966f5ff1ef057f2d19d10865eef35728",
"text": "Recognition of characters in natural images is a challenging task due to the complex background, variations of text size and perspective distortion, etc. Traditional optical character recognition (OCR) engine cannot perform well on those unconstrained text images. A novel technique is proposed in this paper that makes use of convolutional cooccurrence histogram of oriented gradient (ConvCoHOG), which is more robust and discriminative than both the histogram of oriented gradient (HOG) and the co-occurrence histogram of oriented gradients (CoHOG). In the proposed technique, a more informative feature is constructed by exhaustively extracting features from every possible image patches within character images. Experiments on two public datasets including the ICDAr 2003 Robust Reading character dataset and the Street View Text (SVT) dataset, show that our proposed character recognition technique obtains superior performance compared with state-of-the-art techniques.",
"title": ""
},
{
"docid": "4768b338044e38949f50c5856bc1a07c",
"text": "Radio-frequency identification (RFID) technology provides an effective tool for managing traceability along food supply chains. This is because it allows automatic digital registration of data, and therefore reduces errors and enables the availability of information on demand. A complete traceability system can be developed in the wine production sector by joining this technology with the use of wireless sensor networks for monitoring at the vineyards. A proposal of such a merged solution for a winery in Spain has been designed, deployed in an actual environment, and evaluated. It was shown that the system could provide a competitive advantage to the company by improving visibility of the processes performed and the associated control over product quality. Much emphasis has been placed on minimizing the impact of the new system in the current activities.",
"title": ""
},
{
"docid": "3b074e9574838169881e212cb5899d27",
"text": "The introduction of inexpensive 3D data acquisition devices has promisingly facilitated the wide availability and popularity of 3D point cloud, which attracts more attention on the effective extraction of novel 3D point cloud descriptors for accurate and efficient of 3D computer vision tasks. However, how to develop discriminative and robust feature descriptors from various point clouds remains a challenging task. This paper comprehensively investigates the existing approaches for extracting 3D point cloud descriptors which are categorized into three major classes: local-based descriptor, global-based descriptor and hybrid-based descriptor. Furthermore, experiments are carried out to present a thorough evaluation of performance of several state-of-the-art 3D point cloud descriptors used widely in practice in terms of descriptiveness, robustness and efficiency.",
"title": ""
},
{
"docid": "261ab16552e2f7cfcdf89971a066a812",
"text": "The paper demonstrates that in a multi-voltage level (medium and low-voltages) distribution system the incident energy can be reduced to 8 cal/cm2, or even less, (Hazard risk category, HRC 2), so that a PPE outfit of greater than 2 is not required. This is achieved with the current state of the art equipment and protective devices. It is recognized that in the existing distribution systems, not specifically designed with this objective, it may not be possible to reduce arc flash hazard to this low level, unless major changes in the system design and protection are made. A typical industrial distribution system is analyzed, and tables and time coordination plots are provided to support the analysis. Unit protection schemes and practical guidelines for arc flash reduction are provided. The methodology of IEEE 1584 [1] is used for the analyses.",
"title": ""
},
{
"docid": "05540e05370b632f8b8cd165ae7d1d29",
"text": "We describe FreeCam a system capable of generating live free-viewpoint video by simulating the output of a virtual camera moving through a dynamic scene. The FreeCam sensing hardware consists of a small number of static color video cameras and state-of-the-art Kinect depth sensors, and the FreeCam software uses a number of advanced GPU processing and rendering techniques to seamlessly merge the input streams, providing a pleasant user experience. A system such as FreeCam is critical for applications such as telepresence, 3D video-conferencing and interactive 3D TV. FreeCam may also be used to produce multi-view video, which is critical to drive newgeneration autostereoscopic lenticular 3D displays.",
"title": ""
},
{
"docid": "65af21566422d9f0a11f07d43d7ead13",
"text": "Scene labeling is a challenging computer vision task. It requires the use of both local discriminative features and global context information. We adopt a deep recurrent convolutional neural network (RCNN) for this task, which is originally proposed for object recognition. Different from traditional convolutional neural networks (CNN), this model has intra-layer recurrent connections in the convolutional layers. Therefore each convolutional layer becomes a two-dimensional recurrent neural network. The units receive constant feed-forward inputs from the previous layer and recurrent inputs from their neighborhoods. While recurrent iterations proceed, the region of context captured by each unit expands. In this way, feature extraction and context modulation are seamlessly integrated, which is different from typical methods that entail separate modules for the two steps. To further utilize the context, a multi-scale RCNN is proposed. Over two benchmark datasets, Standford Background and Sift Flow, the model outperforms many state-of-the-art models in accuracy and efficiency.",
"title": ""
}
] |
scidocsrr
|
3743db53598a7771508150db2f4a34a1
|
Towards a Robust Solution to People Counting
|
[
{
"docid": "0a75a45141a7f870bba32bed890da782",
"text": "Surveillance systems for public security are going beyond the conventional CCTV. A new generation of systems relies on image processing and computer vision techniques, deliver more ready-to-use information, and provide assistance for early detection of unusual events. Crowd density is a useful source of information because unusual crowdedness is often related to unusual events. Previous works on crowd density estimation either ignore perspective distortion or perform the correction based on incorrect formulation. Also there is no investigation on whether the geometric correction derived for the ground plane can be applied to human objects standing upright to the plane. This paper derives the relation for geometric correction for the ground plane and proves formally that it can be directly applied to all the foreground pixels. We also propose a very efficient implementation because it is important for a real-time application. Finally a time-adaptive criterion for unusual crowdedness detection is described.",
"title": ""
},
{
"docid": "af752d0de962449acd9a22608bd7baba",
"text": "Ð R is a real time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. R employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. R can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. R can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320Â240 resolution images on a 400 Mhz dual-Pentium II PC.",
"title": ""
}
] |
[
{
"docid": "34bd9a54a1aeaf82f7c4b27047cb2f49",
"text": "Choosing a good location when opening a new store is crucial for the future success of a business. Traditional methods include offline manual survey, which is very time consuming, and analytic models based on census data, which are unable to adapt to the dynamic market. The rapid increase of the availability of big data from various types of mobile devices, such as online query data and offline positioning data, provides us with the possibility to develop automatic and accurate data-driven prediction models for business store placement. In this paper, we propose a Demand Distribution Driven Store Placement (D3SP) framework for business store placement by mining search query data from Baidu Maps. D3SP first detects the spatial-temporal distributions of customer demands on different business services via query data from Baidu Maps, the largest online map search engine in China, and detects the gaps between demand and supply. Then we determine candidate locations via clustering such gaps. In the final stage, we solve the location optimization problem by predicting and ranking the number of customers. We not only deploy supervised regression models to predict the number of customers, but also learn to rank models to directly rank the locations. We evaluate our framework on various types of businesses in real-world cases, and the experiments results demonstrate the effectiveness of our methods. D3SP as the core function for store placement has already been implemented as a core component of our business analytics platform and could be potentially used by chain store merchants on Baidu Nuomi.",
"title": ""
},
{
"docid": "ea50fcb63d7eeb37a3acd47ce4a7a572",
"text": "Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way for colorectal cancer prevention and diagnosis. Traditional manual screening is time consuming, operator dependent, and error prone; hence, automated detection approach is highly demanded in clinical practice. However, automated polyp detection is very challenging due to high intraclass variations in polyp size, color, shape, and texture, and low interclass variations between polyps and hard mimics. In this paper, we propose a novel offline and online three-dimensional (3-D) deep learning integration framework by leveraging the 3-D fully convolutional network (3D-FCN) to tackle this challenging problem. Compared with the previous methods employing hand-crafted features or 2-D convolutional neural network, the 3D-FCN is capable of learning more representative spatio-temporal features from colonoscopy videos, and hence has more powerful discrimination capability. More importantly, we propose a novel online learning scheme to deal with the problem of limited training data by harnessing the specific information of an input video in the learning process. We integrate offline and online learning to effectively reduce the number of false positives generated by the offline network and further improve the detection performance. Extensive experiments on the dataset of MICCAI 2015 Challenge on Polyp Detection demonstrated the better performance of our method when compared with other competitors.",
"title": ""
},
{
"docid": "4ad535f3b4f1afba4497a4026236424e",
"text": "We study the problem of noninvasively estimating Blood Pressure (BP) without using a cuff, which is attractive for continuous monitoring of BP over Body Area Networks. It has been shown that the Pulse Arrival Time (PAT) measured as the delay between the ECG peak and a point in the finger PPG waveform can be used to estimate systolic and diastolic BP. Our aim is to evaluate the performance of such a method using the available MIMIC database, while at the same time improve the performance of existing techniques. We propose an algorithm to estimate BP from a combination of PAT and heart rate, showing improvement over PAT alone. We also show how the method achieves recalibration using an RLS adaptive algorithm. Finally, we address the use case of ECG and PPG sensors wirelessly communicating to an aggregator and study the effect of skew and jitter on BP estimation.",
"title": ""
},
{
"docid": "766b726231f9d9540deb40183b49a655",
"text": "This paper presents a survey of georeferenced point clouds. Concentration is, on the one hand, put on features, which originate in the measurement process themselves, and features derived by processing the point cloud. On the other hand, approaches for the processing of georeferenced point clouds are reviewed. This includes the data structures, but also spatial processing concepts. We suggest a categorization of features into levels that reflect the amount of processing. Point clouds are found across many disciplines, which is reflected in the versatility of the literature suggesting specific features.",
"title": ""
},
{
"docid": "4be9ae4bc6fb01e78d550bedf199d0b0",
"text": "Protein timing is a popular dietary strategy designed to optimize the adaptive response to exercise. The strategy involves consuming protein in and around a training session in an effort to facilitate muscular repair and remodeling, and thereby enhance post-exercise strength- and hypertrophy-related adaptations. Despite the apparent biological plausibility of the strategy, however, the effectiveness of protein timing in chronic training studies has been decidedly mixed. The purpose of this paper therefore was to conduct a multi-level meta-regression of randomized controlled trials to determine whether protein timing is a viable strategy for enhancing post-exercise muscular adaptations. The strength analysis comprised 478 subjects and 96 ESs, nested within 41 treatment or control groups and 20 studies. The hypertrophy analysis comprised 525 subjects and 132 ESs, nested with 47 treatment or control groups and 23 studies. A simple pooled analysis of protein timing without controlling for covariates showed a small to moderate effect on muscle hypertrophy with no significant effect found on muscle strength. In the full meta-regression model controlling for all covariates, however, no significant differences were found between treatment and control for strength or hypertrophy. The reduced model was not significantly different from the full model for either strength or hypertrophy. With respect to hypertrophy, total protein intake was the strongest predictor of ES magnitude. These results refute the commonly held belief that the timing of protein intake in and around a training session is critical to muscular adaptations and indicate that consuming adequate protein in combination with resistance exercise is the key factor for maximizing muscle protein accretion.",
"title": ""
},
{
"docid": "7d82c8d8fae92b9ac2a3d63f74e0b973",
"text": "The security of sensitive data and the safety of control signal are two core issues in industrial control system (ICS). However, the prevalence of USB storage devices brings a great challenge on protecting ICS in those respects. Unfortunately, there is currently no solution especially for ICS to provide a complete defense against data transmission between untrusted USB storage devices and critical equipment without forbidding normal USB device function. This paper proposes a trust management scheme of USB storage devices for ICS (TMSUI). By fully considering the background of application scenarios, TMSUI is designed based on security chip to achieve authoring a certain USB storage device to only access some exact protected terminals in ICS for a particular period of time. The issues about digital forensics and revocation of authorization are discussed. The prototype system is finally implemented and the evaluation on it indicates that TMSUI effectively meets the security goals with high compatibility and good performance.",
"title": ""
},
{
"docid": "c063474634eb427cf0215b4500182f8c",
"text": "Factorization Machines offer good performance and useful embeddings of data. However, they are costly to scale to large amounts of data and large numbers of features. In this paper we describe DiFacto, which uses a refined Factorization Machine model with sparse memory adaptive constraints and frequency adaptive regularization. We show how to distribute DiFacto over multiple machines using the Parameter Server framework by computing distributed subgradients on minibatches asynchronously. We analyze its convergence and demonstrate its efficiency in computational advertising datasets with billions examples and features.",
"title": ""
},
{
"docid": "cfb06477edaa39f53b1b892cdfc1621a",
"text": "This paper presents ray casting as the methodological basis for a CAD/CAM solid modeling system. Solid objects are modeled by combining primitive solids, such as blocks and cylinders, using the set operators union, intersection, and difference. To visualize and analyze the composite solids modeled, virtual light rays are cast as probes. By virtue of its simplicity, ray casting is reliable and extensible. The most difficult mathematical problem is finding linesurface intersection points. So surfaces such as planes, quad&, tori, and probably even parametric surface patches may bound the primitive solids. The adequacy and efficiency of ray casting are issues addressed here. A fast picture generation capability for interactive modeling is the biggest challenge. New methods are presented, accompanied by sample pictures and CPU times, to meet the challenge.",
"title": ""
},
{
"docid": "3d332b3ae4487a7272ae1e2204965f98",
"text": "Robots are increasingly present in modern industry and also in everyday life. Their applications range from health-related situations, for assistance to elderly people or in surgical operations, to automatic and driver-less vehicles (on wheels or flying) or for driving assistance. Recently, an interest towards robotics applied in agriculture and gardening has arisen, with applications to automatic seeding and cropping or to plant disease control, etc. Autonomous lawn mowers are succesful market applications of gardening robotics. In this paper, we present a novel robot that is developed within the TrimBot2020 project, funded by the EU H2020 program. The project aims at prototyping the first outdoor robot for automatic bush trimming and rose pruning.",
"title": ""
},
{
"docid": "eafe4aa1aada03bad956d8bed16546dd",
"text": "The increasing prevalence of male-to-female (MtF) transsexualism in Western countries is largely due to the growing number of MtF transsexuals who have a history of sexual arousal with cross-dressing or cross-gender fantasy. Ray Blanchard proposed that these transsexuals have a paraphilia he called autogynephilia, which is the propensity to be sexually aroused by the thought or image of oneself as female. Autogynephilia defines a transsexual typology and provides a theory of transsexual motivation, in that Blanchard proposed that MtF transsexuals are either sexually attracted exclusively to men (homosexual) or are sexually attracted primarily to the thought or image of themselves as female (autogynephilic), and that autogynephilic transsexuals seek sex reassignment to actualize their autogynephilic desires. Despite growing professional acceptance, Blanchard's formulation is rejected by some MtF transsexuals as inconsistent with their experience. This rejection, I argue, results largely from the misconception that autogynephilia is a purely erotic phenomenon. Autogynephilia can more accurately be conceptualized as a type of sexual orientation and as a variety of romantic love, involving both erotic and affectional or attachment-based elements. This broader conception of autogynephilia addresses many of the objections to Blanchard's theory and is consistent with a variety of clinical observations concerning autogynephilic MtF transsexualism.",
"title": ""
},
{
"docid": "5906d20bea1c95399395d045f84f11c9",
"text": "Constructive interference (CI) enables concurrent transmissions to interfere non-destructively, so as to enhance network concurrency. In this paper, we propose deliberate synchronized constructive interference (Disco), which ensures concurrent transmissions of an identical packet to synchronize more precisely than traditional CI. Disco envisions concurrent transmissions to positively interfere at the receiver, and potentially allows orders of magnitude reductions in energy consumption and improvements in link quality. We also theoretically introduce a sufficient condition to construct Disco with IEEE 802.15.4 radio for the first time. Moreover, we propose Triggercast, a distributed middleware service, and show it is feasible to generate Disco on real sensor network platforms like TMote Sky. To synchronize transmissions of multiple senders at the chip level, Triggercast effectively compensates propagation and radio processing delays, and has 95th percentile synchronization errors of at most 250 ns. Triggercast also intelligently decides which co-senders to participate in simultaneous transmissions, and aligns their transmission time to maximize the overall link Packet Reception Ratio (PRR), under the condition of maximal system robustness. Extensive experiments in real testbeds demonstrate that Triggercast significantly improves PRR from 5 to 70 percent with seven concurrent senders. We also demonstrate that Triggercast provides 1.3χ PRR performance gains in average, when it is integrated with existing data forwarding protocols.",
"title": ""
},
{
"docid": "7a6d32d50e3b1be70889fc85ffdcac45",
"text": "Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and associated regularization term are obtained using fast symmetry preserving matrix balancing. This results in some desired spectral properties for the normalized Laplacian such as being symmetric, positive semidefinite, and returning zero vector when applied to a constant image. Our algorithm comprises of outer and inner iterations, where in each outer iteration, the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to render the spectral analysis for the solutions of the corresponding linear equations. In addition, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.",
"title": ""
},
{
"docid": "2c5f0763b6c4888babc04af50bb89aaf",
"text": "A 1.8-V 14-b 12-MS/s pseudo-differential pipeline analog-to-digital converter (ADC) using a passive capacitor error-averaging technique and a nested CMOS gain-boosting technique is described. The converter is optimized for low-voltage low-power applications by applying an optimum stage-scaling algorithm at the architectural level and an opamp and comparator sharing technique at the circuit level. Prototyped in a 0.18-/spl mu/m 6M-1P CMOS process, this converter achieves a peak signal-to-noise plus distortion ratio (SNDR) of 75.5 dB and a 103-dB spurious-free dynamic range (SFDR) without trimming, calibration, or dithering. With a 1-MHz analog input, the maximum differential nonlinearity is 0.47 LSB and the maximum integral nonlinearity is 0.54 LSB. The large analog bandwidth of the front-end sample-and-hold circuit is achieved using bootstrapped thin-oxide transistors as switches, resulting in an SFDR of 97 dB when a 40-MHz full-scale input is digitized. The ADC occupies an active area of 10 mm/sup 2/ and dissipates 98 mW.",
"title": ""
},
{
"docid": "79465d290ab299b9d75e9fa617d30513",
"text": "In this paper we describe computational experience in solving unconstrained quadratic zero-one problems using a branch and bound algorithm. The algorithm incorporates dynamic preprocessing techniques for forcing variables and heuristics to obtain good starting points. Computational results and comparisons with previous studies on several hundred test problems with dimensions up to 200 demonstrate the efficiency of our algorithm. In dieser Arbeit beschreiben wir rechnerische Erfahrungen bei der Lösung von unbeschränkten quadratischen Null-Eins-Problemen mit einem “Branch and Bound”-Algorithmus. Der Algorithmus erlaubt dynamische Vorbereitungs-Techniken zur Erzwingung ausgewählter Variablen und Heuristiken zur Wahl von guten Startpunkten. Resultate von Berechnungen und Vergleiche mit früheren Arbeiten mit mehreren hundert Testproblemen mit Dimensionen bis 200 zeigen die Effizienz unseres Algorithmus.",
"title": ""
},
{
"docid": "047c36e2650b8abde75cccaeb0368c88",
"text": "Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-build 3D fully convolutional network (FCN) that can process a 3D image including the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture; one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 ± 3.8 (range [79.8, 94.8])% in testing, achieving the new state-of-the-art performance in pancreas segmentation on this dataset.",
"title": ""
},
{
"docid": "3621dd85dc4ba3007cfa8ec1017b4e96",
"text": "The current lack of knowledge about the effect of maternally administered drugs on the developing fetus is a major public health concern worldwide. The first critical step toward predicting the safety of medications in pregnancy is to screen drug compounds for their ability to cross the placenta. However, this type of preclinical study has been hampered by the limited capacity of existing in vitro and ex vivo models to mimic physiological drug transport across the maternal-fetal interface in the human placenta. Here the proof-of-principle for utilizing a microengineered model of the human placental barrier to simulate and investigate drug transfer from the maternal to the fetal circulation is demonstrated. Using the gestational diabetes drug glyburide as a model compound, it is shown that the microphysiological system is capable of reconstituting efflux transporter-mediated active transport function of the human placental barrier to limit fetal exposure to maternally administered drugs. The data provide evidence that the placenta-on-a-chip may serve as a new screening platform to enable more accurate prediction of drug transport in the human placenta.",
"title": ""
},
{
"docid": "29cbdeb95a221820a6425e1249763078",
"text": "The concept of “Industry 4.0” that covers the topics of Internet of Things, cyber-physical system, and smart manufacturing, is a result of increasing demand of mass customized manufacturing. In this paper, a smart manufacturing framework of Industry 4.0 is presented. In the proposed framework, the shop-floor entities (machines, conveyers, etc.), the smart products and the cloud can communicate and negotiate interactively through networks. The shop-floor entities can be considered as agents based on the theory of multi-agent system. These agents implement dynamic reconfiguration in a collaborative manner to achieve agility and flexibility. However, without global coordination, problems such as load-unbalance and inefficiency may occur due to different abilities and performances of agents. Therefore, the intelligent evaluation and control algorithms are proposed to reduce the load-unbalance with the assistance of big data feedback. The experimental results indicate that the presented algorithms can easily be deployed in smart manufacturing system and can improve both load-balance and efficiency.",
"title": ""
},
{
"docid": "5744f6f5d6b2f0f5f150ec939d1f8c74",
"text": "We introduce a novel active learning framework for video annotation. By judiciously choosing which frames a user should annotate, we can obtain highly accurate tracks with minimal user effort. We cast this problem as one of active learning, and show that we can obtain excellent performance by querying frames that, if annotated, would produce a large expected change in the estimated object track. We implement a constrained tracker and compute the expected change for putative annotations with efficient dynamic programming algorithms. We demonstrate our framework on four datasets, including two benchmark datasets constructed with key frame annotations obtained by Amazon Mechanical Turk. Our results indicate that we could obtain equivalent labels for a small fraction of the original cost.",
"title": ""
},
{
"docid": "db8d146ad8e62fd7a558703ef20a6330",
"text": "In this paper, we focus on the problem of completion of multidimensional arrays (also referred to as tensors), in particular three-dimensional (3-D) arrays, from limited sampling. Our approach is based on a recently proposed tensor algebraic framework where 3-D tensors are treated as linear operators over the set of 2-D tensors. In this framework, one can obtain a factorization for 3-D data, referred to as the tensor singular value decomposition (t-SVD), which is similar to the SVD for matrices. t-SVD results in a notion of rank referred to as the tubal-rank. Using this approach we consider the problem of sampling and recovery of 3-D arrays with low tubal-rank. We show that by solving a convex optimization problem, which minimizes a convex surrogate to the tubal-rank, one can guarantee exact recovery with high probability as long as number of samples is of the order <inline-formula><tex-math notation=\"LaTeX\">$O(rnk \\log (nk))$ </tex-math></inline-formula>, given a tensor of size <inline-formula><tex-math notation=\"LaTeX\">$n\\times n\\times k$ </tex-math></inline-formula> with tubal-rank <inline-formula><tex-math notation=\"LaTeX\">$r$</tex-math></inline-formula> . The conditions under which this result holds are similar to the incoherence conditions for low-rank matrix completion under random sampling. The difference is that we define incoherence under the algebraic setup of t-SVD, which is different from the standard matrix incoherence conditions. We also compare the numerical performance of the proposed algorithm with some state-of-the-art approaches on real-world datasets.",
"title": ""
},
{
"docid": "e5cd0bdffd94215aa19a5fc29a1b6753",
"text": "Anhedonia is a core symptom of major depressive disorder (MDD), long thought to be associated with reduced dopaminergic function. However, most antidepressants do not act directly on the dopamine system and all antidepressants have a delayed full therapeutic effect. Recently, it has been proposed that antidepressants fail to alter dopamine function in antidepressant unresponsive MDD. There is compelling evidence that dopamine neurons code a specific phasic (short duration) reward-learning signal, described by temporal difference (TD) theory. There is no current evidence for other neurons coding a TD reward-learning signal, although such evidence may be found in time. The neuronal substrates of the TD signal were not explored in this study. Phasic signals are believed to have quite different properties to tonic (long duration) signals. No studies have investigated phasic reward-learning signals in MDD. Therefore, adults with MDD receiving long-term antidepressant medication, and comparison controls both unmedicated and acutely medicated with the antidepressant citalopram, were scanned using fMRI during a reward-learning task. Three hypotheses were tested: first, patients with MDD have blunted TD reward-learning signals; second, controls given an antidepressant acutely have blunted TD reward-learning signals; third, the extent of alteration in TD signals in major depression correlates with illness severity ratings. The results supported the hypotheses. Patients with MDD had significantly reduced reward-learning signals in many non-brainstem regions: ventral striatum (VS), rostral and dorsal anterior cingulate, retrosplenial cortex (RC), midbrain and hippocampus. However, the TD signal was increased in the brainstem of patients. As predicted, acute antidepressant administration to controls was associated with a blunted TD signal, and the brainstem TD signal was not increased by acute citalopram administration. In a number of regions, the magnitude of the abnormal signals in MDD correlated with illness severity ratings. The findings highlight the importance of phasic reward-learning signals, and are consistent with the hypothesis that antidepressants fail to normalize reward-learning function in antidepressant-unresponsive MDD. Whilst there is evidence that some antidepressants acutely suppress dopamine function, the long-term action of virtually all antidepressants is enhanced dopamine agonist responsiveness. This distinction might help to elucidate the delayed action of antidepressants. Finally, analogous to recent work in schizophrenia, the finding of abnormal phasic reward-learning signals in MDD implies that an integrated understanding of symptoms and treatment mechanisms is possible, spanning physiology, phenomenology and pharmacology.",
"title": ""
}
] |
scidocsrr
|
f7aa9fe40d401b8e23e6d58dde8991f4
|
Music Similarity Measures: What's the use?
|
[
{
"docid": "59b928fab5d53519a0a020b7461690cf",
"text": "Musical genres are categorical descriptions that are used to describe music. They are commonly used to structure the increasing amounts of music available in digital form on the Web and are important for music information retrieval. Genre categorization for audio has traditionally been performed manually. A particular musical genre is characterized by statistical properties related to the instrumentation, rhythmic structure and form of its members. In this work, algorithms for the automatic genre categorization of audio signals are described. More specifically, we propose a set of features for representing texture and instrumentation. In addition a novel set of features for representing rhythmic structure and strength is proposed. The performance of those feature sets has been evaluated by training statistical pattern recognition classifiers using real world audio collections. Based on the automatic hierarchical genre classification two graphical user interfaces for browsing and interacting with large audio collections have been developed.",
"title": ""
}
] |
[
{
"docid": "6d70ac4457983c7df8896a9d31728015",
"text": "This brief presents a differential transmit-receive (T/R) switch integrated in a 0.18-mum standard CMOS technology for wireless applications up to 6 GHz. This switch design employs fully differential architecture to accommodate the design challenge of differential transceivers and improve the linearity performance. It exhibits less than 2-dB insertion loss, higher than 15-dB isolation, in a 60 mumtimes40 mum area. 15-dBm power at 1-dB compression point (P1dB) is achieved without using additional techniques to enhance the linearity. This switch is suitable for differential transceiver front-ends with a moderate power level. To the best of the authors' knowledge, this is the first reported differential T/R switch in CMOS for multistandard and wideband wireless applications",
"title": ""
},
{
"docid": "c0ddc4b83145a1ee7b252d65066b8969",
"text": "Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Combining such an embedding model with logic rules has recently attracted increasing attention. Most previous attempts made a one-time injection of logic rules, ignoring the interactive nature between embedding learning and logical inference. And they focused only on hard rules, which always hold with no exception and usually require extensive manual effort to create or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a novel paradigm of KG embedding with iterative guidance from soft rules. RUGE enables an embedding model to learn simultaneously from 1) labeled triples that have been directly observed in a given KG, 2) unlabeled triples whose labels are going to be predicted iteratively, and 3) soft rules with various confidence levels extracted automatically from the KG. In the learning process, RUGE iteratively queries rules to obtain soft labels for unlabeled triples, and integrates such newly labeled triples to update the embedding model. Through this iterative procedure, knowledge embodied in logic rules may be better transferred into the learned embeddings. We evaluate RUGE in link prediction on Freebase and YAGO. Experimental results show that: 1) with rule knowledge injected iteratively, RUGE achieves significant and consistent improvements over state-of-the-art baselines; and 2) despite their uncertainties, automatically extracted soft rules are highly beneficial to KG embedding, even those with moderate confidence levels. The code and data used for this paper can be obtained from https://github.com/iieir-km/RUGE.",
"title": ""
},
{
"docid": "2793e8eb1410b2379a8a416f0560df0a",
"text": "Alzheimer’s disease (AD) transgenic mice have been used as a standard AD model for basic mechanistic studies and drug discovery. These mouse models showed symbolic AD pathologies including β-amyloid (Aβ) plaques, gliosis and memory deficits but failed to fully recapitulate AD pathogenic cascades including robust phospho tau (p-tau) accumulation, clear neurofibrillary tangles (NFTs) and neurodegeneration, solely driven by familial AD (FAD) mutation(s). Recent advances in human stem cell and three-dimensional (3D) culture technologies made it possible to generate novel 3D neural cell culture models that recapitulate AD pathologies including robust Aβ deposition and Aβ-driven NFT-like tau pathology. These new 3D human cell culture models of AD hold a promise for a novel platform that can be used for mechanism studies in human brain-like environment and high-throughput drug screening (HTS). In this review, we will summarize the current progress in recapitulating AD pathogenic cascades in human neural cell culture models using AD patient-derived induced pluripotent stem cells (iPSCs) or genetically modified human stem cell lines. We will also explain how new 3D culture technologies were applied to accelerate Aβ and p-tau pathologies in human neural cell cultures, as compared the standard two-dimensional (2D) culture conditions. Finally, we will discuss a potential impact of the human 3D human neural cell culture models on the AD drug-development process. These revolutionary 3D culture models of AD will contribute to accelerate the discovery of novel AD drugs.",
"title": ""
},
{
"docid": "c43b77b56a6e2cb16a6b85815449529d",
"text": "We propose a new method for clustering multivariate time series. A univariate time series can be represented by a fixed-length vector whose components are statistical features of the time series, capturing the global structure. These descriptive vectors, one for each component of the multivariate time series, are concatenated, before being clustered using a standard fast clustering algorithm such as k-means or hierarchical clustering. Such statistical feature extraction also serves as a dimension-reduction procedure for multivariate time series. We demonstrate the effectiveness and simplicity of our proposed method by clustering human motion sequences: dynamic and high-dimensional multivariate time series. The proposed method based on univariate time series structure and statistical metrics provides a novel, yet simple and flexible way to cluster multivariate time series data efficiently with promising accuracy. The success of our method on the case study suggests that clustering may be a valuable addition to the tools available for human motion pattern recognition research.",
"title": ""
},
{
"docid": "4a5131ec6e40545765e400d738441376",
"text": "Experiments have been performed to investigate the operating modes of a generator of 2/spl times/500-ps bipolar high-voltage, nanosecond pulses with the double amplitude (270 kV) close to that of the charge pulse of the RADAN-303 nanosecond driver. The generator contains an additional peaker shortening the risetime of the starting pulse and a pulse-forming line with two untriggered gas gaps operating with a total jitter of 200 ps.",
"title": ""
},
{
"docid": "ae7117416b4a07d2b15668c2c8ac46e3",
"text": "We present OntoWiki, a tool providing support for agile, distributed knowledge engineering scenarios. OntoWiki facilitates the visual presentation of a knowledge base as an information map, with different views on instance data. It enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents. It fosters social collaboration aspects by keeping track of changes, allowing comments and discussion on every single part of a knowledge base, enabling to rate and measure the popularity of content and honoring the activity of users. OntoWiki enhances the browsing and retrieval by offering semantic enhanced search strategies. All these techniques are applied with the ultimate goal of decreasing the entrance barrier for projects and domain experts to collaborate using semantic technologies. In the spirit of the Web 2.0 OntoWiki implements an ”architecture of participation” that allows users to add value to the application as they use it. It is available as open-source software and a demonstration platform can be accessed at http://3ba.se.",
"title": ""
},
{
"docid": "d95ae6900ae353fa0ed32167e0c23f16",
"text": "As well known, fully convolutional network (FCN) becomes the state of the art for semantic segmentation in deep learning. Currently, new hardware designs for deep learning have focused on improving the speed and parallelism of processing units. This motivates memristive solutions, in which the memory units (i.e., memristors) have computing capabilities. However, designing a memristive deep learning network is challenging, since memristors work very differently from the traditional CMOS hardware. This paper proposes a complete solution to implement memristive FCN (MFCN). Voltage selectors are firstly utilized to realize max-pooling layers with the detailed MFCN deconvolution hardware circuit by the massively parallel structure, which is effective since the deconvolution kernel and the input feature are similar in size. Then, deconvolution calculation is realized by converting the image into a column matrix and converting the deconvolution kernel into a sparse matrix. Meanwhile, the convolution realization in MFCN is also studied with the traditional sliding window method rather than the large matrix theory to overcome the shortcoming of low efficiency. Moreover, the conductance values of memristors are predetermined in Tensorflow with ex-situ training method. In other words, we train MFCN in software, then download the trained parameters to the simulink system by writing memristor. The effectiveness of the designed MFCN scheme is verified with improved accuracy over some existing machine learning methods. The proposed scheme is also adapt to LFW dataset with three-classification tasks. However, the MFCN training is time consuming as the computational burden is heavy with thousands of weight parameters with just six layers. In future, it is necessary to sparsify the weight parameters and layers of the MFCN network to speed up computing.",
"title": ""
},
{
"docid": "60d90ae1407c86559af63f20536202dc",
"text": "TCP Westwood (TCPW) is a sender-side modification of the TCP congestion window algorithm that improves upon the performance of TCP Reno in wired as well as wireless networks. The improvement is most significant in wireless networks with lossy links. In fact, TCPW performance is not very sensitive to random errors, while TCP Reno is equally sensitive to random loss and congestion loss and cannot discriminate between them. Hence, the tendency of TCP Reno to overreact to errors. An important distinguishing feature of TCP Westwood with respect to previous wireless TCP “extensions” is that it does not require inspection and/or interception of TCP packets at intermediate (proxy) nodes. Rather, TCPW fully complies with the end-to-end TCP design principle. The key innovative idea is to continuously measure at the TCP sender side the bandwidth used by the connection via monitoring the rate of returning ACKs. The estimate is then used to compute congestion window and slow start threshold after a congestion episode, that is, after three duplicate acknowledgments or after a timeout. The rationale of this strategy is simple: in contrast with TCP Reno which “blindly” halves the congestion window after three duplicate ACKs, TCP Westwood attempts to select a slow start threshold and a congestion window which are consistent with the effective bandwidth used at the time congestion is experienced. We call this mechanism faster recovery. The proposed mechanism is particularly effective over wireless links where sporadic losses due to radio channel problems are often misinterpreted as a symptom of congestion by current TCP schemes and thus lead to an unnecessary window reduction. Experimental studies reveal improvements in throughput performance, as well as in fairness. In addition, friendliness with TCP Reno was observed in a set of experiments showing that TCP Reno connections are not starved by TCPW connections. Most importantly, TCPW is extremely effective in mixed wired and wireless networks where throughput improvements of up to 550% are observed. Finally, TCPW performs almost as well as localized link layer approaches such as the popular Snoop scheme, without incurring the overhead of a specialized link layer protocol.",
"title": ""
},
{
"docid": "136deaa8656bdb1c2491de4effd09838",
"text": "The fabrication technology advancements lead to place more logic on a silicon die which makes verification more challenging task than ever. The large number of resources is required because more than 70% of the design cycle is used for verification. Universal Verification Methodology was developed to provide a well structured and reusable verification environment which does not interfere with the device under test (DUT). This paper contrasts the reusability of I2C using UVM and introduces how the verification environment is constructed and test cases are implemented for this protocol.",
"title": ""
},
{
"docid": "8fbb53199fab6383b8dd01347d62cf86",
"text": "In this paper, we analyze ring oscillator (RO) based physical unclonable function (PUF) on FPGAs. We show that the systematic process variation adversely affects the ability of the RO-PUF to generate unique chip-signatures, and propose a compensation method to mitigate it. Moreover, a configurable ring oscillator (CRO) technique is proposed to reduce noise in PUF responses. Our compensation method could improve the uniqueness of the PUF by an amount as high as 18%. The CRO technique could produce nearly 100% error-free PUF outputs over varying environmental conditions without post-processing while consuming minimum area.",
"title": ""
},
{
"docid": "26b13a3c03014fc910ed973c264e4c9d",
"text": "Deep convolutional neural networks (CNNs) have shown great potential for numerous real-world machine learning applications, but performing inference in large CNNs in real-time remains a challenge. We have previously demonstrated that traditional CNNs can be converted into deep spiking neural networks (SNNs), which exhibit similar accuracy while reducing both latency and computational load as a consequence of their data-driven, event-based style of computing. Here we provide a novel theory that explains why this conversion is successful, and derive from it several new tools to convert a larger and more powerful class of deep networks into SNNs. We identify the main sources of approximation errors in previous conversion methods, and propose simple mechanisms to fix these issues. Furthermore, we develop spiking implementations of common CNN operations such as max-pooling, softmax, and batch-normalization, which allow almost loss-less conversion of arbitrary CNN architectures into the spiking domain. Empirical evaluation of different network architectures on the MNIST and CIFAR10 benchmarks leads to the best SNN results reported to date.",
"title": ""
},
{
"docid": "2b7ac1941127e1d47401d67e6d7856de",
"text": "Alert correlation is an important technique for managing large the volume of intrusion alerts that are raised by heterogenous Intrusion Detection Systems (IDSs). The recent trend of research in this area is towards extracting attack strategies from raw intrusion alerts. It is generally believed that pure intrusion detection no longer can satisfy the security needs of organizations. Intrusion response and prevention are now becoming crucially important for protecting the network and minimizing damage. Knowing the real security situation of a network and the strategies used by the attackers enables network administrators to launches appropriate response to stop attacks and prevent them from escalating. This is also the primary goal of using alert correlation technique. However, most of the current alert correlation techniques only focus on clustering inter-connected alerts into different groups without further analyzing the strategies of the attackers. Some techniques for extracting attack strategies have been proposed in recent years, but they normally require defining a larger number of rules. This paper focuses on developing a new alert correlation technique that can help to automatically extract attack strategies from a large volume of intrusion alerts, without specific prior knowledge about these alerts. The proposed approach is based on two different neural network approaches, namely, Multilayer Perceptron (MLP) and Support Vector Machine (SVM). The probabilistic output of these two methods is used to determine with which previous alerts this current alert should be correlated. This suggests the causal relationship of two alerts, which is helpful for constructing attack scenarios. One of the distinguishing feature of the proposed technique is that an Alert Correlation Matrix (ACM) is used to store correlation strengthes of any two types of alerts. ACM is updated in the training process, and the information (correlation strength) is then used for extracting high level attack strategies.",
"title": ""
},
{
"docid": "b8cec6cfbc55c9fd6a7d5ed951bcf4eb",
"text": "Increasingly large amount of multidimensional data are being generated on a daily basis in many applications. This leads to a strong demand for learning algorithms to extract useful information from these massive data. This paper surveys the field of multilinear subspace learning (MSL) for dimensionality reduction of multidimensional data directly from their tensorial representations. It discusses the central issues of MSL, including establishing the foundations of the field via multilinear projections, formulating a unifying MSL framework for systematic treatment of the problem, examining the algorithmic aspects of typical MSL solutions, and categorizing both unsupervised and supervised MSL algorithms into taxonomies. Lastly, the paper summarizes a wide range of MSL applications and concludes with perspectives on future research directions.",
"title": ""
},
{
"docid": "b25b7100c035ad2953fb43087ede1625",
"text": "In this paper, a novel 10W substrate integrated waveguide (SIW) high power amplifier (HPA) designed with SIW matching network (MN) is presented. The SIW MN is connected with microstrip line using microstrip-to-SIW transition. An inductive metallized post in SIW is employed to realize impedance matching. At the fundamental frequency of 2.14 GHz, the impedance matching is realized by moving the position of the inductive metallized post in the SIW. Both the input and output MNs are designed with the proposed SIW-based MN concept. One SIW-based 10W HPA using GaN HEMT at 2.14 GHz is designed, fabricated, and measured. The proposed SIW-based HPA can be easily connected with any microstrip circuit with microstrip-to-SIW transition. Measured results show that the maximum power added efficiency (PAE) is 65.9 % with 39.8 dBm output power and the maximum gain is 20.1 dB with 30.9 dBm output power at 2.18 GHz. The size of the proposed SIW-based HPA is comparable with other microstrip-based PAs designed at the operating frequency.",
"title": ""
},
{
"docid": "529ca36809a7052b9495279aa1081fcc",
"text": "To effectively control complex dynamical systems, accurate nonlinear models are typically needed. However, these models are not always known. In this paper, we present a data-driven approach based on Gaussian processes that learns models of quadrotors operating in partially unknown environments. What makes this challenging is that if the learning process is not carefully controlled, the system will go unstable, i.e., the quadcopter will crash. To this end, barrier certificates are employed for safe learning. The barrier certificates establish a non-conservative forward invariant safe region, in which high probability safety guarantees are provided based on the statistics of the Gaussian Process. A learning controller is designed to efficiently explore those uncertain states and expand the barrier certified safe region based on an adaptive sampling scheme. Simulation results are provided to demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "613b014ea02019a78be488a302ff4794",
"text": "In this study, the robustness of approaches to the automatic classification of emotions in speech is addressed. Among the many types of emotions that exist, two groups of emotions are considered, adult-to-adult acted vocal expressions of common types of emotions like happiness, sadness, and anger and adult-to-infant vocal expressions of affective intents also known as ‘‘motherese’’. Specifically, we estimate the generalization capability of two feature extraction approaches, the approach developed for Sony’s robotic dog AIBO (AIBO) and the segment-based approach (SBA) of [Shami, M., Kamel, M., 2005. Segment-based approach to the recognition of emotions in speech. In: IEEE Conf. on Multimedia and Expo (ICME05), Amsterdam, The Netherlands]. Three machine learning approaches are considered, K-nearest neighbors (KNN), Support vector machines (SVM) and Ada-boosted decision trees and four emotional speech databases are employed, Kismet, BabyEars, Danish, and Berlin databases. Single corpus experiments show that the considered feature extraction approaches AIBO and SBA are competitive on the four databases considered and that their performance is comparable with previously published results on the same databases. The best choice of machine learning algorithm seems to depend on the feature extraction approach considered. Multi-corpus experiments are performed with the Kismet–BabyEars and the Danish–Berlin database pairs that contain parallel emotional classes. Automatic clustering of the emotional classes in the database pairs shows that the patterns behind the emotions in the Kismet–BabyEars pair are less database dependent than the patterns in the Danish–Berlin pair. In off-corpus testing the classifier is trained on one database of a pair and tested on the other. This provides little improvement over baseline classification. In integrated corpus testing, however, the classifier is machine learned on the merged databases and this gives promisingly robust classification results, which suggest that emotional corpora with parallel emotion classes recorded under different conditions can be used to construct a single classifier capable of distinguishing the emotions in the merged corpora. Such a classifier is more robust than a classifier learned on a single corpus as it can recognize more varied expressions of the same emotional classes. These findings suggest that the existing approaches for the classification of emotions in speech are efficient enough to handle larger amounts of training data without any reduction in classification accuracy. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2549ed70fd2e06c749bf00193dad1f4d",
"text": "Phenylketonuria (PKU) is an inborn error of metabolism caused by deficiency of the hepatic enzyme phenylalanine hydroxylase (PAH) which leads to high blood phenylalanine (Phe) levels and consequent damage of the developing brain with severe mental retardation if left untreated in early infancy. The current dietary Phe restriction treatment has certain clinical limitations. To explore a long-term nondietary restriction treatment, a somatic gene transfer approach in a PKU mouse model (C57Bl/6-Pahenu2) was employed to examine its preclinical feasibility. A recombinant adeno-associated virus (rAAV) vector containing the murine Pah-cDNA was generated, pseudotyped with capsids from AAV serotype 8, and delivered into the liver of PKU mice via single intraportal or tail vein injections. The blood Phe concentrations decreased to normal levels (⩽100 μM or 1.7 mg/dl) 2 weeks after vector application, independent of the sex of the PKU animals and the route of application. In particular, the therapeutic long-term correction in females was also dramatic, which had previously been shown to be difficult to achieve. Therapeutic ranges of Phe were accompanied by the phenotypic reversion from brown to black hair. In treated mice, PAH enzyme activity in whole liver extracts reversed to normal and neither hepatic toxicity nor immunogenicity was observed. In contrast, a lentiviral vector expressing the murine Pah-cDNA, delivered via intraportal vein injection into PKU mice, did not result in therapeutic levels of blood Phe. This study demonstrates the complete correction of hyperphenylalaninemia in both males and females with a rAAV serotype 8 vector. More importantly, the feasibility of a single intravenous injection may pave the way to develop a clinical gene therapy procedure for PKU patients.",
"title": ""
},
{
"docid": "87f05972a93b2b432d0dad6d55e97502",
"text": "The daunting volumes of community-contributed media contents on the Internet have become one of the primary sources for online advertising. However, conventional advertising treats image and video advertising as general text advertising by displaying relevant ads based on the contents of the Web page, without considering the inherent characteristics of visual contents. This article presents a contextual advertising system driven by images, which automatically associates relevant ads with an image rather than the entire text in a Web page and seamlessly inserts the ads in the nonintrusive areas within each individual image. The proposed system, called ImageSense, supports scalable advertising of, from root to node, Web sites, pages, and images. In ImageSense, the ads are selected based on not only textual relevance but also visual similarity, so that the ads yield contextual relevance to both the text in the Web page and the image content. The ad insertion positions are detected based on image salience, as well as face and text detection, to minimize intrusiveness to the user. We evaluate ImageSense on a large-scale real-world images and Web pages, and demonstrate the effectiveness of ImageSense for online image advertising.",
"title": ""
},
{
"docid": "0d1da055e444a90ec298a2926de9fe7b",
"text": "Cryptocurrencies have experienced recent surges in interest and price. It has been discovered that there are time intervals where cryptocurrency prices and certain online and social media factors appear related. In addition it has been noted that cryptocurrencies are prone to experience intervals of bubble-like price growth. The hypothesis investigated here is that relationships between online factors and price are dependent on market regime. In this paper, wavelet coherence is used to study co-movement between a cryptocurrency price and its related factors, for a number of examples. This is used alongside a well-known test for financial asset bubbles to explore whether relationships change dependent on regime. The primary finding of this work is that medium-term positive correlations between online factors and price strengthen significantly during bubble-like regimes of the price series; this explains why these relationships have previously been seen to appear and disappear over time. A secondary finding is that short-term relationships between the chosen factors and price appear to be caused by particular market events (such as hacks / security breaches), and are not consistent from one time interval to another in the effect of the factor upon the price. In addition, for the first time, wavelet coherence is used to explore the relationships between different cryptocurrencies.",
"title": ""
},
{
"docid": "3115c716a065334dc0cdec9e33e24149",
"text": "With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today’s increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content and presentation. Moreover, we identified several challenges that remain unaddressed so far, for example related to fine-grained issues associated with the presentation of explanations and how explanation facilities are evaluated.",
"title": ""
}
] |
scidocsrr
|
e8caa6cb31ec81ff9786a1d29d470272
|
Research Note - Privacy Concerns and Privacy-Protective Behavior in Synchronous Online Social Interactions
|
[
{
"docid": "10b16932bb8c1d85f759c181da6e5407",
"text": "Many explanations of both proand anti-social behaviors in computer-mediated communication (CMC) appear to hinge on changes in individual self-awareness. In spite of this, little research has been devoted to understanding the effects of self-awareness in CMC. To fill this void, this study examined the effects of individuals public and private self-awareness in anonymous, time-restricted, and synchronous CMC. Two experiments were conducted. A pilot experiment tested and confirmed the effectiveness of using a Web camera combined with an alleged online audience to enhance users public self-awareness. In the main study users private and public self-awareness were manipulated in a crossed 2 · 2 factorial design. Pairs of participants completed a Desert Survival Problem via a synchronous, text-only chat program. After the task, they evaluated each other on intimacy, task/social orientation, formality, politeness, attraction, and group identification. The results suggest that a lack of private and public self-awareness does not automatically lead to impersonal tendencies in CMC as deindividuation perspectives of CMC would argue. Moreover, participants in this study were able to form favorable impressions in a completely anonymous environment based on brief interaction, which lends strong support to the idealization proposed by hyperpersonal theory. Findings are used to modify and extend current theoretical perspectives on CMC. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "a27ebb8015c950992084365d539f565e",
"text": "The art of tiling originated very early in the history of civilization. Almost every known human society has made use of tilings in some form or another. In particular, tilings using only regular polygons have great visual appeal. Decorated regular tilings with continuous and symmetrical patterns were widely used in decoration field, such as mosaics, pavements, and brick walls. In science, these tilings provide inspiration for synthetic organic chemistry. Building on previous CG&A “Beautiful Math” articles, the authors propose an invariant mapping method to create colorful patterns on Archimedean tilings (1-uniform tilings). The resulting patterns simultaneously have global crystallographic symmetry and local cyclic or dihedral symmetry.",
"title": ""
},
{
"docid": "6ee26f725bfb63a6ff72069e48404e68",
"text": "OBJECTIVE\nTo determine which routinely collected exercise test variables most strongly correlate with survival and to derive a fitness risk score that can be used to predict 10-year survival.\n\n\nPATIENTS AND METHODS\nThis was a retrospective cohort study of 58,020 adults aged 18 to 96 years who were free of established heart disease and were referred for an exercise stress test from January 1, 1991, through May 31, 2009. Demographic, clinical, exercise, and mortality data were collected on all patients as part of the Henry Ford ExercIse Testing (FIT) Project. Cox proportional hazards models were used to identify exercise test variables most predictive of survival. A \"FIT Treadmill Score\" was then derived from the β coefficients of the model with the highest survival discrimination.\n\n\nRESULTS\nThe median age of the 58,020 participants was 53 years (interquartile range, 45-62 years), and 28,201 (49%) were female. Over a median of 10 years (interquartile range, 8-14 years), 6456 patients (11%) died. After age and sex, peak metabolic equivalents of task and percentage of maximum predicted heart rate achieved were most highly predictive of survival (P<.001). Subsequent addition of baseline blood pressure and heart rate, change in vital signs, double product, and risk factor data did not further improve survival discrimination. The FIT Treadmill Score, calculated as [percentage of maximum predicted heart rate + 12(metabolic equivalents of task) - 4(age) + 43 if female], ranged from -200 to 200 across the cohort, was near normally distributed, and was found to be highly predictive of 10-year survival (Harrell C statistic, 0.811).\n\n\nCONCLUSION\nThe FIT Treadmill Score is easily attainable from any standard exercise test and translates basic treadmill performance measures into a fitness-related mortality risk score. The FIT Treadmill Score should be validated in external populations.",
"title": ""
},
{
"docid": "14e8006ae1fc0d97e737ff2a5a4d98dd",
"text": "Building dialogue systems that can converse naturally with humans is a challenging yet intriguing problem of artificial intelligence. In open-domain human-computer conversation, where the conversational agent is expected to respond to human utterances in an interesting and engaging way, commonsense knowledge has to be integrated into the model effectively. In this paper, we investigate the impact of providing commonsense knowledge about the concepts covered in the dialogue. Our model represents the first attempt to integrating a large commonsense knowledge base into end-toend conversational models. In the retrieval-based scenario, we propose a model to jointly take into account message content and related commonsense for selecting an appropriate response. Our experiments suggest that the knowledgeaugmented models are superior to their knowledge-free counterparts.",
"title": ""
},
{
"docid": "a126d8183668cbf15cd8aec4cf49bb3f",
"text": "The present meta-analysis investigated the effectiveness of strategies derived from the process model of emotion regulation in modifying emotional outcomes as indexed by experiential, behavioral, and physiological measures. A systematic search of the literature identified 306 experimental comparisons of different emotion regulation (ER) strategies. ER instructions were coded according to a new taxonomy, and meta-analysis was used to evaluate the effectiveness of each strategy across studies. The findings revealed differences in effectiveness between ER processes: Attentional deployment had no effect on emotional outcomes (d(+) = 0.00), response modulation had a small effect (d(+) = 0.16), and cognitive change had a small-to-medium effect (d(+) = 0.36). There were also important within-process differences. We identified 7 types of attentional deployment, 4 types of cognitive change, and 4 types of response modulation, and these distinctions had a substantial influence on effectiveness. Whereas distraction was an effective way to regulate emotions (d(+) = 0.27), concentration was not (d(+) = -0.26). Similarly, suppressing the expression of emotion proved effective (d(+) = 0.32), but suppressing the experience of emotion or suppressing thoughts of the emotion-eliciting event did not (d(+) = -0.04 and -0.12, respectively). Finally, reappraising the emotional response proved less effective (d(+) = 0.23) than reappraising the emotional stimulus (d(+) = 0.36) or using perspective taking (d(+) = 0.45). The review also identified several moderators of strategy effectiveness including factors related to the (a) to-be-regulated emotion, (b) frequency of use and intended purpose of the ER strategy, (c) study design, and (d) study characteristics.",
"title": ""
},
{
"docid": "63de624a33f7c9362b477aabd9faac51",
"text": "24 GHz circularly polarized Doppler front-end with a single antenna is developed. The radar system is composed of 24 GHz circularly polarized Doppler radar module, signal conditioning block, DAQ unit, and signal processing program. 24 GHz Doppler radar receiver front-end IC which is comprised of 3-stage LNA, single-ended mixer, and Lange coupler is fabricated with commercial InGaP/GaAs HBT technology. To reduce the chip size and suppress self-mixing, single-ended mixer which uses Tx leakage as a LO signal of the mixer is used. The operation of the developed radar front-end is demonstrated by measuring human vital signal. Compact size and high sensitivity can be achieved at the same time with the circularly polarized Doppler radar with a single antenna.",
"title": ""
},
{
"docid": "b7bfebcf77d9486473b9fcd1f4b91e63",
"text": "One of the most widespread applications of the Global Positioning System (GPS) is vehicular navigation. Improving the navigation accuracy continues to be a focus of research, commonly answered by the use of additional sensors. A sensor commonly fused with GPS is the inertial measurement unit (IMU). Due to the fact that the requirements of commercial systems are low cost, small size, and power conservative, micro-electro mechanical sensors (MEMS) IMUs are used. They provide navigation capability even in the absence of GPS signals or in the presence of high multipath or jamming. This paper addresses a centralized filter construction whereby navigation solutions from multiple IMUs are fused together to improve accuracy in GPS degraded areas. The proposed filter is a collection of several single IMU block filters. Each block filter is a 21 state IMU filter. Because each block filter estimates position, velocity and attitude, the system can utilize relative updates between the IMUs. These relative updates provide a method of reducing the position drift in the absence of GPS observations. The proposed filter’s performance is analyzed as a function of the number of IMUs used and relative update type, using a data set consisting of GPS outages, urban canyons and residential open sky conditions. While the use of additional IMUs (including a single IMU) provides negligible improvement in open sky conditions (where GPS alone is sufficient), the use of two, three, four and five IMUs provided a horizontal position improvement of 25 %, 29 %, 32 %, and 34 %, respectively, when GPS observations are removed for 30 seconds. Similarly, the velocity RMS improved by 25 %, 31%, 33%, and 34% for two, three, four and five IMUs, respectively. Attitude estimation also improves significantly ranging from 30 % – 76 %. Results also indicate that the use of more IMUs provides the system with better multipath rejection and performance in urban canyons.",
"title": ""
},
{
"docid": "e849cdf1237792fdf7bcded91c35c398",
"text": "Purpose – System usage and user satisfaction are widely accepted and used as surrogate measures of IS success. Past studies attempted to explore the relationship between system usage and user satisfaction but findings are mixed, inconclusive and misleading. The main objective of this research is to better understand and explain the nature and strength of the relationship between system usage and user satisfaction by resolving the existing inconsistencies in the IS research and to validate this relationship empirically as defined in Delone and McLean’s IS success model. Design/methodology/approach – “Meta-analysis” as a research approach was adopted because of its suitability regarding the nature of the research and its capability of dealing with exploring relationships that may be obscured in other approaches to synthesize research findings. Meta-analysis findings contributed towards better explaining the relationship between system usage and user satisfaction, the main objectives of this research. Findings – This research examines critically the past findings and resolves the existing inconsistencies. The meta-analysis findings explain that there exists a significant positive relationship between “system usage” and “user satisfaction” (i.e. r 1⁄4 0:2555) although not very strong. This research empirically validates this relationship that has already been proposed by Delone and McLean in their IS success model. Provides a guide for future research to explore the mediating variables that might affect the relationship between system usage and user satisfaction. Originality/value – This research better explains the relationship between system usage and user satisfaction by resolving contradictory findings in the past research and contributes to the existing body of knowledge relating to IS success.",
"title": ""
},
{
"docid": "ff9e0e5c2bb42955d3d29db7809414a1",
"text": "We present a novel methodology for the automated detection of breast lesions from dynamic contrast-enhanced magnetic resonance volumes (DCE-MRI). Our method, based on deep reinforcement learning, significantly reduces the inference time for lesion detection compared to an exhaustive search, while retaining state-of-art accuracy. This speed-up is achieved via an attention mechanism that progressively focuses the search for a lesion (or lesions) on the appropriate region(s) of the input volume. The attention mechanism is implemented by training an artificial agent to learn a search policy, which is then exploited during inference. Specifically, we extend the deep Q-network approach, previously demonstrated on simpler problems such as anatomical landmark detection, in order to detect lesions that have a significant variation in shape, appearance, location and size. We demonstrate our results on a dataset containing 117 DCE-MRI volumes, validating run-time and accuracy of lesion detection.",
"title": ""
},
{
"docid": "63405a3fc4815e869fc872bb96bb8a33",
"text": "We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We consider search algorithms for quantified Boolean logics, that already can solve formulas of impressive size up to 100s of thousands of variables. The main challenge is to find a representation which lends to making predictions in a scalable way. The heuristics learned through our approach significantly improve over the handwritten heuristics for several sets of formulas.",
"title": ""
},
{
"docid": "c347f649a6a183d7ee3f5abddfcbc2a1",
"text": "Concern has grown regarding possible harm to the social and psychological development of children and adolescents exposed to Internet pornography. Parents, academics and researchers have documented pornography from the supply side, assuming that its availability explains consumption satisfactorily. The current paper explored the user's dimension, probing whether pornography consumers differed from other Internet users, as well as the social characteristics of adolescent frequent pornography consumers. Data from a 2004 survey of a national representative sample of the adolescent population in Israel were used (n=998). Adolescent frequent users of the Internet for pornography were found to differ in many social characteristics from the group that used the Internet for information, social communication and entertainment. Weak ties to mainstream social institutions were characteristic of the former group but not of the latter. X-rated material consumers proved to be a distinct sub-group at risk of deviant behaviour.",
"title": ""
},
{
"docid": "5cb44c68cecb0618be14cd52182dc96e",
"text": "Recognition of objects using Deep Neural Networks is an active area of research and many breakthroughs have been made in the last few years. The paper attempts to indicate how far this field has progressed. The paper briefly describes the history of research in Neural Networks and describe several of the recent advances in this field. The performances of recently developed Neural Network Algorithm over benchmark datasets have been tabulated. Finally, some the applications of this field have been provided.",
"title": ""
},
{
"docid": "527e70797ec7931687d17d26f1f64428",
"text": "We experimentally demonstrate the focusing of visible light with ultra-thin, planar metasurfaces made of concentrically perforated, 30-nm-thick gold films. The perforated nano-voids—Babinet-inverted (complementary) nano-antennas—create discrete phase shifts and form a desired wavefront of cross-polarized, scattered light. The signal-to-noise ratio in our complementary nano-antenna design is at least one order of magnitude higher than in previous metallic nano-antenna designs. We first study our proof-of-concept ‘metalens’ with extremely strong focusing ability: focusing at a distance of only 2.5 mm is achieved experimentally with a 4-mm-diameter lens for light at a wavelength of 676 nm. We then extend our work with one of these ‘metalenses’ and achieve a wavelength-controllable focal length. Optical characterization of the lens confirms that switching the incident wavelength from 676 to 476 nm changes the focal length from 7 to 10 mm, which opens up new opportunities for tuning and spatially separating light at different wavelengths within small, micrometer-scale areas. All the proposed designs can be embedded on-chip or at the end of an optical fiber. The designs also all work for two orthogonal, linear polarizations of incident light. Light: Science & Applications (2013) 2, e72; doi:10.1038/lsa.2013.28; published online 26 April 2013",
"title": ""
},
{
"docid": "2fdf6538c561e05741baafe43ec6f145",
"text": "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent are effective for tasks involving sequences, visual and otherwise. We describe a class of recurrent convolutional architectures which is end-to-end trainable and suitable for large-scale visual understanding tasks, and demonstrate the value of these models for activity recognition, image captioning, and video description. In contrast to previous models which assume a fixed visual representation or perform simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they learn compositional representations in space and time. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Differentiable recurrent models are appealing in that they can directly map variable-length inputs (e.g., videos) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent sequence models are directly connected to modern visual convolutional network models and can be jointly trained to learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined or optimized.",
"title": ""
},
{
"docid": "3848dd7667a25e8e7f69ecc318324224",
"text": "This paper describes the CloudProtect middleware that empowers users to encrypt sensitive data stored within various cloud applications. However, most web applications require data in plaintext for implementing the various functionalities and in general, do not support encrypted data management. Therefore, CloudProtect strives to carry out the data transformations (encryption/decryption) in a manner that is transparent to the application, i.e., preserves all functionalities of the application, including those that require data to be in plaintext. Additionally, CloudProtect allows users flexibility in trading off performance for security in order to let them optimally balance their privacy needs and usage-experience.",
"title": ""
},
{
"docid": "af9137900cd3fe09d9bea87f38324b80",
"text": "The cognitive walkthrough is a technique for evaluating the design of a user interface, with speciaJ attention to how well the interface supports “exploratory learning,” i.e., first-time use without formal training. The evaluation can be performed by the system’s designers in the e,arly stages of design, before empirical user testing is possible. Early versions of the walkthrough method relied on a detailed series of questions, to be answered on paper or electronic forms. This tutorial presents a simpler method, founded in an understanding of the cognitive theory that describes a user’s interactions with a system. The tutorial refines the method on the basis of recent empirical and theoretical studies of exploratory learning with display-based interfaces. The strengths and limitations of the walkthrough method are considered, and it is placed into the context of a more complete design approach.",
"title": ""
},
{
"docid": "83525470a770a036e9c7bb737dfe0535",
"text": "It is known that the performance of the i-vectors/PLDA based speaker verification systems is affected in the cases of short utterances and limited training data. The performance degradation appears because the shorter the utterance, the less reliable the extracted i-vector is, and because the total variability covariance matrix and the underlying PLDA matrices need a significant amount of data to be robustly estimated. Considering the “MIT Mobile Device Speaker Verification Corpus” (MIT-MDSVC) as a representative dataset for robust speaker verification tasks on limited amount of training data, this paper investigates which configuration and which parameters lead to the best performance of an i-vectors/PLDA based speaker verification. The i-vectors/PLDA based system achieved good performance only when the total variability matrix and the underlying PLDA matrices were trained with data belonging to the enrolled speakers. This way of training means that the system should be fully retrained when new enrolled speakers were added. The performance of the system was more sensitive to the amount of training data of the underlying PLDA matrices than to the amount of training data of the total variability matrix. Overall, the Equal Error Rate performance of the i-vectors/PLDA based system was around 1% below the performance of a GMM-UBM system on the chosen dataset. The paper presents at the end some preliminary experiments in which the utterances comprised in the CSTR VCTK corpus were used besides utterances from MIT-MDSVC for training the total variability covariance matrix and the underlying PLDA matrices.",
"title": ""
},
{
"docid": "2dbc68492e54d61446dac7880db71fdd",
"text": "Supervised deep learning methods have shown promising results for the task of monocular depth estimation; but acquiring ground truth is costly, and prone to noise as well as inaccuracies. While synthetic datasets have been used to circumvent above problems, the resultant models do not generalize well to natural scenes due to the inherent domain shift. Recent adversarial approaches for domain adaption have performed well in mitigating the differences between the source and target domains. But these methods are mostly limited to a classification setup and do not scale well for fully-convolutional architectures. In this work, we propose AdaDepth - an unsupervised domain adaptation strategy for the pixel-wise regression task of monocular depth estimation. The proposed approach is devoid of above limitations through a) adversarial learning and b) explicit imposition of content consistency on the adapted target representation. Our unsupervised approach performs competitively with other established approaches on depth estimation tasks and achieves state-of-the-art results in a semi-supervised setting.",
"title": ""
},
{
"docid": "da6771ebd128ce1dc58f2ab1d56b065f",
"text": "We present a method for the automatic classification of text documents into a dynamically defined set of topics of interest. The proposed approach requires only a domain ontology and a set of user-defined classification topics, specified as contexts in the ontology. Our method is based on measuring the semantic similarity of the thematic graph created from a text document and the ontology sub-graphs resulting from the projection of the defined contexts. The domain ontology effectively becomes the classifier, where classification topics are expressed using the defined ontological contexts. In contrast to the traditional supervised categorization methods, the proposed method does not require a training set of documents. More importantly, our approach allows dynamically changing the classification topics without retraining of the classifier. In our experiments, we used the English language Wikipedia converted to an RDF ontology to categorize a corpus of current Web news documents into selection of topics of interest. The high accuracy achieved in our tests demonstrates the effectiveness of the proposed method, as well as the applicability of Wikipedia for semantic text categorization purposes.",
"title": ""
},
{
"docid": "3512d0a45a764330c8a66afab325d03d",
"text": "Self-concept clarity (SCC) references a structural aspect oftbe self-concept: the extent to which selfbeliefs are clearly and confidently defined, internally consistent, and stable. This article reports the SCC Scale and examines (a) its correlations with self-esteem (SE), the Big Five dimensions, and self-focused attention (Study l ); (b) its criterion validity (Study 2); and (c) its cultural boundaries (Study 3 ). Low SCC was independently associated with high Neuroticism, low SE, low Conscientiousness, low Agreeableness, chronic self-analysis, low internal state awareness, and a ruminative form of self-focused attention. The SCC Scale predicted unique variance in 2 external criteria: the stability and consistency of self-descriptions. Consistent with theory on Eastern and Western selfconstruals, Japanese participants exhibited lower levels of SCC and lower correlations between SCC and SE than did Canadian participants.",
"title": ""
},
{
"docid": "9924e44d94d00a7a3dbd313409f5006a",
"text": "Multiple-instance problems arise from the situations where training class labels are attached to sets of samples (named bags), instead of individual samples within each bag (called instances). Most previous multiple-instance learning (MIL) algorithms are developed based on the assumption that a bag is positive if and only if at least one of its instances is positive. Although the assumption works well in a drug activity prediction problem, it is rather restrictive for other applications, especially those in the computer vision area. We propose a learning method, MILES (multiple-instance learning via embedded instance selection), which converts the multiple-instance learning problem to a standard supervised learning problem that does not impose the assumption relating instance labels to bag labels. MILES maps each bag into a feature space defined by the instances in the training bags via an instance similarity measure. This feature mapping often provides a large number of redundant or irrelevant features. Hence, 1-norm SVM is applied to select important features as well as construct classifiers simultaneously. We have performed extensive experiments. In comparison with other methods, MILES demonstrates competitive classification accuracy, high computation efficiency, and robustness to labeling uncertainty",
"title": ""
}
] |
scidocsrr
|
546a64b871f37f1b67c7731641cd8ce4
|
Assessment , Enhancement , and Verification Determinants of the Self-Evaluation Process
|
[
{
"docid": "0b88b9b165a74cc630a0cf033308d6c2",
"text": "It is proposed that motivation may affect reasoning through reliance on a biased set of cognitive processes--that is, strategies for accessing, constructing, and evaluating beliefs. The motivation to be accurate enhances use of those beliefs and strategies that are considered most appropriate, whereas the motivation to arrive at particular conclusions enhances use of those that are considered most likely to yield the desired conclusion. There is considerable evidence that people are more likely to arrive at conclusions that they want to arrive at, but their ability to do so is constrained by their ability to construct seemingly reasonable justifications for these conclusions. These ideas can account for a wide variety of research concerned with motivated reasoning.",
"title": ""
}
] |
[
{
"docid": "9775396477ccfde5abdd766588655539",
"text": "The use of hand gestures offers an alternative to the commonly used human computer interfaces, providing a more intuitive way of navigating among menus and multimedia applications. This paper presents a system for hand gesture recognition devoted to control windows applications. Starting from the images captured by a time-of-flight camera (a camera that produces images with an intensity level inversely proportional to the depth of the objects observed) the system performs hand segmentation as well as a low-level extraction of potentially relevant features which are related to the morphological representation of the hand silhouette. Classification based on these features discriminates between a set of possible static hand postures which results, combined with the estimated motion pattern of the hand, in the recognition of dynamic hand gestures. The whole system works in real-time, allowing practical interaction between user and application.",
"title": ""
},
{
"docid": "f462de59dd8b45f7c7e27672125010d2",
"text": "Researchers have recently noted (14; 27) the potential of fast poisoning attacks against DNS servers, which allows attackers to easily manipulate records in open recursive DNS resolvers. A vendor-wide upgrade mitigated but did not eliminate this attack. Further, existing DNS protection systems, including bailiwick-checking (12) and IDS-style filtration, do not stop this type of DNS poisoning. We therefore propose Anax, a DNS protection system that detects poisoned records in cache. Our system can observe changes in cached DNS records, and applies machine learning to classify these updates as malicious or benign. We describe our classification features and machine learning model selection process while noting that the proposed approach is easily integrated into existing local network protection systems. To evaluate Anax, we studied cache changes in a geographically diverse set of 300,000 open recursive DNS servers (ORDNSs) over an eight month period. Using hand-verified data as ground truth, evaluation of Anax showed a very low false positive rate (0.6% of all new resource records) and a high detection",
"title": ""
},
{
"docid": "fb44e3c2624d92c9ed408ebd00bdb793",
"text": "A novel method for online data acquisition of cursive handwriting is described. A video camera is used to record the handwriting of a user. From the acquired sequence of images, the movement of the tip of the pen is reconstructed. A prototype of the system has been implemented and tested. In one series of tests, the performance of the system was visually assessed. In another series of experiments, the system was combined with an existing online handwriting recognizer. Good results have been obtained in both sets of experiments.",
"title": ""
},
{
"docid": "3a17d60c2eb1df3bf491be3297cffe79",
"text": "Received: 3 October 2009 Revised: 22 June 2011 Accepted: 3 July 2011 Abstract Studies claiming to use the Grounded theory methodology (GTM) have been quite prevalent in information systems (IS) literature. A cursory review of this literature reveals conflict in the understanding of GTM, with a variety of grounded theory approaches apparent. The purpose of this investigation was to establish what alternative grounded theory approaches have been employed in IS, and to what extent each has been used. In order to accomplish this goal, a comprehensive set of IS articles that claimed to have followed a grounded theory approach were reviewed. The articles chosen were those published in the widely acknowledged top eight IS-centric journals, since these journals most closely represent exemplar IS research. Articles for the period 1985-2008 were examined. The analysis revealed four main grounded theory approaches in use, namely (1) the classic grounded theory approach, (2) the evolved grounded theory approach, (3) the use of the grounded theory approach as part of a mixed methodology, and (4) the application of grounded theory techniques, typically for data analysis purposes. The latter has been the most common approach in IS research. The classic approach was the least often employed, with many studies opting for an evolved or mixed method approach. These and other findings are discussed and implications drawn. European Journal of Information Systems (2013) 22, 119–129. doi:10.1057/ejis.2011.35; published online 30 August 2011",
"title": ""
},
{
"docid": "98c3588648676eea3bb78a43aef92af4",
"text": "Data mining (DM) techniques are being increasingly used in many modern organizations to retrieve valuable knowledge structures from organizational databases, including data warehouses. An important knowledge structure that can result from data mining activities is the decision tree (DT) that is used for the classi3cation of future events. The induction of the decision tree is done using a supervised knowledge discovery process in which prior knowledge regarding classes in the database is used to guide the discovery. The generation of a DT is a relatively easy task but in order to select the most appropriate DT it is necessary for the DM project team to generate and analyze a signi3cant number of DTs based on multiple performance measures. We propose a multi-criteria decision analysis based process that would empower DM project teams to do thorough experimentation and analysis without being overwhelmed by the task of analyzing a signi3cant number of DTs would o7er a positive contribution to the DM process. We also o7er some new approaches for measuring some of the performance criteria. ? 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "31add593ce5597c24666d9662b3db89d",
"text": "Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of dressed human body scans. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2, 18] as statistical model.",
"title": ""
},
{
"docid": "42d2f3c2cc7ed0c08dd8f450091e5a7a",
"text": "Analytical methods validation is an important regulatory requirement in pharmaceutical analysis. High-Performance Liquid Chromatography (HPLC) is commonly used as an analytical technique in developing and validating assay methods for drug products and drug substances. Method validation provides documented evidence, and a high degree of assurance, that an analytical method employed for a specific test, is suitable for its intended use. Over recent years, regulatory authorities have become increasingly aware of the necessity of ensuring that the data submitted to them in applications for marketing authorizations have been acquired using validated analytical methodology. The International Conference on Harmonization (ICH) has introduced guidelines for analytical methods validation. 1,2 The U.S. Food and Drug Administration (FDA) methods validation draft guidance document, 3-5 as well as United States Pharmacopoeia (USP) both refer to ICH guidelines. These draft guidances define regulatory and alternative analytical procedures and stability-indicating assays. The FDA has proposed adding section CFR 211.222 on analytical methods validation to the current Good Manufacturing Practice (cGMP) regulations. 7 This would require pharmaceutical manufacturers to establish and document the accuracy, sensitivity, specificity, reproducibility, and any other attribute (e.g., system suitability, stability of solutions) necessary to validate test methods. Regulatory analytical procedures are of two types: compendial and noncompendial. The noncompendial analytical procedures in the USP are those legally recognized as regulatory procedures under section 501(b) of the Federal Food, Drug and Cosmetic Act. When using USP analytical methods, the guidance recommends that information be provided for the following characteristics: specificity of the method, stability of the analytical sample solution, and intermediate precision. Compendial analytical methods may not be stability indicating, and this concern must be addressed when developing a drug product specification, because formulation based interference may not be considered in the monograph specifications. Additional analytical tests for impurities may be necessary to support the quality of the drug substance or drug product. Noncompendial analytical methods must be fully validated. The most widely applied validation characteristics are accuracy, precision (repeatability and intermediate precision), specificity, detection limit, quantitation limit, linearity, range, and stability of analytical solutions. The parameters that require validation and the approach adopted for each particular case are dependent on the type and applications of the method. Before undertaking the task of method validation, it is necessary that the analytical system itself is adequately designed, maintained, calibrated, and validated. 8 The first step in method validation is to prepare a protocol, preferably written with the instructions in a clear step-by-step format. This A Practical Approach to Validation of HPLC Methods Under Current Good Manufacturing Practices",
"title": ""
},
{
"docid": "2f5776d8ce9714dcee8d458b83072f74",
"text": "The componential theory of creativity is a comprehensive model of the social and psychological components necessary for an individual to produce creative work. The theory is grounded in a definition of creativity as the production of ideas or outcomes that are both novel and appropriate to some goal. In this theory, four components are necessary for any creative response: three components within the individual – domainrelevant skills, creativity-relevant processes, and intrinsic task motivation – and one component outside the individual – the social environment in which the individual is working. The current version of the theory encompasses organizational creativity and innovation, carrying implications for the work environments created by managers. This entry defines the components of creativity and how they influence the creative process, describing modifications to the theory over time. Then, after comparing the componential theory to other creativity theories, the article describes this theory’s evolution and impact.",
"title": ""
},
{
"docid": "981b4977ed3524545d9ae5016d45c8d6",
"text": "Related to different international activities in the Optical Wireless Communications (OWC) field Graz University of Technology (TUG) has high experience on developing different high data rate transmission systems and is well known for measurements and analysis of the OWC-channel. In this paper, a novel approach for testing Free Space Optical (FSO) systems in a controlled laboratory condition is proposed. Based on fibre optics technology, TUG testbed could effectively emulate the operation of real wireless optical communication systems together with various atmospheric perturbation effects such as fog and clouds. The suggested architecture applies an optical variable attenuator as a main device representing the tropospheric influences over the launched Gaussian beam in the free space channel. In addition, the current scheme involves an attenuator control unit with an external Digital Analog Converter (DAC) controlled by self-developed software. To obtain optimal results in terms of the presented setup, a calibration process including linearization of the non-linear attenuation versus voltage graph is performed. Finally, analytical results of the attenuation based on real measurements with the hardware channel emulator under laboratory conditions are shown. The implementation can be used in further activities to verify OWC-systems, before testing under real conditions.",
"title": ""
},
{
"docid": "048cc782baeec3a7f46ef5ee7abf0219",
"text": "Autoerotic asphyxiation is an unusual but increasingly more frequently occurring phenomenon, with >1000 fatalities in the United States per year. Understanding of this manner of death is likewise increasing, as noted by the growing number of cases reported in the literature. However, this form of accidental death is much less frequently seen in females (male:female ratio >50:1), and there is correspondingly less literature on female victims of autoerotic asphyxiation. The authors present the case of a 31-year-old woman who died of an autoerotic ligature strangulation and review the current literature on the subject. The forensic examiner must be able to discern this syndrome from similar forms of accidental and suicidal death, and from homicidal hanging/strangulation.",
"title": ""
},
{
"docid": "f262e911b5254ad4d4419ed7114b8a4f",
"text": "User Satisfaction is one of the most extensively used dimensions for Information Systems (IS) success evaluation with a large body of literature and standardized instruments of User Satisfaction. Despite the extensive literature on User Satisfaction, there exist much controversy over the measures of User Satisfaction and the adequacy of User Satisfaction measures to gauge the level of success in complex, contemporary IS. Recent studies in IS have suggested treating User Satisfaction as an overarching construct of success, rather than a measure of success. Further perplexity is introduced over the alleged overlaps between User Satisfaction measures and the measures of IS success (e.g. system quality, information quality) suggested in the literature. The following study attempts to clarify the aforementioned confusions by gathering data from 310 Enterprise System users and analyzing 16 User Satisfaction instruments. The statistical analysis of the 310 responses and the content analysis of the 16 instruments suggest the appropriateness of treating User Satisfaction as an overarching measure of success rather a dimension of success.",
"title": ""
},
{
"docid": "3b32ade20fbdd7474ee10fc10d80d90a",
"text": "We report the modulation performance of micro-light-emitting diode arrays with peak emission ranging from 370 to 520 nm, and emitter diameters ranging from 14 to 84 μm. Bandwidths in excess of 400 MHz and error-free data transmission up to 1.1Gbit/s is shown. These devices are shown integrated with electronic drivers, allowing convenient control of individual array emitters. Transmission using such a device is shown at 512 Mbit/s.",
"title": ""
},
{
"docid": "e1d9ff28da38fcf8ea3a428e7990af25",
"text": "The Autonomous car is a complex topic, different technical fields like: Automotive engineering, Control engineering, Informatics, Artificial Intelligence etc. are involved in solving the human driver replacement with an artificial (agent) driver. The problem is even more complicated because usually, nowadays, having and driving a car defines our lifestyle. This means that the mentioned (major) transformation is also a cultural issue. The paper will start with the mentioned cultural aspects related to a self-driving car and will continue with the big picture of the system.",
"title": ""
},
{
"docid": "715fda02bad1633be9097cc0a0e68c8d",
"text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules. These new measures are computed considering uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.",
"title": ""
},
{
"docid": "7dd3183ee59b800f3391f893d3578d64",
"text": "This paper reports on a bio-inspired angular accelerometer based on a two-mask microfluidic process using a PDMS mold. The sensor is inspired by the semicircular canals in mammalian vestibular systems and pairs a fluid-filled microtorus with a thermal detection principle based on thermal convection. With inherent linear acceleration insensitivity, the sensor features a sensitivity of 29.8μV/deg/s2=1.7mV/rad/s2, a dynamic range of 14,000deg/s2 and a detection limit of ~20deg/s2.",
"title": ""
},
{
"docid": "76a2bc6a8649ffe9111bfaa911572c9d",
"text": "URL shortening services have become extremely popular. However, it is still unclear whether they are an effective and reliable tool that can be leveraged to hide malicious URLs, and to what extent these abuses can impact the end users. With these questions in mind, we first analyzed existing countermeasures adopted by popular shortening services. Surprisingly, we found such countermeasures to be ineffective and trivial to bypass. This first measurement motivated us to proceed further with a large-scale collection of the HTTP interactions that originate when web users access live pages that contain short URLs. To this end, we monitored 622 distinct URL shortening services between March 2010 and April 2012, and collected 24,953,881 distinct short URLs. With this large dataset, we studied the abuse of short URLs. Despite short URLs are a significant, new security risk, in accordance with the reports resulting from the observation of the overall phishing and spamming activity, we found that only a relatively small fraction of users ever encountered malicious short URLs. Interestingly, during the second year of measurement, we noticed an increased percentage of short URLs being abused for drive-by download campaigns and a decreased percentage of short URLs being abused for spam campaigns. In addition to these security-related findings, our unique monitoring infrastructure and large dataset allowed us to complement previous research on short URLs and analyze these web services from the user's perspective.",
"title": ""
},
{
"docid": "20be8363ae04659061a56a1c7d3ee4d5",
"text": "The popularity of level sets for segmentation is mainly based on the sound and convenient treatment of regions and their boundaries. Unfortunately, this convenience is so far not known from level set methods when applied to images with more than two regions. This communication introduces a comparatively simple way how to extend active contours to multiple regions keeping the familiar quality of the two-phase case. We further suggest a strategy to determine the optimum number of regions as well as initializations for the contours",
"title": ""
},
{
"docid": "1b92f2391b35ca30b86f6d5e8fae7ffe",
"text": "In this paper, two novel compact diplexers for satellite applications are presented. The first covers the Ku-band with two closely spaced channels (Ku-transmission band: 10.7–13 GHz and Ku-reception band: 13.75–14.8 GHz). The second is wider than the first (overall bandwidth up to 50%) achieves the suppression of the higher order modes, and covers the Ku/K-band with a reception channel between 17.2 and 18.5 GHz. Both diplexers are composed of two novel bandpass filters, joined together with an E-plane T-junction. The bandpass filters are designed by combining a low-pass filtering function (based on $\\lambda $ /4-step-shaped band-stop elements separated by very short waveguide sections) and a high-pass filtering structure (based on the waveguide propagation cutoff effect). The novel diplexers show a very compact footprint and very relaxed fabrication tolerances, and are especially attractive for wideband applications. A prototype Ku/K-band diplexer has also been fabricated by milling. Measurements show a very good agreement with simulations, thereby demonstrating the validity and manufacturing robustness of the proposed topology.",
"title": ""
},
{
"docid": "e17f9e8d57c98928ecccb27e3259f2a3",
"text": "A broadcast encryption scheme allows the sender to securely distribute data to a dynamically changing set of users over an insecure channel. It has numerous applications including pay-TV systems, distribution of copyrighted material, streaming audio/video and many others. One of the most challenging settings for this problem is that of stateless receivers, where each user is given a fixed set of keys which cannot be updated through the lifetime of the system. This setting was considered by Naor, Naor and Lotspiech [NNL01], who also present a very efficient “subset difference” (SD) method for solving this problem. The efficiency of this method (which also enjoys efficient traitor tracing mechanism and several other useful features) was recently improved by Halevi and Shamir [HS02], who called their refinement the “Layered SD” (LSD) method. Both of the above methods were originally designed to work in the centralized (symmetric key) setting, where only the trusted designer of the system can encrypt messages to users. On the other hand, in many applications it is desirable not to store the secret keys “on-line”, or to allow untrusted users to broadcast information. This leads to the question of building a public key broadcast encryption scheme for stateless receivers; in particular, of extending the elegant SD/LSD methods to the public key setting. Unfortunately, Naor et al. [NNL01] notice that the natural technique for doing so will result in an enormous public key and very large storage for every user. In fact, [NNL01] pose this question of reducing the public key size and user’s storage as the first open problem of their paper. We resolve this question in the affirmative, by demonstrating that an O(1) size public key can be achieved for both of SD/LSD methods, in addition to the same (small) user’s storage and ciphertext size as in the symmetric key setting. Courant Institute of Mathematical Sciences, New York University.",
"title": ""
},
{
"docid": "212a7c22310977f6b8ada29437668ed5",
"text": "Gait analysis and machine learning classification on healthy subjects in normal walking Tomohiro Shirakawa, Naruhisa Sugiyama, Hiroshi Sato, Kazuki Sakurai & Eri Sato To cite this article: Tomohiro Shirakawa, Naruhisa Sugiyama, Hiroshi Sato, Kazuki Sakurai & Eri Sato (2015): Gait analysis and machine learning classification on healthy subjects in normal walking, International Journal of Parallel, Emergent and Distributed Systems, DOI: 10.1080/17445760.2015.1044007 To link to this article: http://dx.doi.org/10.1080/17445760.2015.1044007",
"title": ""
}
] |
scidocsrr
|
12ad2563791538b48623e362b2392f05
|
Game-theoretic Analysis of Computation Offloading for Cloudlet-based Mobile Cloud Computing
|
[
{
"docid": "0cbd3587fe466a13847e94e29bb11524",
"text": "The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?",
"title": ""
},
{
"docid": "956799f28356850fda78a223a55169bf",
"text": "Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for",
"title": ""
},
{
"docid": "aa18c10c90af93f38c8fca4eff2aab09",
"text": "The unabated flurry of research activities to augment various mobile devices by leveraging heterogeneous cloud resources has created a new research domain called Mobile Cloud Computing (MCC). In the core of such a non-uniform environment, facilitating interoperability, portability, and integration among heterogeneous platforms is nontrivial. Building such facilitators in MCC requires investigations to understand heterogeneity and its challenges over the roots. Although there are many research studies in mobile computing and cloud computing, convergence of these two areas grants further academic efforts towards flourishing MCC. In this paper, we define MCC, explain its major challenges, discuss heterogeneity in convergent computing (i.e. mobile computing and cloud computing) and networking (wired and wireless networks), and divide it into two dimensions, namely vertical and horizontal. Heterogeneity roots are analyzed and taxonomized as hardware, platform, feature, API, and network. Multidimensional heterogeneity in MCC results in application and code fragmentation problems that impede development of cross-platform mobile applications which is mathematically described. The impacts of heterogeneity in MCC are investigated, related opportunities and challenges are identified, and predominant heterogeneity handling approaches like virtualization, middleware, and service oriented architecture (SOA) are discussed. We outline open issues that help in identifying new research directions in MCC.",
"title": ""
}
] |
[
{
"docid": "6c14243c49a2d119d768685b59f9548b",
"text": "Over the past decade, researchers have shown significant advances in the area of radio frequency identification (RFID) and metamaterials. RFID is being applied to a wide spectrum of industries and metamaterial-based antennas are beginning to perform just as well as existing larger printed antennas. This paper presents two novel metamaterial-based antennas for passive ultra-high frequency (UHF) RFID tags. It is shown that by implementing omega-like elements and split-ring resonators into the design of an antenna for an UHF RFID tag, the overall size of the antenna can be significantly reduced to dimensions of less than 0.15λ0, while preserving the performance of the antenna.",
"title": ""
},
{
"docid": "03280447faf00c523b099d4bdbbfe7a5",
"text": "Ostrzenski’s G-pot anatomical structure discovery has been verified by the anatomy, histology, MRI in vivo, and electrovaginography in vivo studies. The objectives of this scientific-clinical investigation were to develop a new surgical reconstructive intervention (G-spotplasty); to determine the ability of G-spotplasty surgical implementation; to observe for potential complications; and to gather initial information on whether G-spotplasty improves female sexual activity, sexual behaviors, and sexual concerns. A case series study was designed and conducted with 5-year follow-up (October 2013 and October 2017). The rehearsal of new G-spotplasty was performed on fresh female cadavers. Three consecutive live women constituted this clinical study population, and they were subjected to the newly developed G-spotplasty procedure in October 2013. Preoperatively and postoperatively, a validated, self-completion instrument of Sexual Relationships and Activities Questionnaire (SRA-Q) was used to measure female sexual activity, sexual behaviors, and sexual concerns. Three out of twelve women met inclusion criteria and were incorporated into this study. All patients were subjected to G-spotplasty, completed 5-year follow-up, and returned completed SRA-Q in a sealed envelope. New G-spotplasty was successfully implemented without surgical difficulty and without complications. All patients reported re-establishing vaginal orgasms with different degrees of difficulties, observing return of anterior vaginal wall engorgement, and were very pleased with the outcome of G-spotplasty. The G-spotplasty is a simple surgical intervention, easy to implement, and improves sexual activities, sexual behaviors, and sexual concerns. The preliminary results are very promising and paved the way for additional clinical-scientific research. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.",
"title": ""
},
{
"docid": "349a5c840daa587aa5d42c6e584e2103",
"text": "We propose a class of functional dependencies for graphs, referred to as GFDs. GFDs capture both attribute-value dependencies and topological structures of entities, and subsume conditional functional dependencies (CFDs) as a special case. We show that the satisfiability and implication problems for GFDs are coNP-complete and NP-complete, respectively, no worse than their CFD counterparts. We also show that the validation problem for GFDs is coNP-complete. Despite the intractability, we develop parallel scalable algorithms for catching violations of GFDs in large-scale graphs. Using real-life and synthetic data, we experimentally verify that GFDs provide an effective approach to detecting inconsistencies in knowledge and social graphs.",
"title": ""
},
{
"docid": "15f2f4ba8635366e5f2879d085511f46",
"text": "Vessel segmentation is a key step for various medical applications, it is widely used in monitoring the disease progression, and evaluation of various ophthalmologic diseases. However, manual vessel segmentation by trained specialists is a repetitive and time-consuming task. In the last two decades, many approaches have been introduced to segment the retinal vessels automatically. With the more recent advances in the field of neural networks and deep learning, multiple methods have been implemented with focus on the segmentation and delineation of the blood vessels. Deep Learning methods, such as the Convolutional Neural Networks (CNN), have recently become one of the new trends in the Computer Vision area. Their ability to find strong spatially local correlations in the data at different abstraction levels allows them to learn a set of filters that are useful to correctly segment the data, when given a labeled training set. In this dissertation, different approaches based on deep learning techniques for the segmentation of retinal blood vessels are studied. Furthermore, in this dissertation are also studied and evaluated the different techniques that have been used for vessel segmentation, based on machine learning (Random Forests and Support vector machine algorithms), and how these can be combined with the deep learning approaches.",
"title": ""
},
{
"docid": "8dc2f16d4f4ed1aa0acf6a6dca0ccc06",
"text": "This is the second paper in a four-part series detailing the relative merits of the treatment strategies, clinical techniques and dental materials for the restoration of health, function and aesthetics for the dentition. In this paper the management of wear in the anterior dentition is discussed, using three case studies as illustration.",
"title": ""
},
{
"docid": "05610fd0e6373291bdb4bc28cf1c691b",
"text": "In this work, we acknowledge the need for software engineers to devise specialized tools and techniques for blockchain-oriented software development. Ensuring effective testing activities, enhancing collaboration in large teams, and facilitating the development of smart contracts all appear as key factors in the future of blockchain-oriented software development.",
"title": ""
},
{
"docid": "26cc29177040461634929eb1fa13395d",
"text": "In this paper, we first characterize distributed real-time systems by the following two properties that have to be supported: best eflorl and leas2 suffering. Then, we propose a distributed real-time object model DRO which complies these properties. Based on the DRO model, we design an object oriented programming language DROL: an extension of C++ with the capa.bility of describing distributed real-time systems. The most eminent feature of DROL is that users can describe on meta level the semantics of message communications as a communication protocol with sending and receiving primitives. With this feature, we can construct a flexible distributed real-time system satisfying specifications which include timing constraints. We implement a runtime system of DROL on the ARTS kernel, and evaluate the efficiency of the prototype implementation as well as confirm the high expressive power of the language.",
"title": ""
},
{
"docid": "f76b587a1bc282a98cf8e42bdd6f5032",
"text": "Ensemble-based methods are among the most widely used techniques for data stream classification. Their popularity is attributable to their good performance in comparison to strong single learners while being relatively easy to deploy in real-world applications. Ensemble algorithms are especially useful for data stream learning as they can be integrated with drift detection algorithms and incorporate dynamic updates, such as selective removal or addition of classifiers. This work proposes a taxonomy for data stream ensemble learning as derived from reviewing over 60 algorithms. Important aspects such as combination, diversity, and dynamic updates, are thoroughly discussed. Additional contributions include a listing of popular open-source tools and a discussion about current data stream research challenges and how they relate to ensemble learning (big data streams, concept evolution, feature drifts, temporal dependencies, and others).",
"title": ""
},
{
"docid": "5473962c6c270df695b965cbcc567369",
"text": "Medical professionals need a reliable prediction methodology to diagnose cancer and distinguish between the different stages in cancer. Classification is a data mining function that assigns items in a collection to target groups or classes. C4.5 classification algorithm has been applied to SEER breast cancer dataset to classify patients into either “Carcinoma in situ” (beginning or pre-cancer stage) or “Malignant potential” group. Pre-processing techniques have been applied to prepare the raw dataset and identify the relevant attributes for classification. Random test samples have been selected from the pre-processed data to obtain classification rules. The rule set obtained was tested with the remaining data. The results are presented and discussed. Keywords— Breast Cancer Diagnosis, Classification, Clinical Data, SEER Dataset, C4.5 Algorithm",
"title": ""
},
{
"docid": "0250d6bb0bcf11ca8af6c2661c1f7f57",
"text": "Chemoreception is a biological process essential for the survival of animals, as it allows the recognition of important volatile cues for the detection of food, egg-laying substrates, mates, or predators, among other purposes. Furthermore, its role in pheromone detection may contribute to evolutionary processes, such as reproductive isolation and speciation. This key role in several vital biological processes makes chemoreception a particularly interesting system for studying the role of natural selection in molecular adaptation. Two major gene families are involved in the perireceptor events of the chemosensory system: the odorant-binding protein (OBP) and chemosensory protein (CSP) families. Here, we have conducted an exhaustive comparative genomic analysis of these gene families in 20 Arthropoda species. We show that the evolution of the OBP and CSP gene families is highly dynamic, with a high number of gains and losses of genes, pseudogenes, and independent origins of subfamilies. Taken together, our data clearly support the birth-and-death model for the evolution of these gene families with an overall high gene turnover rate. Moreover, we show that the genome organization of the two families is significantly more clustered than expected by chance and, more important, that this pattern appears to be actively maintained across the Drosophila phylogeny. Finally, we suggest the homologous nature of the OBP and CSP gene families, dating back their most recent common ancestor after the terrestrialization of Arthropoda (380--450 Ma) and we propose a scenario for the origin and diversification of these families.",
"title": ""
},
{
"docid": "0321ef8aeb0458770cd2efc35615e11c",
"text": "Entity-relationship-structured data is becoming more important on the Web. For example, large knowledge bases have been automatically constructed by information extraction from Wikipedia and other Web sources. Entities and relationships can be represented by subject-property-object triples in the RDF model, and can then be precisely searched by structured query languages like SPARQL. Because of their Boolean-match semantics, such queries often return too few or even no results. To improve recall, it is thus desirable to support users by automatically relaxing or reformulating queries in such a way that the intention of the original user query is preserved while returning a sufficient number of ranked results. In this paper we describe comprehensive methods to relax SPARQL-like triplepattern queries in a fully automated manner. Our framework produces a set of relaxations by means of statistical language models for structured RDF data and queries. The query processing algorithms merge the results of different relaxations into a unified result list, with ranking based on any ranking function for structured queries over RDF-data. Our experimental evaluation, with two different datasets about movies and books, shows the effectiveness of the automatically generated relaxations and the improved quality of query results based on assessments collected on the Amazon Mechanical Turk platform.",
"title": ""
},
{
"docid": "e576b8677816ec54c7dcf52e633e6c9f",
"text": "OBJECTIVE\nThe objective of this study was to determine the level of knowledge, comfort, and training related to the medical management of child abuse among pediatrics, emergency medicine, and family medicine residents.\n\n\nMETHODS\nSurveys were administered to program directors and third-year residents at 67 residency programs. The resident survey included a 24-item quiz to assess knowledge regarding the medical management of physical and sexual child abuse. Sites were solicited from members of a network of child abuse physicians practicing at institutions with residency programs.\n\n\nRESULTS\nAnalyzable surveys were received from 53 program directors and 462 residents. Compared with emergency medicine and family medicine programs, pediatric programs were significantly larger and more likely to have a medical provider specializing in child abuse pediatrics, have faculty primarily responsible for child abuse training, use a written curriculum for child abuse training, and offer an elective rotation in child abuse. Exposure to child abuse training and abused patients was highest for pediatric residents and lowest for family medicine residents. Comfort with managing child abuse cases was lowest among family medicine residents. On the knowledge quiz, pediatric residents significantly outperformed emergency medicine and family medicine residents. Residents with high knowledge scores were significantly more likely to come from larger programs and programs that had a center, provider, or interdisciplinary team that specialized in child abuse pediatrics; had a physician on faculty responsible for child abuse training; used a written curriculum for child abuse training; and had a required rotation in child abuse pediatrics.\n\n\nCONCLUSIONS\nBy analyzing the relationship between program characteristics and residents' child abuse knowledge, we found that pediatric programs provide far more training and resources for child abuse education than emergency medicine and family medicine programs. As leaders, pediatricians must establish the importance of this topic in the pediatric education of residents of all specialties.",
"title": ""
},
{
"docid": "7ccd75f1626966b4ffb22f2788d64fdc",
"text": "Diabetes has affected over 246 million people worldwide with a majority of them being women. According to the WHO report, by 2025 this number is expected to rise to over 380 million. The disease has been named the fifth deadliest disease in the United States with no imminent cure in sight. With the rise of information technology and its continued advent into the medical and healthcare sector, the cases of diabetes as well as their symptoms are well documented. This paper aims at finding solutions to diagnose the disease by analyzing the patterns found in the data through classification analysis by employing Decision Tree and Naïve Bayes algorithms. The research hopes to propose a quicker and more efficient technique of diagnosing the disease, leading to timely treatment of the patients.",
"title": ""
},
{
"docid": "104fa95b500df05a052a230e80797f59",
"text": "Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning to each application. We solve this problem by developing an adaptive learning rate for stochastic inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster and to a better approximation than the best settings of hand-tuned rates.",
"title": ""
},
{
"docid": "fdc4d23fa336ca122fdfb12818901180",
"text": "Concept of communication systems, which use smart antennas is based on digital signal processing algorithms. Thus, the smart antennas system becomes capable to locate and track signals by the both: users and interferers and dynamically adapts the antenna pattern to enhance the reception in Signal-Of-Interest direction and minimizing interference in Signal-Of-Not-Interest direction. Hence, Space Division Multiple Access system, which uses smart antennas, is being used more often in wireless communications, because it shows improvement in channel capacity and co-channel interference. However, performance of smart antenna system greatly depends on efficiency of digital signal processing algorithms. The algorithm uses the Direction of Arrival (DOA) algorithms to estimate the number of incidents plane waves on the antenna array and their angle of incidence. This paper investigates performance of the DOA algorithms like MUSIC, ESPRIT and ROOT MUSIC on the uniform linear array in the presence of white noise. The simulation results show that MUSIC algorithm is the best. The resolution of the DOA techniques improves as number of snapshots, number of array elements and signalto-noise ratio increases.",
"title": ""
},
{
"docid": "a361214a42392cbd0ba3e0775d32c839",
"text": "We propose a design methodology to exploit adaptive nanodevices (memristors), virtually immune to their variability. Memristors are used as synapses in a spiking neural network performing unsupervised learning. The memristors learn through an adaptation of spike timing dependent plasticity. Neurons' threshold is adjusted following a homeostasis-type rule. System level simulations on a textbook case show that performance can compare with traditional supervised networks of similar complexity. They also show the system can retain functionality with extreme variations of various memristors' parameters, thanks to the robustness of the scheme, its unsupervised nature, and the power of homeostasis. Additionally the network can adjust to stimuli presented with different coding schemes.",
"title": ""
},
{
"docid": "71b5708fb9d078b370689cac22a66013",
"text": "This paper presents a model, synthesized from the literature, of factors that explain how business analytics contributes to business value. It also reports results from a preliminary test of that model. The model consists of two parts: a process and a variance model. The process model depicts the analyze-insight-decision-action process through which use of an organization’s business-analytic capabilities create business value. The variance model proposes that the five factors in Davenport et al.’s (2010) DELTA model of BA success factors, six from Watson and Wixom (2007), and three from Seddon et al.’s (2010) model of organizational benefits from enterprise systems, assist a firm to gain business value from business analytics. A preliminary test of the model was conducted using data from 100 customer-success stories from vendors such as IBM, SAP, and Teradata. Our conclusion is that the model is likely to be a useful basis for future research.",
"title": ""
},
{
"docid": "7cfc2866218223ba6bd56eb1f10ce29f",
"text": "This paper deals with prediction of anopheles number, the main vector of malaria risk, using environmental and climate variables. The variables selection is based on an automatic machine learning method using regression trees, and random forests combined with stratified two levels cross validation. The minimum threshold of variables importance is accessed using the quadratic distance of variables importance while the optimal subset of selected variables is used to perform predictions. Finally the results revealed to be qualitatively better, at the selection, the prediction, and the CPU time point of view than those obtained by GLM-Lasso method.",
"title": ""
},
{
"docid": "577841609abb10a978ed54429f057def",
"text": "Smart environments integrates various types of technologies, including cloud computing, fog computing, and the IoT paradigm. In such environments, it is essential to organize and manage efficiently the broad and complex set of heterogeneous resources. For this reason, resources classification and categorization becomes a vital issue in the control system. In this paper we make an exhaustive literature survey about the various computing systems and architectures which defines any type of ontology in the context of smart environments, considering both, authors that explicitly propose resources categorization and authors that implicitly propose some resources classification as part of their system architecture. As part of this research survey, we have built a table that summarizes all research works considered, and which provides a compact and graphical snapshot of the current classification trends. The goal and primary motivation of this literature survey has been to understand the current state of the art and identify the gaps between the different computing paradigms involved in smart environment scenarios. As a result, we have found that it is essential to consider together several computing paradigms and technologies, and that there is not, yet, any research work that integrates a merged resources classification, taxonomy or ontology required in such heterogeneous scenarios.",
"title": ""
},
{
"docid": "6a240e0f0944117cf17f4ec1e613d94a",
"text": "This paper presents a simple method for “do as I do\" motion transfer: given a source video of a person dancing we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We pose this problem as a per-frame image-to-image translation with spatio-temporal smoothing. Using pose detections as an intermediate representation between source and target, we learn a mapping from pose images to a target subject’s appearance. We adapt this setup for temporally coherent video generation including realistic face synthesis. Our video demo can be found at https://youtu.be/PCBTZh41Ris.",
"title": ""
}
] |
scidocsrr
|
6c71a1f3fd813d27efa4b205e5cb8dac
|
Advanced Demand Side Management for the Future Smart Grid Using Mechanism Design
|
[
{
"docid": "adec3b3578d56cefed73fd74d270ca22",
"text": "In the framework of liberalized electricity markets, distributed generation and controllable demand have the opportunity to participate in the real-time operation of transmission and distribution networks. This may be done by using the virtual power plant (VPP) concept, which consists of aggregating the capacity of many distributed energy resources (DER) in order to make them more accessible and manageable across energy markets. This paper provides an optimization algorithm to manage a VPP composed of a large number of customers with thermostatically controlled appliances. The algorithm, based on a direct load control (DLC), determines the optimal control schedules that an aggregator should apply to the controllable devices of the VPP in order to optimize load reduction over a specified control period. The results define the load reduction bid that the aggregator can present in the electricity market, thus helping to minimize network congestion and deviations between generation and demand. The proposed model, which is valid for both transmission and distribution networks, is tested on a real power system to demonstrate its applicability.",
"title": ""
}
] |
[
{
"docid": "5f393e79895bf234c0b96b7ece0d1cae",
"text": "Energy consumption of routers in commonly used mesh-based on-chip networks for chip multiprocessors is an increasingly important concern: these routers consist of a crossbar and complex control logic and can require significant buffers, hence high energy and area consumption. In contrast, an alternative design uses ring-based networks to connect network nodes with small and simple routers. Rings have been used in recent commercial designs, and are well-suited to smaller core counts. However, rings do not scale as efficiently as meshes. In this paper, we propose an energy-efficient yet high performance alternative to traditional mesh-based and ringbased on-chip networks. We aim to attain the scalability of meshes with the router simplicity and efficiency of rings. Our design is a hierarchical ring topology which consists of small local rings connected via one or more global ring. Routing between rings is accomplished using bridge routers that have minimal buffering, and use deflection in place of buffered flow control for simplicity. We comprehensively explore new issues in the design of such a topology, including the design of the routers, livelock freedom, energy, performance and scalability. We propose new router microarchitectures and show that these routers are significantly simpler and more area and energy efficient than both buffered and bufferless mesh based routers. We develop new mechanisms to preserve livelock-free routing in our topology and router design. Our evaluations compare our proposal to a traditional ring network and conventional buffered and bufferless mesh based networks, showing that our proposal reduces average network power by 52.4% (30.4%) and router area footprint by 70.5% from a buffered mesh in 16-node (64-node) configurations, while also improving system performance by 0.6% (5.0%).",
"title": ""
},
{
"docid": "9a2ab1d198468819f32a2b74334528ae",
"text": "This paper introduces GeoSpark an in-memory cluster computing framework for processing large-scale spatial data. GeoSpark consists of three layers: Apache Spark Layer, Spatial RDD Layer and Spatial Query Processing Layer. Apache Spark Layer provides basic Spark functionalities that include loading / storing data to disk as well as regular RDD operations. Spatial RDD Layer consists of three novel Spatial Resilient Distributed Datasets (SRDDs) which extend regular Apache Spark RDDs to support geometrical and spatial objects. GeoSpark provides a geometrical operations library that accesses Spatial RDDs to perform basic geometrical operations (e.g., Overlap, Intersect). System users can leverage the newly defined SRDDs to effectively develop spatial data processing programs in Spark. The Spatial Query Processing Layer efficiently executes spatial query processing algorithms (e.g., Spatial Range, Join, KNN query) on SRDDs. GeoSpark also allows users to create a spatial index (e.g., R-tree, Quad-tree) that boosts spatial data processing performance in each SRDD partition. Preliminary experiments show that GeoSpark achieves better run time performance than its Hadoop-based counterparts (e.g., SpatialHadoop).",
"title": ""
},
{
"docid": "b27224825bb28b9b8d0eea37f8900d42",
"text": "The use of Convolutional Neural Networks (CNN) in natural im age classification systems has produced very impressive results. Combined wit h the inherent nature of medical images that make them ideal for deep-learning, fu rther application of such systems to medical image classification holds much prom ise. However, the usefulness and potential impact of such a system can be compl etely negated if it does not reach a target accuracy. In this paper, we present a s tudy on determining the optimum size of the training data set necessary to achiev e igh classification accuracy with low variance in medical image classification s ystems. The CNN was applied to classify axial Computed Tomography (CT) imag es into six anatomical classes. We trained the CNN using six different sizes of training data set ( 5, 10, 20, 50, 100, and200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts G eneral Hospital (MGH) Picture Archiving and Communication System (PACS). U sing this data, we employ the learning curve approach to predict classificat ion ccuracy at a given training sample size. Our research will present a general me thodology for determining the training data set size necessary to achieve a cert in target classification accuracy that can be easily applied to other problems within such systems.",
"title": ""
},
{
"docid": "46768aeb3c9295a38ff64b3e40a34ec1",
"text": "Google's monolithic repository provides a common source of truth for tens of thousands of developers around the world.",
"title": ""
},
{
"docid": "09f743b18655305b7ad1e39432756525",
"text": "Several applications of chalcones and their derivatives encouraged researchers to increase their synthesis as an alternative for the treatment of pathogenic bacterial and fungal infections. In the present study, chalcone derivatives were synthesized through cross aldol condensation reaction between 4-(N,N-dimethylamino)benzaldehyde and multiarm aromatic ketones. The multiarm aromatic ketones were synthesized through nucleophilic substitution reaction between 4-hydroxy acetophenone and benzyl bromides. The benzyl bromides, multiarm aromatic ketones, and corresponding chalcone derivatives were evaluated for their activities against eleven clinical pathogenic Gram-positive, Gram-negative bacteria, and three pathogenic fungi by the disk diffusion method. The minimum inhibitory concentration was determined by the microbroth dilution technique. The results of the present study demonstrated that benzyl bromide derivatives have strong antibacterial and antifungal properties as compared to synthetic chalcone derivatives and ketones. Benzyl bromides (1a and 1c) showed high ester activity against Gram-positive bacteria and fungi but moderate activity against Gram-negative bacteria. Therefore, these compounds may be considered as good antibacterial and antifungal drug discovery. However, substituted ketones (2a-b) as well as chalcone derivatives (3a-c) showed no activity against all the tested strains except for ketone (2c), which showed moderate activity against Candida albicans.",
"title": ""
},
{
"docid": "d88523afba42431989f5d3bd22f2ad85",
"text": "The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. How to effectively integrate local and contextual visual cues from these regions has become a fundamental problem in object detection. Most existing works simply concatenated features or scores obtained from support regions. In this paper, we proposal a novel gated bi-directional CNN (GBD-Net) to pass messages between features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships and their close iterations are modeled in a much more complex way. It is also shown that message passing is not always helpful depending on individual samples. Gated functions are further introduced to control message transmission and their on-and-off is controlled by extra visual evidence from the input sample. GBD-Net is implemented under the Fast RCNN detection framework. Its effectiveness is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO.",
"title": ""
},
{
"docid": "4bdccdda47aea04c5877587daa0e8118",
"text": "Recognizing text character from natural scene images is a challenging problem due to background interferences and multiple character patterns. Scene Text Character (STC) recognition, which generally includes feature representation to model character structure and multi-class classification to predict label and score of character class, mostly plays a significant role in word-level text recognition. The contribution of this paper is a complete performance evaluation of image-based STC recognition, by comparing different sampling methods, feature descriptors, dictionary sizes, coding and pooling schemes, and SVM kernels. We systematically analyze the impact of each option in the feature representation and classification. The evaluation results on two datasets CHARS74K and ICDAR2003 demonstrate that Histogram of Oriented Gradient (HOG) descriptor, soft-assignment coding, max pooling, and Chi-Square Support Vector Machines (SVM) obtain the best performance among local sampling based feature representations. To improve STC recognition, we apply global sampling feature representation. We generate Global HOG (GHOG) by computing HOG descriptor from global sampling. GHOG enables better character structure modeling and obtains better performance than local sampling based feature representations. The GHOG also outperforms existing methods in the two benchmark datasets.",
"title": ""
},
{
"docid": "dcda412c18e92650d9791023f13e4392",
"text": "Graph can straightforwardly represent the relations between the objects, which inevitably draws a lot of attention of both academia and industry. Achievements mainly concentrate on homogeneous graph and bipartite graph. However, it is difficult to use existing algorithm in actual scenarios. Because in the real world, the type of the objects and the relations are diverse and the amount of the data can be very huge. Considering of the characteristics of \"black market\", we proposeHGsuspector, a novel and scalable algorithm for detecting collective fraud in directed heterogeneous graphs.We first decompose directed heterogeneous graphs into a set of bipartite graphs, then we define a metric on each connected bipartite graph and calculate scores of it, which fuse the structure information and event probability. The threshold for distinguishing between normal and abnormal can be obtained by statistic or other anomaly detection algorithms in scores space. We also provide a technical solution for fraud detection in e-commerce scenario, which has been successfully applied in Jingdong e-commerce platform to detect collective fraud in real time. The experiments on real-world datasets, which has billion nodes and edges, demonstrate that HGsuspector is more accurate and fast than the most practical and state-of-the-art approach by far.",
"title": ""
},
{
"docid": "e6300989e5925d38d09446b3e43092e5",
"text": "Cloud computing provides resources as services in pay-as-you-go mode to customers by using virtualization technology. As virtual machine (VM) is hosted on physical server, great energy is consumed by maintaining the servers in data center. More physical servers means more energy consumption and more money cost. Therefore, the VM placement (VMP) problem is significant in cloud computing. This paper proposes an approach based on ant colony optimization (ACO) to solve the VMP problem, named as ACO-VMP, so as to effectively use the physical resources and to reduce the number of running physical servers. The number of physical servers is the same as the number of the VMs at the beginning. Then the ACO approach tries to reduce the physical server one by one. We evaluate the performance of the proposed ACO-VMP approach in solving VMP with the number of VMs being up to 600. Experimental results compared with the ones obtained by the first-fit decreasing (FFD) algorithm show that ACO-VMP can solve VMP more efficiently to reduce the number of physical servers significantly, especially when the number of VMs is large.",
"title": ""
},
{
"docid": "af81774bce83971009c26fba730bfba3",
"text": "In this paper, we present a stereo visual-inertial odometry algorithm assembled with three separated Kalman filters, i.e., attitude filter, orientation filter, and position filter. Our algorithm carries out the orientation and position estimation with three filters working on different fusion intervals, which can provide more robustness even when the visual odometry estimation fails. In our orientation estimation, we propose an improved indirect Kalman filter, which uses the orientation error space represented by unit quaternion as the state of the filter. The performance of the algorithm is demonstrated through extensive experimental results, including the benchmark KITTI datasets and some challenging datasets captured in a rough terrain campus.",
"title": ""
},
{
"docid": "b776bf3acb830552eb1ecf353b08edee",
"text": "The size and high rate of change of source code comprising a software system make it difficult for software developers to keep up with who on the team knows about particular parts of the code. Existing approaches to this problem are based solely on authorship of code. In this paper, we present data from two professional software development teams to show that both authorship and interaction information about how a developer interacts with the code are important in characterizing a developer's knowledge of code. We introduce the degree-of-knowledge model that computes automatically a real value for each source code element based on both authorship and interaction information. We show that the degree-of-knowledge model can provide better results than an existing expertise finding approach and also report on case studies of the use of the model to support knowledge transfer and to identify changes of interest.",
"title": ""
},
{
"docid": "c3218724e6237c3d51eb41bed1cd5268",
"text": "Recently, wireless sensor networks (WSNs) have become mature enough to go beyond being simple fine-grained continuous monitoring platforms and become one of the enabling technologies for disaster early-warning systems. Event detection functionality of WSNs can be of great help and importance for (near) real-time detection of, for example, meteorological natural hazards and wild and residential fires. From the data-mining perspective, many real world events exhibit specific patterns, which can be detected by applying machine learning (ML) techniques. In this paper, we introduce ML techniques for distributed event detection in WSNs and evaluate their performance and applicability for early detection of disasters, specifically residential fires. To this end, we present a distributed event detection approach incorporating a novel reputation-based voting and the decision tree and evaluate its performance in terms of detection accuracy and time complexity.",
"title": ""
},
{
"docid": "8e8b199787fcc8bf813037fbc26d1be3",
"text": "Recent work on imitation learning has generated policies that reproduce expert behavior from multi-modal data. However, past approaches have focused only on recreating a small number of distinct, expert maneuvers, or have relied on supervised learning techniques that produce unstable policies. This work extends InfoGAIL, an algorithm for multi-modal imitation learning, to reproduce behavior over an extended period of time. Our approach involves reformulating the typical imitation learning setting to include “burn-in demonstrations” upon which policies are conditioned at test time. We demonstrate that our approach outperforms standard InfoGAIL in maximizing the mutual information between predicted and unseen style labels in road scene simulations, and we show that our method leads to policies that imitate expert autonomous driving systems over long time horizons.",
"title": ""
},
{
"docid": "8434630dc54c3015a50d04abba004aca",
"text": "Wolfram syndrome, also known by the mnemonic DIDMOAD (diabetes insipidus, diabetes mellitus, optic atrophy and deafness) is a rare progressive neurodegenerative disorder. This syndrome is further divided to WFS1 and WFS2 based on the different genetic molecular basis and clinical features. In this report, we described a known case of Wolfram syndrome requiring anesthesia for cochlear implantation. Moreover, a brief review of molecular genetics and anesthetic considerations are presented.",
"title": ""
},
{
"docid": "9f3e9e7c493b3b62c7ec257a00f43c20",
"text": "The wind stroke is a common syndrome in clinical disease; the physicians of past generations accumulated much experience in long-term clinical practice and left abundant literature. Looking from this literature, the physicians of past generations had different cognitions of the wind stroke, especially the concept of wind stroke. The connotation of wind stroke differed at different stages, going through a gradually changing process from exogenous disease, true wind stroke, apoplectic wind stroke to cerebral apoplexy.",
"title": ""
},
{
"docid": "bdaa8b87cdaef856b88b7397ddc77d97",
"text": "In artificial neural networks (ANNs), the activation function most used in practice are the logistic sigmoid function and the hyperbolic tangent function. The activation functions used in ANNs have been said to play an important role in the convergence of the learning algorithms. In this paper, we evaluate the use of different activation functions and suggest the use of three new simple functions, complementary log-log, probit and log-log, as activation functions in order to improve the performance of neural networks. Financial time series were used to evaluate the performance of ANNs models using these new activation functions and to compare their performance with some activation functions existing in the literature. This evaluation is performed through two learning algorithms: conjugate gradient backpropagation with Fletcher–Reeves updates and Levenberg–Marquardt.",
"title": ""
},
{
"docid": "d34759a882df6bc482b64530999bcda3",
"text": "The Static Single Assignment (SSA) form is a program representation used in many optimizing compilers. The key step in converting a program to SSA form is called φ-placement. Many algorithms for φ-placement have been proposed in the literature, but the relationships between these algorithms are not well understood.In this article, we propose a framework within which we systematically derive (i) properties of the SSA form and (ii) φ-placement algorithms. This framework is based on a new relation called merge which captures succinctly the structure of a program's control flow graph that is relevant to its SSA form. The φ-placement algorithms we derive include most of the ones described in the literature, as well as several new ones. We also evaluate experimentally the performance of some of these algorithms on the SPEC92 benchmarks.Some of the algorithms described here are optimal for a single variable. However, their repeated application is not necessarily optimal for multiple variables. We conclude the article by describing such an optimal algorithm, based on the transitive reduction of the merge relation, for multi-variable φ-placement in structured programs. The problem for general programs remains open.",
"title": ""
},
{
"docid": "7e9dbc7f1c3855972dbe014e2223424c",
"text": "Speech disfluencies (filled pauses, repe titions, repairs, and false starts) are pervasive in spontaneous speech. The ab ility to detect and correct disfluencies automatically is important for effective natural language understanding, as well as to improve speech models in general. Previous approaches to disfluency detection have relied heavily on lexical information, which makes them less applicable when word recognition is unreliable. We have developed a disfluency detection method using decision tree classifiers that use only local and automatically extracted prosodic features. Because the model doesn’t rely on lexical information, it is widely applicable even when word recognition is unreliable. The model performed significantly better than chance at detecting four disfluency types. It also outperformed a language model in the detection of false starts, given the correct transcription. Combining the prosody model with a specialized language model improved accuracy over either model alone for the detection of false starts. Results suggest that a prosody-only model can aid the automatic detection of disfluencies in spontaneous speech.",
"title": ""
},
{
"docid": "d24ca3024b5abc27f6eb2ad5698a320b",
"text": "Purpose. To study the fracture behavior of the major habit faces of paracetamol single crystals using microindentation techniques and to correlate this with crystal structure and molecular packing. Methods. Vicker's microindentation techniques were used to measure the hardness and crack lengths. The development of all the major radial cracks was analyzed using the Laugier relationship and fracture toughness values evaluated. Results. Paracetamol single crystals showed severe cracking and fracture around all Vicker's indentations with a limited zone of plastic deformation close to the indent. This is consistent with the material being a highly brittle solid that deforms principally by elastic deformation to fracture rather than by plastic flow. Fracture was associated predominantly with the (010) cleavage plane, but was also observed parallel to other lattice planes including (110), (210) and (100). The cleavage plane (010) had the lowest fracture toughness value, Kc = 0.041MPa m1/2, while the greatest value, Kc = 0.105MPa m1/2; was obtained for the (210) plane. Conclusions. Paracetamol crystals showed severe cracking and fracture because of the highly brittle nature of the material. The fracture behavior could be explained on the basis of the molecular packing arrangement and the calculated attachment energies across the fracture planes.",
"title": ""
}
] |
scidocsrr
|
b8d4821e7398675fb93265e3ed8ba517
|
PoseShop: Human Image Database Construction and Personalized Content Synthesis
|
[
{
"docid": "5cfc4911a59193061ab55c2ce5013272",
"text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"title": ""
}
] |
[
{
"docid": "000bdac12cd4254500e22b92b1906174",
"text": "In this paper we address the topic of generating automatically accurate, meaning preserving and syntactically correct paraphrases of natural language sentences. The design of methods and tools for paraphrasing natural language text is a core task of natural language processing and is quite useful in many applications and procedures. We present a methodology and a tool developed that performs deep analysis of natural language sentences and generate paraphrases of them. The tool performs deep analysis of the natural language sentence and utilizes sets of paraphrasing techniques that can be used to transform structural parts of the dependency tree of a sentence to an equivalent form and also change sentence words with their synonyms and antonyms. In the evaluation study the performance of the method is examined and the accuracy of the techniques is assessed in terms of syntactic correctness and meaning preserving. The results collected are very promising and show the method to be accurate and able to generate quality paraphrases.",
"title": ""
},
{
"docid": "d8d0b6d8b422b8d1369e99ff8b9dee0e",
"text": "The advent of massive open online courses (MOOCs) poses new learning opportunities for learners as well as challenges for researchers and designers. MOOC students approach MOOCs in a range of fashions, based on their learning goals and preferred approaches, which creates new opportunities for learners but makes it difficult for researchers to figure out what a student’s behavior means, and makes it difficult for designers to develop MOOCs appropriate for all of their learners. Towards better understanding the learners who take MOOCs, we conduct a survey of MOOC learners’ motivations and correlate it to which students complete the course according to the pace set by the instructor/platform (which necessitates having the goal of completing the course, as well as succeeding in that goal). The results showed that course completers tend to be more interested in the course content, whereas non-completers tend to be more interested in MOOCs as a type of learning experience. Contrary to initial hypotheses, however, no substantial differences in mastery-goal orientation or general academic efficacy were observed between completers and non-completers. However, students who complete the course tend to have more self-efficacy for their ability to complete the course, from the beginning.",
"title": ""
},
{
"docid": "b79fb02d0b89d288b1733c3194e304ec",
"text": "In this paper, the idea of a Prepaid energy meter using an AT89S52 microcontroller has been introduced. This concept provides a cost efficient manner of electricity billing. The present energy billing systems are discrete, inaccurate, costly and slow. They are also time and labour consuming. The major drawback of traditional billing system is power and energy theft. This drawback is reduced by using a prepaid energy meter which is based on the concept “Pay first and then use it”. Prepaid energy meter also reduces the error made by humans while taking readings to a large extent and there is no need to take reading in it. The prepaid energy meter uses a recharge card which is available in various ranges (i.e. Rs. 50, Rs. 100, Rs. 200, etc.). The recharge is done by using a keypad and the meter is charged with the amount. According to the power consumption, the amount will be reduced. An LDR (light Dependant Resistor) circuit counts the amount of energy consumed and displays the remaining amount of energy on the LCD. A relay system has been used which shut down or disconnect the energy meter and load through supply mains when the recharge amount is depleted. A buzzer is used as an alarm which starts before the recharge amount reaches a minimum value.",
"title": ""
},
{
"docid": "60edfab6fa5f127dd51a015b20d12a68",
"text": "We discuss the ethical implications of Natural Language Generation systems. We use one particular system as a case study to identify and classify issues, and we provide an ethics checklist, in the hope that future system designers may benefit from conducting their own ethics reviews based on our checklist.",
"title": ""
},
{
"docid": "35aa75f5bd79c8d97e374c33f5bad615",
"text": "Historically, much attention has been given to the unit processes and the integration of those unit processes to improve product yield. Less attention has been given to the wafer environment, either during or post processing. This paper contains a detailed discussion on how particles and Airborne Molecular Contaminants (AMCs) from the wafer environment interact and produce undesired effects on the wafer. Sources of wafer environmental contamination are the process itself, ambient environment, outgassing from wafers, and FOUP contamination. Establishing a strategy that reduces contamination inside the FOUP will increase yield and decrease defect variability. Three primary variables that greatly impact this strategy are FOUP contamination mitigation, FOUP material, and FOUP metrology and cleaning method.",
"title": ""
},
{
"docid": "034bf47c5982756a1cf1c1ccd777d604",
"text": "We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.",
"title": ""
},
{
"docid": "4b2b199aeb61128cbee7691bc49e16f5",
"text": "Although deep learning approaches have achieved performance surpassing humans for still image-based face recognition, unconstrained video-based face recognition is still a challenging task due to large volume of data to be processed and intra/inter-video variations on pose, illumination, occlusion, scene, blur, video quality, etc. In this work, we consider challenging scenarios for unconstrained video-based face recognition from multiple-shot videos and surveillance videos with low-quality frames. To handle these problems, we propose a robust and efficient system for unconstrained video-based face recognition, which is composed of face/fiducial detection, face association, and face recognition. First, we use multi-scale single-shot face detectors to efficiently localize faces in videos. The detected faces are then grouped respectively through carefully designed face association methods, especially for multi-shot videos. Finally, the faces are recognized by the proposed face matcher based on an unsupervised subspace learning approach and a subspace-tosubspace similarity metric. Extensive experiments on challenging video datasets, such as Multiple Biometric Grand Challenge (MBGC), Face and Ocular Challenge Series (FOCS), JANUS Challenge Set 6 (CS6) for low-quality surveillance videos and IARPA JANUS Benchmark B (IJB-B) for multiple-shot videos, demonstrate that the proposed system can accurately detect and associate faces from unconstrained videos and effectively learn robust and discriminative features for recognition.",
"title": ""
},
{
"docid": "ca9da9f8113bc50aaa79d654a9eaf95a",
"text": "Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights. Following this principle, we reformulate the random forest method of Breiman (2001) into a neural network setting, and in turn propose two new hybrid procedures that we call neural random forests. Both predictors exploit prior knowledge of regression trees for their architecture, have less parameters to tune than standard networks, and less restrictions on the geometry of the decision boundaries. Consistency results are proved, and substantial numerical evidence is provided on both synthetic and real data sets to assess the excellent performance of our methods in a large variety of prediction problems. Index Terms — Random forests, neural networks, ensemble methods, randomization, sparse networks. 2010 Mathematics Subject Classification: 62G08, 62G20, 68T05.",
"title": ""
},
{
"docid": "bbe59dd74c554d92167f42701a1f8c3d",
"text": "Finding subgraph isomorphisms is an important problem in many applications which deal with data modeled as graphs. While this problem is NP-hard, in recent years, many algorithms have been proposed to solve it in a reasonable time for real datasets using different join orders, pruning rules, and auxiliary neighborhood information. However, since they have not been empirically compared one another in most research work, it is not clear whether the later work outperforms the earlier work. Another problem is that reported comparisons were often done using the original authors’ binaries which were written in different programming environments. In this paper, we address these serious problems by re-implementing five state-of-the-art subgraph isomorphism algorithms in a common code base and by comparing them using many real-world datasets and their query loads. Through our in-depth analysis of experimental results, we report surprising empirical findings.",
"title": ""
},
{
"docid": "80af9f789b334aae324b549fffe4511a",
"text": "The research community is interested in developing automatic systems for the detection of events in video. This is particularly important in the field of sports data analytics. This paper presents an approach for identifying major complex events in soccer videos, starting from object detection and spatial relations between objects. The proposed framework, firstly, detects objects from each single video frame providing a set of candidate objects with associated confidence scores. The event detection system, then, detects events by means of rules which are based on temporal and logical combinations of the detected objects and their relative distances. The effectiveness of the framework is preliminary demonstrated over different events like \"Ball possession\" and \"Kicking the ball\".",
"title": ""
},
{
"docid": "17ec5256082713e85c819bb0a0dd3453",
"text": "Scholarly documents contain multiple figures representing experimental findings. These figures are generated from data which is not reported anywhere else in the paper. We propose a modular architecture for analyzing such figures. Our architecture consists of the following modules: 1. An extractor for figures and associated metadata (figure captions and mentions) from PDF documents; 2. A Search engine on the extracted figures and metadata; 3. An image processing module for automated data extraction from the figures and 4. A natural language processing module to understand the semantics of the figure. We discuss the challenges in each step, report an extractor algorithm to extract vector graphics from scholarly documents and a classification algorithm for figures. Our extractor algorithm improves the state of the art by more than 10% and the classification process is very scalable, yet achieves 85\\% accuracy. We also describe a semi-automatic system for data extraction from figures which is integrated with our search engine to improve user experience.",
"title": ""
},
{
"docid": "a2376c57c3c1c51f57f84788f4c6669f",
"text": "Text categorization is a significant tool to manage and organize the surging text data. Many text categorization algorithms have been explored in previous literatures, such as KNN, Naïve Bayes and Support Vector Machine. KNN text categorization is an effective but less efficient classification method. In this paper, we propose an improved KNN algorithm for text categorization, which builds the classification model by combining constrained one pass clustering algorithm and KNN text categorization. Empirical results on three benchmark corpuses show that our algorithm can reduce the text similarity computation substantially and outperform the-state-of-the-art KNN, Naïve Bayes and Support Vector Machine classifiers. In addition, the classification model constructed by the proposed algorithm can be updated incrementally, and it is valuable in practical application.",
"title": ""
},
{
"docid": "cc06553e4d03bf8541597d01de4d5eae",
"text": "Several technologies are used today to improve safety in transportation systems. The development of a system for drivability based on both V2V and V2I communication is considered an important task for the future. V2X communication will be a next step for the transportation safety in the nearest time. A lot of different structures, architectures and communication technologies for V2I based systems are under development. Recently a global paradigm shift known as the Internet-of-Things (IoT) appeared and its integration with V2I communication could increase the safety of future transportation systems. This paper brushes up on the state-of-the-art of systems based on V2X communications and proposes an approach for system architecture design of a safe intelligent driver assistant system using IoT communication. In particular, the paper presents the design process of the system architecture using IDEF modeling methodology and data flows investigations. The proposed approach shows the system design based on IoT architecture reference model.",
"title": ""
},
{
"docid": "db857ce571add6808493f64d9e254655",
"text": "(MANETs). MANET is a temporary network with a group of wireless infrastructureless mobile nodes that communicate with each other within a rapidly dynamic topology. The FMLB protocol distributes transmitted packets over multiple paths through the mobile nodes using Fibonacci sequence. Such distribution can increase the delivery ratio since it reduces the congestion. The FMLB protocol's responsibility is balancing the packet transmission over the selected paths and ordering them according to hops count. The shortest path is used frequently more than other ones. The simulation results show that the proposed protocol has achieved an enhancement on packet delivery ratio, up to 21%, as compared to the Ad Hoc On-demand Distance Vector routing protocol (AODV) protocol. Also the results show the effect of nodes pause time on the data delivery. Finally, the simulation results are obtained by the well-known Glomosim Simulator, version 2.03, without any distance or location measurements devices.",
"title": ""
},
{
"docid": "5552216832bb7315383d1c4f2bfe0635",
"text": "Semantic parsing maps sentences to formal meaning representations, enabling question answering, natural language interfaces, and many other applications. However, there is no agreement on what the meaning representation should be, and constructing a sufficiently large corpus of sentence-meaning pairs for learning is extremely challenging. In this paper, we argue that both of these problems can be avoided if we adopt a new notion of semantics. For this, we take advantage of symmetry group theory, a highly developed area of mathematics concerned with transformations of a structure that preserve its key properties. We define a symmetry of a sentence as a syntactic transformation that preserves its meaning. Semantically parsing a sentence then consists of inferring its most probable orbit under the language’s symmetry group, i.e., the set of sentences that it can be transformed into by symmetries in the group. The orbit is an implicit representation of a sentence’s meaning that suffices for most applications. Learning a semantic parser consists of discovering likely symmetries of the language (e.g., paraphrases) from a corpus of sentence pairs with the same meaning. Once discovered, symmetries can be composed in a wide variety of ways, potentially resulting in an unprecedented degree of immunity to syntactic variation.",
"title": ""
},
{
"docid": "cea53ea6ff16808a2dbc8680d3ef88ee",
"text": "Applying deep reinforcement learning (RL) on real systems suffers from slow data sampling. We propose an enhanced generative adversarial network (EGAN) to initialize an RL agent in order to achieve faster learning. The EGAN utilizes the relation between states and actions to enhance the quality of data samples generated by a GAN. Pre-training the agent with the EGAN shows a steeper learning curve with a 20% improvement of training time in the beginning of learning, compared to no pre-training, and an improvement compared to training with GAN by about 5% with smaller variations. For real time systems with sparse and slow data sampling the EGAN could be used to speed up the early phases of the training process.",
"title": ""
},
{
"docid": "a90dd405d9bd2ed912cacee098c0f9db",
"text": "Many telecommunication companies today have actively started to transform the way they do business, going beyond communication infrastructure providers are repositioning themselves as data-driven service providers to create new revenue streams. In this paper, we present a novel industrial application where a scalable Big data approach combined with deep learning is used successfully to classify massive mobile web log data, to get new aggregated insights on customer web behaviors that could be applied to various industry verticals.",
"title": ""
},
{
"docid": "0952701dd63326f8a78eb5bc9a62223f",
"text": "The self-organizing map (SOM) is an automatic data-analysis method. It is widely applied to clustering problems and data exploration in industry, finance, natural sciences, and linguistics. The most extensive applications, exemplified in this paper, can be found in the management of massive textual databases and in bioinformatics. The SOM is related to the classical vector quantization (VQ), which is used extensively in digital signal processing and transmission. Like in VQ, the SOM represents a distribution of input data items using a finite set of models. In the SOM, however, these models are automatically associated with the nodes of a regular (usually two-dimensional) grid in an orderly fashion such that more similar models become automatically associated with nodes that are adjacent in the grid, whereas less similar models are situated farther away from each other in the grid. This organization, a kind of similarity diagram of the models, makes it possible to obtain an insight into the topographic relationships of data, especially of high-dimensional data items. If the data items belong to certain predetermined classes, the models (and the nodes) can be calibrated according to these classes. An unknown input item is then classified according to that node, the model of which is most similar with it in some metric used in the construction of the SOM. A new finding introduced in this paper is that an input item can even more accurately be represented by a linear mixture of a few best-matching models. This becomes possible by a least-squares fitting procedure where the coefficients in the linear mixture of models are constrained to nonnegative values.",
"title": ""
},
{
"docid": "154f5455f593e8ebf7058cc0a32426a2",
"text": "Many life-log analysis applications, which transfer data from cameras and sensors to a Cloud and analyze them in the Cloud, have been developed with the spread of various sensors and Cloud computing technologies. However, difficulties arise because of the limitation of the network bandwidth between the sensors and the Cloud. In addition, sending raw sensor data to a Cloud may introduce privacy issues. Therefore, we propose distributed deep learning processing between sensors and the Cloud in a pipeline manner to reduce the amount of data sent to the Cloud and protect the privacy of the users. In this paper, we have developed a pipeline-based distributed processing method for the Caffe deep learning framework and investigated the processing times of the classification by varying a division point and the parameters of the network models using data sets, CIFAR-10 and ImageNet. The experiments show that the accuracy of deep learning with coarse-grain data is comparable to that with the default parameter settings, and the proposed distributed processing method has performance advantages in cases of insufficient network bandwidth with actual sensors and a Cloud environment.",
"title": ""
},
{
"docid": "11ddbce61cb175e9779e0fcb5622436f",
"text": "When rewards are sparse and efficient exploration essential, deep Q-learning with -greedy exploration tends to fail. This poses problems for otherwise promising domains such as task-oriented dialog systems, where the primary reward signal, indicating successful completion, typically occurs only at the end of each episode but depends on the entire sequence of utterances. A poor agent encounters such successful dialogs rarely, and a random agent may never stumble upon a successful outcome in reasonable time. We present two techniques that significantly improve the efficiency of exploration for deep Q-learning agents in dialog systems. First, we demonstrate that exploration by Thompson sampling, using Monte Carlo samples from a Bayes-by-Backprop neural network, yields marked improvement over standard DQNs with Boltzmann or -greedy exploration. Second, we show that spiking the replay buffer with a small number of successes, as are easy to harvest for dialog tasks, can make Q-learning feasible when it might otherwise fail catastrophically.",
"title": ""
}
] |
scidocsrr
|
9833a2433885a7438b81d64f39712970
|
Theoretical Design of Broadband Multisection Wilkinson Power Dividers With Arbitrary Power Split Ratio
|
[
{
"docid": "786d1ba82d326370684395eba5ef7cd3",
"text": "A miniaturized dual-band Wilkinson power divider with a parallel LC circuit at the midpoints of two coupled-line sections is proposed in this paper. General design equations for parallel inductor L and capacitor C are derived from even- and odd-mode analysis. Generally speaking, characteristic impedances between even and odd modes are different in two coupled-line sections, and their electrical lengths are also different in inhomogeneous medium. This paper proved that a parallel LC circuit compensates for the characteristic impedance differences and the electrical length differences for dual-band operation. In other words, the proposed model provides self-compensation structure, and no extra compensation circuits are needed. Moreover, the upper limit of the frequency ratio range can be adjusted by two coupling strengths, where loose coupling for the first coupled-line section and tight coupling for the second coupled-line section are preferred for a wider frequency ratio range. Finally, an experimental circuit shows good agreement with the theoretical simulation.",
"title": ""
}
] |
[
{
"docid": "6850b52405e8056710f4b3010858cfbe",
"text": "spread of misinformation, rumors and hoaxes. The goal of this work is to introduce a simple modeling framework to study the diffusion of hoaxes and in particular how the availability of debunking information may contain their diffusion. As traditionally done in the mathematical modeling of information diffusion processes, we regard hoaxes as viruses: users can become infected if they are exposed to them, and turn into spreaders as a consequence. Upon verification, users can also turn into non-believers and spread the same attitude with a mechanism analogous to that of the hoax-spreaders. Both believers and non-believers, as time passes, can return to a susceptible state. Our model is characterized by four parameters: spreading rate, gullibility, probability to verify a hoax, and that to forget one's current belief. Simulations on homogeneous, heterogeneous, and real networks for a wide range of parameters values reveal a threshold for the fact-checking probability that guarantees the complete removal of the hoax from the network. Via a mean field approximation, we establish that the threshold value does not depend on the spreading rate but only on the gullibility and forgetting probability. Our approach allows to quantitatively gauge the minimal reaction necessary to eradicate a hoax.",
"title": ""
},
{
"docid": "28b2bbcfb8960ff40f2fe456a5b00729",
"text": "This paper presents an adaptation of Lesk’s dictionary– based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the Senseval-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation",
"title": ""
},
{
"docid": "ca990b1b43ca024366a2fe73e2a21dae",
"text": "Guanabenz (2,6-dichlorobenzylidene-amino-guanidine) is a centrally acting antihypertensive drug whose mechanism of action is via alpha2 adrenoceptors or, more likely, imidazoline receptors. Guanabenz is marketed as an antihypertensive agent in human medicine (Wytensin tablets, Wyeth Pharmaceuticals). Guanabenz has reportedly been administered to racing horses and is classified by the Association of Racing Commissioners International as a class 3 foreign substance. As such, its identification in a postrace sample may result in significant sanctions against the trainer of the horse. The present study examined liquid chromatographic/tandem quadrupole mass spectrometric (LC-MS/MS) detection of guanabenz in serum samples from horses treated with guanabenz by rapid i.v. injection at 0.04 and 0.2 mg/kg. Using a method adapted from previous work with clenbuterol, the parent compound was detected in serum with an apparent limit of detection of approximately 0.03 ng/ml and the limit of quantitation was 0.2 ng/ml. Serum concentrations of guanabenz peaked at approximately 100 ng/ml after the 0.2 mg/kg dose, and the parent compound was detected for up to 8 hours after the 0.04 mg/kg dose. Urine samples tested after administration of guanabenz at these dosages yielded evidence of at least one glucuronide metabolite, with the glucuronide ring apparently linked to a ring hydroxyl group or a guanidinium hydroxylamine. The LC-MS/MS results presented here form the basis of a confirmatory test for guanabenz in racing horses.",
"title": ""
},
{
"docid": "c4ab0d1934e5c2eb4fc16915f1868ab8",
"text": "During medicine studies, visualization of certain elements is common and indispensable in order to get more information about the way they work. Currently, we resort to the use of photographs -which are insufficient due to being staticor tests in patients, which can be invasive or even risky. Therefore, a low-cost approach is proposed by using a 3D visualization. This paper presents a holographic system built with low-cost materials for teaching obstetrics, where student interaction is performed by using voice and gestures. Our solution, which we called HoloMed, is focused on the projection of a euthocic normal delivery under a web-based infrastructure which also employs a Kinect. HoloMed is divided in three (3) essential modules: a gesture analyzer, a data server, and a holographic projection architecture, which can be executed in several interconnected computers using different network protocols. Tests used for determining the user’s position, illumination factors, and response times, demonstrate HoloMed’s effectiveness as a low-cost system for teaching, using a natural user interface and 3D images.",
"title": ""
},
{
"docid": "4a5c784fd5678666b57c841dfc26f5e8",
"text": "This paperdemonstratesa methodology tomodel and evaluatethe faulttolerancecharacteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major so_ problems and error distributions, we develop two leveis of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a dislributed environmenL Based oft the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated. Results show that I/O management and program flow control are the major sources of software problems in the measured IBM/MVS and VAX/VMS operating systems, while memory management is the major source of software problems in the TandeJn/GUARDIAN operating system. Software errors tend to occur in bursts on both IBM and VAX machines. This phenomemm islesspronounced in theTandem system,which can be attributed to its fault-tolerant design. The fault tolerance in the Tandem system reduces the loss of service due to software failures by an order of magnitude. Although the measured Tandem system is an experimental system working under accelerated stresses, the loss of service due to software problems is much smaller than that in the measured VAX/VMS and IBM/MVS systems. It is shown that the softwme Time To Error distributions obtained _rom data are not simple exponentials. This is in contrast with the conunon assumption of exponential failure times made in fanh-tolerant software models. Investigation of error conelatiom show that about 10% of software failures in the VAXcluster and 20% in the Tandem system occuned conctmeafly on multiple machines. The network-related software in the VAXcluster and the memory management software in the Tandem system are suspected to be software reliability bottlenecks for concurrent failures.",
"title": ""
},
{
"docid": "b27dc4a19b44bf2fd13f299de8c33108",
"text": "A large proportion of the world’s population lives in remote rural areas that are geographically isolated and sparsely populated. This paper proposed a hybrid power generation system suitable for remote area application. The concept of hybridizing renewable energy sources is that the base load is to be covered by largest and firmly available renewable source(s) and other intermittent source(s) should augment the base load to cover the peak load of an isolated mini electric grid system. The study is based on modeling, simulation and optimization of renewable energy system in rural area in Sundargarh district of Orissa state, India. The model has designed to provide an optimal system conFigureuration based on hour-by-hour data for energy availability and demands. Various renewable/alternative energy sources, energy storage and their applicability in terms of cost and performance are discussed. The homer software is used to study and design the proposed hybrid alternative energy power system model. The Sensitivity analysis was carried out using Homer program. Based on simulation results, it has been found that renewable/alternative energy sources will replace the conventional energy sources and would be a feasible solution for distribution of electric power for stand alone applications at remote and distant locations.",
"title": ""
},
{
"docid": "d0bacaa267599486356c175ca5419ede",
"text": "As P4 and its associated compilers move beyond relative immaturity, there is a need for common evaluation criteria. In this paper, we propose Whippersnapper, a set of benchmarks for P4. Rather than simply selecting a set of representative data-plane programs, the benchmark is designed from first principles, identifying and exploring key features and metrics. We believe the benchmark will not only provide a vehicle for comparing implementations and designs, but will also generate discussion within the larger community about the requirements for data-plane languages.",
"title": ""
},
{
"docid": "5399b924cdf1d034a76811360b6c018d",
"text": "Psychological construction models of emotion state that emotions are variable concepts constructed by fundamental psychological processes, whereas according to basic emotion theory, emotions cannot be divided into more fundamental units and each basic emotion is represented by a unique and innate neural circuitry. In a previous study, we found evidence for the psychological construction account by showing that several brain regions were commonly activated when perceiving different emotions (i.e. a general emotion network). Moreover, this set of brain regions included areas associated with core affect, conceptualization and executive control, as predicted by psychological construction models. Here we investigate directed functional brain connectivity in the same dataset to address two questions: 1) is there a common pathway within the general emotion network for the perception of different emotions and 2) if so, does this common pathway contain information to distinguish between different emotions? We used generalized psychophysiological interactions and information flow indices to examine the connectivity within the general emotion network. The results revealed a general emotion pathway that connects neural nodes involved in core affect, conceptualization, language and executive control. Perception of different emotions could not be accurately classified based on the connectivity patterns from the nodes of the general emotion pathway. Successful classification was achieved when connections outside the general emotion pathway were included. We propose that the general emotion pathway functions as a common pathway within the general emotion network and is involved in shared basic psychological processes across emotions. However, additional connections within the general emotion network are required to classify different emotions, consistent with a constructionist account.",
"title": ""
},
{
"docid": "64dc0a4b8392efc03b20fef7437eb55c",
"text": "This paper investigates how retailers at different stages of e-commerce maturity evaluate their entry to e-commerce activities. The study was conducted using qualitative approach interviewing 16 retailers in Saudi Arabia. It comes up with 22 factors that are believed the most influencing factors for retailers in Saudi Arabia. Interestingly, there seem to be differences between retailers in companies at different maturity stages in terms of having different attitudes regarding the issues of using e-commerce. The businesses that have reached a high stage of e-commerce maturity provide practical evidence of positive and optimistic attitudes and practices regarding use of e-commerce, whereas the businesses that have not reached higher levels of maturity provide practical evidence of more negative and pessimistic attitudes and practices. The study, therefore, should contribute to efforts leading to greater e-commerce development in Saudi Arabia and other countries with similar context.",
"title": ""
},
{
"docid": "c21c58dbdf413a54036ac5e6849f81e1",
"text": "We discuss the problem of extending data mining approaches to cases in which data points arise in the form of individual graphs. Being able to find the intrinsic low-dimensionality in ensembles of graphs can be useful in a variety of modeling contexts, especially when coarse-graining the detailed graph information is of interest. One of the main challenges in mining graph data is the definition of a suitable pairwise similarity metric in the space of graphs. We explore two practical solutions to solving this problem: one based on finding subgraph densities, and one using spectral information. The approach is illustrated on three test data sets (ensembles of graphs); two of these are obtained from standard graph generating algorithms, while the graphs in the third example are sampled as dynamic snapshots from an evolving network simulation.",
"title": ""
},
{
"docid": "7875910ad044232b4631ecacfec65656",
"text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ff4c2f1467a141894dbe76491bc06d3b",
"text": "Railways is the major means of transport in most of the countries. Rails are the backbone of the track structure and should be protected from defects. Surface defects are irregularities in the rails caused due to the shear stresses between the rails and wheels of the trains. This type of defects should be detected to avoid rail fractures. The objective of this paper is to propose an innovative technique to detect the surface defect on rail heads. In order to identify the defects, it is essential to extract the rails from the background and further enhance the image for thresholding. The proposed method uses Binary Image Based Rail Extraction (BIBRE) algorithm to extract the rails from the background. The extracted rails are enhanced to achieve uniform background with the help of direct enhancement method. The direct enhancement method enhance the image by enhancing the brightness difference between objects and their backgrounds. The enhanced rail image uses Gabor filters to identify the defects from the rails. The Gabor filters maximizes the energy difference between defect and defect less surface. Thresholding is done based on the energy of the defects. From the thresholded image the defects are identified and a message box is generated when there is a presence of defects.",
"title": ""
},
{
"docid": "c9cd19c2e8ee4b07f969280672d521bf",
"text": "The owner and users of a sensor network may be different, which necessitates privacy-preserving access control. On the one hand, the network owner need enforce strict access control so that the sensed data are only accessible to users willing to pay. On the other hand, users wish to protect their respective data access patterns whose disclosure may be used against their interests. This paper presents DP2AC, a Distributed Privacy-Preserving Access Control scheme for sensor networks, which is the first work of its kind. Users in DP2AC purchase tokens from the network owner whereby to query data from sensor nodes which will reply only after validating the tokens. The use of blind signatures in token generation ensures that tokens are publicly verifiable yet unlinkable to user identities, so privacy-preserving access control is achieved. A central component in DP2AC is to prevent malicious users from reusing tokens, for which we propose a suite of distributed token reuse detection (DTRD) schemes without involving the base station. These schemes share the essential idea that a sensor node checks with some other nodes (called witnesses) whether a token has been used, but they differ in how the witnesses are chosen. We thoroughly compare their performance with regard to TRD capability, communication overhead, storage overhead, and attack resilience. The efficacy and efficiency of DP2AC are confirmed by detailed performance evaluations.",
"title": ""
},
{
"docid": "21e17ad2d2a441940309b7eacd4dec6e",
"text": "ÐWith a huge amount of data stored in spatial databases and the introduction of spatial components to many relational or object-relational databases, it is important to study the methods for spatial data warehousing and OLAP of spatial data. In this paper, we study methods for spatial OLAP, by integration of nonspatial OLAP methods with spatial database implementation techniques. A spatial data warehouse model, which consists of both spatial and nonspatial dimensions and measures, is proposed. Methods for computation of spatial data cubes and analytical processing on such spatial data cubes are studied, with several strategies proposed, including approximation and selective materialization of the spatial objects resulted from spatial OLAP operations. The focus of our study is on a method for spatial cube construction, called object-based selective materialization, which is different from cuboid-based selective materialization proposed in previous studies of nonspatial data cube construction. Rather than using a cuboid as an atomic structure during the selective materialization, we explore granularity on a much finer level, that of a single cell of a cuboid. Several algorithms are proposed for object-based selective materialization of spatial data cubes and the performance study has demonstrated the effectiveness of these techniques. Index TermsÐData warehouse, data mining, online analytical processing (OLAP), spatial databases, spatial data analysis, spatial",
"title": ""
},
{
"docid": "b7bf3ae864ce774874041b0e5308323f",
"text": "This paper examines factors that influence prices of most common five cryptocurrencies such Bitcoin, Ethereum, Dash, Litecoin, and Monero over 20102018 using weekly data. The study employs ARDL technique and documents several findings. First, cryptomarket-related factors such as market beta, trading volume, and volatility appear to be significant determinant for all five cryptocurrencies both in shortand long-run. Second, attractiveness of cryptocurrencies also matters in terms of their price determination, but only in long-run. This indicates that formation (recognition) of the attractiveness of cryptocurrencies are subjected to time factor. In other words, it travels slowly within the market. Third, SP500 index seems to have weak positive long-run impact on Bitcoin, Ethereum, and Litcoin, while its sign turns to negative losing significance in short-run, except Bitcoin that generates an estimate of -0.20 at 10% significance level. Lastly, error-correction models for Bitcoin, Etherem, Dash, Litcoin, and Monero show that cointegrated series cannot drift too far apart, and converge to a longrun equilibrium at a speed of 23.68%, 12.76%, 10.20%, 22.91%, and 14.27% respectively.",
"title": ""
},
{
"docid": "85fc78cc3f71b784063b8b564e6509a9",
"text": "Numerous research papers have listed different vectors of personally identifiable information leaking via tradition al and mobile Online Social Networks (OSNs) and highlighted the ongoing aggregation of data about users visiting popular We b sites. We argue that the landscape is worsening and existing proposals (including the recent U.S. Federal Trade Commission’s report) do not address several key issues. We examined over 100 popular non-OSN Web sites across a number of categories where tens of millions of users representing d iverse demographics have accounts, to see if these sites leak private information to prominent aggregators. Our results raise considerable concerns: we see leakage in sites for every category we examined; fully 56% of the sites directly leak pieces of private information with this result growing to 75% if we also include leakage of a site userid. Sensitive search strings sent to healthcare Web sites and travel itineraries on flight reservation sites are leaked in 9 of the top 10 sites studied for each category. The community needs a clear understanding of the shortcomings of existing privac y protection measures and the new proposals. The growing disconnect between the protection measures and increasing leakage and linkage suggests that we need to move beyond the losing battle with aggregators and examine what roles first-party sites can play in protecting privacy of their use rs.",
"title": ""
},
{
"docid": "587f7821fc7ecfe5b0bbbd3b08b9afe2",
"text": "The most commonly used method for cuffless blood pressure (BP) measurement is using pulse transit time (PTT), which is based on Moens-Korteweg (M-K) equation underlying the assumption that arterial geometries such as the arterial diameter keep unchanged. However, the arterial diameter is dynamic which varies over the cardiac cycle, and it is regulated through the contraction or relaxation of the vascular smooth muscle innervated primarily by the sympathetic nervous system. This may be one of the main reasons that impair the BP estimation accuracy. In this paper, we propose a novel indicator, the photoplethysmogram (PPG) intensity ratio (PIR), to evaluate the arterial diameter change. The deep breathing (DB) maneuver and Valsalva maneuver (VM) were performed on five healthy subjects for assessing parasympathetic and sympathetic nervous activities, respectively. Heart rate (HR), PTT, PIR and BP were measured from the simultaneously recorded electrocardiogram (ECG), PPG, and continuous BP. It was found that PIR increased significantly from inspiration to expiration during DB, whilst BP dipped correspondingly. Nevertheless, PIR changed positively with BP during VM. In addition, the spectral analysis revealed that the dominant frequency component of PIR, HR and SBP, shifted significantly from high frequency (HF) to low frequency (LF), but not obvious in that of PTT. These results demonstrated that PIR can be potentially used to evaluate the smooth muscle tone which modulates arterial BP in the LF range. The PTT-based BP measurement that take into account the PIR could therefore improve its estimation accuracy.",
"title": ""
},
{
"docid": "9b17c6ff30e91f88e52b2db4eb331478",
"text": "Network traffic classification has become significantly important with rapid growth of current Internet network and online applications. There have been numerous studies on this topic which have led to many different approaches. Most of these approaches use predefined features extracted by an expert in order to classify network traffic. In contrast, in this study, we propose a deep learning based approach which integrates both feature extraction and classification phases into one system. Our proposed scheme, called “Deep Packet,” can handle both traffic characterization, in which the network traffic is categorized into major classes (e.g., FTP and P2P), and application identification in which identification of end-user applications (e.g., BitTorrent and Skype) is desired. Contrary to the most of current methods, Deep Packet can identify encrypted traffic and also distinguishes between VPN and non-VPN network traffic. After an initial pre-processing phase on data, packets are fed into Deep Packet framework that embeds stacked autoencoder and convolution neural network (CNN) in order to classify network traffic. Deep packet with CNN as its classification model achieved F1 score of 0.95 in application identification task and it also accomplished F1 score of 0.97 in traffic characterization task. To the best of our knowledge, Deep Packet outperforms all of the proposed classification methods on UNB ISCX VPN-nonVPN dataset.",
"title": ""
},
{
"docid": "9fd5e182851ff0be67e8865c336a1f77",
"text": "Following the developments of wireless and mobile communication technologies, mobile-commerce (M-commerce) has become more and more popular. However, most of the existing M-commerce protocols do not consider the user anonymity during transactions. This means that it is possible to trace the identity of a payer from a M-commerce transaction. Luo et al. in 2014 proposed an NFC-based anonymous mobile payment protocol. It used an NFC-enabled smartphone and combined a built-in secure element (SE) as a trusted execution environment to build an anonymous mobile payment service. But their scheme has several problems and cannot be functional in practice. In this paper, we introduce a new NFC-based anonymous mobile payment protocol. Our scheme has the following features:(1) Anonymity. It prevents the disclosure of user's identity by using virtual identities instead of real identity during the transmission. (2) Efficiency. Confidentiality is achieved by symmetric key cryptography instead of public key cryptography so as to increase the performance. (3) Convenience. The protocol is based on NFC and is EMV compatible. (4) Security. All the transaction is either encrypted or signed by the sender so the confidentiality and authenticity are preserved.",
"title": ""
},
{
"docid": "3d04155f68912f84b02788f93e9da74c",
"text": "Data partitioning significantly improves the query performance in distributed database systems. A large number of techniques have been proposed to efficiently partition a dataset for a given query workload. However, many modern analytic applications involve ad-hoc or exploratory analysis where users do not have a representative query workload upfront. Furthermore, workloads change over time as businesses evolve or as analysts gain better understanding of their data. Static workload-based data partitioning techniques are therefore not suitable for such settings. In this paper, we describe the demonstration of Amoeba, a distributed storage system which uses adaptive multi-attribute data partitioning to efficiently support ad-hoc as well as recurring queries. Amoeba applies a robust partitioning algorithm such that ad-hoc queries on all attributes have similar performance gains. Thereafter, Amoeba adaptively repartitions the data based on the observed query sequence, i.e., the system improves over time. All along Amoeba offers both adaptivity (i.e., adjustments according to workload changes) as well as robustness (i.e., avoiding performance spikes due to workload changes). We propose to demonstrate Amoeba on scenarios from an internet-ofthings startup that tracks user driving patterns. We invite the audience to interactively fire fast ad-hoc queries, observe multi-dimensional adaptivity, and play with a robust/reactive knob in Amoeba. The web front end displays the layout changes, runtime costs, and compares it to Spark with both default and workload-aware partitioning.",
"title": ""
}
] |
scidocsrr
|
ec6484ba5c85d5feffa574b53588b534
|
Houdini, an Annotation Assistant for ESC/Java
|
[
{
"docid": "cb1952a4931955856c6479d7054c57e7",
"text": "This paper presents a static race detection analysis for multithreaded Java programs. Our analysis is based on a formal type system that is capable of capturing many common synchronization patterns. These patterns include classes with internal synchronization, classes thatrequire client-side synchronization, and thread-local classes. Experience checking over 40,000 lines of Java code with the type system demonstrates that it is an effective approach for eliminating races conditions. On large examples, fewer than 20 additional type annotations per 1000 lines of code were required by the type checker, and we found a number of races in the standard Java libraries and other test programs.",
"title": ""
}
] |
[
{
"docid": "3fcce3664db5812689c121138e2af280",
"text": "We examine and compare simulation-based algorithms for solving the agent scheduling problem in a multiskill call center. This problem consists in minimizing the total costs of agents under constraints on the expected service level per call type, per period, and aggregated. We propose a solution approach that combines simulation with integer or linear programming, with cut generation. In our numerical experiments with realistic problem instances, this approach performs better than all other methods proposed previously for this problem. We also show that the two-step approach, which is the standard method for solving this problem, sometimes yield solutions that are highly suboptimal and inferior to those obtained by our proposed method. 2009 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "e7968b6bfb3535907b380cfd93128b0e",
"text": "We present a novel solution to the problem of depth reconstruction from a single image. Single view 3D reconstruction is an ill-posed problem. We address this problem by using an example-based synthesis approach. Our method uses a database of objects from a single class (e.g. hands, human figures) containing example patches of feasible mappings from the appearance to the depth of each object. Given an image of a novel object, we combine the known depths of patches from similar objects to produce a plausible depth estimate. This is achieved by optimizing a global target function representing the likelihood of the candidate depth. We demonstrate how the variability of 3D shapes and their poses can be handled by updating the example database on-the-fly. In addition, we show how we can employ our method for the novel task of recovering an estimate for the occluded backside of the imaged objects. Finally, we present results on a variety of object classes and a range of imaging conditions.",
"title": ""
},
{
"docid": "89596e6eedbc1f13f63ea144b79fdc64",
"text": "This paper describes our work in integrating three different lexical resources: FrameNet, VerbNet, and WordNet, into a unified, richer knowledge-base, to the end of enabling more robust semantic parsing. The construction of each of these lexical resources has required many years of laborious human effort, and they all have their strengths and shortcomings. By linking them together, we build an improved resource in which (1) the coverage of FrameNet is extended, (2) the VerbNet lexicon is augmented with frame semantics, and (3) selectional restrictions are implemented using WordNet semantic classes. The synergistic exploitation of various lexical resources is crucial for many complex language processing applications, and we prove it once again effective in building a robust semantic parser.",
"title": ""
},
{
"docid": "24e31f9cdedcc7aa8f9489db9db13f94",
"text": "A basic ingredient in transformational leadership development consists in identifying leadership qualities via distribution of the multifactor leadership questionnaire (MLQ) to followers of the target leaders. It is vital that the MLQ yields an accurate and unbiased assessment of leaders on the various leadership dimensions. This article focuses on two sources of bias which may occur in identifying leadership qualities. First, when followers assess the strengths and weaknesses of their leaders, they may have difficulty in differentiating between the various transformational and transactional leadership behaviours. It is found that this is only the case for the transformational leadership attributes because the four transformational leadership dimensions measured by the MLQ correlate highly and cluster into one factor. MLQ ratings on the three transactional leadership dimensions are found not to be interrelated and show evidence for three distinct factors: contingent reward, active management-by-exception and passive leadership. Second, social desirability does not seem to be a strong biasing factor, although the transformational leadership scale is somewhat more socially desirable. These findings emphasize that the measurement of so-called “new” leadership qualities remains a controversial issue in leadership development. Practical implications of these findings and avenues for future research are also discussed.",
"title": ""
},
{
"docid": "44e7e452b9b27d2028d15c88256eff30",
"text": "In social media communication, multilingual speakers often switch between languages, and, in such an environment, automatic language identification becomes both a necessary and challenging task. In this paper, we describe our work in progress on the problem of automatic language identification for the language of social media. We describe a new dataset that we are in the process of creating, which contains Facebook posts and comments that exhibit code mixing between Bengali, English and Hindi. We also present some preliminary word-level language identification experiments using this dataset. Different techniques are employed, including a simple unsupervised dictionary-based approach, supervised word-level classification with and without contextual clues, and sequence labelling using Conditional Random Fields. We find that the dictionary-based approach is surpassed by supervised classification and sequence labelling, and that it is important to take contextual clues into consideration.",
"title": ""
},
{
"docid": "32fd7a91091f74a5ea55226aa44403d3",
"text": "Previous research has shown that patients with schizophrenia are impaired in reinforcement learning tasks. However, behavioral learning curves in such tasks originate from the interaction of multiple neural processes, including the basal ganglia- and dopamine-dependent reinforcement learning (RL) system, but also prefrontal cortex-dependent cognitive strategies involving working memory (WM). Thus, it is unclear which specific system induces impairments in schizophrenia. We recently developed a task and computational model allowing us to separately assess the roles of RL (slow, cumulative learning) mechanisms versus WM (fast but capacity-limited) mechanisms in healthy adult human subjects. Here, we used this task to assess patients' specific sources of impairments in learning. In 15 separate blocks, subjects learned to pick one of three actions for stimuli. The number of stimuli to learn in each block varied from two to six, allowing us to separate influences of capacity-limited WM from the incremental RL system. As expected, both patients (n = 49) and healthy controls (n = 36) showed effects of set size and delay between stimulus repetitions, confirming the presence of working memory effects. Patients performed significantly worse than controls overall, but computational model fits and behavioral analyses indicate that these deficits could be entirely accounted for by changes in WM parameters (capacity and reliability), whereas RL processes were spared. These results suggest that the working memory system contributes strongly to learning impairments in schizophrenia.",
"title": ""
},
{
"docid": "3df8f7669b6a9d3509cf72eaa8d94248",
"text": "Current forensic tools for examination of embedded systems like mobile phones and PDA’s mostly perform data extraction on a logical level and do not consider the type of storage media during data analysis. This paper suggests a low level approach for the forensic examination of flash memories and describes three low-level data acquisition methods for making full memory copies of flash memory devices. Results are presented of a file system study in which USB memory sticks from 45 different make and models were used. For different mobile phones is shown how full memory copies of their flash memories can be made and which steps are needed to translate the extracted data into a format that can be understood by common forensic media analysis tools. Artifacts, caused by flash specific operations like block erasing and wear leveling, are discussed and directions are given for enhanced data recovery and analysis on data originating from flash memory.",
"title": ""
},
{
"docid": "7a180e503a0b159d545047443524a05a",
"text": "We present two methods for determining the sentiment expressed by a movie review. The semantic orientation of a review can be positive, negative, or neutral. We examine the effect of valence shifters on classifying the reviews. We examine three types of valence shifters: negations, intensifiers, and diminishers. Negations are used to reverse the semantic polarity of a particular term, while intensifiers and diminishers are used to increase and decrease, respectively, the degree to which a term is positive or negative. The first method classifies reviews based on the number of positive and negative terms they contain. We use the General Inquirer to identify positive and negative terms, as well as negation terms, intensifiers, and diminishers. We also use positive and negative terms from other sources, including a dictionary of synonym differences and a very large Web corpus. To compute corpus-based semantic orientation values of terms, we use their association scores with a small group of positive and negative terms. We show that extending the term-counting method with contextual valence shifters improves the accuracy of the classification. The second method uses a Machine Learning algorithm, Support Vector Machines. We start with unigram features and then add bigrams that consist of a valence shifter and another word. The accuracy of classification is very high, and the valence shifter bigrams slightly improve it. The features that contribute to the high accuracy are the words in the lists of positive and negative terms. Previous work focused on either the term-counting method or the Machine Learning method. We show that combining the two methods achieves better results than either method alone.",
"title": ""
},
{
"docid": "5c8923335dd4ee4c2123b5b3245fb595",
"text": "Virtualization is a key enabler of Cloud computing. Due to the numerous vulnerabilities in current implementations of virtualization, security is the major concern of Cloud computing. In this paper, we propose an enhanced security framework to detect intrusions at the virtual network layer of Cloud. It combines signature and anomaly based techniques to detect possible attacks. It uses different classifiers viz; naive bayes, decision tree, random forest, extra trees and linear discriminant analysis for an efficient and effective detection of intrusions. To detect distributed attacks at each cluster and at whole Cloud, it collects intrusion evidences from each region of Cloud and applies Dempster-Shafer theory (DST) for final decision making. We analyze the proposed security framework in terms of Cloud IDS requirements through offline simulation using different intrusion datasets.",
"title": ""
},
{
"docid": "24411f7fe027e5eb617cf48c3e36ce05",
"text": "Reliability assessment of distribution system, based on historical data and probabilistic methods, leads to an unreliable estimation of reliability indices since the data for the distribution components are usually inaccurate or unavailable. Fuzzy logic is an efficient method to deal with the uncertainty in reliability inputs. In this paper, the ENS index along with other commonly used indices in reliability assessment are evaluated for the distribution system using fuzzy logic. Accordingly, the influential variables on the failure rate and outage duration time of the distribution components, which are natural or human-made, are explained using proposed fuzzy membership functions. The reliability indices are calculated and compared for different cases of the system operations by simulation on the IEEE RBTS Bus 2. The results of simulation show how utilities can significantly improve the reliability of their distribution system by considering the risk of the influential variables.",
"title": ""
},
{
"docid": "ef771fa11d9f597f94cee5e64fcf9fd6",
"text": "The principle of artificial curiosity directs active exploration towards the most informative or most interesting data. We show its usefulness for global black box optimization when data point evaluations are expensive. Gaussian process regression is used to model the fitness function based on all available observations so far. For each candidate point this model estimates expected fitness reduction, and yields a novel closed-form expression of expected information gain. A new type of Pareto-front algorithm continually pushes the boundary of candidates not dominated by any other known data according to both criteria, using multi-objective evolutionary search. This makes the exploration-exploitation trade-off explicit, and permits maximally informed data selection. We illustrate the robustness of our approach in a number of experimental scenarios.",
"title": ""
},
{
"docid": "53b6315bfb8fcfef651dd83138b11378",
"text": "We illustrate the correspondence between uncertainty sets in robust optimization and some popular risk measures in finance, and show how robust optimization can be used to generalize the concepts of these risk measures. We also show that by using properly defined uncertainty sets in robust optimization models, one can construct coherent risk measures. Our results have implications for efficient portfolio optimization under different measures of risk. Department of Mathematics, National University of Singapore, Singapore 117543. Email: matkbn@nus.edu.sg. The research of the author was partially supported by Singapore-MIT Alliance, NUS Risk Management Institute and NUS startup grants R-146-050-070-133 & R146-050-070-101. Division of Mathematics and Sciences, Babson College, Babson Park, MA 02457, USA. E-mail: dpachamanova@babson.edu. Research supported by the Gill grant from the Babson College Board of Research. NUS Business School, National University of Singapore. Email: dscsimm@nus.edu.sg. The research of the author was partially supported by Singapore-MIT Alliance, NUS Risk Management Institute and NUS academic research grant R-314-000-066-122 and R-314-000-068-122.",
"title": ""
},
{
"docid": "913e167521f0ce7a7f1fb0deac58ae9c",
"text": "Prospect theory is a descriptive theory of how individuals choose among risky alternatives. The theory challenged the conventional wisdom that economic decision makers are rational expected utility maximizers. We present a number of empirical demonstrations that are inconsistent with the classical theory, expected utility, but can be explained by prospect theory. We then discuss the prospect theory model, including the value function and the probability weighting function. We conclude by highlighting several applications of the theory.",
"title": ""
},
{
"docid": "cbaf7cd4e17c420b7546d132959b3283",
"text": "User mobility has given rise to a variety of Web applications, in which the global positioning system (GPS) plays many important roles in bridging between these applications and end users. As a kind of human behavior, transportation modes, such as walking and driving, can provide pervasive computing systems with more contextual information and enrich a user's mobility with informative knowledge. In this article, we report on an approach based on supervised learning to automatically infer users' transportation modes, including driving, walking, taking a bus and riding a bike, from raw GPS logs. Our approach consists of three parts: a change point-based segmentation method, an inference model and a graph-based post-processing algorithm. First, we propose a change point-based segmentation method to partition each GPS trajectory into separate segments of different transportation modes. Second, from each segment, we identify a set of sophisticated features, which are not affected by differing traffic conditions (e.g., a person's direction when in a car is constrained more by the road than any change in traffic conditions). Later, these features are fed to a generative inference model to classify the segments of different modes. Third, we conduct graph-based postprocessing to further improve the inference performance. This postprocessing algorithm considers both the commonsense constraints of the real world and typical user behaviors based on locations in a probabilistic manner. The advantages of our method over the related works include three aspects. (1) Our approach can effectively segment trajectories containing multiple transportation modes. (2) Our work mined the location constraints from user-generated GPS logs, while being independent of additional sensor data and map information like road networks and bus stops. (3) The model learned from the dataset of some users can be applied to infer GPS data from others. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change-point-based segmentation method and Decision Tree-based inference model, we achieved prediction accuracy greater than 71 percent. Further, using the graph-based post-processing algorithm, the performance attained a 4-percent enhancement.",
"title": ""
},
{
"docid": "c8f9d10de0d961e4ee14b6b118b5f89a",
"text": "Deep learning is having a transformative effect on how sensor data are processed and interpreted. As a result, it is becoming increasingly feasible to build sensor-based computational models that are much more robust to real-world noise and complexity than previously possible. It is paramount that these innovations reach mobile and embedded devices that often rely on understanding and reacting to sensor data. However, deep models conventionally demand a level of system resources (e.g., memory and computation) that makes them problematic to run directly on constrained devices. In this work, we present the DeepX toolkit (DXTK); an opensource collection of software components for simplifying the execution of deep models on resource-sensitive platforms. DXTK contains a number of pre-trained low-resource deep models that users can quickly adopt and integrate for their particular application needs. It also offers a range of runtime options for executing deep models on range of devices including both Android and Linux variants. But the heart of DXTK is a series of optimization techniques (viz. weight/sparse factorization, convolution separation, precision scaling, and parameter cleaning). Each technique offers a complementary approach to shaping system resource requirements, and is compatible with deep and convolutional neural networks. We hope that DXTK proves to be a valuable resource for the community, and accelerates the adoption and study of resource-constrained deep learning.",
"title": ""
},
{
"docid": "ddf09617b266d483d5e3ab3dcb479b69",
"text": "Writing a research article can be a daunting task, and often, writers are not certain what should be included and how the information should be conveyed. Fortunately, scientific and engineering journal articles follow an accepted format. They contain an introduction which includes a statement of the problem, a literature review, and a general outline of the paper, a methods section detailing the methods used, separate or combined results, discussion and application sections, and a final summary and conclusions section. Here, each of these elements is described in detail using examples from the published literature as illustration. Guidance is also provided with respect to style, getting started, and the revision/review process.",
"title": ""
},
{
"docid": "16de36d6bf6db7c294287355a44d0f61",
"text": "The Computational Linguistics (CL) Summarization Pilot Task was created to encourage a community effort to address the research problem of summarizing research articles as “faceted summaries” in the domain of computational linguistics. In this pilot stage, a handannotated set of citing papers was provided for ten reference papers to help in automating the citation span and discourse facet identification problems. This paper details the corpus construction efforts by the organizers and the participating teams, who also participated in the task-based evaluation. The annotated development corpus used for this pilot task is publicly available at: https://github.com/WING-",
"title": ""
},
{
"docid": "c718b84951edfe294b8287ef3f5a9c6a",
"text": "Dynamic Searchable Symmetric Encryption (DSSE) allows a client to perform keyword searches over encrypted files via an encrypted data structure. Despite its merits, DSSE leaks search and update patterns when the client accesses the encrypted data structure. These leakages may create severe privacy problems as already shown, for example, in recent statistical attacks on DSSE. While Oblivious Random Access Memory (ORAM) can hide such access patterns, it incurs significant communication overhead and, therefore, it is not yet fully practical for cloud computing systems. Hence, there is a critical need to develop private access schemes over the encrypted data structure that can seal the leakages of DSSE while achieving practical search/update operations.\n In this paper, we propose a new oblivious access scheme over the encrypted data structure for searchable encryption purposes, that we call <u>D</u>istributed <u>O</u>blivious <u>D</u>ata structure <u>DSSE</u> (DOD-DSSE). The main idea is to create a distributed encrypted incidence matrix on two non-colluding servers such that no arbitrary queries on these servers can be linked to each other. This strategy prevents not only recent statistical attacks on the encrypted data structure but also other potential threats exploiting query linkability. Our security analysis proves that DOD-DSSE ensures the unlink-ability of queries and, therefore, offers much higher security than traditional DSSE. At the same time, our performance evaluation demonstrates that DOD-DSSE is two orders of magnitude faster than ORAM-based techniques (e.g., Path ORAM), since it only incurs a small-constant number of communication overhead. That is, we deployed DOD-DSSE on geographically distributed Amazon EC2 servers, and showed that, a search/update operation on a very large dataset only takes around one second with DOD-DSSE, while it takes 3 to 13 minutes with Path ORAM-based methods.",
"title": ""
},
{
"docid": "6dfb4c016db41a27587ef08011a7cf0e",
"text": "The objective of this work is to detect shadows in images. We pose this as the problem of labeling image regions, where each region corresponds to a group of superpixels. To predict the label of each region, we train a kernel Least-Squares Support Vector Machine (LSSVM) for separating shadow and non-shadow regions. The parameters of the kernel and the classifier are jointly learned to minimize the leave-one-out cross validation error. Optimizing the leave-one-out cross validation error is typically difficult, but it can be done efficiently in our framework. Experiments on two challenging shadow datasets, UCF and UIUC, show that our region classifier outperforms more complex methods. We further enhance the performance of the region classifier by embedding it in a Markov Random Field (MRF) framework and adding pairwise contextual cues. This leads to a method that outperforms the state-of-the-art for shadow detection. In addition we propose a new method for shadow removal based on region relighting. For each shadow region we use a trained classifier to identify a neighboring lit region of the same material. Given a pair of lit-shadow regions we perform a region relighting transformation based on histogram matching of luminance values between the shadow region and the lit region. Once a shadow is detected, we demonstrate that our shadow removal approach produces results that outperform the state of the art by evaluating our method using a publicly available benchmark dataset.",
"title": ""
},
{
"docid": "b82f7b7a317715ba0c7ca87db92c7bf6",
"text": "Regions of hypoxia in tumours can be modelled in vitro in 2D cell cultures with a hypoxic chamber or incubator in which oxygen levels can be regulated. Although this system is useful in many respects, it disregards the additional physiological gradients of the hypoxic microenvironment, which result in reduced nutrients and more acidic pH. Another approach to hypoxia modelling is to use three-dimensional spheroid cultures. In spheroids, the physiological gradients of the hypoxic tumour microenvironment can be inexpensively modelled and explored. In addition, spheroids offer the advantage of more representative modelling of tumour therapy responses compared with 2D culture. Here, we review the use of spheroids in hypoxia tumour biology research and highlight the different methodologies for spheroid formation and how to obtain uniformity. We explore the challenge of spheroid analyses and how to determine the effect on the hypoxic versus normoxic components of spheroids. We discuss the use of high-throughput analyses in hypoxia screening of spheroids. Furthermore, we examine the use of mathematical modelling of spheroids to understand more fully the hypoxic tumour microenvironment.",
"title": ""
}
] |
scidocsrr
|
f64b0e6c0e0bb7b264772bd594817e45
|
Cluster-based sampling of multiclass imbalanced data
|
[
{
"docid": "f6f6f322118f5240aec5315f183a76ab",
"text": "Learning from data sets that contain very few instances of the minority class usually produces biased classifiers that have a higher predictive accuracy over the majority class, but poorer predictive accuracy over the minority class. SMOTE (Synthetic Minority Over-sampling Technique) is specifically designed for learning from imbalanced data sets. This paper presents a modified approach (MSMOTE) for learning from imbalanced data sets, based on the SMOTE algorithm. MSMOTE not only considers the distribution of minority class samples, but also eliminates noise samples by adaptive mediation. The combination of MSMOTE and AdaBoost are applied to several highly and moderately imbalanced data sets. The experimental results show that the prediction performance of MSMOTE is better than SMOTEBoost in the minority class and F-values are also improved.",
"title": ""
}
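The MSMOTE record above builds on SMOTE's core interpolation step: a synthetic minority sample is placed a random fraction of the way between a minority sample and one of its k nearest minority-class neighbours. A minimal NumPy sketch of that basic step (not MSMOTE's adaptive noise handling), with k and the input array as illustrative assumptions, is:

```python
# Sketch of plain SMOTE-style oversampling; MSMOTE's noise filtering is omitted.
import numpy as np

def smote_like_oversample(X_min, n_new, k=5, seed=0):
    """X_min: (n, d) minority-class samples; returns n_new synthetic rows."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dist = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dist)[1:k + 1]          # k nearest, excluding the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                              # random point along the line segment
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(out)
```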
] |
[
{
"docid": "18f9fff4bd06f28cd39c97ff40467d0f",
"text": "Smart agriculture is an emerging concept, because IOT sensors are capable of providing information about agriculture fields and then act upon based on the user input. In this Paper, it is proposed to develop a Smart agriculture System that uses advantages of cutting edge technologies such as Arduino, IOT and Wireless Sensor Network. The paper aims at making use of evolving technology i.e. IOT and smart agriculture using automation. Monitoring environmental conditions is the major factor to improve yield of the efficient crops. The feature of this paper includes development of a system which can monitor temperature, humidity, moisture and even the movement of animals which may destroy the crops in agricultural field through sensors using Arduino board and in case of any discrepancy send a SMS notification as well as a notification on the application developed for the same to the farmer’s smartphone using Wi-Fi/3G/4G. The system has a duplex communication link based on a cellularInternet interface that allows for data inspection and irrigation scheduling to be programmed through an android application. Because of its energy autonomy and low cost, the system has the potential to be useful in water limited geographically isolated areas.",
"title": ""
},
{
"docid": "1c5a717591aa049303af7239ff203ebb",
"text": "Indian Biotech opponents have attributed the increase of suicides to the monopolization of GM seeds, centering on patent control, application of terminator technology, marketing strategy, and increased production costs. The contentions of the biotech opponents, however, have been criticized for a lack of transparency in their modus operandi i.e. the use of methodology in their argumentation. The fact is, however, that with the intention of getting the attention of those capable of determining the future of GM cotton in India, opponents resorted to generating controversies. Therefore, this article will review and evaluate the multifaceted contentions of both opponents and defenders. Although the association between seed monopolization and farmer-suicide is debatable, we will show that there is a link between the economic factors associated with Bt. cultivation and farmer suicide. The underlying thesis of biotech opponents becomes all the more significant when analysed vis-à-vis the contention of the globalization critics that there has been a political and economic marginalization of the Indian farmers. Their accusation assumes significance in the context of a fragile democracy like India where market forces are accorded precedence over farmers' needs until election time.",
"title": ""
},
{
"docid": "ca8da405a67d3b8a30337bc23dfce0cc",
"text": "Object detection is one of the most important tasks of computer vision. It is usually performed by evaluating a subset of the possible locations of an image, that are more likely to contain the object of interest. Exhaustive approaches have now been superseded by object proposal methods. The interplay of detectors and proposal algorithms has not been fully analyzed and exploited up to now, although this is a very relevant problem for object detection in video sequences. We propose to connect, in a closed-loop, detectors and object proposal generator functions exploiting the ordered and continuous nature of video sequences. Different from tracking we only require a previous frame to improve both proposal and detection: no prediction based on local motion is performed, thus avoiding tracking errors. We obtain three to four points of improvement in mAP and a detection time that is lower than Faster Regions with CNN features (R-CNN), which is the fastest Convolutional Neural Network (CNN) based generic object detector known at the moment.",
"title": ""
},
{
"docid": "ad02d315182c1b6181c6dda59185142c",
"text": "Fact checking is an essential part of any investigative work. For linguistic, psychological and social reasons, it is an inherently human task. Yet, modern media make it increasingly difficult for experts to keep up with the pace at which information is produced. Hence, we believe there is value in tools to assist them in this process. Much of the effort on Web data research has been focused on coping with incompleteness and uncertainty. Comparatively, dealing with context has received less attention, although it is crucial in judging the validity of a claim. For instance, what holds true in a US state, might not in its neighbors, e.g., due to obsolete or superseded laws. In this work, we address the problem of checking the validity of claims in multiple contexts. We define a language to represent and query facts across different dimensions. The approach is non-intrusive and allows relatively easy modeling, while capturing incompleteness and uncertainty. We describe the syntax and semantics of the language. We present algorithms to demonstrate its feasibility, and we illustrate its usefulness through examples.",
"title": ""
},
{
"docid": "9b9a04a859b51866930b3fb4d93653b6",
"text": "BACKGROUND\nResults of several studies have suggested a probable etiologic association between Epstein-Barr virus (EBV) and leukemias; therefore, the aim of this study was to investigate the association of EBV in childhood leukemia.\n\n\nMETHODS\nA direct isothermal amplification method was developed for detection of the latent membrane protein 1 (LMP1) of EBV in the peripheral blood of 80 patients with leukemia (54 had lymphoid leukemia and 26 had myeloid leukemia) and of 20 hematologically healthy control subjects.\n\n\nRESULTS\nEBV LMP1 gene transcripts were found in 29 (36.3%) of the 80 patients with leukemia but in none of the healthy controls (P < .0001). Of the 29 EBV(+) cases, 23 (79.3%), 5 (17.3%), and 1 (3.4%) were acute lymphoblastic leukemia, acute myeloid leukemia, and chronic myeloid leukemia, respectively.\n\n\nCONCLUSION\nEBV LMP1 gene transcriptional activity was observed in a significant proportion of patients with acute lymphoblastic leukemia. EBV infection in patients with lymphoid leukemia may be a factor involved in the high incidence of pediatric leukemia in the Sudan.",
"title": ""
},
{
"docid": "6c1317ef88110756467a10c4502851bb",
"text": "Deciding query equivalence is an important problem in data management with many practical applications. Solving the problem, however, is not an easy task. While there has been a lot of work done in the database research community in reasoning about the semantic equivalence of SQL queries, prior work mainly focuses on theoretical limitations. In this paper, we present COSETTE, a fully automated prover that can determine the equivalence of SQL queries. COSETTE leverages recent advances in both automated constraint solving and interactive theorem proving, and returns a counterexample (in terms of input relations) if two queries are not equivalent, or a proof of equivalence otherwise. Although the problem of determining equivalence for arbitrary SQL queries is undecidable, our experiments show that COSETTE can determine the equivalences of a wide range of queries that arise in practice, including conjunctive queries, correlated queries, queries with outer joins, and queries with aggregates. Using COSETTE, we have also proved the validity of magic set rewrites, and confirmed various real-world query rewrite errors, including the famous COUNT bug. We are unaware of any prior tool that can automatically determine the equivalences of a broad range of queries as COSETTE, and believe that our tool represents a major step towards building provably-correct query optimizers for real-world database systems.",
"title": ""
},
{
"docid": "d603806f579a937a24ad996543fe9093",
"text": "Early vision relies heavily on rectangular windows for tasks such as smoothing and computing correspondence. While rectangular windows are efficient, they yield poor results near object boundaries. We describe an efficient method for choosing an arbitrarily shaped connected window, in a manner which varies at each pixel. Our approach can be applied to many problems, including image restoration and visual correspondence. It runs in linear time, and takes a few seconds on traditional benchmark images. Performance on both synthetic and real imagery with ground truth appears promising.",
"title": ""
},
{
"docid": "67070d149bcee51cc93a81f21f15ad71",
"text": "As an important and fundamental tool for analyzing the schedulability of a real-time task set on the multiprocessor platform, response time analysis (RTA) has been researched for several years on both Global Fixed Priority (G-FP) and Global Earliest Deadline First (G-EDF) scheduling. This paper proposes a new analysis that improves over current state-of-the-art RTA methods for both G-FP and G-EDF scheduling, by reducing their pessimism. The key observation is that when estimating the carry-in workload, all the existing RTA techniques depend on the worst case scenario in which the carry-in job should execute as late as possible and just finishes execution before its worst case response time (WCRT). But the carry-in workload calculated under this assumption may be over-estimated, and thus the accuracy of the response time analysis may be impacted. To address this problem, we first propose a new method to estimate the carry-in workload more precisely. The proposed method does not depend on any specific scheduling algorithm and can be used for both G-FP and G-EDF scheduling. We then propose a general RTA algorithm that can improve most existing RTA tests by incorporating our carry-in estimation method. To further improve the execution efficiency, we also introduce an optimization technique for our RTA tests. Experiments with randomly generated task sets are conducted and the results show that, compared with the state-of-the-art technologies, the proposed tests exhibit considerable performance improvements, up to 9 and 7.8 percent under G-FP and G-EDF scheduling respectively, in terms of schedulability test precision.",
"title": ""
},
{
"docid": "90f188c1f021c16ad7c8515f1244c08a",
"text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.",
"title": ""
},
{
"docid": "609110c4bf31885d99618994306ef2cc",
"text": "This study examined the ability of a collagen solution to aid revascularization of necrotic-infected root canals in immature dog teeth. Sixty immature teeth from 6 dogs were infected, disinfected, and randomized into experimental groups: 1: no further treatment; 2: blood in canal; 3: collagen solution in canal, 4: collagen solution + blood, and 5: negative controls (left for natural development). Uncorrected chi-square analysis of radiographic results showed no statistical differences (p >or= 0.05) between experimental groups regarding healing of radiolucencies but a borderline statistical difference (p = 0.058) for group 1 versus group 4 for radicular thickening. Group 2 showed significantly more apical closure than group 1 (p = 0.03) and a borderline statistical difference (p = 0.051) for group 3 versus group 1. Uncorrected chi-square analysis revealed that there were no statistical differences between experimental groups for histological results. However, some roots in each of groups 1 to 4 (previously infected) showed positive histologic outcomes (thickened walls in 43.9%, apical closure in 54.9%, and new luminal tissue in 29.3%). Revascularization of disinfected immature dog root canal systems is possible.",
"title": ""
},
{
"docid": "eb0ef9876f37b5974ed27079bcda8e03",
"text": "Increasing number of individuals are using the internet to meet their health information needs; however, little is known about the characteristics of online health information seekers and whether they differ from individuals who search for health information from offline sources. Researchers must examine the primary characteristics of online and offline health information seekers in order to better recognize their needs, highlight improvements that may be made in the arena of internet health information quality and availability, and understand factors that discriminate between those who seek online vs. offline health information. This study examines factors that differentiate between online and offline health information seekers in the United States. Data for this study are from a subsample (n = 385) of individuals from the 2000 General Social Survey. The subsample includes those respondents who were asked Internet and health seeking module questions. Similar to prior research, results of this study show that the majority of both online and offline health information seekers report reliance upon health care professionals as a source of health information. This study is unique in that the results illustrate that there are several key factors (age, income, and education) that discriminate between US online and offline health information seekers; this suggests that general \"digital divide\" characteristics influence where health information is sought. In addition to traditional digital divide factors, those who are healthier and happier are less likely to look exclusively offline for health information. Implications of these findings are discussed in terms of the digital divide and the patient-provider relationship.",
"title": ""
},
{
"docid": "a35bdf118e84d71b161fea1b9e798a1a",
"text": "Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper.",
"title": ""
},
{
"docid": "022f0b83e93b82dfbdf7ae5f5ebe6f8f",
"text": "Most pregnant women at risk of for infection with Plasmodium vivax live in the Asia-Pacific region. However, malaria in pregnancy is not recognised as a priority by many governments, policy makers, and donors in this region. Robust data for the true burden of malaria throughout pregnancy are scarce. Nevertheless, when women have little immunity, each infection is potentially fatal to the mother, fetus, or both. WHO recommendations for the control of malaria in pregnancy are largely based on the situation in Africa, but strategies in the Asia-Pacific region are complicated by heterogeneous transmission settings, coexistence of multidrug-resistant Plasmodium falciparum and Plasmodium vivax parasites, and different vectors. Most knowledge of the epidemiology, effect, treatment, and prevention of malaria in pregnancy in the Asia-Pacific region comes from India, Papua New Guinea, and Thailand. Improved estimates of the morbidity and mortality of malaria in pregnancy are urgently needed. When malaria in pregnancy cannot be prevented, accurate diagnosis and prompt treatment are needed to avert dangerous symptomatic disease and to reduce effects on fetuses.",
"title": ""
},
{
"docid": "62218093e4d3bf81b23512043fc7a013",
"text": "The Internet of things (IoT) refers to every object, which is connected over a network with the ability to transfer data. Users perceive this interaction and connection as useful in their daily life. However any improperly designed and configured technology will exposed to security threats. Therefore an ecosystem for IoT should be designed with security embedded in each layer of its ecosystem. This paper will discussed the security threats to IoT and then proposed an IoT Security Framework to mitigate it. Then IoT Security Framework will be used to develop a Secure IoT Sensor to Cloud Ecosystem.",
"title": ""
},
{
"docid": "ba0051fdc72efa78a7104587042cea64",
"text": "Open innovation breaks the original innovation border of organization and emphasizes the use of suppliers, customers, partners, and other internal and external innovative thinking and resources. How to effectively implement and manage open innovation has become a new business problem. Business ecosystem is the network system of value creation and co-evolution achieved by suppliers, users, partner, and other groups with self-organization mode. This study began with the risk analysis of open innovation implementation; then innovation process was embedded into business ecosystem structure; open innovation mode based on business ecosystem was proposed; business ecosystem based on open innovation was built according to influence degree of each innovative object. Study finds that both sides have a mutual promotion relationship, which provides a new analysis perspective for open innovation and business ecosystem; at the same time, it is also conducive to guiding the concrete practice of implementing open innovation.",
"title": ""
},
{
"docid": "f10d79d1eb6d3ec994c1ec7ec3769437",
"text": "The security of embedded devices often relies on the secrecy of proprietary cryptographic algorithms. These algorithms and their weaknesses are frequently disclosed through reverse-engineering software, but it is commonly thought to be too expensive to reconstruct designs from a hardware implementation alone. This paper challenges that belief by presenting an approach to reverse-engineering a cipher from a silicon implementation. Using this mostly automated approach, we reveal a cipher from an RFID tag that is not known to have a software or micro-code implementation. We reconstruct the cipher from the widely used Mifare Classic RFID tag by using a combination of image analysis of circuits and protocol analysis. Our analysis reveals that the security of the tag is even below the level that its 48-bit key length suggests due to a number of design flaws. Weak random numbers and a weakness in the authentication protocol allow for pre-computed rainbow tables to be used to find any key in a matter of seconds. Our approach of deducing functionality from circuit images is mostly automated, hence it is also feasible for large chips. The assumption that algorithms can be kept secret should therefore to be avoided for any type of silicon chip. Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi. ([A cipher] must not depend on secrecy, and it must not matter if it falls into enemy hands.) August Kerckhoffs, La Cryptographie Militaire, January 1883 [13]",
"title": ""
},
{
"docid": "8410b8b76ab690ed4389efae15608d13",
"text": "The most natural way to speed-up the training of large networks is to use dataparallelism on multiple GPUs. To scale Stochastic Gradient (SG) based methods to more processors, one need to increase the batch size to make full use of the computational power of each GPU. However, keeping the accuracy of network with increase of batch size is not trivial. Currently, the state-of-the art method is to increase Learning Rate (LR) proportional to the batch size, and use special learning rate with \"warm-up\" policy to overcome initial optimization difficulty. By controlling the LR during the training process, one can efficiently use largebatch in ImageNet training. For example, Batch-1024 for AlexNet and Batch-8192 for ResNet-50 are successful applications. However, for ImageNet-1k training, state-of-the-art AlexNet only scales the batch size to 1024 and ResNet50 only scales it to 8192. The reason is that we can not scale the learning rate to a large value. To enable large-batch training to general networks or datasets, we propose Layer-wise Adaptive Rate Scaling (LARS). LARS LR uses different LRs for different layers based on the norm of the weights (||w||) and the norm of the gradients (||∇w||). By using LARS algoirithm, we can scale the batch size to 32768 for ResNet50 and 8192 for AlexNet. Large batch can make full use of the system’s computational power. For example, batch-4096 can achieve 3× speedup over batch-512 for ImageNet training by AlexNet model on a DGX-1 station (8 P100 GPUs).",
"title": ""
},
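The LARS record above scales each layer's learning rate by the ratio of the weight norm to the gradient norm. A simplified sketch of that per-layer update is shown below; the trust-coefficient value and the omission of momentum and weight decay are simplifying assumptions, not the paper's full algorithm.

```python
# Simplified LARS-style update: per-layer LR proportional to ||w|| / ||grad||.
import numpy as np

def lars_step(weights, grads, base_lr=0.1, trust=0.001, eps=1e-9):
    """weights, grads: parallel lists of NumPy arrays, one pair per layer (updated in place)."""
    for w, g in zip(weights, grads):
        w_norm = np.linalg.norm(w)
        g_norm = np.linalg.norm(g)
        # layer-wise scaling; fall back to 1.0 when a norm is degenerate
        local = trust * w_norm / (g_norm + eps) if w_norm > 0 and g_norm > 0 else 1.0
        w -= base_lr * local * g
```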
{
"docid": "bde5a1876e93f10ad5942c416063bef6",
"text": "This paper describes an innovative agent-based architecture for mixed-initiative interaction between a human and a robot that interacts via a graphical user interface (GUI). Mixed-initiative interaction typically refers to a flexible interaction strategy between a human and a computer to contribute what is best-suited at the most appropriate time [1]. In this paper, we extend this concept to human-robot interaction (HRI). When compared to pure humancomputer interaction, HRIs encounter additional difficulty, as the user must assess the situation at the robot’s remote location via limited sensory feedback. We propose an agent-based adaptive human-robot interface for mixed-initiative interaction to address this challenge. The proposed adaptive user interface (UI) architecture provides a platform for developing various agents that control robots and user interface components (UICs). Such components permit the human and the robot to communicate missionrelevant information.",
"title": ""
},
{
"docid": "2b2c30fa2dc19ef7c16cf951a3805242",
"text": "A standard approach to estimating online click-based metrics of a ranking function is to run it in a controlled experiment on live users. While reliable and popular in practice, configuring and running an online experiment is cumbersome and time-intensive. In this work, inspired by recent successes of offline evaluation techniques for recommender systems, we study an alternative that uses historical search log to reliably predict online click-based metrics of a \\emph{new} ranking function, without actually running it on live users. To tackle novel challenges encountered in Web search, variations of the basic techniques are proposed. The first is to take advantage of diversified behavior of a search engine over a long period of time to simulate randomized data collection, so that our approach can be used at very low cost. The second is to replace exact matching (of recommended items in previous work) by \\emph{fuzzy} matching (of search result pages) to increase data efficiency, via a better trade-off of bias and variance. Extensive experimental results based on large-scale real search data from a major commercial search engine in the US market demonstrate our approach is promising and has potential for wide use in Web search.",
"title": ""
}
] |
scidocsrr
|
0216ea0249466bf849388281f98a4f11
|
An Object Co-occurrence Assisted Hierarchical Model for Scene Understanding
|
[
{
"docid": "c8d9ec6aa63b783e4c591dccdbececcf",
"text": "The use of context is critical for scene understanding in computer vision, where the recognition of an object is driven by both local appearance and the object’s relationship to other elements of the scene (context). Most current approaches rely on modeling the relationships between object categories as a source of context. In this paper we seek to move beyond categories to provide a richer appearancebased model of context. We present an exemplar-based model of objects and their relationships, the Visual Memex, that encodes both local appearance and 2D spatial context between object instances. We evaluate our model on Torralba’s proposed Context Challenge against a baseline category-based system. Our experiments suggest that moving beyond categories for context modeling appears to be quite beneficial, and may be the critical missing ingredient in scene understanding systems.",
"title": ""
},
{
"docid": "77d354505cdd474c1b381b415f115ca0",
"text": "Scene recognition is a highly valuable perceptual ability for an indoor mobile robot, however, current approaches for scene recognition present a significant drop in performance for the case of indoor scenes. We believe that this can be explained by the high appearance variability of indoor environments. This stresses the need to include high-level semantic information in the recognition process. In this work we propose a new approach for indoor scene recognition based on a generative probabilistic hierarchical model that uses common objects as an intermediate semantic representation. Under this model, we use object classifiers to associate low-level visual features to objects, and at the same time, we use contextual relations to associate objects to scenes. As a further contribution, we improve the performance of current state-of-the-art category-level object classifiers by including geometrical information obtained from a 3D range sensor that facilitates the implementation of a focus of attention mechanism within a Monte Carlo sampling scheme. We test our approach using real data, showing significant advantages with respect to previous state-of-the-art methods.",
"title": ""
}
] |
[
{
"docid": "026628151680da901c741766248f0055",
"text": "We analyzea corpusof referringexpressionscollected from userinteractionswith a multimodal travel guide application.Theanalysissuggeststhat,in dramaticcontrastto normalmodesof human-humaninteraction,the interpretationof referringexpressionscanbecomputed with very high accuracy usinga modelwhich pairsan impoverishednotionof discoursestatewith asimpleset of rulesthatareinsensiti ve to the type of referringexpressionused. We attribute this result to the implicit mannerin which theinterfaceconveys thesystem’ s beliefs abouttheoperati ve discoursestate,to which users tailor their choiceof referringexpressions.This result offersnew insightinto thewaycomputerinterfacescan shapea user’ s languagebehavior, insightswhich can be exploited to bring otherwisedifficult interpretation problemsinto therealmof tractability.",
"title": ""
},
{
"docid": "93a9fdca133adfd8b6e7b8f030e95622",
"text": "Prostate segmentation from Magnetic Resonance (MR) images plays an important role in image guided intervention. However, the lack of clear boundary specifically at the apex and base, and huge variation of shape and texture between the images from different patients make the task very challenging. To overcome these problems, in this paper, we propose a deeply supervised convolutional neural network (CNN) utilizing the convolutional information to accurately segment the prostate from MR images. The proposed model can effectively detect the prostate region with additional deeply supervised layers compared with other approaches. Since some information will be abandoned after convolution, it is necessary to pass the features extracted from early stages to later stages. The experimental results show that significant segmentation accuracy improvement has been achieved by our proposed method compared to other reported approaches.",
"title": ""
},
{
"docid": "e162fcb6b897e941cd26558f4ed16cd5",
"text": "In this paper, we propose a novel real-valued time-delay neural network (RVTDNN) suitable for dynamic modeling of the baseband nonlinear behaviors of third-generation (3G) base-station power amplifiers (PA). Parameters (weights and biases) of the proposed model are identified using the back-propagation algorithm, which is applied to the input and output waveforms of the PA recorded under real operation conditions. Time- and frequency-domain simulation of a 90-W LDMOS PA output using this novel neural-network model exhibit a good agreement between the RVTDNN behavioral model's predicted results and measured ones along with a good generality. Moreover, dynamic AM/AM and AM/PM characteristics obtained using the proposed model demonstrated that the RVTDNN can track and account for the memory effects of the PAs well. These characteristics also point out that the small-signal response of the LDMOS PA is more affected by the memory effects than the PAs large-signal response when it is driven by 3G signals. This RVTDNN model requires a significantly reduced complexity and shorter processing time in the analysis and training procedures, when driven with complex modulated and highly varying envelope signals such as 3G signals, than previously published neural-network-based PA models.",
"title": ""
},
{
"docid": "178dc3f162f0a4bd2a43ae4da72478cc",
"text": "Regularisation of deep neural networks (DNN) during training is critical to performance. By far the most popular method is known as dropout. Here, cast through the prism of signal processing theory, we compare and c ontrast the regularisation effects of dropout with those of dither. We illustrate some serious inherent limitations of dropout and demonstrate that dither provides a far more effecti ve regulariser which does not suffer from the same limitations.",
"title": ""
},
{
"docid": "413a08e904839edb6fd2e031d8bdc807",
"text": "A data collection instrument that a respondent self-completes through the visual channel, such as on paper or over the Web, is visually administered. Although insightful in many ways, traditional methods of evaluating questionnaires, such as cognitive interviewing, usability testing, and experimentation may be insufficient when it comes to evaluating the design of visually administered questionnaires because these methods cannot directly identify information respondents perceive or the precise order in which they observe the information (Redline et al 1998). In this paper, we present the results of a study that was conducted to explore whether eye-movement analysis might prove a promising new tool for evaluating the design of visually administered questionnaires. Eye tracking hardware and software, which were originally developed at the Human for use with computer monitors, were adapted to track the eye movements of respondents answering three versions of a paper questionnaire. These versions were chosen for study because differences in the design of their branching instructions were hypothesized to affect eye-movements, which in turn may affect the accuracy of following the branching instructions (Redline and Dillman Forthcoming). Background Eye-movement analysis has been used in other fields, most notably reading and scene perception, to study cognitive processing (e.g., Rayner 1992; Rayner 1983). However, survey design research grew out of the interviewer-administered realm, which has been primarily focused on respondents' comprehension of the spoken language of questionnaires. Therefore, the mechanism by which respondents perceive information presented on paper questionnaires or over the Web, the eyes and their movements, has not received much attention until recently. Other reasons for the lack of eye-movement research in the survey field are its cost and relative difficulty. As others have noted, eye-movement research requires specialized knowledge, equipment and expertise to operate the equipment. In addition,",
"title": ""
},
{
"docid": "25e50a3e98b58f833e1dd47aec94db21",
"text": "Sharing knowledge for multiple related machine learning tasks is an effective strategy to improve the generalization performance. In this paper, we investigate knowledge sharing across categories for action recognition in videos. The motivation is that many action categories are related, where common motion pattern are shared among them (e.g. diving and high jump share the jump motion). We propose a new multi-task learning method to learn latent tasks shared across categories, and reconstruct a classifier for each category from these latent tasks. Compared to previous methods, our approach has two advantages: (1) The learned latent tasks correspond to basic motion patterns instead of full actions, thus enhancing discrimination power of the classifiers. (2) Categories are selected to share information with a sparsity regularizer, avoiding falsely forcing all categories to share knowledge. Experimental results on multiple public data sets show that the proposed approach can effectively transfer knowledge between different action categories to improve the performance of conventional single task learning methods.",
"title": ""
},
{
"docid": "7603ee2e0519b727de6dc29e05b2049f",
"text": "To what extent do we share feelings with others? Neuroimaging investigations of the neural mechanisms involved in the perception of pain in others may cast light on one basic component of human empathy, the interpersonal sharing of affect. In this fMRI study, participants were shown a series of still photographs of hands and feet in situations that are likely to cause pain, and a matched set of control photographs without any painful events. They were asked to assess on-line the level of pain experienced by the person in the photographs. The results demonstrated that perceiving and assessing painful situations in others was associated with significant bilateral changes in activity in several regions notably, the anterior cingulate, the anterior insula, the cerebellum, and to a lesser extent the thalamus. These regions are known to play a significant role in pain processing. Finally, the activity in the anterior cingulate was strongly correlated with the participants' ratings of the others' pain, suggesting that the activity of this brain region is modulated according to subjects' reactivity to the pain of others. Our findings suggest that there is a partial cerebral commonality between perceiving pain in another individual and experiencing it oneself. This study adds to our understanding of the neurological mechanisms implicated in intersubjectivity and human empathy.",
"title": ""
},
{
"docid": "ffbebb5d8f4d269353f95596c156ba5c",
"text": "Decision trees and random forests are common classifiers with widespread use. In this paper, we develop two protocols for privately evaluating decision trees and random forests. We operate in the standard two-party setting where the server holds a model (either a tree or a forest), and the client holds an input (a feature vector). At the conclusion of the protocol, the client learns only the model’s output on its input and a few generic parameters concerning the model; the server learns nothing. The first protocol we develop provides security against semi-honest adversaries. Next, we show an extension of the semi-honest protocol that obtains one-sided security against malicious adversaries. We implement both protocols and show that both variants are able to process trees with several hundred decision nodes in just a few seconds and a modest amount of bandwidth. Compared to previous semi-honest protocols for private decision tree evaluation, we demonstrate tenfold improvements in computation and bandwidth.",
"title": ""
},
{
"docid": "7efa3543711bc1bb6e3a893ed424b75d",
"text": "This dissertation is concerned with the creation of training data and the development of probability models for statistical parsing of English with Combinatory Categorial Grammar (CCG). Parsing, or syntactic analysis, is a prerequisite for semantic interpretation, and forms therefore an integral part of any system which requires natural language understanding. Since almost all naturally occurring sentences are ambiguous, it is not sufficient (and often impossible) to generate all possible syntactic analyses. Instead, the parser needs to rank competing analyses and select only the most likely ones. A statistical parser uses a probability model to perform this task. I propose a number of ways in which such probability models can be defined for CCG. The kinds of models developed in this dissertation, generative models over normal-form derivation trees, are particularly simple, and have the further property of restricting the set of syntactic analyses to those corresponding to a canonical derivation structure. This is important to guarantee that parsing can be done efficiently. In order to achieve high parsing accuracy, a large corpus of annotated data is required to estimate the parameters of the probability models. Most existing wide-coverage statistical parsers use models of phrase-structure trees estimated from the Penn Treebank, a 1-million-word corpus of manually annotated sentences from the Wall Street Journal. This dissertation presents an algorithm which translates the phrase-structure analyses of the Penn Treebank to CCG derivations. The resulting corpus, CCGbank, is used to train and test the models proposed in this dissertation. Experimental results indicate that parsing accuracy (when evaluated according to a comparable metric, the recovery of unlabelled word-word dependency relations), is as high as that of standard Penn Treebank parsers which use similar modelling techniques. Most existing wide-coverage statistical parsers use simple phrase-structure grammars whose syntactic analyses fail to capture long-range dependencies, and therefore do not correspond to directly interpretable semantic representations. By contrast, CCG is a grammar formalism in which semantic representations that include long-range dependencies can be built directly during the derivation of syntactic structure. These dependencies define the predicate-argument structure of a sentence, and are used for two purposes in this dissertation: First, the performance of the parser can be evaluated according to how well it recovers these dependencies. In contrast to purely syntactic evaluations, this yields a direct measure of how accurate the semantic interpretations returned by the parser are. Second, I propose a generative model that captures the local and non-local dependencies in the predicate-argument structure, and investigate the impact of modelling non-local in addition to local dependencies.",
"title": ""
},
{
"docid": "ab75cb747666f6b115a94f1dfb627d63",
"text": "Over the last years, Enterprise Social Networks (ESN) have gained increasing attention both in academia and practice, resulting in a large number of publications dealing with ESN. Among them is a large number of case studies describing the benefits of ESN in each individual case. Based on the different research objects they focus, various benefits are described. However, an overview of the benefits achieved by using ESN is missing and will, thus, be elaborated in this article (research question 1). Further, we cluster the identified benefits to more generic categories and finally classify them to the capabilities of traditional IT as presented by Davenport and Short (1990) to determine if new capabilities of IT arise using ESN (research question 2). To address our research questions, we perform a qualitative content analysis on 37 ESN case studies. As a result, we identify 99 individual benefits, classify them to the capabilities of traditional IT, and define a new IT capability named Social Capital. Our results can, e.g., be used to align and expand current ESN success measurement approaches.",
"title": ""
},
{
"docid": "8de4182b607888e6c7cbe6d6ae8ee122",
"text": "In this article, we focus on isolated gesture recognition and explore different modalities by involving RGB stream, depth stream, and saliency stream for inspection. Our goal is to push the boundary of this realm even further by proposing a unified framework that exploits the advantages of multi-modality fusion. Specifically, a spatial-temporal network architecture based on consensus-voting has been proposed to explicitly model the long-term structure of the video sequence and to reduce estimation variance when confronted with comprehensive inter-class variations. In addition, a three-dimensional depth-saliency convolutional network is aggregated in parallel to capture subtle motion characteristics. Extensive experiments are done to analyze the performance of each component and our proposed approach achieves the best results on two public benchmarks, ChaLearn IsoGD and RGBD-HuDaAct, outperforming the closest competitor by a margin of over 10% and 15%, respectively. Our project and codes will be released at https://davidsonic.github.io/index/acm_tomm_2017.html.",
"title": ""
},
{
"docid": "ea29dbae2b19f4b8af208aa551744a07",
"text": "This paper presents a general vector-valued reproducing kernel Hilbert spaces (RKHS) formulation for the problem of learning an unknown functional dependency between a structured input space and a structured output space, in the Semi-Supervised Learning setting. Our formulation includes as special cases Vector-valued Manifold Regularization and Multi-view Learning, thus provides in particular a unifying framework linking these two important learning approaches. In the case of least square loss function, we provide a closed form solution with an efficient implementation. Numerical experiments on challenging multi-class categorization problems show that our multi-view learning formulation achieves results which are comparable with state of the art and are significantly better than single-view learning.",
"title": ""
},
{
"docid": "879af50edd27c74bde5b656d0421059a",
"text": "In this thesis we present an approach to adapt the Single Shot multibox Detector (SSD) for face detection. Our experiments are performed on the WIDER dataset which contains a large amount of small faces (faces of 50 pixels or less). The results show that the SSD method performs poorly on the small/hard subset of this dataset. We analyze the influence of increasing the resolution during inference and training time. Building on this analysis we present two additions to the SSD method. The first addition is changing the SSD architecture to an image pyramid architecture. The second addition is creating a selection criteria on each of the different branches of the image pyramid architecture. The results show that increasing the resolution, even during inference, increases the performance for the small/hard subset. By combining resolutions in an image pyramid structure we observe that the performance keeps consistent across different sizes of faces. Finally, the results show that adding a selection criteria on each branch of the image pyramid further increases performance, because the selection criteria negates the competing behaviour of the image pyramid. We conclude that our approach not only increases performance on the small/hard subset of the WIDER dataset but keeps on performing well on the large subset.",
"title": ""
},
{
"docid": "8a41d0190ae25baf0a270d9524ea99d3",
"text": "Hybrid AC/DC microgrid is a compromised solution to cater for the increasing penetration of DC-compatible energy sources, storages and loads. In this paper, DC/DC converter with High Frequency Transformer (DHFT) is proposed to replace the conventional bulky transformer for bus voltage matching and galvanic isolation. Various DHFT topologies have been compared and CLLC-type has been recommended due to its capabilities of bidirectional power flow, seamless transition and low switching loss. Different operating scenarios of the hybrid AC/DC microgrid have been analyzed and DHFT open-loop control has been selected to simplify systematic coordination. DHFT are designed in order to maximize the conversion efficiency and minimize output voltage variations in different loading conditions. Lab-scale prototypes of the DHFT and hybrid AC/DC microgrid have been developed for experimental verifications. The performances of DHFT and system in both steady state and transient states have been confirmed.",
"title": ""
},
{
"docid": "2e2e8219b7870529e8ca17025190aa1b",
"text": "M multitasking competes with television advertising for consumers’ attention, but may also facilitate immediate and measurable response to some advertisements. This paper explores whether and how television advertising influences online shopping. We construct a massive data set spanning $3.4 billion in spending by 20 brands, measures of brands’ website traffic and transactions, and ad content measures for 1,224 commercials. We use a quasi-experimental design to estimate whether and how TV advertising influences changes in online shopping within two-minute pre/post windows of time. We use nonadvertising competitors’ online shopping in a difference-in-differences approach to measure the same effects in two-hour windows around the time of the ad. The findings indicate that television advertising does influence online shopping and that advertising content plays a key role. Action-focus content increases direct website traffic and sales. Information-focus and emotion-focus ad content actually reduce website traffic while simultaneously increasing purchases, with a positive net effect on sales for most brands. These results imply that brands seeking to attract multitaskers’ attention and dollars must select their advertising copy carefully.",
"title": ""
},
{
"docid": "4ed98f4c2e09f8f3b81f2f7faa2ad573",
"text": "The current nursing shortage and high turnover is of great concern in many countries because of its impact upon the efficiency and effectiveness of any health-care delivery system. Recruitment and retention of nurses are persistent problems associated with job satisfaction. This paper analyses the growing literature relating to job satisfaction among nurses and concludes that more research is required to understand the relative importance of the many identified factors to job satisfaction. It is argued that the absence of a robust causal model incorporating organizational, professional and personal variables is undermining the development of interventions to improve nurse retention.",
"title": ""
},
{
"docid": "eae0f8a921b301e52c822121de6c6b58",
"text": "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated/Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7% mIoU on PASCAL-Context, 85.9% mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpasses the winning entry of COCO-Place Challenge 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45%, which is comparable with state-of-the-art approaches with over 10× more layers. The source code for the complete system are publicly available1.",
"title": ""
},
{
"docid": "18c190df7c133085d418c58357b4c81c",
"text": "Attention can be improved by repetition of a specific task that involves an attention network (network training), or by exercise or meditation that changes the brain state (state training). We first review the concept of attention networks that link changes in orienting, alerting and executive control to brain networks. Network training through video games or computer exercises can improve aspects of attention. The extent of transfer beyond the trained task is a controversial issue. Mindfulness is a form of meditation that keeps attention focused on the current moment. Some forms of meditation have been shown to improve executive attention reduce stress and produce specific brain changes. Additional research is needed to understand the limits and mechanisms of these effects.",
"title": ""
},
{
"docid": "1aaa0e23d795121fbe5673873ea2aea7",
"text": "The fifth generation of mobile networks is planned to be commercially available in a few years. The scope of 5G goes beyond introducing new radio interfaces, and will include new services like low-latency industrial applications, as well as new deployment models such as cooperative cells and densification through small cells. An efficient realization of these new features greatly benefit from tight coordination among radio and transport network resources, something that is missing in current networks. In this article, we first present an overview of the benefits and technical requirements of resource coordination across radio and transport networks in the context of 5G. Then, we discuss how SDN principles can bring programmability to both the transport and radio domains, which in turn enables the design of a hierarchical, modular, and programmable control and orchestration plane across the domains. Finally, we introduce two use cases of SDN-based transport and RAN orchestration, and present an experimental implementation of them in a testbed in our lab, which confirms the feasibility and benefits of the proposed orchestration.",
"title": ""
},
{
"docid": "12363d704fcfe9fef767c5e27140c214",
"text": "The application range of UAVs (unmanned aerial vehicles) is expanding along with performance upgrades. Vertical take-off and landing (VTOL) aircraft has the merits of both fixed-wing and rotary-wing aircraft. Tail-sitting is the simplest way for the VTOL maneuver since it does not need extra actuators. However, conventional hovering control for a tail-sitter UAV is not robust enough against large disturbance such as a blast of wind, a bird strike, and so on. It is experimentally observed that the conventional quaternion feedback hovering control often fails to keep stability when the control compensates large attitude errors. This paper proposes a novel hovering control strategy for a tail-sitter VTOL UAV that increases stability against large disturbance. In order to verify the proposed hovering control strategy, simulations and experiments on hovering of the UAV are performed giving large attitude errors. The results show that the proposed control strategy successfully compensates initial large attitude errors keeping stability, while the conventional quaternion feedback controller fails.",
"title": ""
}
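As a reference for the baseline mentioned in the passage above, here is a minimal numpy sketch of conventional quaternion-feedback attitude control for hovering; the scalar-first quaternion convention, the gains kp/kd, and the example attitude error are assumptions, and the paper's improved strategy is not reproduced.

```python
import numpy as np

def quat_conj(q):
    """Conjugate of a scalar-first quaternion [w, x, y, z]."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_mul(a, b):
    """Hamilton product of two scalar-first quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def attitude_torque(q_des, q, omega, kp=2.0, kd=0.4):
    """Conventional quaternion feedback: torque from attitude error and body rates."""
    q_err = quat_mul(quat_conj(q_des), q)      # rotation from desired to current attitude
    sign = 1.0 if q_err[0] >= 0 else -1.0      # take the shorter rotation
    return -kp * sign * q_err[1:] - kd * omega

# Example: hover attitude is the identity, vehicle has ~30 deg of roll error and some body rate.
q_hover = np.array([1.0, 0.0, 0.0, 0.0])
q_now = np.array([0.966, 0.259, 0.0, 0.0])
torque = attitude_torque(q_hover, q_now, omega=np.array([0.1, 0.0, 0.0]))
print(torque)
```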
] |
scidocsrr
|
bb3b89dd1acf40f12a44eab4bf91d616
|
Big data and digital forensics
|
[
{
"docid": "dc8ffc5fd84b3af4cc88d75f7bc88f77",
"text": "Digital crimes is big problem due to large numbers of data access and insufficient attack analysis techniques so there is the need for improvements in existing digital forensics techniques. With growing size of storage capacity these digital forensic investigations are getting more difficult. Visualization allows for displaying large amounts of data at once. Integrated visualization of data distribution bars and rules, visualization of behaviour and comprehensive analysis, maps allow user to analyze different rules and data at different level, with any kind of anomaly in data. Data mining techniques helps to improve the process of visualization. These papers give comprehensive review on various visualization techniques with various anomaly detection techniques.",
"title": ""
}
] |
[
{
"docid": "5931cb779b24065c5ef48451bc46fac4",
"text": "In order to provide a material that can facilitate the modeling and construction of a Furuta pendulum, this paper presents the deduction, step-by-step, of a Furuta pendulum mathematical model by using the Lagrange equations of motion. Later, a mechanical design of the Furuta pendulum is carried out via the software Solid Works and subsequently a prototype is built. Numerical simulations of the Furuta pendulum model are performed via Mat lab-Simulink. Furthermore, the Furuta pendulum prototype built is experimentally tested by using Mat lab-Simulink, Control Desk, and a DS1104 board from dSPACE.",
"title": ""
},
{
"docid": "938afbc53340a3aa6e454d17789bf021",
"text": "BACKGROUND\nAll cultural groups in the world place paramount value on interpersonal trust. Existing research suggests that although accurate judgments of another's trustworthiness require extensive interactions with the person, we often make trustworthiness judgments based on facial cues on the first encounter. However, little is known about what facial cues are used for such judgments and what the bases are on which individuals make their trustworthiness judgments.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn the present study, we tested the hypothesis that individuals may use facial attractiveness cues as a \"shortcut\" for judging another's trustworthiness due to the lack of other more informative and in-depth information about trustworthiness. Using data-driven statistical models of 3D Caucasian faces, we compared facial cues used for judging the trustworthiness of Caucasian faces by Caucasian participants who were highly experienced with Caucasian faces, and the facial cues used by Chinese participants who were unfamiliar with Caucasian faces. We found that Chinese and Caucasian participants used similar facial cues to judge trustworthiness. Also, both Chinese and Caucasian participants used almost identical facial cues for judging trustworthiness and attractiveness.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThe results suggest that without opportunities to interact with another person extensively, we use the less racially specific and more universal attractiveness cues as a \"shortcut\" for trustworthiness judgments.",
"title": ""
},
{
"docid": "235ff4cb1c0091f95caffd528ed95755",
"text": "Natural language is a common type of input for data processing systems. Therefore, it is often required to have a large testing data set of this type. In this context, the task to automatically generate natural language texts, which maintain the properties of real texts is desirable. However, current synthetic data generators do not capture natural language text data sufficiently. In this paper, we present a preliminary study on different generative models for text generation, which maintain specific properties of natural language text, i.e., the sentiment of a review text. In a series of experiments using different data sets and sentiment analysis methods, we show that generative models can generate texts with a specific sentiment and that hidden Markov model based text generation achieves less accuracy than Markov chain based text generation, but can generate a higher number of distinct texts.",
"title": ""
},
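To make the Markov chain based text generation compared in the passage above concrete, here is a minimal Python sketch of a first-order word-level Markov generator; the tiny set of positive-sentiment review snippets is a made-up placeholder, and the paper's hidden Markov model variant is not shown.

```python
import random
from collections import defaultdict

def train_markov(sentences, order=1):
    """Count word-to-next-word transitions from tokenized sentences."""
    model = defaultdict(list)
    for tokens in sentences:
        padded = ["<s>"] * order + tokens + ["</s>"]
        for i in range(order, len(padded)):
            state = tuple(padded[i - order:i])
            model[state].append(padded[i])
    return model

def generate(model, order=1, max_len=20):
    """Sample a sentence by walking the transition table."""
    state = tuple(["<s>"] * order)
    out = []
    for _ in range(max_len):
        nxt = random.choice(model[state])
        if nxt == "</s>":
            break
        out.append(nxt)
        state = tuple(list(state[1:]) + [nxt])
    return " ".join(out)

# Hypothetical positive-sentiment review snippets used as training data.
reviews = [s.split() for s in [
    "great product and fast delivery",
    "great value fast shipping and friendly support",
    "friendly support and great product",
]]
model = train_markov(reviews)
print(generate(model))
```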
{
"docid": "2a8c5de43ce73c360a5418709a504fa8",
"text": "The INTERSPEECH 2018 Computational Paralinguistics Challenge addresses four different problems for the first time in a research competition under well-defined conditions: In the Atypical Affect Sub-Challenge, four basic emotions annotated in the speech of handicapped subjects have to be classified; in the Self-Assessed Affect Sub-Challenge, valence scores given by the speakers themselves are used for a three-class classification problem; in the Crying Sub-Challenge, three types of infant vocalisations have to be told apart; and in the Heart Beats Sub-Challenge, three different types of heart beats have to be determined. We describe the Sub-Challenges, their conditions, and baseline feature extraction and classifiers, which include data-learnt (supervised) feature representations by end-to-end learning, the ‘usual’ ComParE and BoAW features, and deep unsupervised representation learning using the AUDEEP toolkit for the first time in the challenge series.",
"title": ""
},
{
"docid": "f8622acd0d0c2811b6ae2d0b5d4c9a6b",
"text": "Squalene is a linear triterpene that is extensively utilized as a principal component of parenteral emulsions for drug and vaccine delivery. In this review, the chemical structure and sources of squalene are presented. Moreover, the physicochemical and biological properties of squalene-containing emulsions are evaluated in the context of parenteral formulations. Historical and current parenteral emulsion products containing squalene or squalane are discussed. The safety of squalene-based products is also addressed. Finally, analytical techniques for characterization of squalene emulsions are examined.",
"title": ""
},
{
"docid": "b657aeceeee6c29330cf45dcc40d6198",
"text": "A small form-factor 60-GHz SiGe BiCMOS radio with two antennas-in-package is presented. The fully-integrated feature-rich transceiver provides a complete RF solution for mobile WiGig/IEEE 802.11ad applications.",
"title": ""
},
{
"docid": "72e6c3c800cd981b1e1dd379d3bbf304",
"text": "Brain activity recorded noninvasively is sufficient to control a mobile robot if advanced robotics is used in combination with asynchronous electroencephalogram (EEG) analysis and machine learning techniques. Until now brain-actuated control has mainly relied on implanted electrodes, since EEG-based systems have been considered too slow for controlling rapid and complex sequences of movements. We show that two human subjects successfully moved a robot between several rooms by mental control only, using an EEG-based brain-machine interface that recognized three mental states. Mental control was comparable to manual control on the same task with a performance ratio of 0.74.",
"title": ""
},
{
"docid": "8c8ece47107bc1580e925e42d266ec87",
"text": "How do brains shape social networks, and how do social ties shape the brain? Social networks are complex webs by which ideas spread among people. Brains comprise webs by which information is processed and transmitted among neural units. While brain activity and structure offer biological mechanisms for human behaviors, social networks offer external inducers or modulators of those behaviors. Together, these two axes represent fundamental contributors to human experience. Integrating foundational knowledge from social and developmental psychology and sociology on how individuals function within dyads, groups, and societies with recent advances in network neuroscience can offer new insights into both domains. Here, we use the example of how ideas and behaviors spread to illustrate the potential of multilayer network models.",
"title": ""
},
{
"docid": "44de39859665488f8df950007d7a01c6",
"text": "Topic models provide insights into document collections, and their supervised extensions also capture associated document-level metadata such as sentiment. However, inferring such models from data is often slow and cannot scale to big data. We build upon the “anchor” method for learning topic models to capture the relationship between metadata and latent topics by extending the vector-space representation of word-cooccurrence to include metadataspecific dimensions. These additional dimensions reveal new anchor words that reflect specific combinations of metadata and topic. We show that these new latent representations predict sentiment as accurately as supervised topic models, and we find these representations more quickly without sacrificing interpretability. Topic models were introduced in an unsupervised setting (Blei et al., 2003), aiding in the discovery of topical structure in text: large corpora can be distilled into human-interpretable themes that facilitate quick understanding. In addition to illuminating document collections for humans, topic models have increasingly been used for automatic downstream applications such as sentiment analysis (Titov and McDonald, 2008; Paul and Girju, 2010; Nguyen et al., 2013). Unfortunately, the structure discovered by unsupervised topic models does not necessarily constitute the best set of features for tasks such as sentiment analysis. Consider a topic model trained on Amazon product reviews. A topic model might discover a topic about vampire romance. However, we often want to go deeper, discovering facets of a topic that reflect topic-specific sentiment, e.g., “buffy” and “spike” for positive sentiment vs. “twilight” and “cullen” for negative sentiment. Techniques for discovering such associations, called supervised topic models (Section 2), both produce interpretable topics and predict metadata values. While unsupervised topic models now have scalable inference strategies (Hoffman et al., 2013; Zhai et al., 2012), supervised topic model inference has not received as much attention and often scales poorly. The anchor algorithm is a fast, scalable unsupervised approach for finding “anchor words”—precise words with unique co-occurrence patterns that can define the topics of a collection of documents. We augment the anchor algorithm to find supervised sentiment-specific anchor words (Section 3). Our algorithm is faster and just as effective as traditional schemes for supervised topic modeling (Section 4). 1 Anchors: Speedy Unsupervised Models The anchor algorithm (Arora et al., 2013) begins with a V × V matrix Q̄ of word co-occurrences, where V is the size of the vocabulary. Each word type defines a vector Q̄i,· of length V so that Q̄i,j encodes the conditional probability of seeing word j given that word i has already been seen. Spectral methods (Anandkumar et al., 2012) and the anchor algorithm are fast alternatives to traditional topic model inference schemes because they can discover topics via these summary statistics (quadratic in the number of types) rather than examining the whole dataset (proportional to the much larger number of tokens). The anchor algorithm takes its name from the idea of anchor words—words which unambiguously identify a particular topic. For instance, “wicket” might be an anchor word for the cricket topic. Thus, for any anchor word a, Q̄a,· will look like a topic distribution. 
Q̄wicket,· will have high probability for “bowl”, “century”, “pitch”, and “bat”; these words are related to cricket, but they cannot be anchor words because they are also related to other topics. Because these other non-anchor words could be topically ambiguous, their co-occurrence must be explained through some combination of anchor words; thus for non-anchor word i,",
"title": ""
},
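A minimal numpy sketch of the Q̄ construction described in the passage above, plus a simplified greedy anchor-word selection in the spirit of the anchor method; the toy corpus, the handling of the diagonal, and the Gram-Schmidt-style selection heuristic are illustrative assumptions rather than the exact procedure of Arora et al. (2013).

```python
import numpy as np

def cooccurrence_conditional(doc_term):
    """Row-stochastic Q_bar: Q[i, j] approximates P(word j | word i), from a docs x vocab count matrix."""
    co = (doc_term.T @ doc_term).astype(float)   # raw co-occurrence counts
    np.fill_diagonal(co, 0.0)                    # ignore self co-occurrence in this sketch
    row_sums = co.sum(axis=1, keepdims=True)
    return np.divide(co, row_sums, out=np.zeros_like(co), where=row_sums > 0)

def greedy_anchors(Q, k):
    """Pick k rows that are far from the span of previously chosen rows (Gram-Schmidt style)."""
    Q = Q.copy()
    anchors = []
    for _ in range(k):
        norms = np.linalg.norm(Q, axis=1)
        a = int(np.argmax(norms))
        anchors.append(a)
        direction = Q[a] / (norms[a] + 1e-12)
        Q = Q - np.outer(Q @ direction, direction)   # remove the chosen direction from every row
    return anchors

# Tiny illustrative corpus: 4 documents over a 5-word vocabulary.
vocab = ["wicket", "bowl", "election", "vote", "pitch"]
doc_term = np.array([
    [2, 1, 0, 0, 1],
    [1, 2, 0, 0, 1],
    [0, 0, 2, 1, 0],
    [0, 0, 1, 2, 0],
])
Q = cooccurrence_conditional(doc_term)
print([vocab[i] for i in greedy_anchors(Q, k=2)])
```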
{
"docid": "8b67be5c3adac9bcdbc1aa836708987d",
"text": "The adaptive toolbox is a Darwinian-inspired theory that conceives of the mind as a modular system that is composed of heuristics, their building blocks, and evolved capacities. The study of the adaptive toolbox is descriptive and analyzes the selection and structure of heuristics in social and physical environments. The study of ecological rationality is prescriptive and identifies the structure of environments in which specific heuristics either succeed or fail. Results have been used for designing heuristics and environments to improve professional decision making in the real world.",
"title": ""
},
{
"docid": "f9d1777be40b879aee2f6e810422d266",
"text": "This study intended to examine the effect of ground colour on memory performance. Most of the past research on colour-memory relationship focus on the colour of the figure rather than the background. Based on these evidences, this study try to extend the previous works to the ground colour and how its effect memory performance based on recall rate. 90 undergraduate students will participate in this study. The experimental design will be used is multiple independent group experimental design. Fifty geometrical shapes will be used in the study phase with measurement of figure, 4.74cm x 3.39cm and ground, 19cm x 25cm. The participants will be measured on numbers of shape that are being recall in test phase in three experimental conditions, coloured background, non-coloured background and mix between coloured and non-coloured background slides condition. It is hypothesized that shape with coloured background will be recalled better than shape with non-coloured background. Analysis of variance (ANOVA) statistical procedure will be used to analyse the data of recall performance between three experimental groups using Statistical Package for Social Sciences (SPSS 17.0) to examine the cause and effect relationship between those variables.",
"title": ""
},
{
"docid": "874e60d3f37aa01d201294ed247eb6a4",
"text": "FokI is a type IIs restriction endonuclease comprised of a DNA recognition domain and a catalytic domain. The structural similarity of the FokI catalytic domain to the type II restriction endonuclease BamHI monomer suggested that the FokI catalytic domains may dimerize. In addition, the FokI structure, presented in an accompanying paper in this issue of Proceedings, reveals a dimerization interface between catalytic domains. We provide evidence here that FokI catalytic domain must dimerize for DNA cleavage to occur. First, we show that the rate of DNA cleavage catalyzed by various concentrations of FokI are not directly proportional to the protein concentration, suggesting a cooperative effect for DNA cleavage. Second, we constructed a FokI variant, FokN13Y, which is unable to bind the FokI recognition sequence but when mixed with wild-type FokI increases the rate of DNA cleavage. Additionally, the FokI catalytic domain that lacks the DNA binding domain was shown to increase the rate of wild-type FokI cleavage of DNA. We also constructed an FokI variant, FokD483A, R487A, which should be defective for dimerization because the altered residues reside at the putative dimerization interface. Consistent with the FokI dimerization model, the variant FokD483A, R487A revealed greatly impaired DNA cleavage. Based on our work and previous reports, we discuss a pathway of DNA binding, dimerization, and cleavage by FokI endonuclease.",
"title": ""
},
{
"docid": "cd92f750461aff9877853f483cf09ecf",
"text": "Designing and maintaining Web applications is one of the major challenges for the software industry of the year 2000. In this paper we present Web Modeling Language (WebML), a notation for specifying complex Web sites at the conceptual level. WebML enables the high-level description of a Web site under distinct orthogonal dimensions: its data content (structural model), the pages that compose it (composition model), the topology of links between pages (navigation model), the layout and graphic requirements for page rendering (presentation model), and the customization features for one-to-one content delivery (personalization model). All the concepts of WebML are associated with a graphic notation and a textual XML syntax. WebML specifications are independent of both the client-side language used for delivering the application to users, and of the server-side platform used to bind data to pages, but they can be effectively used to produce a site implementation in a specific technological setting. WebML guarantees a model-driven approach to Web site development, which is a key factor for defining a novel generation of CASE tools for the construction of complex sites, supporting advanced features like multi-device access, personalization, and evolution management. The WebML language and its accompanying design method are fully implemented in a pre-competitive Web design tool suite, called ToriiSoft.",
"title": ""
},
{
"docid": "42ebaee6fdbfc487ae2a21e8a55dd3e4",
"text": "Human motion prediction, forecasting human motion in a few milliseconds conditioning on a historical 3D skeleton sequence, is a long-standing problem in computer vision and robotic vision. Existing forecasting algorithms rely on extensive annotated motion capture data and are brittle to novel actions. This paper addresses the problem of few-shot human motion prediction, in the spirit of the recent progress on few-shot learning and meta-learning. More precisely, our approach is based on the insight that having a good generalization from few examples relies on both a generic initial model and an effective strategy for adapting this model to novel tasks. To accomplish this, we propose proactive and adaptive meta-learning (PAML) that introduces a novel combination of model-agnostic meta-learning and model regression networks and unifies them into an integrated, end-to-end framework. By doing so, our meta-learner produces a generic initial model through aggregating contextual information from a variety of prediction tasks, while effectively adapting this model for use as a task-specific one by leveraging learningto-learn knowledge about how to transform few-shot model parameters to many-shot model parameters. The resulting PAML predictor model significantly improves the prediction performance on the heavily benchmarked H3.6M dataset in the small-sample size regime.",
"title": ""
},
{
"docid": "eda6795cb79e912a7818d9970e8ca165",
"text": "This study aimed to examine the relationship between maximum leg extension strength and sprinting performance in youth elite male soccer players. Sixty-three youth players (12.5 ± 1.3 years) performed 5 m, flying 15 m and 20 m sprint tests and a zigzag agility test on a grass field using timing gates. Two days later, subjects performed a one-repetition maximum leg extension test (79.3 ± 26.9 kg). Weak to strong correlations were found between leg extension strength and the time to perform 5 m (r = -0.39, p = 0.001), flying 15 m (r = -0.72, p < 0.001) and 20 m (r = -0.67, p < 0.001) sprints; between body mass and 5 m (r = -0.43, p < 0.001), flying 15 m (r = -0.75, p < 0.001), 20 m (r = -0.65, p < 0.001) sprints and agility (r =-0.29, p < 0.001); and between height and 5 m (r = -0.33, p < 0.01) and flying 15 m (r = -0.74, p < 0.001) sprints. Our results show that leg muscle strength and anthropometric variables strongly correlate with sprinting ability. This suggests that anthropometric characteristics should be considered to compare among youth players, and that youth players should undergo strength training to improve running speed.",
"title": ""
},
{
"docid": "61bde9866c99e98aac813a9410d33189",
"text": ": Steganography is an art and science of writing hidden messages in such a way that no one apart from the intended recipient knows the existence of the message.The maximum number of bits that can be used for LSB audio steganography without causing noticeable perceptual distortion to the host audio signal is 4 LSBs, if 16 bits per sample audio sequences are used.We propose two novel approaches of substit ution technique of audio steganography that improves the capacity of cover audio for embedding additional data. Using these methods, message bits are embedded into multiple and variable LSBs. These methods utilize upto 7 LSBs for embedding data.Results show that both these methods improve capacity of data hiding of cover audio by 35% to 70% as compared to the standerd LSB algorithm with 4 LSBs used for data embedding. And using encryption and decryption techniques performing cryptography. So for this RSA algorithm used. KeywordsInformation hiding,Audio steganography,Least significant bit(LSB),Most significant bit(MSB)",
"title": ""
},
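A minimal Python sketch of standard 4-LSB embedding and extraction for 16-bit PCM samples, the baseline the passage above builds on; the sample values and message bits are made up, and the variable multi-LSB schemes and the RSA layer mentioned in the passage are not reproduced.

```python
import numpy as np

def embed_lsb(samples, message_bits, n_lsb=4):
    """Hide message bits in the n_lsb least significant bits of 16-bit PCM samples."""
    samples = samples.astype(np.int16)            # work on a copy of the cover signal
    bits = list(message_bits)
    mask = ~np.int16((1 << n_lsb) - 1)            # clears the n_lsb lowest bits
    for i in range(0, len(bits), n_lsb):
        chunk = bits[i:i + n_lsb]
        value = int("".join(str(b) for b in chunk).ljust(n_lsb, "0"), 2)
        idx = i // n_lsb
        samples[idx] = (samples[idx] & mask) | value
    return samples

def extract_lsb(samples, n_bits, n_lsb=4):
    """Read back n_bits message bits from the stego samples."""
    out = []
    for idx in range((n_bits + n_lsb - 1) // n_lsb):
        value = int(samples[idx]) & ((1 << n_lsb) - 1)
        out.extend(int(b) for b in format(value, f"0{n_lsb}b"))
    return out[:n_bits]

cover = np.array([1200, -3400, 560, 17, -25000, 99], dtype=np.int16)
secret = [1, 0, 1, 1, 0, 0, 1, 0]                 # 8 message bits
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, len(secret)) == secret
```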
{
"docid": "64ae34c959e0e4c9a6a155eeb334b3ea",
"text": "Most conventional sentence similarity methods only focus on similar parts of two input sentences, and simply ignore the dissimilar parts, which usually give us some clues and semantic meanings about the sentences. In this work, we propose a model to take into account both the similarities and dissimilarities by decomposing and composing lexical semantics over sentences. The model represents each word as a vector, and calculates a semantic matching vector for each word based on all words in the other sentence. Then, each word vector is decomposed into a similar component and a dissimilar component based on the semantic matching vector. After this, a twochannel CNN model is employed to capture features by composing the similar and dissimilar components. Finally, a similarity score is estimated over the composed feature vectors. Experimental results show that our model gets the state-of-the-art performance on the answer sentence selection task, and achieves a comparable result on the paraphrase identification task.",
"title": ""
},
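A minimal numpy sketch of the decomposition step described in the passage above: each word vector is split into a similar and a dissimilar component relative to a semantic matching vector built from the other sentence. The toy three-dimensional embeddings and the cosine-weighted matching function are assumptions; the CNN composition stage is not shown.

```python
import numpy as np

def decompose(S, T):
    """Split each row of S into components similar/dissimilar to sentence T.

    S, T: (n_words, dim) matrices of word embeddings.
    Returns (similar, dissimilar), each shaped like S.
    """
    S_n = S / (np.linalg.norm(S, axis=1, keepdims=True) + 1e-12)
    T_n = T / (np.linalg.norm(T, axis=1, keepdims=True) + 1e-12)
    weights = np.clip(S_n @ T_n.T, 0.0, None)      # cosine similarities as matching weights
    match = weights @ T                            # semantic matching vector for each word in S
    match_n = match / (np.linalg.norm(match, axis=1, keepdims=True) + 1e-12)
    similar = np.sum(S * match_n, axis=1, keepdims=True) * match_n   # projection onto the match
    dissimilar = S - similar                                         # orthogonal residual
    return similar, dissimilar

# Toy 3-dimensional embeddings for two short "sentences".
S = np.array([[1.0, 0.2, 0.0], [0.1, 1.0, 0.3]])
T = np.array([[0.9, 0.1, 0.1], [0.0, 0.2, 1.0]])
sim, dis = decompose(S, T)
print(sim.round(2))
print(dis.round(2))
```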
{
"docid": "19ea9b23f8757804c23c21293834ff3f",
"text": "We try to address the problem of document layout understanding using a simple algorithm which generalizes across multiple domains while training on just few examples per domain. We approach this problem via supervised object detection method and propose a methodology to overcome the requirement of large datasets. We use the concept of transfer learning by pre-training our object detector on a simple artificial (source) dataset and fine-tuning it on a tiny domain specific (target) dataset. We show that this methodology works for multiple domains with training samples as less as 10 documents. We demonstrate the effect of each component of the methodology in the end result and show the superiority of this methodology over simple object detectors.",
"title": ""
},
{
"docid": "b6cc41414ad1dae4ccd2fcf4df1bd3b6",
"text": "Bio-implantable sensors using radio-frequency telemetry links that enable the continuous monitoring and recording of physiological data are receiving a great deal of attention. The objective of this paper is to study the feasibility of an implantable sensor for tissue characterization. This has been done by querying an LC sensor surrounded by dispersive tissues by an external antenna. The resonant frequency of the sensor is monitored by measuring the input impedance of the antenna, and correlated to the desired quantities. Using an equivalent circuit model of the sensor that accounts for the properties of the encapsulating tissue, analytical expressions have been developed for the extraction of the tissue permittivity and conductivity. Finally, experimental validation has been performed with a telemetry link that consists of a loop antenna and a fabricated LC sensor immersed in single and multiple dispersive phantom materials.",
"title": ""
},
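The passage above relies on the resonant frequency of an LC tank shifting as the surrounding tissue changes the effective capacitance. Below is a minimal Python sketch of f0 = 1/(2*pi*sqrt(L*C)); the inductance, capacitance, and permittivity values are illustrative assumptions, not measured sensor parameters.

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency (Hz) of an ideal LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L = 100e-9          # 100 nH coil (illustrative)
C_air = 2.0e-12     # 2 pF in air (illustrative)
eps_rel = 1.6       # assumed effective relative permittivity increase from surrounding tissue

f_air = resonant_frequency(L, C_air)
f_tissue = resonant_frequency(L, eps_rel * C_air)    # capacitance scales with permittivity
print(f"f0 in air:    {f_air / 1e6:.1f} MHz")
print(f"f0 in tissue: {f_tissue / 1e6:.1f} MHz")     # lower frequency as permittivity rises
```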
{
"docid": "9f84ec96cdb45bcf333db9f9459a3d86",
"text": "A novel printed crossed dipole with broad axial ratio (AR) bandwidth is proposed. The proposed dipole consists of two dipoles crossed through a 90°phase delay line, which produces one minimum AR point due to the sequentially rotated configuration and four parasitic loops, which generate one additional minimum AR point. By combining these two minimum AR points, the proposed dipole achieves a broadband circularly polarized (CP) performance. The proposed antenna has not only a broad 3 dB AR bandwidth of 28.6% (0.75 GHz, 2.25-3.0 GHz) with respect to the CP center frequency 2.625 GHz, but also a broad impedance bandwidth for a voltage standing wave ratio (VSWR) ≤2 of 38.2% (0.93 GHz, 1.97-2.9 GHz) centered at 2.435 GHz and a peak CP gain of 8.34 dBic. Its arrays of 1 × 2 and 2 × 2 arrangement yield 3 dB AR bandwidths of 50.7% (1.36 GHz, 2-3.36 GHz) with respect to the CP center frequency, 2.68 GHz, and 56.4% (1.53 GHz, 1.95-3.48 GHz) at the CP center frequency, 2.715 GHz, respectively. This paper deals with the designs and experimental results of the proposed crossed dipole with parasitic loop resonators and its arrays.",
"title": ""
}
] |
scidocsrr
|
8241a37781dba5ef020939ffeabcf0a2
|
Regional Grey Matter Structure Differences between Transsexuals and Healthy Controls—A Voxel Based Morphometry Study
|
[
{
"docid": "6d45e9d4d1f46debcbf1b95429be60fd",
"text": "Sex differences in cortical thickness (CTh) have been extensively investigated but as yet there are no reports on CTh in transsexuals. Our aim was to determine whether the CTh pattern in transsexuals before hormonal treatment follows their biological sex or their gender identity. We performed brain magnetic resonance imaging on 94 subjects: 24 untreated female-to-male transsexuals (FtMs), 18 untreated male-to-female transsexuals (MtFs), and 29 male and 23 female controls in a 3-T TIM-TRIO Siemens scanner. T1-weighted images were analyzed to obtain CTh and volumetric subcortical measurements with FreeSurfer software. CTh maps showed control females have thicker cortex than control males in the frontal and parietal regions. In contrast, males have greater right putamen volume. FtMs had a similar CTh to control females and greater CTh than males in the parietal and temporal cortices. FtMs had larger right putamen than females but did not differ from males. MtFs did not differ in CTh from female controls but had greater CTh than control males in the orbitofrontal, insular, and medial occipital regions. In conclusion, FtMs showed evidence of subcortical gray matter masculinization, while MtFs showed evidence of CTh feminization. In both types of transsexuals, the differences with respect to their biological sex are located in the right hemisphere.",
"title": ""
}
] |
[
{
"docid": "2ff290ba8bab0de760c289bff3feee06",
"text": "Bayesian Networks are being used extensively for reasoning under uncertainty. Inference mechanisms for Bayesian Networks are compromised by the fact that they can only deal with propositional domains. In this work, we introduce an extension of that formalism, Hierarchical Bayesian Networks, that can represent additional information about the structure of the domains of variables. Hierarchical Bayesian Networks are similar to Bayesian Networks, in that they represent probabilistic dependencies between variables as a directed acyclic graph, where each node of the graph corresponds to a random variable and is quanti ed by the conditional probability of that variable given the values of its parents in the graph. What extends the expressive power of Hierarchical Bayesian Networks is that a node may correspond to an aggregation of simpler types. A component of one node may itself represent a composite structure; this allows the representation of complex hierarchical domains. Furthermore, probabilistic dependencies can be expressed at any level, between nodes that are contained in the same structure.",
"title": ""
},
{
"docid": "b2ebad4a19cdfce87e6b69a25ba6ab49",
"text": "Collaborative filtering have become increasingly important with the development of Web 2.0. Online shopping service providers aim to provide users with quality list of recommended items that will enhance user satisfaction and loyalty. Matrix factorization approaches have become the dominant method as they can reduce the dimension of the data set and alleviate the sparsity problem. However, matrix factorization approaches are limited because they depict each user as one preference vector. In practice, we observe that users may have different preferences when purchasing different subsets of items, and the periods between purchases also vary from one user to another. In this work, we propose a probabilistic approach to learn latent clusters in the large user-item matrix, and incorporate temporal information into the recommendation process. Experimental results on a real world dataset demonstrate that our approach significantly improves the conversion rate, precision and recall of state-of-the-art methods.",
"title": ""
},
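For context on the matrix-factorization baseline the passage above extends, here is a minimal Python sketch of plain SGD matrix factorization on a toy rating list; the hyperparameters and ratings are made up, and the paper's probabilistic latent-cluster and temporal components are not reproduced.

```python
import numpy as np

def factorize(ratings, k=2, lr=0.05, reg=0.02, epochs=200, seed=0):
    """Learn user/item factors by SGD on observed (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    n_users = max(u for u, _, _ in ratings) + 1
    n_items = max(i for _, i, _ in ratings) + 1
    P = 0.1 * rng.standard_normal((n_users, k))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Toy observed ratings: (user, item, rating).
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 2), (2, 2, 5)]
P, Q = factorize(ratings)
print(f"Predicted rating of item 2 for user 0: {P[0] @ Q[2]:.2f}")
```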
{
"docid": "3e442c589eb4b2501b6ed2a8f1774e73",
"text": "Today, sensors are increasingly used for data collection. In the medical domain, for example, vital signs (e.g., pulse or oxygen saturation) of patients can be measured with sensors and used for further processing. In this paper, different types of applications will be discussed whether sensors might be used in the context of these applications and their suitability for applying external sensors to them. Furthermore, a system architecture for adding sensor technology to respective applications is presented. For this purpose, a real-world business application scenario in the field of well-being and fitness is presented. In particular, we integrated two different sensors in our fitness application. We report on the lessons learned from the implementation and use of this application, e.g., in respect to connection and data structure. They mainly deal with problems relating to the connection and communication between the smart mobile device and the external sensors, as well as the selection of the appropriate type of application. Finally, a robust sensor framework, arising from this fitness application is presented. This framework provides basic features for connecting sensors. Particularly, in the medical domain, it is crucial to provide an easy to use toolset to relieve medical staff.",
"title": ""
},
{
"docid": "0c28741df3a9bf999f4abe7b840cfb26",
"text": "In this work, we analyze taxi-GPS traces collected in Lisbon, Portugal. We perform an exploratory analysis to visualize the spatiotemporal variation of taxi services; explore the relationships between pick-up and drop-off locations; and analyze the behavior in downtime (between the previous drop-off and the following pick-up). We also carry out the analysis of predictability of taxi trips for the next pick-up area type given history of taxi flow in time and space.",
"title": ""
},
{
"docid": "e8459c80dc392cac844b127bc5994a5d",
"text": "Database security has become a vital issue in modern Web applications. Critical business data in databases is an evident target for attack. Therefore, ensuring the confidentiality, privacy and integrity of data is a major issue for the security of database systems. Recent high profile data thefts have shown that perimeter defenses are insufficient to secure sensitive data. This paper studies security of the databases shared between many parties from a cryptographic perspective. We propose Mixed Cryptography Database (MCDB), a novel framework to encrypt databases over untrusted networks in a mixed form using many keys owned by different parties. The encryption process is based on a new data classification according to the data owner. The proposed framework is very useful in strengthening the protection of sensitive data even if the database server is attacked at multiple points from the inside or outside.",
"title": ""
},
{
"docid": "c1c241d9275e154a3fc2ca41a22b2c43",
"text": "Population counts and longitude and latitude coordinates were estimated for the 50 largest cities in the United States by computational linguistic techniques and by human participants. The mathematical technique Latent Semantic Analysis applied to newspaper texts produced similarity ratings between the 50 cities that allowed for a multidimensional scaling (MDS) of these cities. MDS coordinates correlated with the actual longitude and latitude of these cities, showing that cities that are located together share similar semantic contexts. This finding was replicated using a first-order co-occurrence algorithm. The computational estimates of geographical location as well as population were akin to human estimates. These findings show that language encodes geographical information that language users in turn may use in their understanding of language and the world.",
"title": ""
},
{
"docid": "0e56ef5556c34274de7d7dceff17317e",
"text": "We investigate grounded sentence representations, where we train a sentence encoder to predict the image features of a given caption— i.e., we try to “imagine” how a sentence would be depicted visually—and use the resultant features as sentence representations. We examine the quality of the learned representations on a variety of standard sentence representation quality benchmarks, showing improved performance for groundedmodels over non-grounded ones. In addition, we thoroughly analyze the extent to which grounding contributes to improved performance, and show that the system also learns improved word embeddings.",
"title": ""
},
{
"docid": "4bab29f0689f301683370e73fa045bcc",
"text": "Over the past decade, the traditional purchasing and logistics functions have evolved into a broader strategic approach to materials and distribution management known as supply chain management. This research reviews the literature base and development of supply chain management from two separate paths that eventually merged into the modern era of a holistic and strategic approach to operations, materials and logistics management. In addition, this article attempts to clearly describe supply chain management since the literature is replete with buzzwords that address elements or stages of this new management philosophy. This article also discusses various supply chain management strategies and the conditions conducive to supply chain management. ( 2000 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a036dd162a23c5d24125d3270e22aaf7",
"text": "1 Problem Description This work is focused on the relationship between the news articles (breaking news) and stock prices. The student will design and develop methods to analyze how and when the news articles influence the stock market. News articles about Norwegian oil related companies and stock prices from \" BW Offshore Limited \" (BWO), \" DNO International \" (DNO), \" Frontline \" (FRO), \" Petroleum Geo-Services \" (PGS), \" Seadrill \" (SDRL), \" Sevan Marine \" (SEVAN), \" Siem Offshore \" (SIOFF), \" Statoil \" (STL) and \" TGS-NOPEC Geophysical Company \" (TGS) will be crawled, preprocessed and the important features in the text will be extracted to effectively represent the news in a form that allows the application of computational techniques. This data will then be used to train text sense classifiers. A prototype system that employs such classifiers will be developed to support the trader in taking sell/buy decisions. Methods will be developed for automaticall sense-labeling of news that are informed by the correlation between the changes in the stock prices and the breaking news. Performance of the prototype decision support system will be compared with a chosen baseline method for trade-related decision making. Abstract This thesis investigates the prediction of possible stock price changes immediately after news article publications. This is done by automatic analysis of these news articles. Some background information about financial trading theory and text mining is given in addition to an overview of earlier related research in the field of automatic news article analyzes with the purpose of predicting future stock prices. In this thesis a system is designed and implemented to predict stock price trends for the time immediately after the publication of news articles. This system consists mainly of four components. The first component gathers news articles and stock prices automatically from internet. The second component prepares the news articles by sending them to some document preprocessing steps and finding relevant features before they are sent to a document representation process. The third component categorizes the news articles into predefined categories, and finally the fourth component applies appropriate trading strategies depending on the category of the news article. This system requires a labeled data set to train the categorization component. This data set is labeled automatically on the basis of the price trends directly after the news article publication. An additional label refining step using clustering is added in an …",
"title": ""
},
{
"docid": "bd039cbb3b9640e917b9cc15e45e5536",
"text": "We introduce adversarial neural networks for representation learning as a novel approach to transfer learning in brain-computer interfaces (BCIs). The proposed approach aims to learn subject-invariant representations by simultaneously training a conditional variational autoencoder (cVAE) and an adversarial network. We use shallow convolutional architectures to realize the cVAE, and the learned encoder is transferred to extract subject-invariant features from unseen BCI users’ data for decoding. We demonstrate a proof-of-concept of our approach based on analyses of electroencephalographic (EEG) data recorded during a motor imagery BCI experiment.",
"title": ""
},
{
"docid": "9e10ca5f3776df0fe0ca41a8046adb27",
"text": "The availability of smartphone and wearable sensor technology is leading to a rapid accumulation of human subject data, and machine learning is emerging as a technique to map that data into clinical predictions. As machine learning algorithms are increasingly used to support clinical decision making, it is important to reliably quantify their prediction accuracy. Cross-validation is the standard approach for evaluating the accuracy of such algorithms; however, several cross-validations methods exist and only some of them are statistically meaningful. Here we compared two popular cross-validation methods: record-wise and subject-wise. Using both a publicly available dataset and a simulation, we found that record-wise cross-validation often massively overestimates the prediction accuracy of the algorithms. We also found that this erroneous method is used by almost half of the retrieved studies that used accelerometers, wearable sensors, or smartphones to predict clinical outcomes. As we move towards an era of machine learning based diagnosis and treatment, using proper methods to evaluate their accuracy is crucial, as erroneous results can mislead both clinicians and data scientists.",
"title": ""
},
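A minimal scikit-learn sketch contrasting record-wise and subject-wise cross-validation on synthetic data, in the spirit of the comparison in the passage above; the feature construction (a label-irrelevant per-subject signature) and the random-forest classifier are assumptions chosen to expose the optimistic bias of record-wise splitting, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_subjects, records_per_subject, n_features = 20, 30, 5
subject_ids = np.repeat(np.arange(n_subjects), records_per_subject)
labels = np.repeat(rng.integers(0, 2, n_subjects), records_per_subject)

# Each subject has an idiosyncratic feature signature that carries no label information,
# but lets a flexible model recognise the subject when its records leak into the test fold.
subject_signature = rng.normal(0, 3, (n_subjects, n_features))
X = subject_signature[subject_ids] + rng.normal(0, 1, (len(labels), n_features))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
record_wise = cross_val_score(clf, X, labels, cv=KFold(n_splits=5, shuffle=True, random_state=0))
subject_wise = cross_val_score(clf, X, labels, groups=subject_ids, cv=GroupKFold(n_splits=5))
print(f"record-wise accuracy:  {record_wise.mean():.2f}")   # optimistic (memorises subjects)
print(f"subject-wise accuracy: {subject_wise.mean():.2f}")  # near chance, the honest estimate
```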
{
"docid": "dd5e9984bbafb6b6aa8030e9a47c6230",
"text": "The variational Bayesian (VB) approximation is known to be a promising approach to Bayesian estimation, when the rigorous calculation of the Bayes posterior is intractable. The VB approximation has been successfully applied to matrix factorization (MF), offering automatic dimensionality selection for principal component analysis. Generally, finding the VB solution is a non-convex problem, and most methods rely on a local search algorithm derived through a standard procedure for the VB approximation. In this paper, we show that a better option is available for fully-observed VBMF—the global solution can be analytically computed. More specifically, the global solution is a reweighted SVD of the observed matrix, and each weight can be obtained by solving a quartic equation with its coefficients being functions of the observed singular value. We further show that the global optimal solution of empirical VBMF (where hyperparameters are also learned from data) can also be analytically computed. We illustrate the usefulness of our results through experiments in multi-variate analysis.",
"title": ""
},
{
"docid": "4b9d994288fc555c89554cc2c7e41712",
"text": "The authors have been developing humanoid robots in order to develop new mechanisms and functions for a humanoid robot that has the ability to communicate naturally with a human by expressing human-like emotion. In 2004, we developed the emotion expression humanoid robot WE-4RII (Waseda Eye No.4 Refined II) by integrating the new humanoid robot hands RCH-I (RoboCasa Hand No.1) into the emotion expression humanoid robot WE-4R. We confirmed that WE-4RII can effectively express its emotion.",
"title": ""
},
{
"docid": "c7cfc79579704027bf28fc7197496b8c",
"text": "There is a growing trend nowadays for patients to seek the least invasive treatments possible with less risk of complications and downtime to correct rhytides and ptosis characteristic of aging. Nonsurgical face and neck rejuvenation has been attempted with various types of interventions. Suture suspension of the face, although not a new idea, has gained prominence with the advent of the so called \"lunch-time\" face-lift. Although some have embraced this technique, many more express doubts about its safety and efficacy limiting its widespread adoption. The present review aims to evaluate several clinical parameters pertaining to thread suspensions such as longevity of results of various types of polypropylene barbed sutures, their clinical efficacy and safety, and the risk of serious adverse events associated with such sutures. Early results of barbed suture suspension remain inconclusive. Adverse events do occur though mostly minor, self-limited, and of short duration. Less clear are the data on the extent of the peak correction and the longevity of effect, and the long-term effects of the sutures themselves. The popularity of barbed suture lifting has waned for the time being. Certainly, it should not be presented as an alternative to a face-lift.",
"title": ""
},
{
"docid": "72600a23cc70d9cc3641cbfc7f23ba4d",
"text": "Primary cicatricial alopecias (PCAs) are a rare, but important, group of disorders that cause irreversible damage to hair follicles resulting in scarring and permanent hair loss. They may also signify an underlying systemic disease. Thus, it is of paramount importance that clinicians who manage patients with hair loss are able to diagnose these disorders accurately. Unfortunately, PCAs are notoriously difficult conditions to diagnose and treat. The aim of this review is to present a rational and pragmatic guide to help clinicians in the professional assessment, investigation and diagnosis of patients with PCA. Illustrating typical clinical and histopathological presentations of key PCA entities we show how dermatoscopy can be profitably used for clinical diagnosis. Further, we advocate the search for loss of follicular ostia as a clinical hallmark of PCA, and suggest pragmatic strategies that allow rapid formulation of a working diagnosis.",
"title": ""
},
{
"docid": "bcbbc8913330378af7c986549ab4bb30",
"text": "Anomaly detection involves identifying the events which do not conform to an expected pattern in data. A common approach to anomaly detection is to identify outliers in a latent space learned from data. For instance, PCA has been successfully used for anomaly detection. Variational autoencoder (VAE) is a recently-developed deep generative model which has established itself as a powerful method for learning representation from data in a nonlinear way. However, the VAE does not take the temporal dependence in data into account, so it limits its applicability to time series. In this paper we combine the echo-state network, which is a simple training method for recurrent networks, with the VAE, in order to learn representation from multivariate time series data. We present an echo-state conditional variational autoencoder (ES-CVAE) and demonstrate its useful behavior in the task of anomaly detection in multivariate time series data.",
"title": ""
},
{
"docid": "4ecd27822fee036150b1c8f3db70c679",
"text": "Despite the proliferation of e-services, they are still characterized by uncertainties. As result, consumer trust beliefs are considered an important determinant of e-service adoption. Past work has not however considered the potentially dynamic nature of these trust beliefs, and how early-stage trust might influence later-stage adoption and use. To address this gap, this study draws on the theory of reasoned action and expectation-confirmation theory to carry out a longitudinal study of trust in eservices. Specifically, we examine how trust interacts with other consumer beliefs, such as perceived usefulness, and how together these beliefs influence consumer intentions and actual behaviours toward e-services at both initial and later stages of use. The empirical context is online health information services. Data collection was carried out at two time periods, approximately 7 weeks apart using a student population. The results show that perceived usefulness and trust are important at both initial and later stages in consumer acceptance of online health services. Consumers’ actual usage experiences modify perceptions of usefulness and influence the confirmation of their initial expectations. These results have implications for our understanding of the dynamic nature of trust and perceived usefulness, and their roles in long term success of e-services.",
"title": ""
},
{
"docid": "4c4a28724bf847de8e57765f869c4f3f",
"text": "Emotional sensitivity, emotion regulation and impulsivity are fundamental topics in research of borderline personality disorder (BPD). Studies using fMRI examining the neural correlates concerning these topics is growing and has just begun understanding the underlying neural correlates in BPD. However, there are strong similarities but also important differences in results of different studies. It is therefore important to know in more detail what these differences are and how we should interpret these. In present review a critical light is shed on the fMRI studies examining emotional sensitivity, emotion regulation and impulsivity in BPD patients. First an outline of the methodology and the results of the studies will be given. Thereafter important issues that remained unanswered and topics to improve future research are discussed. Future research should take into account the limited power of previous studies and focus more on BPD specificity with regard to time course responses, different regulation strategies, manipulation of self-regulation, medication use, a wider range of stimuli, gender effects and the inclusion of a clinical control group.",
"title": ""
},
{
"docid": "9f52ee95148490555c10f699678b640d",
"text": "Prior research indicates that Facebook usage predicts declines in subjective well-being over time. How does this come about? We examined this issue in 2 studies using experimental and field methods. In Study 1, cueing people in the laboratory to use Facebook passively (rather than actively) led to declines in affective well-being over time. Study 2 replicated these findings in the field using experience-sampling techniques. It also demonstrated how passive Facebook usage leads to declines in affective well-being: by increasing envy. Critically, the relationship between passive Facebook usage and changes in affective well-being remained significant when controlling for active Facebook use, non-Facebook online social network usage, and direct social interactions, highlighting the specificity of this result. These findings demonstrate that passive Facebook usage undermines affective well-being.",
"title": ""
},
{
"docid": "a64ae2e6e72b9e38c700ddd62b4f6bf3",
"text": "Cerebral gray-matter volume (GMV) decreases in normal aging but the extent of the decrease may be experience-dependent. Bilingualism may be one protective factor and in this article we examine its potential protective effect on GMV in a region that shows strong age-related decreases-the left anterior temporal pole. This region is held to function as a conceptual hub and might be expected to be a target of plastic changes in bilingual speakers because of the requirement for these speakers to store and differentiate lexical concepts in 2 languages to guide speech production and comprehension processes. In a whole brain comparison of bilingual speakers (n = 23) and monolingual speakers (n = 23), regressing out confounding factors, we find more extensive age-related decreases in GMV in the monolingual brain and significantly increased GMV in left temporal pole for bilingual speakers. Consistent with a specific neuroprotective effect of bilingualism, region of interest analyses showed a significant positive correlation between naming performance in the second language and GMV in this region. The effect appears to be bilateral though because there was a nonsignificantly different effect of naming performance on GMV in the right temporal pole. Our data emphasize the vulnerability of the temporal pole to normal aging and the value of bilingualism as both a general and specific protective factor to GMV decreases in healthy aging.",
"title": ""
}
] |
scidocsrr
|
53c8c26f761c6e2259cebfecda1502d1
|
New Constructions and Proof Methods for Large Universe Attribute-Based Encryption
|
[
{
"docid": "1600d4662fc5939c5f737756e2d3e823",
"text": "Predicate encryption is a new paradigm for public-key encryption that generalizes identity-based encryption and more. In predicate encryption, secret keys correspond to predicates and ciphertexts are associated with attributes; the secret key SK f corresponding to a predicate f can be used to decrypt a ciphertext associated with attribute I if and only if f(I)=1. Constructions of such schemes are currently known only for certain classes of predicates. We construct a scheme for predicates corresponding to the evaluation of inner products over ℤ N (for some large integer N). This, in turn, enables constructions in which predicates correspond to the evaluation of disjunctions, polynomials, CNF/DNF formulas, thresholds, and more. Besides serving as a significant step forward in the theory of predicate encryption, our results lead to a number of applications that are interesting in their own right.",
"title": ""
}
] |
[
{
"docid": "9001f640ae3340586f809ab801f78ec0",
"text": "A correct perception of road signalizations is required for autonomous cars to follow the traffic codes. Road marking is a signalization present on road surfaces and commonly used to inform the correct lane cars must keep. Cameras have been widely used for road marking detection, however they are sensible to environment illumination. Some LIDAR sensors return infrared reflective intensity information which is insensible to illumination condition. Existing road marking detectors that analyzes reflective intensity data focus only on lane markings and ignores other types of signalization. We propose a road marking detector based on Otsu thresholding method that make possible segment LIDAR point clouds into asphalt and road marking. The results show the possibility of detecting any road marking (crosswalks, continuous lines, dashed lines). The road marking detector has also been integrated with Monte Carlo localization method so that its performance could be validated. According to the results, adding road markings onto curb maps lead to a lateral localization error of 0.3119 m.",
"title": ""
},
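A minimal numpy sketch of Otsu's method applied to LIDAR reflective-intensity values, in the spirit of the detector described above; the synthetic asphalt/paint intensity distributions and the 256-bin histogram are assumptions, and the curb-map and localization steps are not shown.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold that maximises between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    prob = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = prob[:k].sum(), prob[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (prob[:k] * centers[:k]).sum() / w0
        mu1 = (prob[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

# Synthetic ground-point intensities: dark asphalt plus brighter painted markings.
rng = np.random.default_rng(1)
asphalt = rng.normal(40, 8, 5000)
paint = rng.normal(150, 15, 500)
intensity = np.concatenate([asphalt, paint])

t = otsu_threshold(intensity)
is_marking = intensity > t
print(f"threshold = {t:.1f}, {is_marking.sum()} points classified as road marking")
```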
{
"docid": "98571cb7f32b389683e8a9e70bd87339",
"text": "We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework. The first problem is implicit bias present in the reward functions used in these algorithms. While these biases might work well for some environments, they can also lead to sub-optimal behavior in others. Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications. In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10. Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments.",
"title": ""
},
{
"docid": "2b1caf45164e7453453eaaf006dc3827",
"text": "This paper presents an estimation of the longitudinal movement of an aircraft using the STM32 microcontroller F1 Family. The focus of this paper is on developing code to implement the famous Luenberger Observer and using the different devices existing in STM32 F1 micro-controllers. The suggested Luenberger observer was achieved using the Keil development tools designed for devices microcontrollers based on the ARM processor and labor with C / C ++ language. The Characteristics that show variations in time of the state variables and step responses prove that the identification of the longitudinal movement of an aircraft were performed with minor errors in the right conditions. These results lead to easily develop predictive algorithms for programmable hardware in the industry.",
"title": ""
},
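A minimal numpy sketch of the discrete-time Luenberger observer update that the passage above implements on an STM32 (a C port would follow the same recurrence x_hat[k+1] = A x_hat[k] + B u[k] + L (y[k] - C x_hat[k])); the A, B, C matrices and the gain L here are illustrative placeholders, not the aircraft's longitudinal model.

```python
import numpy as np

# Illustrative discrete-time state-space model: x[k+1] = A x[k] + B u[k], y[k] = C x[k].
A = np.array([[1.0, 0.01], [-0.05, 0.99]])
B = np.array([[0.0], [0.01]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.4], [1.5]])          # observer gain (assumed; normally set by pole placement)

def observer_step(x_hat, u, y):
    """One Luenberger update: predict with the model, correct with the output error."""
    y_hat = C @ x_hat
    return A @ x_hat + B @ u + L @ (y - y_hat)

# Simulate the true system and the observer converging from a wrong initial estimate.
x_true = np.array([[1.0], [0.0]])
x_hat = np.array([[0.0], [0.0]])
for k in range(200):
    u = np.array([[0.1]])
    y = C @ x_true
    x_true = A @ x_true + B @ u
    x_hat = observer_step(x_hat, u, y)
print("estimation error:", np.linalg.norm(x_true - x_hat))
```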
{
"docid": "7ecfea8abc9ba29719cdd4bf02e99d5d",
"text": "The literature shows an increase in blended learning implementations (N = 74) at faculties of education in Turkey whereas pre-service and in-service teachers’ ICT competencies have been identified as one of the areas where they are in need of professional development. This systematic review was conducted to find out the impact of blended learning on academic achievement and attitudes at teacher education programs in Turkey. 21 articles and 10 theses complying with all pre-determined criteria (i.e., studies having quantitative research design or at least a quantitative aspect conducted at pre-service teacher education programs) included within the scope of this review. With regard to academic achievement, it was synthesized that majority of the studies confirmed its positive impact on attaining course outcomes. Likewise, blended learning environment was revealed to contribute pre-service teachers to develop positive attitudes towards the courses. It was also concluded that face-to-face aspect of the courses was favoured considerably as it enhanced social interaction between peers and teachers. Other benefits of blended learning were listed as providing various materials, receiving prompt feedback, and tracking progress. Slow internet access, connection failure and anxiety in some pre-service teachers on using ICT were reported as obstacles. Regarding the positive results of blended learning and the significance of ICT integration, pre-service teacher education curricula are suggested to be reconstructed by infusing ICT into entire program through blended learning rather than delivering isolated ICT courses which may thus serve for prospective teachers as catalysts to integrate the use of ICT in their own teaching.",
"title": ""
},
{
"docid": "842a1d2da67d614ecbc8470987ae85e9",
"text": "The task of recovering three-dimensional (3-D) geometry from two-dimensional views of a scene is called 3-D reconstruction. It is an extremely active research area in computer vision. There is a large body of 3-D reconstruction algorithms available in the literature. These algorithms are often designed to provide different tradeoffs between speed, accuracy, and practicality. In addition, even the output of various algorithms can be quite different. For example, some algorithms only produce a sparse 3-D reconstruction while others are able to output a dense reconstruction. The selection of the appropriate 3-D reconstruction algorithm relies heavily on the intended application as well as the available resources. The goal of this paper is to review some of the commonly used motion-parallax-based 3-D reconstruction techniques and make clear the assumptions under which they are designed. To do so efficiently, we classify the reviewed reconstruction algorithms into two large categories depending on whether a prior calibration of the camera is required. Under each category, related algorithms are further grouped according to the common properties they share.",
"title": ""
},
{
"docid": "ec40606c46cc1bd3e1d4c64793a8ca83",
"text": "Thin-layer chromatography (TLC) and liquid chromatography (LC) methods were developed for the qualitative and quantitative determination of agrimoniin, pedunculagin, ellagic acid, gallic acid, and catechin in selected herbal medicinal products from Rosaceae: Anserinae herba, Tormentillae rhizoma, Alchemillae herba, Agrimoniae herba, and Fragariae folium. Unmodified silica gel (TLC Si60, HPTLC LiChrospher Si60) and silica gel chemically modified with octadecyl or aminopropyl groups (HPTLC RP18W and HPTLC NH2) were used for TLC. The best resolution and selectivity were achieved with the following mobile phases: diisopropyl ether-acetone-formic acid-water (40 + 30 + 20 + 10, v/v/v/v), tetrahydrofuran-acetonitrile-water (30 + 10 + 60, v/v/v), and acetone-formic acid (60 + 40, v/v). Concentrations of the studied herbal drugs were determined by using a Chromolith Performance RP-18e column with acetonitrile-water-formic acid as the mobile phase. Determinations of linearity, range, detection and quantitation limits, accuracy, precision, and robustness showed that the HPLC method was sufficiently precise for estimation of the tannins and related polyphenols mentioned above. Investigations of suitable solvent selection, sample extraction procedure, and short-time stability of analytes at storage temperatures of 4 and 20 degrees C were also performed. The percentage of agrimoniin in pharmaceutical products was between 0.57 and 3.23%.",
"title": ""
},
{
"docid": "de67aeb2530695bcc6453791a5fa8c77",
"text": "Sebaceous carcinoma is a rare adenocarcinoma with variable degrees of sebaceous differentiation, most commonly found on periocular skin, but also occasionally occur extraocular. It can occur in isolation or as part of the MuirTorre syndrome. Sebaceous carcinomas are yellow or red nodules or plaques often with a friable surface, ulceration, or crusting. On histological examination, sebaceous carcinomas are typically poorly circumscribed, asymmetric, and infiltrative. Individual cells are pleomorphic with atypical nuclei, mitoses, and a coarsely vacuolated cytoplasm.",
"title": ""
},
{
"docid": "374b87b187fbc253477cd1e8f60e9d91",
"text": "Term Used Definition Provided Source I/T strategy None provided Henderson and Venkatraman 1999 Information Management Strategy \" A long-term precept for directing, implementing and supervising information management \" (information management left undefined) Reponen 1994 (p. 30) \" Deals with management of the entire information systems function, \" referring to Earl (1989, p. 117): \" the management framework which guides how the organization should run IS/IT activities \" Ragu-Nathan et al. 2001 (p. 269)",
"title": ""
},
{
"docid": "5a08b007fbe1a424f9788ea68ec47d80",
"text": "We introduce a novel ensemble model based on random projections. The contribution of using random projections is two-fold. First, the randomness provides the diversity which is required for the construction of an ensemble model. Second, random projections embed the original set into a space of lower dimension while preserving the dataset’s geometrical structure to a given distortion. This reduces the computational complexity of the model construction as well as the complexity of the classification. Furthermore, dimensionality reduction removes noisy features from the data and also represents the information which is inherent in the raw data by using a small number of features. The noise removal increases the accuracy of the classifier. The proposed scheme was tested using WEKA based procedures that were applied to 16 benchmark dataset from the UCI repository.",
"title": ""
},
{
"docid": "f03a96d81f7eeaf8b9befa73c2b6fbd5",
"text": "This research provided the first empirical investigation of how approach and avoidance motives for sacrifice in intimate relationships are associated with personal well-being and relationship quality. In Study 1, the nature of everyday sacrifices made by dating partners was examined, and a measure of approach and avoidance motives for sacrifice was developed. In Study 2, which was a 2-week daily experience study of college students in dating relationships, specific predictions from the theoretical model were tested and both longitudinal and dyadic components were included. Whereas approach motives for sacrifice were positively associated with personal well-being and relationship quality, avoidance motives for sacrifice were negatively associated with personal well-being and relationship quality. Sacrificing for avoidance motives was particularly detrimental to the maintenance of relationships over time. Perceptions of a partner's motives for sacrifice were also associated with well-being and relationship quality. Implications for the conceptualization of relationship maintenance processes along these 2 dimensions are discussed.",
"title": ""
},
{
"docid": "e389bed063035d3e9160d3136d2729a0",
"text": "We introduce and construct timed commitment schemes, an extension to the standard notion of commitments in which a potential forced opening phase permits the receiver to recover (with effort) the committed value without the help of the committer. An important application of our timed-commitment scheme is contract signing: two mutually suspicious parties wish to exchange signatures on a contract. We show a two-party protocol that allows them to exchange RSA or Rabin signatures. The protocol is strongly fair: if one party quits the protocol early, then the two parties must invest comparable amounts of time to retrieve the signatures. This statement holds even if one party has many more machines than the other. Other applications, including honesty preserving auctions and collective coin-flipping, are discussed.",
"title": ""
},
{
"docid": "c955e63d5c5a30e18c008dcc51d1194b",
"text": "We report, for the first time, the identification of fatty acid particles in formulations containing the surfactant polysorbate 20. These fatty acid particles were observed in multiple mAb formulations during their expected shelf life under recommended storage conditions. The fatty acid particles were granular or sand-like in morphology and were several microns in size. They could be identified by distinct IR bands, with additional confirmation from energy-dispersive X-ray spectroscopy analysis. The particles were readily distinguishable from protein particles by these methods. In addition, particles containing a mixture of protein and fatty acids were also identified, suggesting that the particulation pathways for the two particle types may not be distinct. The techniques and observations described will be useful for the correct identification of proteinaceous versus nonproteinaceous particles in pharmaceutical products.",
"title": ""
},
{
"docid": "760a02c6205b2e2e38d14fa91708c508",
"text": "The popular h-index used to measure scientific output can be described in terms of a pool of evaluated objects (the papers), a quality function on the evaluated objects (the number of citations received by each paper) and a sentencing line crossing the origin, whose intersection with the graph of the quality function yields the index value (in the h-index this is a line with slope 1). Based on this abstraction, we present a new index, the c-index, in which the evaluated objects are the citations received by an author, a group of authors, a journal, etc., the quality function of a citation is the collaboration distance between the authors of the cited and the citing papers, and the sentencing line can take slopes between 0 and ∞. As a result, the new index counts only those citations which are significant enough, where significance is proportional to collaboration distance. Several advantages of the new c-index with respect to previous proposals are discussed.",
"title": ""
},
{
"docid": "4ceab082d195c1f69bb98793852f4a29",
"text": "This paper presents a 22 to 26.5 Gb/s optical receiver with an all-digital clock and data recovery (AD-CDR) fabricated in a 65 nm CMOS process. The receiver consists of an optical front-end and a half-rate bang-bang clock and data recovery circuit. The optical front-end achieves low power consumption by using inverter-based amplifiers and realizes sufficient bandwidth by applying several bandwidth extension techniques. In addition, in order to minimize additional jitter at the front-end, not only magnitude and bandwidth but also group-delay responses are considered. The AD-CDR employs an LC quadrature digitally controlled oscillator (LC-QDCO) to achieve a high phase noise figure-of-merit at tens of gigahertz. The recovered clock jitter is 1.28 ps rms and the measured jitter tolerance exceeds the tolerance mask specified in IEEE 802.3ba. The receiver sensitivity is 106 and 184 for a bit error rate of 10-12 at data rates of 25 and 26.5 Gb/s, respectively. The entire receiver chip occupies an active die area of 0.75 mm2 and consumes 254 mW at a data rate of 26.5 Gb/s. The energy efficiencies of the front-end and entire receiver at 26.5 Gb/s are 1.35 and 9.58 pJ/bit, respectively.",
"title": ""
},
{
"docid": "4df5ae1f7eae0c366bd5bdb30af80ad2",
"text": "Robots inevitably fail, often without the ability to recover autonomously. We demonstrate an approach for enabling a robot to recover from failures by communicating its need for specific help to a human partner using natural language. Our approach automatically detects failures, then generates targeted spoken-language requests for help such as “Please give me the white table leg that is on the black table.” Once the human partner has repaired the failure condition, the system resumes full autonomy. We present a novel inverse semantics algorithm for generating effective help requests. In contrast to forward semantic models that interpret natural language in terms of robot actions and perception, our inverse semantics algorithm generates requests by emulating the human’s ability to interpret a request using the Generalized Grounding Graph (G) framework. To assess the effectiveness of our approach, we present a corpusbased online evaluation, as well as an end-to-end user study, demonstrating that our approach increases the effectiveness of human interventions compared to static requests for help.",
"title": ""
},
{
"docid": "c2dfa94555085b6ca3b752d719688613",
"text": "In this paper, we propose RNN-Capsule, a capsule model based on Recurrent Neural Network (RNN) for sentiment analysis. For a given problem, one capsule is built for each sentiment category e.g., ‘positive’ and ‘negative’. Each capsule has an attribute, a state, and three modules: representation module, probability module, and reconstruction module. The attribute of a capsule is the assigned sentiment category. Given an instance encoded in hidden vectors by a typical RNN, the representation module builds capsule representation by the attention mechanism. Based on capsule representation, the probability module computes the capsule’s state probability. A capsule’s state is active if its state probability is the largest among all capsules for the given instance, and inactive otherwise. On two benchmark datasets (i.e., Movie Review and Stanford Sentiment Treebank) and one proprietary dataset (i.e., Hospital Feedback), we show that RNN-Capsule achieves state-of-the-art performance on sentiment classification. More importantly, without using any linguistic knowledge, RNN-Capsule is capable of outputting words with sentiment tendencies reflecting capsules’ attributes. The words well reflect the domain specificity of the dataset. ACM Reference Format: Yequan Wang1 Aixin Sun2 Jialong Han3 Ying Liu4 Xiaoyan Zhu1. 2018. Sentiment Analysis by Capsules. InWWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3178876.3186015",
"title": ""
},
{
"docid": "a75d3395a1d4859b465ccbed8647fbfe",
"text": "PURPOSE\nThe influence of a core-strengthening program on low back pain (LBP) occurrence and hip strength differences were studied in NCAA Division I collegiate athletes.\n\n\nMETHODS\nIn 1998, 1999, and 2000, hip strength was measured during preparticipation physical examinations and occurrence of LBP was monitored throughout the year. Following the 1999-2000 preparticipation physicals, all athletes began participation in a structured core-strengthening program, which emphasized abdominal, paraspinal, and hip extensor strengthening. Incidence of LBP and the relationship with hip muscle imbalance were compared between consecutive academic years.\n\n\nRESULTS\nAfter incorporation of core strengthening, there was no statistically significant change in LBP occurrence. Side-to-side extensor strength between athletes participating in both the 1998-1999 and 1999-2000 physicals were no different. After core strengthening, the right hip extensor was, on average, stronger than that of the left hip extensor (P = 0.0001). More specific gender differences were noted after core strengthening. Using logistic regression, female athletes with weaker left hip abductors had a more significant probability of requiring treatment for LBP (P = 0.009)\n\n\nCONCLUSION\nThe impact of core strengthening on collegiate athletes has not been previously examined. These results indicated no significant advantage of core strengthening in reducing LBP occurrence, though this may be more a reflection of the small numbers of subjects who actually required treatment. The core program, however, seems to have had a role in modifying hip extensor strength balance. The association between hip strength and future LBP occurrence, observed only in females, may indicate the need for more gender-specific core programs. The need for a larger scale study to examine the impact of core strengthening in collegiate athletes is demonstrated.",
"title": ""
},
{
"docid": "d5666bfb1fcd82ac89da2cb893ba9fb7",
"text": "Ad-servers have to satisfy many different targeting criteria, and the combination can often result in no feasible solution. We hypothesize that advertisers may be defining these metrics to create a kind of \"proxy target\". We therefore reformulate the standard ad-serving problem to one where we attempt to get as close as possible to the advertiser's multi-dimensional target inclusive of delivery. We use a simple simulation to illustrate the behavior of this algorithm compared to Constraint and Pacing strategies. The system is then deployed in one of the largest video ad-servers in the United States and we show experimental results from live test ads, as well as 6 months of production performance across hundreds of ads. We find that the live ad-server tests match the simulation, and we report significant gains in multi-KPI performance from using the error minimization strategy.",
"title": ""
},
{
"docid": "e668eddaa2cec83540a992e09e0be368",
"text": "The increasing number of attacks on internet-based systems calls for security measures on behalf those systems’ operators. Beside classical methods and tools for penetration testing, there exist additional approaches using publicly available search engines. We present an alternative approach using contactless vulnerability analysis with both classical and subject-specific search engines. Based on an extension and combination of their functionality, this approach provides a method for obtaining promising results for audits of IT systems, both quantitatively and qualitatively. We evaluate our approach and confirm its suitability for a timely determination of vulnerabilities in large-scale networks. In addition, the approach can also be used to perform vulnerability analyses of network areas or domains in unclear legal situations.",
"title": ""
},
{
"docid": "f3fc221d2d57163f43f165400b9eee02",
"text": "Article history: Received 13 March 2017 Received in revised form 19 June 2017 Accepted 4 July 2017 Available online xxxx",
"title": ""
}
] |
scidocsrr
|
55e2362d012d58ae90a1a987246593b3
|
Device Mismatch: An Analog Design Perspective
|
[
{
"docid": "df374fcdaf0b7cd41ca5ef5932378655",
"text": "This paper is concerned with the design of precision MOS anafog circuits. Section ff of the paper discusses the characterization and modeling of mismatch in MOS transistors. A characterization methodology is presented that accurately predicts the mismatch in drain current over a wide operating range using a minimumset of measured data. The physical causes of mismatch are discussed in detail for both pand n-channel devices. Statistieal methods are used to develop analytical models that relate the mismatchto the devicedimensions.It is shownthat these models are valid for smafl-geometrydevices also. Extensive experimental data from a 3-pm CMOS process are used to verify these models. Section 111of the paper demonstrates the applicationof the transistor matching studies to the design of a high-performance digital-to-analog converter (DAC). A circuit designmethodologyis presented that highfights the close interaction between the circuit yield and the matching accuracy of devices. It has been possibleto achievea circuit yieldof greater than 97 percent as a result of the knowledgegenerated regarding the matching behavior of transistors and due to the systematicdesignapproach.",
"title": ""
}
] |
[
{
"docid": "79560f7ec3c5f42fe5c5e0ad175fe6a0",
"text": "The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges. In particular, for ANN-enabled self-driving vehicles it is important to establish properties about the resilience of ANNs to noisy or even maliciously manipulated sensory input. We are addressing these challenges by defining resilience properties of ANN-based classifiers as the maximum amount of input or sensor perturbation which is still tolerated. This problem of computing maximum perturbation bounds for ANNs is then reduced to solving mixed integer optimization problems (MIP). A number of MIP encoding heuristics are developed for drastically reducing MIP-solver runtimes, and using parallelization of MIP-solvers results in an almost linear speed-up in the number (up to a certain limit) of computing cores in our experiments. We demonstrate the effectiveness and scalability of our approach by means of computing maximum resilience bounds for a number of ANN benchmark sets ranging from typical image recognition scenarios to the autonomous maneuvering of robots.",
"title": ""
},
{
"docid": "8d6171dbe50a25873bd435ad25e48ae9",
"text": "An automatic landing system is required on a long-range drone because the position of the vehicle cannot be reached visually by the pilot. The autopilot system must be able to correct the drone movement dynamically in accordance with its flying altitude. The current article describes autopilot system on an H-Octocopter drone using image processing and complementary filter. This paper proposes a new approach to reduce oscillations during the landing phase on a big drone. The drone flies above 10 meters to a provided coordinate using GPS data, to check for the existence of the landing area. This process is done visually using the camera. PID controller is used to correct the movement by calculate error distance detected by camera. The controller also includes altitude parameters on its calculations through a complementary filter. The controller output is the PWM signals which control the movement and altitude of the vehicle. The signal then transferred to Flight Controller through serial communication, so that, the drone able to correct its movement. From the experiments, the accuracy is around 0.56 meters and it can be done in 18 seconds.",
"title": ""
},
{
"docid": "86318b52b1bdf0dcf64a2d067645237b",
"text": "Neurons that fire high-frequency bursts of spikes are found in various sensory systems. Although the functional implications of burst firing might differ from system to system, bursts are often thought to represent a distinct mode of neuronal signalling. The firing of bursts in response to sensory input relies on intrinsic cellular mechanisms that work with feedback from higher centres to control the discharge properties of these cells. Recent work sheds light on the information that is conveyed by bursts about sensory stimuli, on the cellular mechanisms that underlie bursting, and on how feedback can control the firing mode of burst-capable neurons, depending on the behavioural context. These results provide strong evidence that bursts have a distinct function in sensory information transmission.",
"title": ""
},
{
"docid": "26b67fe7ee89c941d313187672b1d514",
"text": "Since permanent magnet linear synchronous motor (PMLSM) has a bright future in electromagnetic launch (EML), moving-magnet PMLSM with multisegment primary is a potential choice. To overcome the end effect in the junctions of armature units, three different ring windings are proposed for the multisegment primary of PMLSM: slotted ring windings, slotless ring windings, and quasi-sinusoidal ring windings. They are designed for various demands of EML, regarding the load levels and force fluctuations. Auxiliary iron yokes are designed to reduce the mover weights, and also help restrain the end effect. PMLSM with slotted ring windings has a higher thrust for heavy load EML. PMLSM with slotless ring windings eliminates the cogging effect, while PMLSM with quasi-sinusoidal ring windings has very low thrust ripple; they aim to launch the light aircraft and run smooth. Structure designs of these motors are introduced; motor models and parameter optimizations are accomplished by finite-element method (FEM). Then, performance advantages of the proposed motors are investigated by comparisons of common PMLSMs. At last, the prototypes are manufactured and tested to validate the feasibilities of ring winding motors with auxiliary iron yokes. The results prove that the proposed motors can effectively satisfy the requirements of EML.",
"title": ""
},
{
"docid": "613f0bf05fb9467facd2e58b70d2b09e",
"text": "The gold standard for improving sensory, motor and or cognitive abilities is long-term training and practicing. Recent work, however, suggests that intensive training may not be necessary. Improved performance can be effectively acquired by a complementary approach in which the learning occurs in response to mere exposure to repetitive sensory stimulation. Such training-independent sensory learning (TISL), which has been intensively studied in the somatosensory system, induces in humans lasting changes in perception and neural processing, without any explicit task training. It has been suggested that the effectiveness of this form of learning stems from the fact that the stimulation protocols used are optimized to alter synaptic transmission and efficacy. TISL provides novel ways to investigate in humans the relation between learning processes and underlying cellular and molecular mechanisms, and to explore alternative strategies for intervention and therapy.",
"title": ""
},
{
"docid": "6a4a76e48ff8bfa9ad17f116c3258d49",
"text": "Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys for shallow domain adaptation, but few timely reviews the emerging deep learning based methods. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of different deep domain adaptation scenarios according to the properties of data that define how two domains are diverged. Second, we summarize deep domain adaptation approaches into several categories based on training loss, and analyze and compare briefly the state-of-the-art methods under these categories. Third, we overview the computer vision applications that go beyond image classification, such as face recognition, semantic segmentation and object detection. Fourth, some potential deficiencies of current methods and several future directions are highlighted.",
"title": ""
},
{
"docid": "1ffef8248a0cc0b69a436c4d949ed221",
"text": "This paper presents preliminary research on a new decision making tool that integrates financial and non-financial performance measures in project portfolio management via the Triple Bottom Line (TBL) and uses the Analytic Hierarchy Process (AHP) as a decision support model. This new tool evaluates and prioritizes a set of projects and creates a balanced project portfolio based upon the perspectives and priorities of decision makers. It can assist decision makers with developing and making proactive decisions which support the strategy of their organization with respect to financial, environmental and social issues, ensuring the sustainability of their organization in the future.",
"title": ""
},
{
"docid": "fd8b7b9f4469bd253ee66f6c464691a6",
"text": "The \"flipped classroom\" is a learning model in which content attainment is shifted forward to outside of class, then followed by instructor-facilitated concept application activities in class. Current studies on the flipped model are limited. Our goal was to provide quantitative and controlled data about the effectiveness of this model. Using a quasi-experimental design, we compared an active nonflipped classroom with an active flipped classroom, both using the 5-E learning cycle, in an effort to vary only the role of the instructor and control for as many of the other potentially influential variables as possible. Results showed that both low-level and deep conceptual learning were equivalent between the conditions. Attitudinal data revealed equal student satisfaction with the course. Interestingly, both treatments ranked their contact time with the instructor as more influential to their learning than what they did at home. We conclude that the flipped classroom does not result in higher learning gains or better attitudes compared with the nonflipped classroom when both utilize an active-learning, constructivist approach and propose that learning gains in either condition are most likely a result of the active-learning style of instruction rather than the order in which the instructor participated in the learning process.",
"title": ""
},
{
"docid": "04e478610728f0aae76e5299c28da25a",
"text": "Single image super resolution is one of the most important topic in computer vision and image processing research, many convolutional neural networks (CNN) based super resolution algorithms were proposed and achieved advanced performance, especially in recovering image details, in which PixelCNN is the most representative one. However, due to the intensive computation requirement of PixelCNN model, running time remains a major challenge, which limited its wider application. In this paper, several modifications are proposed to improve PixelCNN based recursive super resolution model. First, a discrete logistic mixture likelihood is adopted, then a cache structure for generating process is proposed, with these modifications, numerous redundant computations are removed without loss of accuracy. Finally, a partial generating network is proposed for higher resolution generation. Experiments on CelebA dataset demonstrate the effectiveness the superiority of the proposed method.",
"title": ""
},
{
"docid": "0d7c29b40f92b5997791f1bbe192269c",
"text": "We present a general approach to video understanding, inspired by semantic transfer techniques that have been successfully used for 2D image analysis. Our method considers a video to be a 1D sequence of clips, each one associated with its own semantics. The nature of these semantics – natural language captions or other labels – depends on the task at hand. A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics, following which, reference semantics can be transferred to the test video. We describe two matching methods, both designed to ensure that (a) reference clips appear similar to test clips and (b), taken together, the semantics of the selected reference clips is consistent and maintains temporal coherence. We use our method for video captioning on the LSMDC’16 benchmark, video summarization on the SumMe and TV-Sum benchmarks, Temporal Action Detection on the Thumos2014 benchmark, and sound prediction on the Greatest Hits benchmark. Our method not only surpasses the state of the art, in four out of five benchmarks, but importantly, it is the only single method we know of that was successfully applied to such a diverse range of tasks.",
"title": ""
},
{
"docid": "47f2a5a61677330fc85ff6ac700ac39f",
"text": "We present CHALET, a 3D house simulator with support for navigation and manipulation. CHALET includes 58 rooms and 10 house configuration, and allows to easily create new house and room layouts. CHALET supports a range of common household activities, including moving objects, toggling appliances, and placing objects inside closeable containers. The environment and actions available are designed to create a challenging domain to train and evaluate autonomous agents, including for tasks that combine language, vision, and planning in a dynamic environment.",
"title": ""
},
{
"docid": "ffe6edef11daef1db0c4aac77bed7a23",
"text": "MPI is a well-established technology that is used widely in high-performance computing environment. However, setting up an MPI cluster can be challenging and time-consuming. This paper tackles this challenge by using modern containerization technology, which is Docker, and container orchestration technology, which is Docker Swarm mode, to automate the MPI cluster setup and deployment. We created a ready-to-use solution for developing and deploying MPI programs in a cluster of Docker containers running on multiple machines, orchestrated with Docker Swarm mode, to perform high computation tasks. We explain the considerations when creating Docker image that will be instantiated as MPI nodes, and we describe the steps needed to set up a fully connected MPI cluster as Docker containers running in a Docker Swarm mode. Our goal is to give the rationale behind our solution so that others can adapt to different system requirements. All pre-built Docker images, source code, documentation, and screencasts are publicly available.",
"title": ""
},
{
"docid": "b6bbd83da68fbf1d964503fb611a2be5",
"text": "Battery systems are affected by many factors, the most important one is the cells unbalancing. Without the balancing system, the individual cell voltages will differ over time, battery pack capacity will decrease quickly. That will result in the fail of the total battery system. Thus cell balancing acts an important role on the battery life preserving. Different cell balancing methodologies have been proposed for battery pack. This paper presents a review and comparisons between the different proposed balancing topologies for battery string based on MATLAB/Simulink® simulation. The comparison carried out according to circuit design, balancing simulation, practical implementations, application, balancing speed, complexity, cost, size, balancing system efficiency, voltage/current stress … etc.",
"title": ""
},
{
"docid": "4028f1cd20127f3c6599e6073bb1974b",
"text": "This paper presents a power delivery monitor (PDM) peripheral integrated in a flip-chip packaged 28 nm system-on-chip (SoC) for mobile computing. The PDM is composed entirely of digital standard cells and consists of: 1) a fully integrated VCO-based digital sampling oscilloscope; 2) a synthetic current load; and 3) an event engine for triggering, analysis, and debug. Incorporated inside an SoC, it enables rapid, automated analysis of supply impedance, as well as monitoring supply voltage droop of multi-core CPUs running full software workloads and during scan-test operations. To demonstrate these capabilities, we describe a power integrity case study of a dual-core ARM Cortex-A57 cluster in a commercial 28 nm mobile SoC. Measurements are presented of power delivery network (PDN) electrical parameters, along with waveforms of the CPU cluster running test cases and benchmarks on bare metal and Linux OS. The effect of aggressive power management techniques, such as power gating on the dominant resonant frequency and peak impedance, is highlighted. Finally, we present measurements of supply voltage noise during various scan-test operations, an often-neglected aspect of SoC power integrity.",
"title": ""
},
{
"docid": "b3947afb7856b0ffd5983f293ca508b9",
"text": "High gain low profile slotted cavity with substrate integrated waveguide (SIW) is presented using TE440 high order mode. The proposed antenna is implemented to achieve 16.4 dBi high gain at 28 GHz with high radiation efficiency of 98%. Furthermore, the proposed antenna has a good radiation pattern. Simulated results using CST and HFSS software are presented and discussed. Several advantages such as low profile, low cost, light weight, small size, and easy implementation make the proposed antenna suitable for millimeter-wave wireless communications.",
"title": ""
},
{
"docid": "e3c8f10316152f0bc775f4823b79c7f6",
"text": "The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN, at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive window, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision.",
"title": ""
},
{
"docid": "6b203b7a8958103b30701ac139eb1fb8",
"text": "Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems of these fields. We examine applications of deep learning to a variety of biomedical problems-patient classification, fundamental biological processes and treatment of patients-and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.",
"title": ""
},
{
"docid": "70f1f5de73c3a605b296299505fd4e61",
"text": "Dropout is a popular stochastic regularization technique for deep neural networks that works by randomly dropping (i.e. zeroing) units from the network during training. This randomization process allows to implicitly train an ensemble of exponentially many networks sharing the same parametrization, which should be averaged at test time to deliver the final prediction. A typical workaround for this intractable averaging operation consists in scaling the layers undergoing dropout randomization. This simple rule called “standard dropout” is efficient, but might degrade the accuracy of the prediction. In this work we introduce a novel approach, coined “dropout distillation”, that allows us to train a predictor in a way to better approximate the intractable, but preferable, averaging process, while keeping under control its computational efficiency. We are thus able to construct models that are as efficient as standard dropout, or even more efficient, while being more accurate. Experiments on standard benchmark datasets demonstrate the validity of our method, yielding consistent improvements over conventional dropout.",
"title": ""
},
{
"docid": "0b9b85dc4f80e087f591f89b12bb6146",
"text": "Entity profiling (EP) as an important task of Web mining and information extraction (IE) is the process of extracting entities in question and their related information from given text resources. From computational viewpoint, the Farsi language is one of the less-studied and less-resourced languages, and suffers from the lack of high quality language processing tools. This problem emphasizes the necessity of developing Farsi text processing systems. As an element of EP research, we present a semantic approach to extract profile of person entities from Farsi Web documents. Our approach includes three major components: (i) pre-processing, (ii) semantic analysis and (iii) attribute extraction. First, our system takes as input the raw text, and annotates the text using existing pre-processing tools. In semantic analysis stage, we analyze the pre-processed text syntactically and semantically and enrich the local processed information with semantic information obtained from a distant knowledge base. We then use a semantic rule-based approach to extract the related information of the persons in question. We show the effectiveness of our approach by testing it on a small Farsi corpus. The experimental results are encouraging and show that the proposed method outperforms baseline methods.",
"title": ""
},
{
"docid": "e32fc572acb93c65083b372a6b24e7ee",
"text": "BACKGROUND\nFemale Genital Mutilation/Cutting (FGM/C) is a harmful traditional practice with severe health complications, deeply rooted in many Sub-Saharan African countries. In The Gambia, the prevalence of FGM/C is 78.3% in women aged between 15 and 49 years. The objective of this study is to perform a first evaluation of the magnitude of the health consequences of FGM/C in The Gambia.\n\n\nMETHODS\nData were collected on types of FGM/C and health consequences of each type of FGM/C from 871 female patients who consulted for any problem requiring a medical gynaecologic examination and who had undergone FGM/C in The Gambia.\n\n\nRESULTS\nThe prevalence of patients with different types of FGM/C were: type I, 66.2%; type II, 26.3%; and type III, 7.5%. Complications due to FGM/C were found in 299 of the 871 patients (34.3%). Even type I, the form of FGM/C of least anatomical extent, presented complications in 1 of 5 girls and women examined.\n\n\nCONCLUSION\nThis study shows that FGM/C is still practiced in all the six regions of The Gambia, the most common form being type I, followed by type II. All forms of FGM/C, including type I, produce significantly high percentages of complications, especially infections.",
"title": ""
}
] |
scidocsrr
|
de19ac7723243947167a0532de5f142a
|
A Sub-nW Multi-stage Temperature Compensated Timer for Ultra-Low-Power Sensor Nodes
|
[
{
"docid": "bb542460bf9196ef1905cecdce252bf3",
"text": "Wireless sensor nodes have many compelling applications such as smart buildings, medical implants, and surveillance systems. However, existing devices are bulky, measuring >;1cm3, and they are hampered by short lifetimes and fail to realize the “smart dust” vision of [1]. Smart dust requires a mm3-scale, wireless sensor node with perpetual energy harvesting. Recently two application-specific implantable microsystems [2][3] demonstrated the potential of a mm3-scale system in medical applications. However, [3] is not programmable and [2] lacks a method for re-programming or re-synchronizing once encapsulated. Other practical issues remain unaddressed, such as a means to protect the battery during the time period between system assembly and deployment and the need for flexible design to enable use in multiple application domains.",
"title": ""
}
] |
[
{
"docid": "b85a6286ca2fb14a9255c9d70c677de3",
"text": "0140-3664/$ see front matter 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.comcom.2013.01.009 q The research leading to these results has been conducted in the SAIL project and received funding from the European Community’s Seventh Framework Program (FP7/2007-2013) under Grant Agreement No. 257448. ⇑ Corresponding author. Tel.: +49 5251 60 5385; fax: +49 5251 60 5377. E-mail addresses: cdannewitz@upb.de (C. Dannewitz), Dirk.Kutscher@neclab.eu (D. Kutscher), Borje.Ohlman@ericsson.com (B. Ohlman), stephen.farrell@cs.tcd.ie (S. Farrell), bengta@sics.se (B. Ahlgren), hkarl@upb.de (H. Karl). 1 <http://www.cisco.com/web/solutions/sp/vni/vni_mobile_forecast_highlights/ index.html>. Christian Dannewitz , Dirk Kutscher b,⇑, Börje Ohlman , Stephen Farrell , Bengt Ahlgren , Holger Karl a",
"title": ""
},
{
"docid": "4f84d3a504cf7b004a414346bb19fa94",
"text": "Abstract—The electric power supplied by a photovoltaic power generation systems depends on the solar irradiation and temperature. The PV system can supply the maximum power to the load at a particular operating point which is generally called as maximum power point (MPP), at which the entire PV system operates with maximum efficiency and produces its maximum power. Hence, a Maximum power point tracking (MPPT) methods are used to maximize the PV array output power by tracking continuously the maximum power point. The proposed MPPT controller is designed for 10kW solar PV system installed at Cape Institute of Technology. This paper presents the fuzzy logic based MPPT algorithm. However, instead of one type of membership function, different structures of fuzzy membership functions are used in the FLC design. The proposed controller is combined with the system and the results are obtained for each membership functions in Matlab/Simulink environment. Simulation results are decided that which membership function is more suitable for this system.",
"title": ""
},
{
"docid": "890f459384ea47a8915a60c19a3320e3",
"text": "Product ads are a popular form of search advertizing offered by major search engines, including Yahoo, Google and Bing. Unlike traditional search ads, product ads include structured product specifications, which allow search engine providers to perform better keyword-based ad retrieval. However, the level of completeness of the product specifications varies and strongly influences the performance of ad retrieval. On the other hand, online shops are increasing adopting semantic markup languages such as Microformats, RDFa and Microdata, to annotate their content, making large amounts of product description data publicly available. In this paper, we present an approach for enriching product ads with structured data extracted from thousands of online shops offering Microdata annotations. In our approach we use structured product ads as supervision for training feature extraction models able to extract attribute-value pairs from unstructured product descriptions. We use these features to identify matching products across different online shops and enrich product ads with the extracted data. Our evaluation on three product categories related to electronics show promising results in terms of enriching product ads with useful product data.",
"title": ""
},
{
"docid": "f4dc67d810d5f104f91c8724630992cf",
"text": "Apoptosis is deregulated in many cancers, making it difficult to kill tumours. Drugs that restore the normal apoptotic pathways have the potential for effectively treating cancers that depend on aberrations of the apoptotic pathway to stay alive. Apoptosis targets that are currently being explored for cancer drug discovery include the tumour-necrosis factor (TNF)-related apoptosis-inducing ligand (TRAIL) receptors, the BCL2 family of anti-apoptotic proteins, inhibitor of apoptosis (IAP) proteins and MDM2.",
"title": ""
},
{
"docid": "2579fe676b498cee60af8bda22d75e2e",
"text": "Only one late period is allowed for this homework (11:59pm 1/26). Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. This means you should submit a 15-page PDF (1 page for the cover sheet, 1 page for the answers to question 1, 5 pages for answers to question 2, 3 pages for question 3, and 5 pages for question 4). On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. Put all the code for a single question into a single file and upload it. Questions 1 MapReduce (25 pts) [Jeff/Sameep/Ivaylo] Write a MapReduce program in Hadoop that implements a simple \" People You Might Know \" social network friendship recommendation algorithm. The key idea is that if two people have a lot of mutual friends, then the system should recommend that they connect with each other.",
"title": ""
},
{
"docid": "e8f424ee75011e7cf9c2c3cbf5ea5037",
"text": "BACKGROUND\nEmotional distress is an increasing public health problem and Hatha yoga has been claimed to induce stress reduction and empowerment in practicing subjects. We aimed to evaluate potential effects of Iyengar Hatha yoga on perceived stress and associated psychological outcomes in mentally distressed women.\n\n\nMATERIAL/METHODS\nA controlled prospective non-randomized study was conducted in 24 self-referred female subjects (mean age 37.9+/-7.3 years) who perceived themselves as emotionally distressed. Subjects were offered participation in one of two subsequential 3-months yoga programs. Group 1 (n=16) participated in the first class, group 2 (n=8) served as a waiting list control. During the yoga course, subjects attended two-weekly 90-min Iyengar yoga classes. Outcome was assessed on entry and after 3 months by Cohen Perceived Stress Scale, State-Trait Anxiety Inventory, Profile of Mood States, CESD-Depression Scale, Bf-S/Bf-S' Well-Being Scales, Freiburg Complaint List and ratings of physical well-being. Salivary cortisol levels were measured before and after an evening yoga class in a second sample.\n\n\nRESULTS\nCompared to waiting-list, women who participated in the yoga-training demonstrated pronounced and significant improvements in perceived stress (P<0.02), State and Trait Anxiety (P<0.02 and P<0.01, respectively), well-being (P<0.01), vigor (P<0.02), fatigue (P<0.02) and depression (P<0.05). Physical well-being also increased (P<0.01), and those subjects suffering from headache or back pain reported marked pain relief. Salivary cortisol decreased significantly after participation in a yoga class (P<0.05).\n\n\nCONCLUSIONS\nWomen suffering from mental distress participating in a 3-month Iyengar yoga class show significant improvements on measures of stress and psychological outcomes. Further investigation of yoga with respect to prevention and treatment of stress-related disease and of underlying mechanism is warranted.",
"title": ""
},
{
"docid": "756b25456494b3ece9b240ba3957f91c",
"text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.",
"title": ""
},
{
"docid": "941dc605dab6cf9bfe89bedb2b4f00a3",
"text": "Word boundary detection in continuous speech is very common and important problem in speech synthesis and recognition. Several researches are open on this field. Since there is no sign of start of the word, end of the word and number of words in the spoken utterance of any natural language, one must study the intonation pattern of a particular language. In this paper an algorithm is proposed to detect word boundaries in continuous speech of Hindi language. A careful study of the intonation pattern of Hindi language has been done. Based on this study it is observed that, there are several suprasegmental parameters of speech signal such as pitch, F0 fundamental frequency, duration, intensity, and pause, which can play important role in finding some clues to detect the start and the end of the word from the spoken utterance of Hindi Language. The proposed algorithm is based mainly on two prosodic parameters, pitch and intensity.",
"title": ""
},
{
"docid": "6d5998e5f0d5500493c7dc98c7fb28d9",
"text": "Coded structured light is an optical technique based on active stereovision which allows shape acquisition. By projecting a suitable set of light patterns onto the surface of an object and capturing images with a camera, a large number of correspondences can be found and 3D points can be reconstructed by means of triangulation. One-shot techniques are based on projecting an unique pattern so that moving objects can be measured. A major group of techniques in this field define coloured multi-slit or stripe patterns in order to obtain dense reconstructions. The former type of patterns is suitable for locating intensity peaks in the image while the latter is aimed to locate edges. In this paper, we present a new way to design coloured stripe patterns so that both intensity peaks and edges can be located without loss of accuracy and reducing the number of hue levels included in the pattern. The results obtained by the new pattern are quantitatively and qualitatively compared to similar techniques. These results also contribute to a comparison between the peak-based and edge-based reconstruction strategies. q 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ed0d234b961befcffab751f70f5c5fdb",
"text": "UNLABELLED\nA challenging aspect of managing patients on venoarterial extracorporeal membrane oxygenation (V-A ECMO) is a thorough understanding of the relationship between oxygenated blood from the ECMO circuit and blood being pumped from the patient's native heart. We present an adult V-A ECMO case report, which illustrates a unique encounter with the concept of \"dual circulations.\" Despite blood gases from the ECMO arterial line showing respiratory acidosis, this patient with cardiogenic shock demonstrated regional respiratory alkalosis when blood was sampled from the right radial arterial line. In response, a sample was obtained from the left radial arterial line, which mimicked the ECMO arterial blood but was dramatically different from the blood sampled from the right radial arterial line. A retrospective analysis of patient data revealed that the mismatch of blood gas values in this patient corresponded to an increased pulse pressure. Having three arterial blood sampling sites and data on the patient's pulse pressure provided a dynamic view of blood mixing and guided proper management, which contributed to a successful patient outcome that otherwise may not have occurred. As a result of this unique encounter, we created and distributed graphics representing the concept of \"dual circulations\" to facilitate the education of ECMO specialists at our institution.\n\n\nKEYWORDS\nECMO, education, cardiopulmonary bypass, cannulation.",
"title": ""
},
{
"docid": "5aed6d1cd0036384fd09a5c5a72a9020",
"text": "We propose a method of representing audience behavior through facial and body motions from a single video stream, and use these features to predict the rating for feature-length movies. This is a very challenging problem as: i) the movie viewing environment is dark and contains views of people at different scales and viewpoints; ii) the duration of feature-length movies is long (80-120 mins) so tracking people uninterrupted for this length of time is still an unsolved problem; and iii) expressions and motions of audience members are subtle, short and sparse making labeling of activities unreliable. To circumvent these issues, we use an infrared illuminated test-bed to obtain a visually uniform input. We then utilize motion-history features which capture the subtle movements of a person within a pre-defined volume, and then form a group representation of the audience by a histogram of pair-wise correlations over a small-window of time. Using this group representation, we learn our movie rating classifier from crowd-sourced ratings collected by rottentomatoes.com and show our prediction capability on audiences from 30 movies across 250 subjects (> 50 hrs).",
"title": ""
},
{
"docid": "865ca372a2b073e672c535a94c04c2ad",
"text": "The work presented here involves the design of a Multi Layer Perceptron (MLP) based pattern classifier for recognition of handwritten Bangla digits using a 76 element feature vector. Bangla is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world. The feature set developed for representing handwritten Bangla numerals here includes 24 shadow features, 16 centroid features and 36 longest-run features. On experimentation with a database of 6000 samples, the technique yields an average recognition rate of 96.67% evaluated after three-fold cross validation of results. It is useful for applications related to OCR of handwritten Bangla Digit and can also be extended to include OCR of handwritten characters of Bangla alphabet.",
"title": ""
},
{
"docid": "3188d901ab997dcabc795ad3da6af659",
"text": "This paper is about detecting incorrect arcs in a dependency parse for sentences that contain grammar mistakes. Pruning these arcs results in well-formed parse fragments that can still be useful for downstream applications. We propose two automatic methods that jointly parse the ungrammatical sentence and prune the incorrect arcs: a parser retrained on a parallel corpus of ungrammatical sentences with their corrections, and a sequence-to-sequence method. Experimental results show that the proposed strategies are promising for detecting incorrect syntactic dependencies as well as incorrect semantic dependencies.",
"title": ""
},
{
"docid": "b2ad81e0c7e352dac4caea559ac675bb",
"text": "A linearly polarized miniaturized printed dipole antenna with novel half bowtie radiating arm is presented for wireless applications including the 2.4 GHz ISM band. This design is approximately 0.363 λ in length at central frequency of 2.97 GHz. An integrated balun with inductive transitions is employed for wideband impedance matching without changing the geometry of radiating arms. This half bowtie dipole antenna displays 47% bandwidth, and a simulated efficiency of over 90% with miniature size. The radiation patterns are largely omnidirectional and display a useful level of measured gain across the impedance bandwidth. The size and performance of the miniaturized half bowtie dipole antenna is compared with similar reduced size antennas with respect to their overall footprint, substrate dielectric constant, frequency of operation and impedance bandwidth. This half bowtie design in this communication outperforms the reference antennas in virtually all categories.",
"title": ""
},
{
"docid": "7fc35d2bb27fb35b5585aad8601a0cbd",
"text": "We introduce Anita: a flexible and intelligent Text Adaptation tool for web content that provides Text Simplification and Text Enhancement modules. Anita’s simplification module features a state-of-the-art system that adapts texts according to the needs of individual users, and its enhancement module allows the user to search for a word’s definitions, synonyms, translations, and visual cues through related images. These utilities are brought together in an easy-to-use interface of a freely available web browser extension.",
"title": ""
},
{
"docid": "319ba1d449d2b65c5c58b5cc0fdbed67",
"text": "This paper introduces a new technology and tools from the field of text-based information retrieval. The authors have developed – a fingerprint-based method for a highly efficient near similarity search, and – an application of this method to identify plagiarized passages in large document collections. The contribution of our work is twofold. Firstly, it is a search technology that enables a new quality for the comparative analysis of complex and large scientific texts. Secondly, this technology gives rise to a new class of tools for plagiarism analysis, since the comparison of entire books becomes computationally feasible. The paper is organized as follows. Section 1 gives an introduction to plagiarism delicts and related detection methods, Section 2 outlines the method of fuzzy-fingerprints as a means for near similarity search, and Section 3 shows our methods in action: It gives examples for near similarity search as well as plagiarism detection and discusses results from a comprehensive performance analyses. 1 Plagiarism Analysis Plagiarism is the act of claiming to be the author of material that someone else actually wrote (Encyclopædia Britannica 2005), and, with the ubiquitousness",
"title": ""
},
{
"docid": "76eef8117ac0bc5dbb0529477d10108d",
"text": "Most existing switched-capacitor (SC) DC-DC converters only offer a few voltage conversion ratios (VCRs), leading to significant efficiency fluctuations under wide input/output dynamics (e.g. up to 30% in [1]). Consequently, systematic SC DC-DC converters with fine-grained VCRs (FVCRs) become attractive to achieve high efficiency over a wide operating range. Both the Recursive SC (RSC) [2,3] and Negator-based SC (NSC) [4] topologies offer systematic FVCR generations with high conductance, but their binary-switching nature fundamentally results in considerable parasitic loss. In bulk CMOS, the restriction of using low-parasitic MIM capacitors for high efficiency ultimately limits their achievable power density to <1mW/mm2. This work reports a fully integrated fine-grained buck-boost SC DC-DC converter with 24 VCRs. It features an algorithmic voltage-feed-in (AVFI) topology to systematically generate any arbitrary buck-boost rational ratio with optimal conduction loss while achieving the lowest parasitic loss compared with [2,4]. With 10 main SC cells (MCs) and 10 auxiliary SC cells (ACs) controlled by the proposed reference-selective bootstrapping driver (RSBD) for wide-range efficient buck-boost operations, the AVFI converter in 65nm bulk CMOS achieves a peak efficiency of 84.1% at a power density of 13.2mW/mm2 over a wide range of input (0.22 to 2.4V) and output (0.85 to 1.2V).",
"title": ""
},
{
"docid": "5f8a8117ff153528518713d66c876228",
"text": "Certain human talents, such as musical ability, have been associated with left-right differences in brain structure and function. In vivo magnetic resonance morphometry of the brain in musicians was used to measure the anatomical asymmetry of the planum temporale, a brain area containing auditory association cortex and previously shown to be a marker of structural and functional asymmetry. Musicians with perfect pitch revealed stronger leftward planum temporale asymmetry than nonmusicians or musicians without perfect pitch. The results indicate that outstanding musical ability is associated with increased leftward asymmetry of cortex subserving music-related functions.",
"title": ""
},
{
"docid": "66a6e9bbdd461fa85a0a09ec1ceb2031",
"text": "BACKGROUND\nConverging evidence indicates a functional disruption in the neural systems for reading in adults with dyslexia. We examined brain activation patterns in dyslexic and nonimpaired children during pseudoword and real-word reading tasks that required phonologic analysis (i.e., tapped the problems experienced by dyslexic children in sounding out words).\n\n\nMETHODS\nWe used functional magnetic resonance imaging (fMRI) to study 144 right-handed children, 70 dyslexic readers, and 74 nonimpaired readers as they read pseudowords and real words.\n\n\nRESULTS\nChildren with dyslexia demonstrated a disruption in neural systems for reading involving posterior brain regions, including parietotemporal sites and sites in the occipitotemporal area. Reading skill was positively correlated with the magnitude of activation in the left occipitotemporal region. Activation in the left and right inferior frontal gyri was greater in older compared with younger dyslexic children.\n\n\nCONCLUSIONS\nThese findings provide neurobiological evidence of an underlying disruption in the neural systems for reading in children with dyslexia and indicate that it is evident at a young age. The locus of the disruption places childhood dyslexia within the same neurobiological framework as dyslexia, and acquired alexia, occurring in adults.",
"title": ""
}
] |
scidocsrr
|
d5df5c5860f3efe0efcaf7769609f2bb
|
Modeling and Analysis of a Dual-Active-Bridge-Isolated Bidirectional DC/DC Converter to Minimize RMS Current With Whole Operating Range
|
[
{
"docid": "a1332b94cf217fec5e3a51fe45b9ed4e",
"text": "There is large voltage deviation on the dc bus of the three-stage solid-state transformer (SST) when the load suddenly changes. The feed-forward control can effectively reduce the voltage deviation and transition time. However, conventional power feed-forward scheme of SST cannot develop the feed-forward control to the full without extra current sensor. In this letter, an energy feed-forward scheme, which takes the energy changes of inductors into consideration, is proposed for the dual active bridge (DAB) controller. A direct feed-forward scheme, which directly passes the power of DAB converter to the rectifier stage, is proposed for the rectifier controller. They can further improve the dynamic performances of the two dc bus voltages, respectively. The experimental results in a 2-kW SST prototype are provided to verify the proposed feed-forward schemes and show the superior performances.",
"title": ""
},
{
"docid": "c1a5e168f3260e70dd105310cd3fc13a",
"text": "To reduce current stress and improve efficiency of dual active bridge (DAB) dc-dc converters, various control schemes have been proposed in recent decades. Most control schemes for directly minimizing power losses from power loss modeling analysis and optimization aspect of the adopted converter are too difficult and complicated to implement in real-time digital microcontrollers. Thus, this paper focuses on a simple solution to reduce current stress and improve the efficiency of the adopted DAB converter. However, traditional current-stress-optimized (CSO) schemes have some drawbacks, such as inductance dependency and an additional load-current sensor. In this paper, a simple CSO scheme with a unified phase-shift (UPS) control, which can be equivalent to the existing conventional phase-shift controls, is proposed for DAB dc-dc converters to realize current stress optimization. The simple CSO scheme can overcome those drawbacks of traditional CSO schemes, gain the minimum current stress, and improve efficiency. Then, a comparison of single-phase-shift (SPS) control, simple CSO scheme with dual-phase-shift (CSO-DPS) control, simple CSO scheme with extended-phase-shift (CSO-EPS) control, and simple CSO scheme with UPS (CSO-UPS) control is analyzed in detail. Finally, experimental results verify the excellent performance of the proposed CSO-UPS control scheme and the correctness of theoretical analysis.",
"title": ""
}
] |
[
{
"docid": "7635ad3e2ac2f8e72811bf056d29dfbb",
"text": "Nowadays, many consumer videos are captured by portable devices such as iPhone. Different from constrained videos that are produced by professionals, e.g., those for broadcast, summarizing multiple handheld videos from a same scenery is a challenging task. This is because: 1) these videos have dramatic semantic and style variances, making it difficult to extract the representative key frames; 2) the handheld videos are with different degrees of shakiness, but existing summarization techniques cannot alleviate this problem adaptively; and 3) it is difficult to develop a quality model that evaluates a video summary, due to the subjectiveness of video quality assessment. To solve these problems, we propose perceptual multiattribute optimization which jointly refines multiple perceptual attributes (i.e., video aesthetics, coherence, and stability) in a multivideo summarization process. In particular, a weakly supervised learning framework is designed to discover the semantically important regions in each frame. Then, a few key frames are selected based on their contributions to cover the multivideo semantics. Thereafter, a probabilistic model is proposed to dynamically fit the key frames into an aesthetically pleasing video summary, wherein its frames are stabilized adaptively. Experiments on consumer videos taken from sceneries throughout the world demonstrate the descriptiveness, aesthetics, coherence, and stability of the generated summary.",
"title": ""
},
{
"docid": "2cddde920b40a245a5e1b4b1abb2e92b",
"text": "The aim of this research was to understand what affects people's privacy preferences in smartphone apps. We ran a four-week study in the wild with 34 participants. Participants were asked to answer questions, which were used to gather their personal context and to measure their privacy preferences by varying app name and purpose of data collection. Our results show that participants shared the most when no information about data access or purpose was given, and shared the least when both of these details were specified. When just one of either purpose or the requesting app was shown, participants shared less when just the purpose was specified than when just the app name was given. We found that the purpose for data access was the predominant factor affecting users' choices. In our study the purpose condition vary from being not specified, to vague to be very specific. Participants were more willing to disclose data when no purpose was specified. When a vague purpose was shown, participants became more privacy-aware and were less willing to disclose their information. When specific purposes were shown participants were more willing to disclose when the purpose for requesting the information appeared to be beneficial to them, and shared the least when the purpose for data access was solely beneficial to developers.",
"title": ""
},
{
"docid": "432ff163e4dded948aa5a27aa440cd30",
"text": "Eighty-one female and sixty-seven male undergraduates at a Malaysian university, from seven faculties and a Center for Language Studies completed a Computer Self-Efficacy Scale, Computer Anxiety Scale, and an Attitudes toward the Internet Scale and give information about their use of the Internet. This survey research investigated undergraduates’ computer anxiety, computer self-efficacy, and reported use of and attitudes toward the Internet. This study also examined differences in computer anxiety, computer selfefficacy, attitudes toward the Internet and reported use of the Internet for undergraduates with different demographic variables. The findings suggest that the undergraduates had moderate computer anxiousness, medium attitudes toward the Internet, and high computer self-efficacy and used the Internet extensively for educational purposes such as doing research, downloading electronic resources and e-mail communications. This study challenges the long perceived male bias in the computer environment and supports recent studies that have identified greater gender equivalence in interest, use, and skills levels. However, there were differences in undergraduates’ Internet usage levels based on the discipline of study. Furthermore, higher levels of Internet usage did not necessarily translate into better computer self-efficacy among the undergraduates. A more important factor in determining computer self-efficacy could be the discipline of study and undergraduates studying computer related disciplines appeared to have higher self-efficacy towards computers and the Internet. Undergraduates who used the Internet more often may not necessarily feel more comfortable using them. Possibly, other factors such as the types of application used, the purpose for using, and individual satisfaction could also influence computer self-efficacy and computer anxiety. However, although Internet usage levels may not have any impact on computer self-efficacy, higher usage of the Internet does seem to decrease the levels of computer anxiety among the undergraduates. Undergraduates with lower computer anxiousness demonstrated more positive attitudes toward the Internet in this study.",
"title": ""
},
{
"docid": "9b130e155ca93228ed176e5d405fd50a",
"text": "For years educators have attempted to identify the effective predictors of scholastic achievement and several personality variables were described as significantly correlated with grade performance. Since one of the crucial practical implications of identifying the factors involved in academic achievement is to facilitate the teaching-learning process, the main variables that have been associated with achievement should be investigated simultaneously in order to provide information as to their relative merit in the population examined. In contrast with this premise, limited research has been conducted on the importance of personality traits and self-esteem on scholastic achievement. To this aim in a sample of 439 subjects (225 males) with an average age of 12.36 years (SD= .99) from three first level secondary school classes of Southern Italy, personality traits, as defined by the Five Factor Model, self-esteem and socioeconomic status were evaluated. The academic results correlated significantly both with personality traits and with some dimensions of self-esteem. Moreover, hierarchical regression analyses brought to light, in particular, the predictive value of openness to experience on academic marks. The results, stressing the multidimensional nature of academic performance, indicate a need to adopt complex approaches for undertaking action addressing students’ difficulties in attaining good academic achievement.",
"title": ""
},
{
"docid": "7f43ad2fd344aa7260e3af33d3f69e32",
"text": "Charge pump circuits are used for obtaining higher voltages than normal power supply voltage in flash memories, DRAMs and low voltage designs. In this paper, we present a charge pump circuit in standard CMOS technology that is suited for low voltage operation. Our proposed charge pump uses a cross- connected NMOS cell as the basic element and PMOS switches are employed to connect one stage to the next. The simulated output voltages of the proposed 4 stage charge pump for input voltage of 0.9 V, 1.2 V, 1.5 V, 1.8 V and 2.1 V are 3.9 V, 5.1 V, 6.35 V, 7.51 V and 8.4 V respectively. This proposed charge pump is suitable for low power CMOS mixed-mode designs.",
"title": ""
},
{
"docid": "323abed1a623e49db50bed383ab26a92",
"text": "Robust object detection is a critical skill for robotic applications in complex environments like homes and offices. In this paper we propose a method for using multiple cameras to simultaneously view an object from multiple angles and at high resolutions. We show that our probabilistic method for combining the camera views, which can be used with many choices of single-image object detector, can significantly improve accuracy for detecting objects from many viewpoints. We also present our own single-image object detection method that uses large synthetic datasets for training. Using a distributed, parallel learning algorithm, we train from very large datasets (up to 100 million image patches). The resulting object detector achieves high performance on its own, but also benefits substantially from using multiple camera views. Our experimental results validate our system in realistic conditions and demonstrates significant performance gains over using standard single-image classifiers, raising accuracy from 0.86 area-under-curve to 0.97.",
"title": ""
},
{
"docid": "07db8fea11297fea2def9440a7d614dc",
"text": "We present the 2017 Visual Domain Adaptation (VisDA) dataset and challenge, a large-scale testbed for unsupervised domain adaptation across visual domains. Unsupervised domain adaptation aims to solve the real-world problem of domain shift, where machine learning models trained on one domain must be transferred and adapted to a novel visual domain without additional supervision. The VisDA2017 challenge is focused on the simulation-to-reality shift and has two associated tasks: image classification and image segmentation. The goal in both tracks is to first train a model on simulated, synthetic data in the source domain and then adapt it to perform well on real image data in the unlabeled test domain. Our dataset is the largest one to date for cross-domain object classification, with over 280K images across 12 categories in the combined training, validation and testing domains. The image segmentation dataset is also large-scale with over 30K images across 18 categories in the three domains. We compare VisDA to existing cross-domain adaptation datasets and provide a baseline performance analysis, as well as results of the challenge.",
"title": ""
},
{
"docid": "5edc557fbcf1d9a91560739058274900",
"text": "A number of technological advances have led to a renewed interest on dynamic vehicle routing problems. This survey classifies routing problems from the perspective of information quality and evolution. After presenting a general description of dynamic routing, we introduce the notion of degree of dynamism, and present a comprehensive review of applications and solution methods for dynamic vehicle routing problems. ∗Corresponding author: gueret@mines-nantes.fr",
"title": ""
},
{
"docid": "abdd1406266d7290166eb16b8a5045a9",
"text": "Individualized manufacturing of cars requires kitting: the collection of individual sets of part variants for each car. This challenging logistic task is frequently performed manually by warehouseman. We propose a mobile manipulation robotic system for autonomous kitting, building on the Kuka Miiwa platform which consists of an omnidirectional base, a 7 DoF collaborative iiwa manipulator, cameras, and distance sensors. Software modules for detection and pose estimation of transport boxes, part segmentation in these containers, recognition of part variants, grasp generation, and arm trajectory optimization have been developed and integrated. Our system is designed for collaborative kitting, i.e. some parts are collected by warehouseman while other parts are picked by the robot. To address safe human-robot collaboration, fast arm trajectory replanning considering previously unforeseen obstacles is realized. The developed system was evaluated in the European Robotics Challenge 2, where the Miiwa robot demonstrated autonomous kitting, part variant recognition, and avoidance of unforeseen obstacles.",
"title": ""
},
{
"docid": "e818b0a38d17a77cc6cfdee2761f12c4",
"text": "In this paper, we present improved lane tracking using vehicle localization. Lane markers are detected using a bank of steerable filters, and lanes are tracked using Kalman filtering. On-road vehicle detection has been achieved using an active learning approach, and vehicles are tracked using a Condensation particle filter. While most state-of-the art lane tracking systems are not capable of performing in high-density traffic scenes, the proposed framework exploits robust vehicle tracking to allow for improved lane tracking in high density traffic. Experimental results demonstrate that lane tracking performance, robustness, and temporal response are significantly improved in the proposed framework, while also tracking vehicles, with minimal additional hardware requirements.",
"title": ""
},
{
"docid": "f6a5f4280a8352157164d6abc1259a45",
"text": "A new robust lane marking detection algorithm for monocular vision is proposed. It is designed for the urban roads with disturbances and with the weak lane markings. The primary contribution of the paper is that it supplies a robust adaptive method of image segmentation, which employs jointly prior knowledge, statistical information and the special geometrical features of lane markings in the bird's-eye view. This method can eliminate many disturbances while keep points of lane markings effectively. Road classification can help us extract more accurate and simple characteristics of lane markings, so the second contribution of the paper is that it uses the row information of image to classify road conditions into three kinds and uses different strategies to complete lane marking detection. The experimental results have shown the high performance of our algorithm in various road scenes.",
"title": ""
},
{
"docid": "dacb4491a0cf1e05a2972cc1a82a6c62",
"text": "Human parechovirus type 3 (HPeV3) can cause serious conditions in neonates, such as sepsis and encephalitis, but data for adults are lacking. The case of a pregnant woman with HPeV3 infection is reported herein. A 28-year-old woman at 36 weeks of pregnancy was admitted because of myalgia and muscle weakness. Her grip strength was 6.0kg for her right hand and 2.5kg for her left hand. The patient's symptoms, probably due to fasciitis and not myositis, improved gradually with conservative treatment, however labor pains with genital bleeding developed unexpectedly 3 days after admission. An obstetric consultation was obtained and a cesarean section was performed, with no complications. A real-time PCR assay for the detection of viral genomic ribonucleic acid against HPeV showed positive results for pharyngeal swabs, feces, and blood, and negative results for the placenta, umbilical cord, umbilical cord blood, amniotic fluid, and breast milk. The HPeV3 was genotyped by sequencing of the VP1 region. The woman made a full recovery and was discharged with her infant in a stable condition.",
"title": ""
},
{
"docid": "fe6fa144846269c7b2c9230ca9d8217b",
"text": "The paper is dedicated to plagiarism problem. The ways how to reduce plagiarism: both: plagiarism prevention and plagiarism detection are discussed. Widely used plagiarism detection methods are described. The most known plagiarism detection tools are analysed.",
"title": ""
},
{
"docid": "39351cdf91466aa12576d9eb475fb558",
"text": "Fault tolerance is a remarkable feature of biological systems and their self-repair capability influence modern electronic systems. In this paper, we propose a novel plastic neural network model, which establishes homeostasis in a spiking neural network. Combined with this plasticity and the inspiration from inhibitory interneurons, we develop a fault-resilient robotic controller implemented on an FPGA establishing obstacle avoidance task. We demonstrate the proposed methodology on a spiking neural network implemented on Xilinx Artix-7 FPGA. The system is able to maintain stable firing (tolerance ±10%) with a loss of up to 75% of the original synaptic inputs to a neuron. Our repair mechanism has minimal hardware overhead with a tuning circuit (repair unit) which consumes only three slices/neuron for implementing a threshold voltage-based homeostatic fault-tolerant unit. The overall architecture has a minimal impact on power consumption and, therefore, supports scalable implementations. This paper opens a novel way of implementing the behavior of natural fault tolerant system in hardware establishing homeostatic self-repair behavior.",
"title": ""
},
{
"docid": "7a47dde6f7cc68c092922718000a807a",
"text": "In the present study k-Nearest Neighbor classification method, have been studied for economic forecasting. Due to the effects of companies’ financial distress on stakeholders, financial distress prediction models have been one of the most attractive areas in financial research. In recent years, after the global financial crisis, the number of bankrupt companies has risen. Since companies' financial distress is the first stage of bankruptcy, using financial ratios for predicting financial distress have attracted too much attention of the academics as well as economic and financial institutions. Although in recent years studies on predicting companies’ financial distress in Iran have been increased, most efforts have exploited traditional statistical methods; and just a few studies have used nonparametric methods. Recent studies demonstrate this method is more capable than other methods.",
"title": ""
},
{
"docid": "99b485dd4290c463b35867b98b51146c",
"text": "The term rhombencephalitis refers to inflammatory diseases affecting the hindbrain (brainstem and cerebellum). Rhombencephalitis has a wide variety of etiologies, including infections, autoimmune diseases, and paraneoplastic syndromes. Infection with bacteria of the genus Listeria is the most common cause of rhombencephalitis. Primary rhombencephalitis caused by infection with Listeria spp. occurs in healthy young adults. It usually has a biphasic time course with a flu-like syndrome, followed by brainstem dysfunction; 75% of patients have cerebrospinal fluid pleocytosis, and nearly 100% have an abnormal brain magnetic resonance imaging scan. However, other possible causes of rhombencephalitis must be borne in mind. In addition to the clinical aspects, the patterns seen in magnetic resonance imaging can be helpful in defining the possible cause. Some of the reported causes of rhombencephalitis are potentially severe and life threatening; therefore, an accurate initial diagnostic approach is important to establishing a proper early treatment regimen. This pictorial essay reviews the various causes of rhombencephalitis and the corresponding magnetic resonance imaging findings, by describing illustrative confirmed cases.",
"title": ""
},
{
"docid": "dc096631d6412e06f305f83b2c8734bc",
"text": "Many important search tasks require multiple search sessions to complete. Tasks such as travel planning, large purchases, or job searches can span hours, days, or even weeks. Inevitably, life interferes, requiring the searcher either to recover the \"state\" of the search manually (most common), or plan for interruption in advance (unlikely). The goal of this work is to better understand, characterize, and automatically detect search tasks that will be continued in the near future. To this end, we analyze a query log from the Bing Web search engine to identify the types of intents, topics, and search behavior patterns associated with long-running tasks that are likely to be continued. Using our insights, we develop an effective prediction algorithm that significantly outperforms both the previous state-of-the-art method, and even the ability of human judges, to predict future task continuation. Potential applications of our techniques would allow a search engine to pre-emptively \"save state\" for a searcher (e.g., by caching search results), perform more targeted personalization, and otherwise better support the searcher experience for interrupted search tasks.",
"title": ""
},
{
"docid": "22ad9bc66f0a9274fcf76697152bab4d",
"text": "We consider the recovery of a (real- or complex-valued) signal from magnitude-only measurements, known as phase retrieval. We formulate phase retrieval as a convex optimization problem, which we call PhaseMax. Unlike other convex methods that use semidefinite relaxation and lift the phase retrieval problem to a higher dimension, PhaseMax is a “non-lifting” relaxation that operates in the original signal dimension. We show that the dual problem to PhaseMax is basis pursuit, which implies that the phase retrieval can be performed using algorithms initially designed for sparse signal recovery. We develop sharp lower bounds on the success probability of PhaseMax for a broad range of random measurement ensembles, and we analyze the impact of measurement noise on the solution accuracy. We use numerical results to demonstrate the accuracy of our recovery guarantees, and we showcase the efficacy and limits of PhaseMax in practice.",
"title": ""
},
{
"docid": "d31ff1d528902c72727a8a3946089b9e",
"text": "Small Manufacturing Entities (SMEs) have not incorporated robotic automation as readily as large companies due to rapidly changing product lines, complex and dexterous tasks, and the high cost of start-up. While recent low-cost robots such as the Universal Robots UR5 and Rethink Robotics Baxter are more economical and feature improved programming interfaces, based on our discussions with manufacturers further incorporation of robots into the manufacturing work flow is limited by the ability of these systems to generalize across tasks and handle environmental variation. Our goal is to create a system designed for small manufacturers that contains a set of capabilities useful for a wide range of tasks, is both powerful and easy to use, allows for perceptually grounded actions, and is able to accumulate, abstract, and reuse plans that have been taught. We present an extension to Behavior Trees that allows for representing the system capabilities of a robot as a set of generalizable operations that are exposed to an end-user for creating task plans. We implement this framework in CoSTAR, the Collaborative System for Task Automation and Recognition, and demonstrate its effectiveness with two case studies. We first perform a complex tool-based object manipulation task in a laboratory setting. We then show the deployment of our system in an SME where we automate a machine tending task that was not possible with current off the shelf robots.",
"title": ""
},
{
"docid": "1b421293cc38eec47c94754cd5e244ff",
"text": "We study the problem of hypothesis testing between two discrete distributions, where we only have access to samples after the action of a known reversible Markov chain, playing the role of noise. We derive instance-dependent minimax rates for the sample complexity of this problem, and show how its dependence in time is related to the spectral properties of the Markov chain. We show that there exists a wide statistical window, in terms of sample complexity for hypothesis testing between different pairs of initial distributions. We illustrate these results in several concrete examples.",
"title": ""
}
] |
scidocsrr
|
1359acc6067a96e49ce77cb3225268a0
|
Building book inventories using smartphones
|
[
{
"docid": "a7c330c9be1d7673bfff43b0544db4ea",
"text": "The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to ldquovisual wordsrdquo selected from a discrete vocabulary.This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index.",
"title": ""
},
{
"docid": "973cb430e42b76a041a0f1f3315d700b",
"text": "A growing number of mobile computing applications are centered around the user's location. The notion of location is broad, ranging from physical coordinates (latitude/longitude) to logical labels (like Starbucks, McDonalds). While extensive research has been performed in physical localization, there have been few attempts in recognizing logical locations. This paper argues that the increasing number of sensors on mobile phones presents new opportunities for logical localization. We postulate that ambient sound, light, and color in a place convey a photo-acoustic signature that can be sensed by the phone's camera and microphone. In-built accelerometers in some phones may also be useful in inferring broad classes of user-motion, often dictated by the nature of the place. By combining these optical, acoustic, and motion attributes, it may be feasible to construct an identifiable fingerprint for logical localization. Hence, users in adjacent stores can be separated logically, even when their physical positions are extremely close. We propose SurroundSense, a mobile phone based system that explores logical localization via ambience fingerprinting. Evaluation results from 51 different stores show that SurroundSense can achieve an average accuracy of 87% when all sensing modalities are employed. We believe this is an encouraging result, opening new possibilities in indoor localization.",
"title": ""
},
{
"docid": "3982c66e695fdefe36d8d143247add88",
"text": "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.",
"title": ""
}
] |
[
{
"docid": "2b493739c1012115b0800d047ab917a9",
"text": "Since developer ability is recognized as a determinant of better software project performance, it is a critical step to model and evaluate the programming ability of developers. However, most existing approaches require manual assessment, like 360 degree performance evaluation. With the emergence of social networking sites such as StackOverflow and Github, a vast amount of developer information is created on a daily basis. Such personal and social context data has huge potential to support automatic and effective developer ability evaluation. In this paper, we propose CPDScorer, a novel approach to modeling and scoring the programming ability of developer through mining heterogeneous information from both Community Question Answering (CQA) sites and Open-Source Software (OSS) communities. CPDScorer analyzes the answers posted in CQA sites and evaluates the projects submitted in OSS communities to assign expertise scores to developers, considering both the quantitative and qualitative factors. When modeling the programming ability of developer, a programming ability term extraction algorithm is also designed based on set covering. We have conducted experiments on StackOverflow and Github to measure the effectiveness of CPDScorer. The results show that our approach is feasible and practical in user programming ability modeling. In particular, the precision of our approach reaches 80%.",
"title": ""
},
{
"docid": "0de1e9759b4c088a15d84a108ba21c33",
"text": "MillWheel is a framework for building low-latency data-processing applications that is widely used at Google. Users specify a directed computation graph and application code for individual nodes, and the system manages persistent state and the continuous flow of records, all within the envelope of the framework’s fault-tolerance guarantees. This paper describes MillWheel’s programming model as well as its implementation. The case study of a continuous anomaly detector in use at Google serves to motivate how many of MillWheel’s features are used. MillWheel’s programming model provides a notion of logical time, making it simple to write time-based aggregations. MillWheel was designed from the outset with fault tolerance and scalability in mind. In practice, we find that MillWheel’s unique combination of scalability, fault tolerance, and a versatile programming model lends itself to a wide variety of problems at Google.",
"title": ""
},
{
"docid": "cd39810e2ddea52c003b832af8ef30aa",
"text": "Millions of users worldwide resort to mobile VPN clients to either circumvent censorship or to access geo-blocked content, and more generally for privacy and security purposes. In practice, however, users have little if any guarantees about the corresponding security and privacy settings, and perhaps no practical knowledge about the entities accessing their mobile traffic.\n In this paper we provide a first comprehensive analysis of 283 Android apps that use the Android VPN permission, which we extracted from a corpus of more than 1.4 million apps on the Google Play store. We perform a number of passive and active measurements designed to investigate a wide range of security and privacy features and to study the behavior of each VPN-based app. Our analysis includes investigation of possible malware presence, third-party library embedding, and traffic manipulation, as well as gauging user perception of the security and privacy of such apps. Our experiments reveal several instances of VPN apps that expose users to serious privacy and security vulnerabilities, such as use of insecure VPN tunneling protocols, as well as IPv6 and DNS traffic leakage. We also report on a number of apps actively performing TLS interception. Of particular concern are instances of apps that inject JavaScript programs for tracking, advertising, and for redirecting e-commerce traffic to external partners.",
"title": ""
},
{
"docid": "66ce4b486893e17e031a96dca9022ade",
"text": "Product reviews possess critical information regarding customers’ concerns and their experience with the product. Such information is considered essential to firms’ business intelligence which can be utilized for the purpose of conceptual design, personalization, product recommendation, better customer understanding, and finally attract more loyal customers. Previous studies of deriving useful information from customer reviews focused mainly on numerical and categorical data. Textual data have been somewhat ignored although they are deemed valuable. Existing methods of opinion mining in processing customer reviews concentrates on counting positive and negative comments of review writers, which is not enough to cover all important topics and concerns across different review articles. Instead, we propose an automatic summarization approach based on the analysis of review articles’ internal topic structure to assemble customer concerns. Different from the existing summarization approaches centered on sentence ranking and clustering, our approach discovers and extracts salient topics from a set of online reviews and further ranks these topics. The final summary is then generated based on the ranked topics. The experimental study and evaluation show that the proposed approach outperforms the peer approaches, i.e. opinion mining and clustering-summarization, in terms of users’ responsiveness and its ability to discover the most important topics. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7082e7b9828c316b24f3113cb516a50d",
"text": "The analog voltage-controlled filter used in historical music synthesizers by Moog is modeled using a digital system, which is then compared in terms of audio measurements with the original analog filter. The analog model is mainly borrowed from D'Angelo's previous work. The digital implementation of the filter incorporates a recently proposed antialiasing method. This method enhances the clarity of output signals in the case of large-level input signals, which cause harmonic distortion. The combination of these two ideas leads to a novel digital model, which represents the state of the art in virtual analog musical filters. It is shown that without the antialiasing, the output signals in the nonlinear regime may be contaminated by undesirable spectral components, which are the consequence of aliasing, but that the antialiasing technique suppresses these components sufficiently. Comparison of measurements of the analog and digital filters show that the digital model is accurate within a few dB in the linear regime and has very similar behavior in the nonlinear regime in terms of distortion. The proposed digital filter model can be used as a building block in virtual analog music synthesizers.",
"title": ""
},
{
"docid": "61a6efb791fbdabfa92448cf39e17e8c",
"text": "This work deals with the design of a wideband microstrip log periodic array operating between 4 and 18 GHz (thus working in C,X and Ku bands). A few studies, since now, have been proposed but they are significantly less performing and usually quite complicated. Our solution is remarkably simple and shows both SWR and gain better than likely structures proposed in the literature. The same antenna can also be used as an UWB antenna. The design has been developed using CST MICROWAVE STUDIO 2009, a general purpose and specialist tool for the 3D electromagnetic simulation of microwave high frequency components.",
"title": ""
},
{
"docid": "9584909fc62cca8dc5c9d02db7fa7e5d",
"text": "As the nature of many materials handling tasks have begun to change from lifting to pushing and pulling, it is important that one understands the biomechanical nature of the risk to which the lumbar spine is exposed. Most previous assessments of push-pull tasks have employed models that may not be sensitive enough to consider the effects of the antagonistic cocontraction occurring during complex pushing and pulling motions in understanding the risk to the spine and the few that have considered the impact of cocontraction only consider spine load at one lumbar level. This study used an electromyography-assisted biomechanical model sensitive to complex motions to assess spine loadings throughout the lumbar spine as 10 males and 10 females pushed and pulled loads at three different handle heights and of three different load magnitudes. Pulling induced greater spine compressive loads than pushing, whereas the reverse was true for shear loads at the different lumbar levels. The results indicate that, under these conditions, anterior-posterior (A/P) shear loads were of sufficient magnitude to be of concern especially at the upper lumbar levels. Pushing and pulling loads equivalent to 20% of body weight appeared to be the limit of acceptable exertions, while pulling at low and medium handle heights (50% and 65% of stature) minimised A/P shear. These findings provide insight to the nature of spine loads and their potential risk to the low back during modern exertions.",
"title": ""
},
{
"docid": "e43cb8fefc7735aeab0fa40ad44a2e15",
"text": "Support vector machine (SVM) is an optimal margin based classification technique in machine learning. SVM is a binary linear classifier which has been extended to non-linear data using Kernels and multi-class data using various techniques like one-versus-one, one-versus-rest, Crammer Singer SVM, Weston Watkins SVM and directed acyclic graph SVM (DAGSVM) etc. SVM with a linear Kernel is called linear SVM and one with a non-linear Kernel is called non-linear SVM. Linear SVM is an efficient technique for high dimensional data applications like document classification, word-sense disambiguation, drug design etc. because under such data applications, test accuracy of linear SVM is closer to non-linear SVM while its training is much faster than non-linear SVM. SVM is continuously evolving since its inception and researchers have proposed many problem formulations, solvers and strategies for solving SVM. Moreover, due to advancements in the technology, data has taken the form of ‘Big Data’ which have posed a challenge for Machine Learning to train a classifier on this large-scale data. In this paper, we have presented a review on evolution of linear support vector machine classification, its solvers, strategies to improve solvers, experimental results, current challenges and research directions.",
"title": ""
},
{
"docid": "d896277dfe38400c9e74b7366ad93b6d",
"text": "This work is primarily focused on the design and development of an efficient and cost effective solar photovoltaic generator (PVG) based water pumping system implying a switched reluctance motor (SRM) drive. The maximum extraction of available power from PVG is attained by introducing an incremental conductance (IC) maximum power point tracking (MPPT) controller with Landsman DC-DC converter as a power conditioning stage. The CCM (continuous conduction mode) operation of Landsman DC-DC converter helps to reduce the current and voltage stress on its components and to realize the DC-DC conversion ratio independent of the load. The efficient utilization of SPV array and limiting the high initial inrush current in the motor drive is the primary concern of a Landsman converter. The inclusion of start-up control algorithm in the motor drive system facilitates the smooth self-starting of an 8/6 SRM drive. A novel approach to regulate the speed of the motor-pump system by controlling the DC link voltage of split capacitors converter helps in eliminating the voltage or current sensors required for speed control of SRM drive. The electronic commutated operation of mid-point converter considerably reduces its switching losses. This topology is designed and modeled in Matlab/Simulink platform and a laboratory prototype is developed to validate its performance under varying environmental conditions.",
"title": ""
},
{
"docid": "ce688082bc214936aff5c165ffb30c8d",
"text": "In this chapter, we review a few important concepts from Grid computing related to scheduling problems and their resolution using heuristic and meta-heuristic approaches. Scheduling problems are at the heart of any Grid-like computational system. Different types of scheduling based on different criteria, such as static vs. dynamic environment, multi-objectivity, adaptivity, etc., are identified. Then, heuristics and meta-heuristics methods for scheduling in Grids are presented. The chapter reveals the complexity of the scheduling problem in Computational Grids when compared to scheduling in classical parallel and distributed systems and shows the usefulness of heuristics and meta-heuristics approaches for the design of efficient Grid schedulers.",
"title": ""
},
{
"docid": "d03b46fc0afac5cae1e69e3f6048b478",
"text": "One of the crucial tasks of Critical Discourse Analysis (CDA) is to account for the relationships between discourse and social power. More specifically, such an analysis should describe and explain how power abuse is enacted, reproduced or legitimised by the text and talk of dominant groups or institutions. Within the framework of such an account of discursively mediated dominance and inequality this chapter focuses on an important dimension of such dominance, that is, patterns of access to discourse. A critical analysis of properties of access to public discourse and communication presupposes insight into more general political, sociocultural and economic aspects of dominance. This chapter merely gives a succinct summary of this broader conceptual framework. Leaving acide a detailed discussion of numerous philosophical and theoretical complexities, the major presuppositions of this framework are, for example, the following (see, e.g., Clegg, 1989; Lukes, 1974; 1986; Wrong, 1979):",
"title": ""
},
{
"docid": "1d0ca65e3019850f25445c4c2bbaf75d",
"text": "Cyber-physical systems are deeply intertwined with their corresponding environment through sensors and actuators. To avoid severe accidents with surrounding objects, testing the the behavior of such systems is crucial. Therefore, this paper presents the novel SMARDT (Specification Methodology Applicable to Requirements, Design, and Testing) approach to enable automated test generation based on the requirement specification and design models formalized in SysML. This paper presents and applies the novel SMARDT methodology to develop a self-adaptive software architecture dealing with controlling, planning, environment understanding, and parameter tuning. To formalize our architecture we employ a recently introduced homogeneous model-driven approach for component and connector languages integrating features indispensable in the cyber-physical systems domain. In a compelling case study we show the model driven design of a self-adaptive vehicle robot based on a modular and extensible architecture.",
"title": ""
},
{
"docid": "98c72706e0da844c80090c1ed5f3abeb",
"text": "Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can “interpolate”: By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.",
"title": ""
},
{
"docid": "b676952c75749bb69efbd250f4a1ca61",
"text": "A discrete-event simulation model that imitates most on-track events, including car failures, passing manoeuvres and pit stops during a Formula One race, is presented. The model is intended for use by a specific team. It will enable decision-makers to plan and evaluate their race strategy, consequently providing them with a possible competitive advantage. The simulation modelling approach presented in this paper captures the mechanical complexities and physical interactions of a race car with its environment through a time-based approach. Model verification and validation are demonstrated using three races from the 2005 season. The application of the model is illustrated by evaluating the race strategies employed by a specific team during these three races. Journal of the Operational Research Society (2009) 60, 952–961. doi:10.1057/palgrave.jors.2602626 Published online 9 July 2008",
"title": ""
},
{
"docid": "ddfff31acb8d3302de5b11c76f06e839",
"text": "Illegal migration as well as wildfires constitute commonplace situations in southern European countries, where the mountainous terrain and thick forests make the surveillance and location of these incidents a tall task. This territory could benefit from Unmanned Aerial Vehicles (UAVs) equipped with optical and thermal sensors in conjunction with sophisticated image processing and computer vision algorithms, in order to detect suspicious activity or prevent the spreading of a fire. Taking into account that the flight height is about to two kilometers, human and fire detection algorithms are mainly based on blob detection. For both processes thermal imaging is used in order to improve the accuracy of the algorithms, while in the case of human recognition information like movement patterns as well as shadow size and shape are also considered. For fire detection a blob detector is utilized in conjunction with a color based descriptor, applied to thermal and optical images, respectively. Unlike fire, human detection is a more demanding process resulting in a more sophisticated and complex algorithm. The main difficulty of human detection originates from the high flight altitude. In images taken from high altitude where the ground sample distance is not small enough, people appear as small blobs occupying few pixels, leading corresponding research works to be based on blob detectors to detect humans. Their shadows as well as motion detection and object tracking can then be used to determine whether these regions of interest do depict humans. This work follows this motif as well, nevertheless, its main novelty lies in the fact that the human detection process is adapted for high altitude and vertical shooting images in contrast with the majority of other similar works where lower altitudes and different shooting angles are considered. Additionally, in the interest of making our algorithms as fast as possible in order for them to be used in real time during the UAV flights, parallel image processing with the help of a specialized hardware device based on Field Programmable Gate Array (FPGA) is being worked on.",
"title": ""
},
{
"docid": "fc97e17c5c9e1ea43570d799ac1ecd1f",
"text": "OBJECTIVE\nTo determine the clinical course in dogs with aural cholesteatoma.\n\n\nSTUDY DESIGN\nCase series.\n\n\nANIMALS\nDogs (n=20) with aural cholesteatoma.\n\n\nMETHODS\nCase review (1998-2007).\n\n\nRESULTS\nTwenty dogs were identified. Clinical signs other than those of chronic otitis externa included head tilt (6 dogs), unilateral facial palsy (4), pain on opening or inability to open the mouth (4), and ataxia (3). Computed tomography (CT) was performed in 19 dogs, abnormalities included osteoproliferation (13 dogs), lysis of the bulla (12), expansion of the bulla (11), bone lysis in the squamous or petrosal portion of the temporal bone (4) and enlargement of associated lymph nodes (7). Nineteen dogs had total ear canal ablation-lateral bulla osteotomy or ventral bulla osteotomy with the intent to cure; 9 dogs had no further signs of middle ear disease whereas 10 had persistent or recurrent clinical signs. Risk factors for recurrence after surgery were inability to open the mouth or neurologic signs on admission and lysis of any portion of the temporal bone on CT imaging. Dogs admitted with neurologic signs or inability to open the mouth had a median survival of 16 months.\n\n\nCONCLUSIONS\nEarly surgical treatment of aural cholesteatoma may be curative. Recurrence after surgery is associated with advanced disease, typically indicated by inability to open the jaw, neurologic disease, or bone lysis on CT imaging.\n\n\nCLINICAL RELEVANCE\nPresence of aural cholesteatoma may affect the prognosis for successful surgical treatment of middle ear disease.",
"title": ""
},
{
"docid": "5828308d458a1527f651d638375f3732",
"text": "We conducted a mixed methods study of the use of the Meerkat and Periscope apps for live streaming video and audio broadcasts from a mobile device. We crowdsourced a task to describe the content, setting, and other characteristics of 767 live streams. We also interviewed 20 frequent streamers to explore their motivations and experiences. Together, the data provide a snapshot of early live streaming use practices. We found a diverse range of activities broadcast, which interviewees said were used to build their personal brand. They described live streaming as providing an authentic, unedited view into their lives. They liked how the interaction with viewers shaped the content of their stream. We found some evidence for multiple live streams from the same event, which represent an opportunity for multiple perspectives on events of shared public interest.",
"title": ""
},
{
"docid": "3a18976245cfc4b50e97aadf304ef913",
"text": "Key-Value Stores (KVS) are becoming increasingly popular because they scale up and down elastically, sustain high throughputs for get/put workloads and have low latencies. KVS owe these advantages to their simplicity. This simplicity, however, comes at a cost: It is expensive to process complex, analytical queries on top of a KVS because today’s generation of KVS does not support an efficient way to scan the data. The problem is that there are conflicting goals when designing a KVS for analytical queries and for simple get/put workloads: Analytical queries require high locality and a compact representation of data whereas elastic get/put workloads require sparse indexes. This paper shows that it is possible to have it all, with reasonable compromises. We studied the KVS design space and built TellStore, a distributed KVS, that performs almost as well as state-of-the-art KVS for get/put workloads and orders of magnitude better for analytical and mixed workloads. This paper presents the results of comprehensive experiments with an extended version of the YCSB benchmark and a workload from the telecommunication industry.",
"title": ""
},
{
"docid": "d814a42313d2d42d0cd20c5b484806ff",
"text": "This paper compares Ad hoc On-demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Wireless Routing Protocol (WRP) for MANETs to Distance Vector protocol to better understand the major characteristics of the three routing protocols, using a parallel discrete event-driven simulator, GloMoSim. MANET (mobile ad hoc network) is a multi-hop wireless network without a fixed infrastructure. There has not been much work that compares the performance of the MANET routing protocols, especially to Distance Vector protocol, which is a general routing protocol developed for legacy wired networks. The results of our experiments brought us nine key findings. Followings are some of our key findings: (1) AODV is most sensitive to changes in traffic load in the messaging overhead for routing. The number of control packets generated by AODV became 36 times larger when the traffic load was increased. For Distance Vector, WRP and DSR, their increase was approximately 1.3 times, 1.1 times and 7.6 times, respectively. (2) Two advantages common in the three MANET routing protocols compared to classical Distance Vector protocol were identified to be scalability for node mobility in end-to-end delay and scalability for node density in messaging overhead. (3) WRP resulted in the shortest delay and highest packet delivery rate, implying that WRP will be the best for real-time applications in the four protocols compared. WRP demonstrated the best traffic-scalability; control overhead will not increase much when traffic load increases.",
"title": ""
},
{
"docid": "6bdf0850725f091fea6bcdf7961e27d0",
"text": "The aim of this review is to document the advantages of exclusive breastfeeding along with concerns which may hinder the practice of breastfeeding and focuses on the appropriateness of complementary feeding and feeding difficulties which infants encounter. Breastfeeding, as recommended by the World Health Organisation, is the most cost effective way for reducing childhood morbidity such as obesity, hypertension and gastroenteritis as well as mortality. There are several factors that either promote or act as barriers to good infant nutrition. Factors which influence breastfeeding practice in terms of initiation, exclusivity and duration are namely breast engorgement, sore nipples, milk insufficiency and availability of various infant formulas. On the other hand, introduction of complementary foods, also known as weaning, is done around 4 to 6 months and mothers usually should start with home-made nutritious food. Difficulties encountered during the weaning process are often refusal to eat followed by vomiting, colic, allergic reactions and diarrhoea. key words: Exclusive breastfeeding, Weaning, Complementary feeding, Feeding difficulties.",
"title": ""
}
] |
scidocsrr
|
e21fd0d1c614d69bf0aa58088f4c67bb
|
Face Recognition Algorithms
|
[
{
"docid": "4a9ad387ad16727d9ac15ac667d2b1c3",
"text": "In recent years face recognition has received substantial attention from both research communities and the market, but still remained very challenging in real applications. A lot of face recognition algorithms, along with their modifications, have been developed during the past decades. A number of typical algorithms are presented, being categorized into appearancebased and model-based schemes. For appearance-based methods, three linear subspace analysis schemes are presented, and several non-linear manifold analysis approaches for face recognition are briefly described. The model-based approaches are introduced, including Elastic Bunch Graph matching, Active Appearance Model and 3D Morphable Model methods. A number of face databases available in the public domain and several published performance evaluation results are digested. Future research directions based on the current recognition results are pointed out.",
"title": ""
}
] |
[
{
"docid": "11f84f99de269ca5ca43fc6d761504b7",
"text": "Effective use of distributed collaboration environments requires shared mental models that guide users in sensemaking and categorization. In Lotus Notes -based collaboration systems, such shared models are usually implemented as views and document types. TeamRoom, developed at Lotus Institute, implements in its design a theory of effective social process that creates a set of team-specific categories, which can then be used as a basis for knowledge sharing, collaboration, and team memory. This paper reports an exploratory study in collective concept formation in the TeamRoom environment. The study was run in an ecological setting, while the team members used the system for their everyday work. We apply theory developed by Lev Vygotsky, and use a modified version of an experiment on concept formation, devised by Lev Sakharov, and discussed in Vygotsky (1986). Vygotsky emphasized the role of language, cognitive artifacts, and historical and social sources in the development of thought processes. Within the Vygotskian framework it becomes clear that development of thinking does not end in adolescence. In teams of adult people, learning and knowledge creation are continuous processes. New concepts are created, shared, and developed into systems. The question, then, becomes how spontaneous concepts are collectively generated in teams, how they become integrated as systems, and how computer mediated collaboration environments affect these processes. d in ittle ons",
"title": ""
},
{
"docid": "fd5efb029ab7f69f73a97f567ac9aa1a",
"text": "Current offshore wind farms (OWFs) design processes are based on a sequential approach which does not guarantee system optimality because it oversimplifies the problem by discarding important interdependencies between design aspects. This article presents a framework to integrate, automate and optimize the design of OWF layouts and the respective electrical infrastructures. The proposed framework optimizes simultaneously different goals (e.g., annual energy delivered and investment cost) which leads to efficient trade-offs during the design phase, e.g., reduction of wake losses vs collection system length. Furthermore, the proposed framework is independent of economic assumptions, meaning that no a priori values such as the interest rate or energy price, are needed. The proposed framework was applied to the Dutch Borssele areas I and II. A wide range of OWF layouts were obtained through the optimization framework. OWFs with similar energy production and investment cost as layouts designed with standard sequential strategies were obtained through the framework, meaning that the proposed framework has the capability to create different OWF layouts that would have been missed by the designers. In conclusion, the proposed multi-objective optimization framework represents a mind shift in design tools for OWFs which allows cost savings in the design and operation phases.",
"title": ""
},
{
"docid": "cff8ae2635684a6f0e07142175b7fbf1",
"text": "Collaborative writing is on the increase. In order to write well together, authors often need to be aware of who has done what recently. We offer a new tool, DocuViz, that displays the entire revision history of Google Docs, showing more than the one-step-at-a-time view now shown in revision history and tracking changes in Word. We introduce the tool and present cases in which the tool has the potential to be useful: To authors themselves to see recent \"seismic activity,\" indicating where in particular a co-author might want to pay attention, to instructors to see who has contributed what and which changes were made to comments from them, and to researchers interested in the new patterns of collaboration made possible by simultaneous editing capabilities.",
"title": ""
},
{
"docid": "bfa659ff24af7c319702a6a8c0c7dca3",
"text": "In this letter, a grounded coplanar waveguide-to-microstrip (GCPW-to-MS) transition without via holes is presented. The transition is designed on a PET® substrate and fabricated using inkjet printing technology. To our knowledge, fabrication of transitions using inkjet printing technology has not been reported in the literature. The simulations have been performed using HFSS® software and the measurements have been carried out using a Vector Network Analyzer on a broad frequency band from 40 to 85 GHz. The effect of varying several geometrical parameters of the GCPW-to-MS on the electromagnetic response is also presented. The results obtained demonstrate good characteristics of the insertion loss better than 1.5 dB, and return loss larger than 10 dB in the V-band (50-75 GHz). Such transitions are suitable for characterization of microwave components built on different flexible substrates.",
"title": ""
},
{
"docid": "4cb34eda6145a8ea0ccc22b3e547b5e5",
"text": "The factors that contribute to individual differences in the reward value of cute infant facial characteristics are poorly understood. Here we show that the effect of cuteness on a behavioural measure of the reward value of infant faces is greater among women reporting strong maternal tendencies. By contrast, maternal tendencies did not predict women's subjective ratings of the cuteness of these infant faces. These results show, for the first time, that the reward value of infant facial cuteness is greater among women who report being more interested in interacting with infants, implicating maternal tendencies in individual differences in the reward value of infant cuteness. Moreover, our results indicate that the relationship between maternal tendencies and the reward value of infant facial cuteness is not due to individual differences in women's ability to detect infant cuteness. This latter result suggests that individual differences in the reward value of infant cuteness are not simply a by-product of low-cost, functionless biases in the visual system.",
"title": ""
},
{
"docid": "7f48835a746d23edbdaa410800d0d322",
"text": "Nager syndrome, or acrofacial dysostosis type 1 (AFD1), is a rare multiple malformation syndrome characterized by hypoplasia of first and second branchial arches derivatives and appendicular anomalies with variable involvement of the radial/axial ray. In 2012, AFD1 has been associated with dominant mutations in SF3B4. We report a 22-week-old fetus with AFD1 associated with diaphragmatic hernia due to a previously unreported SF3B4 mutation (c.35-2A>G). Defective diaphragmatic development is a rare manifestation in AFD1 as it is described in only 2 previous cases, with molecular confirmation in 1 of them. Our molecular finding adds a novel pathogenic splicing variant to the SF3B4 mutational spectrum and contributes to defining its prenatal/fetal phenotype.",
"title": ""
},
{
"docid": "f3c8158351811c2c9fc0ff2a128d35e0",
"text": "A new feather mite species, Picalgoides giganteus n. sp. (Psoroptoididae: Pandalurinae), is described from the tawny-throated leaftosser Sclerurus mexicanus Sclater (Passeriformes: Furnariidae) in Costa Rica. Among the 10 species of Picalgoides Černý, 1974, including the new one, this is the third recorded from a passerine host; the remaining seven nominal species are associated with hosts of the order Piciformes. Brief data on the host-parasite associations of Picalgoides spp. are provided. Megninia megalixus Trouessart, 1885 from the short-tailed green magpie Cissa thalassina (Temminck) is transferred to Picalgoides as P. megalixus (Trouessart, 1885) n. comb.",
"title": ""
},
{
"docid": "d98b97dae367d57baae6b0211c781d66",
"text": "In this paper we describe a technology for protecting privacy in video systems. The paper presents a review of privacy in video surveillance and describes how a computer vision approach to understanding the video can be used to represent “just enough” of the information contained in a video stream to allow video-based tasks (including both surveillance and other “person aware” applications) to be accomplished, while hiding superfluous details, particularly identity, that can contain privacyintrusive information. The technology has been implemented in the form of a privacy console that manages operator access to different versions of the video-derived data according to access control lists. We have also built PrivacyCam—a smart camera that produces a video stream with the privacy-intrusive information already removed.",
"title": ""
},
{
"docid": "17fde1b7ed30db50790192ea03de2dd1",
"text": "Parsing for clothes in images and videos is a critical step towards understanding the human appearance. In this work, we propose a method to segment clothes in settings where there is no restriction on number and type of clothes, pose of the person, viewing angle, occlusion and number of people. This is a challenging task as clothes, even of the same category, have large variations in color and texture. The presence of human joints is the best indicator for cloth types as most of the clothes are consistently worn around the joints. We incorporate the human joint prior by estimating the body joint distributions using the detectors and learning the cloth-joint co-occurrences of different cloth types with respect to body joints. The cloth-joint and cloth-cloth co-occurrences are used as a part of the conditional random field framework to segment the image into different clothing. Our results indicate that we have outperformed the recent attempt [16] on H3D [3], a fairly complex dataset.",
"title": ""
},
{
"docid": "77a1198ac77a385ef80f5fb0accd1a59",
"text": "An enterprise resource planning system (ERP) is the information backbone of a company that integrates and automates all business operations. It is a critical issue to select the suitable ERP system which meets all the business strategies and the goals of the company. This study presents an approach to select a suitable ERP system for textile industry. Textile companies have some difficulties to implement ERP systems such as variant structure of products, production variety and unqualified human resources. At first, the vision and the strategies of the organization are checked by using balanced scorecard. According to the company’s vision, strategies and KPIs, we can prepare a request for proposal. Then ERP packages that do not meet the requirements of the company are eliminated. After strategic management phase, the proposed methodology gives advice before ERP selection. The criteria were determined and then compared according to their importance. The rest ERP system solutions were selected to evaluate. An external evaluation team consisting of ERP consultants was assigned to select one of these solutions according to the predetermined criteria. In this study, the fuzzy analytic hierarchy process, a fuzzy extension of the multi-criteria decision-making technique AHP, was used to compare these ERP system solutions. The methodology was applied for a textile manufacturing company. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dabcbdf63b15dff1153aad4b06303269",
"text": "In this chapter we present an overview of Web personalization process viewed as an application of data mining requiring support for all the phases of a typical data mining cycle. These phases include data collection and preprocessing, pattern discovery and evaluation, and finally applying the discovered knowledge in real-time to mediate between the user and the Web. This view of the personalization process provides added flexibility in leveraging multiple data sources and in effectively using the discovered models in an automatic personalization system. The chapter provides a detailed discussion of a host of activities and techniques used at different stages of this cycle, including the preprocessing and integration of data from multiple sources, as well as pattern discovery techniques that are typically applied to this data. We consider a number of classes of data mining algorithms used particularly for Web personalization, including techniques based on clustering, association rule discovery, sequential pattern mining, Markov models, and probabilistic mixture and hidden (latent) variable models. Finally, we discuss hybrid data mining frameworks that leverage data from a variety of channels to provide more effective personalization solutions.",
"title": ""
},
{
"docid": "57ff834b30f5e0f31c3382fed9c2a8ee",
"text": "Today's vehicles are becoming cyber-physical systems that not only communicate with other vehicles but also gather various information from hundreds of sensors within them. These developments help create smart and connected (e.g., self-driving) vehicles that will introduce significant information to drivers, manufacturers, insurance companies, and maintenance service providers for various applications. One such application that is becoming crucial with the introduction of self-driving cars is forensic analysis of traffic accidents. The utilization of vehicle-related data can be instrumental in post-accident scenarios to discover the faulty party, particularly for self-driving vehicles. With the opportunity of being able to access various information in cars, we propose a permissioned blockchain framework among the various elements involved to manage the collected vehicle-related data. Specifically, we first integrate vehicular public key infrastructure (VPKI) to the proposed blockchain to provide membership establishment and privacy. Next, we design a fragmented ledger that will store detailed data related to vehicles such as maintenance information/ history, car diagnosis reports, and so on. The proposed forensic framework enables trustless, traceable, and privacy-aware post-accident analysis with minimal storage and processing overhead.",
"title": ""
},
{
"docid": "30e89edb65cbf54b27115c037ee9c322",
"text": "AbstructIGBT’s are available with short-circuit withstand times approaching those of bipolar transistors. These IGBT’s can therefore be protected by the same relatively slow-acting circuitry. The more efficient IGBT’s, however, have lower shortcircuit withstand times. While protection of these types of IGBT’s is not difficult, it does require a reassessment of the traditional protection methods used for the bipolar transistors. An in-depth discussion on the behavior of IGBT’s under different short-circuit conditions is carried out and the effects of various parameters on permissible short-circuit time are analyzed. The paper also rethinks the problem of providing short-circuit protection in relation to the special characteristics of the most efficient IGBT’s. The pros and cons of some of the existing protection circuits are discussed and, based on the recommendations, a protection scheme is implemented to demonstrate that reliable short-circuit protection of these types of IGBT’s can be achieved without difficulty in a PWM motor-drive application. volts",
"title": ""
},
{
"docid": "260b39661df5cb7ddb9c4cf7ab8a36ba",
"text": "Deblurring camera-based document image is an important task in digital document processing, since it can improve both the accuracy of optical character recognition systems and the visual quality of document images. Traditional deblurring algorithms have been proposed to work for natural-scene images. However the natural-scene images are not consistent with document images. In this paper, the distinct characteristics of document images are investigated. We propose a content-aware prior for document image deblurring. It is based on document image foreground segmentation. Besides, an upper-bound constraint combined with total variation based method is proposed to suppress the rings in the deblurred image. Comparing with the traditional general purpose deblurring methods, the proposed deblurring algorithm can produce more pleasing results on document images. Encouraging experimental results demonstrate the efficacy of the proposed method.",
"title": ""
},
{
"docid": "1bb5e01e596d09e4ff89d7cb864ff205",
"text": "A number of recent approaches have used deep convolutional neural networks (CNNs) to build texture representations. Nevertheless, it is still unclear how these models represent texture and invariances to categorical variations. This work conducts a systematic evaluation of recent CNN-based texture descriptors for recognition and attempts to understand the nature of invariances captured by these representations. First we show that the recently proposed bilinear CNN model [25] is an excellent generalpurpose texture descriptor and compares favorably to other CNN-based descriptors on various texture and scene recognition benchmarks. The model is translationally invariant and obtains better accuracy on the ImageNet dataset without requiring spatial jittering of data compared to corresponding models trained with spatial jittering. Based on recent work [13, 28] we propose a technique to visualize pre-images, providing a means for understanding categorical properties that are captured by these representations. Finally, we show preliminary results on how a unified parametric model of texture analysis and synthesis can be used for attribute-based image manipulation, e.g. to make an image more swirly, honeycombed, or knitted. The source code and additional visualizations are available at http://vis-www.cs.umass.edu/texture.",
"title": ""
},
{
"docid": "43e831b69559ae228bae68b369dac2e3",
"text": "Virtualization technology enables Cloud providers to efficiently use their computing services and resources. Even if the benefits in terms of performance, maintenance, and cost are evident, however, virtualization has also been exploited by attackers to devise new ways to compromise a system. To address these problems, research security solutions have evolved considerably over the years to cope with new attacks and threat models. In this work, we review the protection strategies proposed in the literature and show how some of the solutions have been invalidated by new attacks, or threat models, that were previously not considered. The goal is to show the evolution of the threats, and of the related security and trust assumptions, in virtualized systems that have given rise to complex threat models and the corresponding sophistication of protection strategies to deal with such attacks. We also categorize threat models, security and trust assumptions, and attacks against a virtualized system at the different layers—in particular, hardware, virtualization, OS, and application.",
"title": ""
},
{
"docid": "cca94491276328a03e0a56e7460bf50f",
"text": "Because of large amounts of unstructured data generated on the Internet, entity relation extraction is believed to have high commercial value. Entity relation extraction is a case of information extraction and it is based on entity recognition. This paper firstly gives a brief overview of relation extraction. On the basis of reviewing the history of relation extraction, the research status of relation extraction is analyzed. Then the paper divides theses research into three categories: supervised machine learning methods, semi-supervised machine learning methods and unsupervised machine learning method, and toward to the deep learning direction.",
"title": ""
},
{
"docid": "b38939ec3c6f8e10553f934ceab401ff",
"text": "According to recent work in the new field of lexical pragmatics, the meanings of words are frequently pragmatically adjusted and fine-tuned in context, so that their contribution to the proposition expressed is different from their lexically encoded sense. Well-known examples include lexical narrowing (e.g. ‘drink’ used to mean ALCOHOLIC DRINK), approximation (or loosening) (e.g. ‘flat’ used to mean RELATIVELY FLAT) and metaphorical extension (e.g. ‘bulldozer’ used to mean FORCEFUL PERSON). These three phenomena are often studied in isolation from each other and given quite distinct kinds of explanation. In this chapter, we will propose a more unified account. We will try to show that narrowing, loosening and metaphorical extension are simply different outcomes of a single interpretive process which creates an ad hoc concept, or occasion-specific sense, based on interaction among encoded concepts, contextual information and pragmatic expectations or principles. We will outline an inferential account of the lexical adjustment process using the framework of relevance theory, and compare it with some alternative accounts. * This work is part of an AHRC-funded project ‘A Unified Theory of Lexical Pragmatics’ (AR16356). We are grateful to our research assistants, Patricia Kolaiti, Tim Wharton and, in particular, Rosa Vega Moreno, whose PhD work on metaphor we draw on in this paper, and to Vladimir Žegarac, François Recanati, Nausicaa Pouscoulous, Paula Rubio Fernandez and Hanna Stoever, for helpful discussions. We would also like to thank Dan Sperber for sharing with us many valuable insights on metaphor and on lexical pragmatics more generally.",
"title": ""
},
{
"docid": "170cd125882865150428b521d6220929",
"text": "In this paper, we propose a novel approach for action classification in soccer videos using a recurrent neural network scheme. Thereby, we extract from each video action at each timestep a set of features which describe both the visual content (by the mean of a BoW approach) and the dominant motion (with a key point based approach). A Long Short-Term Memory-based Recurrent Neural Network is then trained to classify each video sequence considering the temporal evolution of the features for each timestep. Experimental results on the MICC-Soccer-Actions-4 database show that the proposed approach outperforms classification methods of related works (with a classification rate of 77 %), and that the combination of the two features (BoW and dominant motion) leads to a classification rate of 92 %.",
"title": ""
},
{
"docid": "088d6f1cd3c19765df8a16cd1a241d18",
"text": "Legged robots need to be able to classify and recognize different terrains to adapt their gait accordingly. Recent works in terrain classification use different types of sensors (like stereovision, 3D laser range, and tactile sensors) and their combination. However, such sensor systems require more computing power, produce extra load to legged robots, and/or might be difficult to install on a small size legged robot. In this work, we present an online terrain classification system. It uses only a monocular camera with a feature-based terrain classification algorithm which is robust to changes in illumination and view points. For this algorithm, we extract local features of terrains using either Scale Invariant Feature Transform (SIFT) or Speed Up Robust Feature (SURF). We encode the features using the Bag of Words (BoW) technique, and then classify the words using Support Vector Machines (SVMs) with a radial basis function kernel. We compare this feature-based approach with a color-based approach on the Caltech-256 benchmark as well as eight different terrain image sets (grass, gravel, pavement, sand, asphalt, floor, mud, and fine gravel). For terrain images, we observe up to 90% accuracy with the feature-based approach. Finally, this online terrain classification system is successfully applied to our small hexapod robot AMOS II. The output of the system providing terrain information is used as an input to its neural locomotion control to trigger an energy-efficient gait while traversing different terrains.",
"title": ""
}
] |
scidocsrr
|
281a38b008e433a49825c69381ae6e7e
|
Automatically Predicting Peer-Review Helpfulness
|
[
{
"docid": "6d2abcdd728a2355259c60c870b411a4",
"text": "Although providing feedback is commonly practiced in education, there is no general agreement regarding what type of feedback is most helpful and why it is helpful. This study examined the relationship between various types of feedback, potential internal mediators, and the likelihood of implementing feedback. Five main predictions were developed from the feedback literature in writing, specifically regarding feedback features (summarization, identifying problems, providing solutions, localization, explanations, scope, praise, and mitigating language) as they relate to potential causal mediators of problem or solution understanding and problem or solution agreement, leading to the final outcome of feedback implementation. To empirically test the proposed feedback model, 1,073 feedback segments from writing assessed by peers was analyzed. Feedback was collected using SWoRD, an online peer review system. Each segment was coded for each of the feedback features, implementation, agreement, and understanding. The correlations between the feedback features, levels of mediating variables, and implementation rates revealed several significant relationships. Understanding was the only significant mediator of implementation. Several feedback features were associated with understanding: including solutions, a summary of the performance, and the location of the problem were associated with increased understanding; and explanations of problems were associated with decreased understanding. Implications of these results are discussed.",
"title": ""
},
{
"docid": "5f366ed9a90448be28c1ec9249b4ec96",
"text": "With the rapid growth of the Internet, the ability of users to create and publish content has created active electronic communities that provide a wealth of product information. However, the high volume of reviews that are typically published for a single product makes harder for individuals as well as manufacturers to locate the best reviews and understand the true underlying quality of a product. In this paper, we reexamine the impact of reviews on economic outcomes like product sales and see how different factors affect social outcomes such as their perceived usefulness. Our approach explores multiple aspects of review text, such as subjectivity levels, various measures of readability and extent of spelling errors to identify important text-based features. In addition, we also examine multiple reviewer-level features such as average usefulness of past reviews and the self-disclosed identity measures of reviewers that are displayed next to a review. Our econometric analysis reveals that the extent of subjectivity, informativeness, readability, and linguistic correctness in reviews matters in influencing sales and perceived usefulness. Reviews that have a mixture of objective, and highly subjective sentences are negatively associated with product sales, compared to reviews that tend to include only subjective or only objective information. However, such reviews are rated more informative (or helpful) by other users. By using Random Forest-based classifiers, we show that we can accurately predict the impact of reviews on sales and their perceived usefulness. We examine the relative importance of the three broad feature categories: “reviewer-related” features, “review subjectivity” features, and “review readability” features, and find that using any of the three feature sets results in a statistically equivalent performance as in the case of using all available features. This paper is the first study that integrates econometric, text mining, and predictive modeling techniques toward a more complete analysis of the information captured by user-generated online reviews in order to estimate their helpfulness and economic impact.",
"title": ""
}
] |
[
{
"docid": "711b8ac941db1e6e1eef093ca340717b",
"text": "Deep neural networks (DNNs) have a wide range of applications, and software employing them must be thoroughly tested, especially in safety critical domains. However, traditional software testing methodology, including test coverage criteria and test case generation algorithms, cannot be applied directly to DNNs. This paper bridges this gap. First, inspired by the traditional MC/DC coverage criterion, we propose a set of four test criteria that are tailored to the distinct features of DNNs. Our novel criteria are incomparable and complement each other. Second, for each criterion, we give an algorithm for generating test cases based on linear programming (LP). The algorithms produce a new test case (i.e., an input to the DNN) by perturbing a given one. They encode the test requirement and a fragment of the DNN by fixing the activation pattern obtained from the given input example, and then minimize the difference between the new and the current inputs. Finally, we validate our method on a set of networks trained on the MNIST dataset. The utility of our method is shown experimentally with four objectives: (1) bug finding; (2) DNN safety statistics; (3) testing efficiency and (4) DNN internal structure analysis.",
"title": ""
},
{
"docid": "6dc4e4949d4f37f884a23ac397624922",
"text": "Research indicates that maladaptive patterns of Internet use constitute behavioral addiction. This article explores the research on the social effects of Internet addiction. There are four major sections. The Introduction section overviews the field and introduces definitions, terminology, and assessments. The second section reviews research findings and focuses on several key factors related to Internet addiction, including Internet use and time, identifiable problems, gender differences, psychosocial variables, and computer attitudes. The third section considers the addictive potential of the Internet in terms of the Internet, its users, and the interaction of the two. The fourth section addresses current and projected treatments of Internet addiction, suggests future research agendas, and provides implications for educational psychologists.",
"title": ""
},
{
"docid": "6bdcac1d424162a89adac7fa2a6221ae",
"text": "The growing popularity of online product review forums invites people to express opinions and sentiments toward the products .It gives the knowledge about the product as well as sentiment of people towards the product. These online reviews are very important for forecasting the sales performance of product. In this paper, we discuss the online review mining techniques in movie domain. Sentiment PLSA which is responsible for finding hidden sentiment factors in the reviews and ARSA model used to predict sales performance. An Autoregressive Sentiment and Quality Aware model (ARSQA) also in consideration for to build the quality for predicting sales performance. We propose clustering and classification based algorithm for sentiment analysis.",
"title": ""
},
{
"docid": "ee9d84f08326cf48116337595dbe07f7",
"text": "Facial fractures were described as early as the seventeenth century BC in the Edwin Smith surgical papyrus. In the eighteenth century, the French surgeon Desault described the unique propensity of the mandible to fracture in the narrow subcondylar region, which is commonly observed to this day. In a recent 5-year review of the National Trauma Data Base with more than 13,000 mandible fractures, condylar and subcondylar fractures made up 14.8% and 12.6% of all fractures respectively; taken together, more than any other site alone. This study, along with others, have confirmed that most modern-age condylar fractures occur in men, and are most often caused by motor vehicle accidents, and assaults. Historically, condylar fractures were managed in a closed fashion with various forms of immobilization or maxillomandibular fixation, with largely favorable results. Although the goals of treatment are the restoration of form and function, closed treatment relies on patient adaptation to an altered anatomy, because anatomic repositioning of the proximal segment is not achieved. However, the human body has a remarkable ability to adapt, and it remains an appropriate treatment of a large number of condylar fractures, including intracapsular fractures, fractures with minimal or no displacement, almost all pediatric condylar fractures, and fractures in patients whose medical or social situations preclude other forms of treatment. With advances in the understanding of osteosynthesis and an appreciation of surgical anatomy, open",
"title": ""
},
{
"docid": "7b6e811ea3f227c33755049355949eaf",
"text": "We revisit the task of learning a Euclidean metric from data. We approach this problem from first principles and formulate it as a surprisingly simple optimization problem. Indeed, our formulation even admits a closed form solution. This solution possesses several very attractive propertie s: (i) an innate geometric appeal through the Riemannian geometry of positive definite matrices; (ii) ease of interpretability; and (iii) computational speed several orders of magnitude faster tha n the widely used LMNN and ITML methods. Furthermore, on standard benchmark datasets, our closed-form solution consist ently attains higher classification accuracy.",
"title": ""
},
{
"docid": "1ecf6f45f0dabd484bc736a5b54fda91",
"text": "BACKGROUND\nDaily suppressive therapy with valacyclovir reduces risk of sexual transmission of herpes simplex virus type 2 (HSV-2) in HSV-2-serodiscordant heterosexual couples by 48%. Whether suppressive therapy reduces HSV-2 transmission from persons coinfected with HSV-2 and human immunodeficiency virus type 1 (HIV-1) is unknown.\n\n\nMETHODS\nWithin a randomized trial of daily acyclovir 400 mg twice daily in African HIV-1 serodiscordant couples, in which the HIV-1-infected partner was HSV-2 seropositive, we identified partnerships in which HIV-1-susceptible partners were HSV-2 seronegative to estimate the effect of acyclovir on risk of HSV-2 transmission.\n\n\nRESULTS\nWe randomly assigned 911 HSV-2/HIV-1-serodiscordant couples to daily receipt of acyclovir or placebo. We observed 68 HSV-2 seroconversions, 40 and 28 in acyclovir and placebo groups, respectively (HSV-2 incidence, 5.1 cases per 100 person-years; hazard ratio [HR], 1.35 [95% confidence interval, .83-2.20]; P = .22). Among HSV-2-susceptible women, vaginal drying practices (adjusted HR, 44.35; P = .004) and unprotected sex (adjusted HR, 9.91; P = .002) were significant risk factors for HSV-2 acquisition; having more children was protective (adjusted HR, 0.47 per additional child; P = .012). Among HSV-2-susceptible men, only age ≤30 years was associated with increased risk of HSV-2 acquisition (P = .016).\n\n\nCONCLUSIONS\nTreatment of African HSV-2/HIV-1-infected persons with daily suppressive acyclovir did not decrease risk of HSV-2 transmission to susceptible partners. More-effective prevention strategies to reduce HSV-2 transmission from HIV-1-infected persons are needed.",
"title": ""
},
{
"docid": "06129167c187b96e3c064e05c2b475f8",
"text": "Elderly patients with acute myeloid leukemia (AML) who are refractory to or relapse following frontline treatment constitute a poor-risk group with a poor long-term outcome. Host-related factors and unfavorable disease-related features contribute to early treatment failures following frontline therapy, thus making attainment of remission and long-term survival with salvage therapy particularly challenging for elderly patients. Currently, no optimal salvage strategy exists for responding patients, and allogeneic hematopoietic stem cell transplant is the only curative option in this setting; however, the vast majority of elderly patients are not candidates for this procedure due to poor functional status secondary to age and age-related comorbidities. Furthermore, the lack of effective salvage programs available for elderly patients with recurrent AML underscores the need for therapies that consistently yield durable remissions or durable control of their disease. The purpose of this review was to highlight the currently available strategies, as well as future strategies under development, for treating older patients with recurrent AML.",
"title": ""
},
{
"docid": "81c90998c5e456be34617e702dbfa4f5",
"text": "In this paper, a new unsupervised learning algorithm, namely Nonnegative Discriminative Feature Selection (NDFS), is proposed. To exploit the discriminative information in unsupervised scenarios, we perform spectral clustering to learn the cluster labels of the input samples, during which the feature selection is performed simultaneously. The joint learning of the cluster labels and feature selection matrix enables NDFS to select the most discriminative features. To learn more accurate cluster labels, a nonnegative constraint is explicitly imposed to the class indicators. To reduce the redundant or even noisy features, `2,1-norm minimization constraint is added into the objective function, which guarantees the feature selection matrix sparse in rows. Our algorithm exploits the discriminative information and feature correlation simultaneously to select a better feature subset. A simple yet efficient iterative algorithm is designed to optimize the proposed objective function. Experimental results on different real world datasets demonstrate the encouraging performance of our algorithm over the state-of-the-arts. Introduction The dimension of data is often very high in many domains (Jain and Zongker 1997; Guyon and Elisseeff 2003), such as image and video understanding (Wang et al. 2009a; 2009b), and bio-informatics. In practice, not all the features are important and discriminative, since most of them are often correlated or redundant to each other, and sometimes noisy (Duda, Hart, and Stork 2001; Liu, Wu, and Zhang 2011). These features may result in adverse effects in some learning tasks, such as over-fitting, low efficiency and poor performance (Liu, Wu, and Zhang 2011). Consequently, it is necessary to reduce dimensionality, which can be achieved by feature selection or transformation to a low dimensional space. In this paper, we focus on feature selection, which is to choose discriminative features by eliminating the ones with little or no predictive information based on certain criteria. Many feature selection algorithms have been proposed, which can be classified into three main families: filter, wrapper, and embedded methods. The filter methods (Duda, Hart, Copyright c © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. and Stork 2001; He, Cai, and Niyogi 2005; Zhao and Liu 2007; Masaeli, Fung, and Dy 2010; Liu, Wu, and Zhang 2011; Yang et al. 2011a) use statistical properties of the features to filter out poorly informative ones. They are usually performed before applying classification algorithms. They select a subset of features only based on the intrinsic properties of the data. In the wrapper approaches (Guyon and Elisseeff 2003; Rakotomamonjy 2003), feature selection is “wrapped” in a learning algorithm and the classification performance of features is taken as the evaluation criterion. Embedded methods (Vapnik 1998; Zhu et al. 2003) perform feature selection in the process of model construction. In contrast with filter methods, wrapper and embedded methods are tightly coupled with in-built classifiers, which causes that they are less generality and computationally expensive. In this paper, we focus on the filter feature selection algorithm. Because of the importance of discriminative information in data analysis, it is beneficial to exploit discriminative information for feature selection, which is usually encoded in labels. 
However, how to select discriminative features in unsupervised scenarios is a significant but hard task due to the lack of labels. In light of this, we propose a novel unsupervised feature selection algorithm, namely Nonnegative Discriminative Feature Selection (NDFS), in this paper. We perform spectral clustering and feature selection simultaneously to select the discriminative features for unsupervised learning. The cluster label indicators are obtained by spectral clustering to guide the feature selection procedure. Different from most of the previous spectral clustering algorithms (Shi and Malik 2000; Yu and Shi 2003), we explicitly impose a nonnegative constraint into the objective function, which is natural and reasonable as discussed later in this paper. With nonnegative and orthogonality constraints, the learned cluster indicators are much closer to the ideal results and can be readily utilized to obtain cluster labels. Our method exploits the discriminative information and feature correlation in a joint framework. For the sake of feature selection, the feature selection matrix is constrained to be sparse in rows, which is formulated as an ℓ2,1-norm minimization term. To solve the proposed problem, a simple yet effective iterative algorithm is proposed. Extensive experiments are conducted on different datasets, which show that the proposed approach outperforms the state-of-the-arts in different applications.",
"title": ""
},
{
"docid": "0105070bd23400083850627b1603af0b",
"text": "This research covers an endeavor by the author on the usage of automated vision and navigation framework; the research is conducted by utilizing a Kinect sensor requiring minimal effort framework for exploration purposes in the zone of robot route. For this framework, GMapping (a highly efficient Rao-Blackwellized particle filer to learn grid maps from laser range data) parameters have been optimized to improve the accuracy of the map generation and the laser scan. With the use of Robot Operating System (ROS), the open source GMapping bundle was utilized as a premise for a map era and Simultaneous Localization and Mapping (SLAM). Out of the many different map generation techniques, the tele-operation used is interactive marker, which controls the TurtleBot 2 movements via RVIZ (3D visualization tool for ROS). Test results completed with the multipurpose robot in a counterfeit and regular environment represents the preferences of the proposed strategy. From experiments, it is found that Kinect sensor produces a more accurate map compared to non-filtered laser range finder data, which is excellent since the price of a Kinect sensor is much cheaper than a laser range finder. An expansion of experimental results was likewise done to test the performance of the portable robot frontier exploring in an obscure environment while performing SLAM alongside the proposed technique.",
"title": ""
},
{
"docid": "dc198f396142376e36d7143a5bfe7d19",
"text": "Successful direct pulp capping of cariously exposed permanent teeth with reversible pulpitis and incomplete apex formation can prevent the need for root canal treatment. A case report is presented which demonstrates the use of mineral trioxide aggregate (MTA) as a direct pulp capping material for the purpose of continued maturogenesis of the root. Clinical and radiographic follow-up demonstrated a vital pulp and physiologic root development in comparison with the contralateral tooth. MTA can be considered as an effective material for vital pulp therapy, with the goal of maturogenesis.",
"title": ""
},
{
"docid": "4cfedb5e516692b12a610c4211e6fdd4",
"text": "Supporters of market-based education reforms argue that school autonomy and between-school competition can raise student achievement. Yet U.S. reforms based in part on these ideas charter schools, school-based management, vouchers and school choice are limited in scope, complicating evaluations of their impact. In contrast, a series of remarkable reforms enacted by the Thatcher Government in Britain in the 1980s provide an ideal testing ground for examining the effects of school autonomy and between-school competition. In this paper I study one reform described by Chubb and Moe (1992) as ‘truly revolutionary’ that allowed public high schools to ‘opt out’ of the local school authority and become quasi-independent, funded directly by central Government. In order to opt out schools had to first win a majority vote of current parents, and I assess the impact of school autonomy via a regression discontinuity design, comparing student achievement levels at schools where the vote barely won to those where it barely lost. To assess the effects of competition I use this same idea to compare student achievement levels at neighbouring schools of barely winners to neighbouring schools of barely losers. My results suggest two conclusions. First, there were large gains to schools that won the vote and opted out, on the order of a onequarter standard deviation improvement on standardised national examinations. Since results improved for those students already enrolled in the school at the time of the vote, this outcome is not likely to be driven by changes in student-body composition (cream-skimming). Second, the gains enjoyed by the opted-out schools appear not to have spilled over to their neighbours I can never reject the hypothesis of no spillovers and can always reject effects bigger than one half of the ‘own-school’ impact. I interpret my results as supportive of education reforms that seek to hand power to schools, with the caveat that I do not know precisely what opted-out schools did to improve. With regards to competition, although I cannot rule out small but economically important competition effects, my results suggest caution as to the likely benefits.",
"title": ""
},
{
"docid": "86c19291942c1eeeb38abd1531801731",
"text": "There exist a lot of challenges in trajectory planning for autonomous driving: 1) Needs of both spatial and temporal planning for highly dynamic environments; 2) Nonlinear vehicle models and non-convex collision avoidance constraints. 3) High computational efficiency for real-time implementation. Iterative Linear Quadratic Regulator (ILQR) is an algorithm which solves predictive optimal control problem with nonlinear system very efficiently. However, it can not deal with constraints. In this paper, the Constrained Iterative LQR (CILQR) is proposed to handle the constraints in ILQR. Then an on road driving problem is formulated. Simulation case studies show the capability of the CILQR algorithm to solve the on road driving motion planning problem.",
"title": ""
},
{
"docid": "bc1f6f7a18372ce618c82f94a3091fd9",
"text": "THE INTERNATIONAL JOURNAL OF ESTHETIC DENTISTRY The management of individual cases presents each clinician with a variety of attractive options and sophisticated evidence-based solutions. Financial constraints can often restrict these options and limit the choice pathways that can be offered. The case presented here demonstrates the management of severe erosion on the maxillary anterior teeth via a minimally invasive, practical, and economic route. When tooth surface loss occurs,1 it can be clinically challenging to isolate a single etiological factor since it is usually multifactorial in origin. The patient presented with the classic signs of erosion (Fig 1a). A major causative factor of this erosion was a large consumption of carbonated beverages on a daily basis over a number of years. Chronic exposure of dental hard tissues to acidic substrates led to extensive enamel and dentin loss from both intrinsic and extrinsic sources (Fig 1b and c). The ACE classification guides the clinician on the management options of treatment modalities, which are dependent on the severity of the erosion.2 A clinical case involving severe erosion of the maxillary anterior teeth restored with direct composite resin restorations",
"title": ""
},
{
"docid": "e1485bddbab0c3fa952d045697ff2112",
"text": "The diversity of an ensemble of classifiers is known to be an important factor in determining its generalization error. We present a new method for generating ensembles, Decorate (Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples), that directly constructs diverse hypotheses using additional artificially-constructed training examples. The technique is a simple, general meta-learner that can use any strong learner as a base classifier to build diverse committees. Experimental results using decision-tree induction as a base learner demonstrate that this approach consistently achieves higher predictive accuracy than the base classifier, Bagging and Random Forests. Decorate also obtains higher accuracy than Boosting on small training sets, and achieves comparable performance on larger training sets.",
"title": ""
},
{
"docid": "34546e42bd78161259d2bc190e36c9f7",
"text": "Peer to Peer networks are the leading cause for music piracy but also used for music sampling prior to purchase. In this paper we investigate the relations between music file sharing and sales (both physical and digital)using large Peer-to-Peer query database information. We compare file sharing information on songs to their popularity on the Billboard Hot 100 and the Billboard Digital Songs charts, and show that popularity trends of songs on the Billboard have very strong correlation (0.88-0.89) to their popularity on a Peer-to-Peer network. We then show how this correlation can be utilized by common data mining algorithms to predict a song's success in the Billboard in advance, using Peer-to-Peer information.",
"title": ""
},
{
"docid": "9bb0ee77990ead987b49ab4180edd99f",
"text": "Stacked graphs are a visualization technique popular in casual scenarios for representing multiple time-series. Variations of stacked graphs have been focused on reducing the distortion of individual streams because foundational perceptual studies suggest that variably curved slopes may make it difficult to accurately read and compare values. We contribute to this discussion by formally comparing the relative readability of basic stacked area charts, ThemeRivers, streamgraphs and our own interactive technique for straightening baselines of individual streams in a ThemeRiver. We used both real-world and randomly generated datasets and covered tasks at the elementary, intermediate and overall information levels. Results indicate that the decreased distortion of the newer techniques does appear to improve their readability, with streamgraphs performing best for value comparison tasks. We also found that when a variety of tasks is expected to be performed, using the interactive version of the themeriver leads to more correctness at the cost of being slower for value comparison tasks.",
"title": ""
},
{
"docid": "b9e785238c4fb438bada46f196915cdc",
"text": "* Faculty of Information Technology, Rangsit University. Abstract With the rapidly increasing number of Thai text documents available in digital media and websites, it is important to find an efficient text indexing technique to facilitate search and retrieval. An efficient index would speed up the response time and improve the accessibility of the documents. Up to now, not much research in Thai text indexing has been conducted as compared to more commonly used languages like English or other European languages. In Thai text indexing, the extraction of indexing terms becomes a main issue because they cannot be specified automatically from text documents, due to the nature of Thai texts being non-segmented. As a result, there are many challenges for indexing Thai text documents. The ma-jority of Thai text indexing techniques can be divided into two main categories: a language-dependent technique and a lan-guage-independent technique as will be described in this paper.",
"title": ""
},
{
"docid": "68810ad35e71ea7d080e7433e227e40e",
"text": "Mobile devices, ubiquitous in modern lifestyle, embody and provide convenient access to our digital lives. Being small and mobile, they are easily lost or stole, therefore require strong authentication to mitigate the risk of unauthorized access. Common knowledge-based mechanism like PIN or pattern, however, fail to scale with the high frequency but short duration of device interactions and ever increasing number of mobile devices carried simultaneously. To overcome these limitations, we present CORMORANT, an extensible framework for risk-aware multi-modal biometric authentication across multiple mobile devices that offers increased security and requires less user interaction.",
"title": ""
},
{
"docid": "f6ec04f704c58514865206f759ac6d67",
"text": "Speech recognition is the key to realize man-machine interface technology. In order to improve the accuracy of speech recognition and implement the module on embedded system, an embedded speaker-independent isolated word speech recognition system based on ARM is designed after analyzing speech recognition theory. The system uses DTW algorithm and improves the algorithm using a parallelogram to extract characteristic parameters and identify the results. To finish the speech recognition independently, the system uses the STM32 series chip combined with the other external circuitry. The results of speech recognition test can achieve 90%, and which meets the real-time requirements of recognition.",
"title": ""
},
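Since the passage above relies on dynamic time warping (DTW) to compare feature sequences, a plain NumPy version of the basic DTW distance is sketched below; the parallelogram (slope) constraint and the embedded implementation mentioned in the abstract are not reproduced here.
```python
import numpy as np

def dtw_distance(a, b):
    """Basic DTW distance between two feature sequences a (n x d) and b (m x d)."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])          # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],              # insertion
                                 cost[i, j - 1],              # deletion
                                 cost[i - 1, j - 1])          # match
    return cost[n, m]

# Toy example: two short 2-dimensional feature tracks.
print(dtw_distance(np.array([[0.0, 1.0], [1.0, 1.5], [2.0, 1.0]]),
                   np.array([[0.1, 0.9], [1.1, 1.4], [1.9, 1.2], [2.1, 1.0]])))
```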
{
"docid": "2f0d6b9bee323a75eea3d15a3cabaeb6",
"text": "OBJECTIVE\nThis article reviews the mechanisms and pathophysiology of traumatic brain injury (TBI).\n\n\nMETHODS\nResearch on the pathophysiology of diffuse and focal TBI is reviewed with an emphasis on damage that occurs at the cellular level. The mechanisms of injury are discussed in detail including the factors and time course associated with mild to severe diffuse injury as well as the pathophysiology of focal injuries. Examples of electrophysiologic procedures consistent with recent theory and research evidence are presented.\n\n\nRESULTS\nAcceleration/deceleration (A/D) forces rarely cause shearing of nervous tissue, but instead, initiate a pathophysiologic process with a well defined temporal progression. The injury foci are considered to be diffuse trauma to white matter with damage occurring at the superficial layers of the brain, and extending inward as A/D forces increase. Focal injuries result in primary injuries to neurons and the surrounding cerebrovasculature, with secondary damage occurring due to ischemia and a cytotoxic cascade. A subset of electrophysiologic procedures consistent with current TBI research is briefly reviewed.\n\n\nCONCLUSIONS\nThe pathophysiology of TBI occurs over time, in a pattern consistent with the physics of injury. The development of electrophysiologic procedures designed to detect specific patterns of change related to TBI may be of most use to the neurophysiologist.\n\n\nSIGNIFICANCE\nThis article provides an up-to-date review of the mechanisms and pathophysiology of TBI and attempts to address misconceptions in the existing literature.",
"title": ""
}
] |
scidocsrr
|
3439d981bf62de851f1d7d695df797d1
|
AutoCog: Measuring the Description-to-permission Fidelity in Android Applications
|
[
{
"docid": "b91c93a552e7d7cc09d477289c986498",
"text": "Application Programming Interface (API) documents are a typical way of describing legal usage of reusable software libraries, thus facilitating software reuse. However, even with such documents, developers often overlook some documents and build software systems that are inconsistent with the legal usage of those libraries. Existing software verification tools require formal specifications (such as code contracts), and therefore cannot directly verify the legal usage described in natural language text of API documents against the code using that library. However, in practice, most libraries do not come with formal specifications, thus hindering tool-based verification. To address this issue, we propose a novel approach to infer formal specifications from natural language text of API documents. Our evaluation results show that our approach achieves an average of 92% precision and 93% recall in identifying sentences that describe code contracts from more than 2500 sentences of API documents. Furthermore, our results show that our approach has an average 83% accuracy in inferring specifications from over 1600 sentences describing code contracts.",
"title": ""
},
{
"docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24",
"text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.",
"title": ""
}
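As a rough illustration of the sentence-level task WHYPER addresses, the sketch below flags description sentences that mention permission-related terms. This is only a crude keyword baseline under assumed keyword lists; WHYPER itself uses NLP parsing and semantic matching rather than keywords.
```python
# Hypothetical keyword lists; the real WHYPER system derives semantics from API documents.
PERMISSION_TERMS = {
    "READ_CONTACTS": ["contact", "address book", "phone book"],
    "READ_CALENDAR": ["calendar", "event", "appointment"],
    "RECORD_AUDIO": ["record", "voice", "audio", "microphone"],
}

def sentences_mentioning(description: str, permission: str) -> list[str]:
    """Return description sentences that appear to motivate the given permission."""
    terms = PERMISSION_TERMS[permission]
    sentences = [s.strip() for s in description.split(".") if s.strip()]
    return [s for s in sentences if any(t in s.lower() for t in terms)]

demo = "Blow into the mic to extinguish the flame. Share your scores with friends."
print(sentences_mentioning(demo, "RECORD_AUDIO"))
```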
] |
[
{
"docid": "0fb16cdc0b8b8371493fb57cbfacec4f",
"text": "Recent years have seen an expansion of interest in non-pharmacological interventions for attention-deficit/hyperactivity disorder (ADHD). Although considerable treatment development has focused on cognitive training programs, compelling evidence indicates that intense aerobic exercise enhances brain structure and function, and as such, might be beneficial to children with ADHD. This paper reviews evidence for a direct impact of exercise on neural functioning and preliminary evidence that exercise may have positive effects on children with ADHD. At present, data are promising and support the need for further study, but are insufficient to recommend widespread use of such interventions for children with ADHD.",
"title": ""
},
{
"docid": "28b2da27bf62b7989861390a82940d88",
"text": "End users are said to be “the weakest link” in information systems (IS) security management in the workplace. they often knowingly engage in certain insecure uses of IS and violate security policies without malicious intentions. Few studies, however, have examined end user motivation to engage in such behavior. to fill this research gap, in the present study we propose and test empirically a nonmalicious security violation (NMSV) model with data from a survey of end users at work. the results suggest that utilitarian outcomes (relative advantage for job performance, perceived security risk), normative outcomes (workgroup norms), and self-identity outcomes (perceived identity match) are key determinants of end user intentions to engage in NMSVs. In contrast, the influences of attitudes toward security policy and perceived sanctions are not significant. this study makes several significant contributions to research on security-related behavior by (1) highlighting the importance of job performance goals and security risk perceptions on shaping user attitudes, (2) demonstrating the effect of workgroup norms on both user attitudes and behavioral intentions, (3) introducing and testing the effect of perceived identity match on user attitudes and behavioral intentions, and (4) identifying nonlinear relationships between constructs. this study also informs security management practices on the importance of linking security and business objectives, obtaining user buy-in of security measures, and cultivating a culture of secure behavior at local workgroup levels in organizations. KeY words and PHrases: information systems security, nonlinear construct relationships, nonmalicious security violation, perceived identity match, perceived security risk, relative advantage for job performance, workgroup norms. information sYstems (is) securitY Has become a major cHallenGe for organizations thanks to the increasing corporate use of the Internet and, more recently, wireless networks. In the 2010 computer Security Institute (cSI) survey of computer security practitioners in u.S. organizations, more than 41 percent of the respondents reported security incidents [68]. In the united Kingdom, a similar survey found that 45 percent of the participating companies had security incidents in 2008 [37]. While the causes for these security incidents may be difficult to fully identify, it is generally understood that insiders from within organizations pose a major threat to IS security [36, 55]. For example, peer-to-peer file-sharing software installed by employees may cause inadvertent disclosure of sensitive business information over the Internet [41]. Employees writing down passwords on a sticky note or choosing easy-to-guess passwords may risk having their system access privilege be abused by others [98]. the 2010 cSI survey found that nonmalicious insiders are a big issue [68]. according to the survey, more than 14 percent of the respondents reported that nearly all their losses were due to nonmalicious, careless behaviors of insiders. Indeed, end users are often viewed as “the weakest link” in the IS security chain [73], and fundamentally IS security has a “behavioral root” [94]. uNDErStaNDING NONMalIcIOuS SEcurItY VIOlatIONS IN tHE WOrKPlacE 205 a frequently recommended organizational measure for dealing with internal threats posed by end user behavior is security policy [6]. 
For example, a security policy may specify what end users should (or should not) do with organizational IS assets, and it may also spell out the consequences of policy violations. Having a policy in place, however, does not necessarily guarantee security because end users may not always act as prescribed [7]. a practitioner survey found that even if end users were aware of potential security problems related to their actions, many of them did not follow security best practices and continued to engage in behaviors that could open their organizations’ IS to serious security risks [62]. For example, the survey found that many employees allowed others to use their computing devices at work despite their awareness of possible security implications. It was also reported that many end users do not follow policies and some of them knowingly violate policies without worry of repercussions [22]. this phenomenon raises an important question: What factors motivate end users to engage in such behaviors? the role of motivation has not been considered seriously in the IS security literature [75] and our understanding of the factors that motivate those undesirable user behaviors is still very limited. to fill this gap, the current study aims to investigate factors that influence end user attitudes and behavior toward organizational IS security. the rest of the paper is organized as follows. In the next section, we review the literature on end user security-related behaviors. We then propose a theoretical model of nonmalicious security violation and develop related hypotheses. this is followed by discussions of our research methods and data analysis. In the final section, we discuss our findings, implications for research and practice, limitations, and further research directions.",
"title": ""
},
{
"docid": "76d4ed8e7692ca88c6b5a70c9954c0bd",
"text": "Custom-tailored products are meant by the products having various sizes and shapes to meet the customer’s different tastes or needs. Thus fabrication of custom-tailored products inherently involves inefficiency. Custom-tailoring shoes are not an exception because corresponding shoe-lasts must be custom-ordered. It would be nice if many template shoe-lasts had been cast in advance, the most similar template was identified automatically from the custom-ordered shoe-last, and only the different portions in the template shoe-last could be machined. To enable this idea, the first step is to derive the geometric models of template shoe-lasts to be cast. Template shoe-lasts can be derived by grouping all the various existing shoe-lasts into manageable number of groups and by uniting all the shoe-lasts in each group such that each template shoe-last for each group barely encloses all the shoe-lasts in the group. For grouping similar shoe-lasts into respective groups, similarity between shoe-lasts should be quantized. Similarity comparison starts with the determination of the closest pose between two shapes in consideration. The closest pose is derived by comparing the ray distances while one shape is virtually rotated with respect to the other. Shape similarity value and overall similarity value calculated from ray distances are also used for grouping. A prototype system based on the proposed methodology has been implemented and applied to grouping of the shoe-lasts of various shapes and sizes and deriving template shoe-lasts. q 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c7f23ddb60394659cdf48ea4df68ae6b",
"text": "OBJECTIVES\nWe hypothesized reduction of 30 days' in-hospital morbidity, mortality, and length of stay postimplementation of the World Health Organization's Surgical Safety Checklist (SSC).\n\n\nBACKGROUND\nReductions of morbidity and mortality have been reported after SSC implementation in pre-/postdesigned studies without controls. Here, we report a randomized controlled trial of the SSC.\n\n\nMETHODS\nA stepped wedge cluster randomized controlled trial was conducted in 2 hospitals. We examined effects on in-hospital complications registered by International Classification of Diseases, Tenth Revision codes, length of stay, and mortality. The SSC intervention was sequentially rolled out in a random order until all 5 clusters-cardiothoracic, neurosurgery, orthopedic, general, and urologic surgery had received the Checklist. Data were prospectively recorded in control and intervention stages during a 10-month period in 2009-2010.\n\n\nRESULTS\nA total of 2212 control procedures were compared with 2263 SCC procedures. The complication rates decreased from 19.9% to 11.5% (P < 0.001), with absolute risk reduction 8.4 (95% confidence interval, 6.3-10.5) from the control to the SSC stages. Adjusted for possible confounding factors, the SSC effect on complications remained significant with odds ratio 1.95 (95% confidence interval, 1.59-2.40). Mean length of stay decreased by 0.8 days with SCC utilization (95% confidence interval, 0.11-1.43). In-hospital mortality decreased significantly from 1.9% to 0.2% in 1 of the 2 hospitals post-SSC implementation, but the overall reduction (1.6%-1.0%) across hospitals was not significant.\n\n\nCONCLUSIONS\nImplementation of the WHO SSC was associated with robust reduction in morbidity and length of in-hospital stay and some reduction in mortality.",
"title": ""
},
{
"docid": "4729691ffa6e252187a1a663e85fde8b",
"text": "Language models are used in automatic transcription system to resolve ambiguities. This is done by limiting the vocabulary of words that can be recognized as well as estimating the n-gram probability of the words in the given text. In the context of historical documents, a non-unified spelling and the limited amount of written text pose a substantial problem for the selection of the recognizable vocabulary as well as the computation of the word probabilities. In this paper we propose for the transcription of historical Spanish text to keep the corpus for the n-gram limited to a sample of the target text, but expand the vocabulary with words gathered from external resources. We analyze the performance of such a transcription system with different sizes of external vocabularies and demonstrate the applicability and the significant increase in recognition accuracy of using up to 300 thousand external words.",
"title": ""
},
{
"docid": "db5865f8f8701e949a9bb2f41eb97244",
"text": "This paper proposes a method for constructing local image descriptors which efficiently encode texture information and are suitable for histogram based representation of image regions. The method computes a binary code for each pixel by linearly projecting local image patches onto a subspace, whose basis vectors are learnt from natural images via independent component analysis, and by binarizing the coordinates in this basis via thresholding. The length of the binary code string is determined by the number of basis vectors. Image regions can be conveniently represented by histograms of pixels' binary codes. Our method is inspired by other descriptors which produce binary codes, such as local binary pattern and local phase quantization. However, instead of heuristic code constructions, the proposed approach is based on statistics of natural images and this improves its modeling capacity. The experimental results show that our method improves accuracy in texture recognition tasks compared to the state-of-the-art.",
"title": ""
},
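The descriptor construction in the passage above (project a local patch onto a learned basis, threshold the coordinates, read the signs as a binary code, then histogram the codes over a region) can be sketched as follows. For brevity, a random projection matrix stands in for the ICA basis learned from natural images, so this illustrates the pipeline rather than the learned filters.
```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an ICA basis learned from natural image patches: 8 filters over 5x5 patches.
n_bits, patch_size = 8, 5
basis = rng.standard_normal((n_bits, patch_size * patch_size))

def binary_codes(image):
    """Compute an n_bits code for every fully contained patch of the image."""
    h, w = image.shape
    codes = np.zeros((h - patch_size + 1, w - patch_size + 1), dtype=np.int32)
    for i in range(codes.shape[0]):
        for j in range(codes.shape[1]):
            patch = image[i:i + patch_size, j:j + patch_size].ravel()
            bits = (basis @ patch) > 0                                   # threshold projections
            codes[i, j] = int(np.dot(bits, 1 << np.arange(n_bits)))      # pack bits into an int
    return codes

def region_histogram(image):
    """Histogram of pixel codes, the region descriptor used for texture recognition."""
    codes = binary_codes(image)
    return np.bincount(codes.ravel(), minlength=2 ** n_bits)

print(region_histogram(rng.standard_normal((32, 32))).shape)  # (256,)
```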
{
"docid": "412951e42529d7862cb0bcbaf5bd9f97",
"text": "Wireless Sensor Network is an emerging field which is accomplishing much importance because of its vast contribution in varieties of applications. Wireless Sensor Networks are used to monitor a given field of interest for changes in the environment. Coverage is one of the main active research interests in WSN.In this paper we aim to review the coverage problem In WSN and the strategies that are used in solving coverage problem in WSN.These strategies studied are used during deployment phase of the network. Besides this we also outlined some basic design considerations in coverage of WSN.We also provide a brief summary of various coverage issues and the various approaches for coverage in Sensor network. Keywords— Coverage; Wireless sensor networks: energy efficiency; sensor; area coverage; target Coverage.",
"title": ""
},
{
"docid": "4306bc8a6f1e1bab2ffeb175d7dfeb0f",
"text": "This paper describes the design and evaluation of a method for developing a chat-oriented dialog system by utilizing real human-to-human conversation examples from movie scripts and Twitter conversations. The aim of the proposed method is to build a conversational agent that can interact with users in as natural a fashion as possible, while reducing the time requirement for database design and collection. A number of the challenging design issues we faced are described, including (1) constructing an appropriate dialog corpora from raw movie scripts and Twitter data, and (2) developing an multi domain chat-oriented dialog management system which can retrieve a proper system response based on the current user query. To build a dialog corpus, we propose a unit of conversation called a tri-turn (a trigram conversation turn), as well as extraction and semantic similarity analysis techniques to help ensure that the content extracted from raw movie/drama script files forms appropriate dialog-pair (query-response) examples. The constructed dialog corpora are then utilized in a data-driven dialog management system. Here, various approaches are investigated including example-based (EBDM) and response generation using phrase-based statistical machine translation (SMT). In particular, we use two EBDM: syntactic-semantic similarity retrieval and TF-IDF based cosine similarity retrieval. Experiments are conducted to compare and contrast EBDM and SMT approaches in building a chat-oriented dialog system, and we investigate a combined method that addresses the advantages and disadvantages of both approaches. System performance was evaluated based on objective metrics (semantic similarity and cosine similarity) and human subjective evaluation from a small user study. Experimental results show that the proposed filtering approach effectively improve the performance. Furthermore, the results also show that by combing both EBDM and SMT approaches, we could overcome the shortcomings of each. key words: dialog corpora, response generation, example-based dialog modeling, semantic similarity, cosine similarity, machine translation",
"title": ""
},
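One of the two example-based retrieval strategies named above, TF-IDF based cosine similarity, is easy to sketch with scikit-learn: index the query side of the extracted query-response pairs, then return the response whose stored query is closest to the user input. The tiny in-line corpus is made up for illustration.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up (query, response) pairs standing in for pairs mined from scripts/Twitter.
pairs = [
    ("how are you today", "pretty good, thanks for asking"),
    ("what movies do you like", "i am a big fan of old westerns"),
    ("it is really cold outside", "winter always sneaks up on me"),
]

queries = [q for q, _ in pairs]
vectorizer = TfidfVectorizer()
query_matrix = vectorizer.fit_transform(queries)

def respond(user_utterance: str) -> str:
    """Return the response attached to the most similar stored query."""
    sims = cosine_similarity(vectorizer.transform([user_utterance]), query_matrix)[0]
    return pairs[int(sims.argmax())][1]

print(respond("how are you doing"))
```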
{
"docid": "f7c73ca2b6cd6da6fec42076910ed3ec",
"text": "The goal of rating-based recommender systems is to make personalized predictions and recommendations for individual users by leveraging the preferences of a community of users with respect to a collection of items like songs or movies. Recommender systems are often based on intricate statistical models that are estimated from data sets containing a very high proportion of missing ratings. This work describes evidence of a basic incompatibility between the properties of recommender system data sets and the assumptions required for valid estimation and evaluation of statistical models in the presence of missing data. We discuss the implications of this problem and describe extended modelling and evaluation frameworks that attempt to circumvent it. We present prediction and ranking results showing that models developed and tested under these extended frameworks can significantly outperform standard models.",
"title": ""
},
{
"docid": "c56e82343720095e74ec4a50a2190f7f",
"text": "In this paper, we present an accelerometer-based pen-type sensing device and a user-independent hand gesture recognition algorithm. Users can hold the device to perform hand gestures with their preferred handheld styles. Gestures in our system are divided into two types: the basic gesture and the complex gesture, which can be represented as a basic gesture sequence. A dictionary of 24 gestures, including 8 basic gestures and 16 complex gestures, is defined. An effective segmentation algorithm is developed to identify individual basic gesture motion intervals automatically. Through segmentation, each complex gesture is segmented into several basic gestures. Based on the kinematics characteristics of the basic gesture, 25 features are extracted to train the feedforward neural network model. For basic gesture recognition, the input gestures are classified directly by the feedforward neural network classifier. Nevertheless, the input complex gestures go through an additional similarity matching procedure to identify the most similar sequences. The proposed recognition algorithm achieves almost perfect user-dependent and user-independent recognition accuracies for both basic and complex gestures. Experimental results based on 5 subjects, totaling 1600 trajectories, have successfully validated the effectiveness of the feedforward neural network and similarity matching-based gesture recognition algorithm.",
"title": ""
},
{
"docid": "5b0530f94f476754034c92292e02b390",
"text": "Many seemingly simple questions that individual users face in their daily lives may actually require substantial number of computing resources to identify the right answers. For example, a user may want to determine the right thermostat settings for different rooms of a house based on a tolerance range such that the energy consumption and costs can be maximally reduced while still offering comfortable temperatures in the house. Such answers can be determined through simulations. However, some simulation models as in this example are stochastic, which require the execution of a large number of simulation tasks and aggregation of results to ascertain if the outcomes lie within specified confidence intervals. Some other simulation models, such as the study of traffic conditions using simulations may need multiple instances to be executed for a number of different parameters. Cloud computing has opened up new avenues for individuals and organizations Shashank Shekhar shashank.shekhar@vanderbilt.edu Hamzah Abdel-Aziz hamzah.abdelaziz@vanderbilt.edu Michael Walker michael.a.walker.1@vanderbilt.edu Faruk Caglar faruk.caglar@vanderbilt.edu Aniruddha Gokhale a.gokhale@vanderbilt.edu Xenofon Koutsoukos xenonfon.koutsoukos@vanderbilt.edu 1 Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA with limited resources to obtain answers to problems that hitherto required expensive and computationally-intensive resources. This paper presents SIMaaS, which is a cloudbased Simulation-as-a-Service to address these challenges. We demonstrate how lightweight solutions using Linux containers (e.g., Docker) are better suited to support such services instead of heavyweight hypervisor-based solutions, which are shown to incur substantial overhead in provisioning virtual machines on-demand. Empirical results validating our claims are presented in the context of two",
"title": ""
},
{
"docid": "2f1690d7e1ee4aeca5be28faf80917fa",
"text": "The millimeter wave (mmWave) bands offer the possibility of orders of magnitude greater throughput for fifth-generation (5G) cellular systems. However, since mmWave signals are highly susceptible to blockage, channel quality on any one mmWave link can be extremely intermittent. This paper implements a novel dual connectivity protocol that enables mobile user equipment devices to maintain physical layer connections to 4G and 5G cells simultaneously. A novel uplink control signaling system combined with a local coordinator enables rapid path switching in the event of failures on any one link. This paper provides the first comprehensive end-to-end evaluation of handover mechanisms in mmWave cellular systems. The simulation framework includes detailed measurement-based channel models to realistically capture spatial dynamics of blocking events, as well as the full details of Medium Access Control, Radio Link Control, and transport protocols. Compared with conventional handover mechanisms, this paper reveals significant benefits of the proposed method under several metrics.",
"title": ""
},
{
"docid": "7999684c9cf090c897056b9eb6929af3",
"text": "Ethnically differentiated societies are often regarded as dysfunctional, with poor economic performance and a high risk of violent civil conflict. I argue that this is not well-founded. I distinguish between dominance, in which one group constitutes a majority, and fractionalisation, in which there are many small groups. In terms of overall economic performance, I show that both theoretically and empirically, fractionalisation is normally unproblematic in democracies, although it can be damaging in dictatorships. Fractionalised societies have worse public sector performance, but this is offset by better private sector performance. Societies characterised by dominance are in principle likely to have worse economic performance, but empirically the effect is weak. In terms of the risk of civil war, I show that both theoretically and empirically fractionalisation actually makes societies safer, while dominance increases the risk of conflict. A policy implication is that fractionalised societies are viable and secession should be discouraged. P ub lic D is cl os ur e A ut ho riz ed",
"title": ""
},
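The fractionalisation notion used in the passage above is commonly operationalised as an index equal to the probability that two randomly drawn individuals belong to different groups, i.e. one minus the sum of squared group shares. The short sketch below computes it for made-up group shares; the specific index and data used in the paper may differ.
```python
def fractionalisation(shares):
    """1 - sum(p_i^2): probability that two random individuals come from different groups."""
    return 1.0 - sum(p * p for p in shares)

# Hypothetical societies: one dominated by a majority group, one highly fragmented.
print(fractionalisation([0.85, 0.10, 0.05]))           # low index: dominance
print(fractionalisation([0.25, 0.25, 0.2, 0.2, 0.1]))  # high index: fractionalisation
```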
{
"docid": "c34d4d0e3dcf52aba737a87877d55f49",
"text": "Building Information Modeling is based on the idea of the continuous use of digital building models throughout the entire lifecycle of a built facility, starting from the early conceptual design and detailed design phases, to the construction phase, and the long phase of operation. BIM significantly improves information flow between stakeholders involved at all stages, resulting in an increase in efficiency by reducing the laborious and error-prone manual re-entering of information that dominates conventional paper-based workflows. Thanks to its many advantages, BIM is already practiced in many construction projects throughout the entire world. However, the fragmented nature of the construction industry still impedes its more widespread use. Government initiatives around the world play an important role in increasing BIM adoption: as the largest client of the construction industry in many countries, the state has the power to significantly change its work practices. This chapter discusses the motivation for applying BIM, offers a detailed definition of BIM along with an overview of typical use cases, describes the common BIM maturity grades and reports on BIM adoption levels in various countries around the globe. A. Borrmann ( ) Chair of Computational Modeling and Simulation, Technical University of Munich, München, Germany e-mail: andre.borrmann@tum.de M. König Chair of Computing in Engineering, Ruhr University Bochum, Bochum, Germany e-mail: koenig@inf.bi.rub.de C. Koch Chair of Intelligent Technical Design, Bauhaus-Universität Weimar, Weimar, Germany e-mail: c.koch@uni-weimar.de J. Beetz Chair of Design Computation, RWTH Aachen University, Aachen, Germany e-mail: j.beetz@caad.arch.rwth-aachen.de © Springer International Publishing AG, part of Springer Nature 2018 A. Borrmann et al. (eds.), Building Information Modeling, https://doi.org/10.1007/978-3-319-92862-3_1 1 2 A. Borrmann et al. 1.1 Building Information Modeling: Why? In the last decade, digitalization has transformed a wide range of industrial sectors, resulting in a tremendous increase in productivity, product quality and product variety. In the Architecture, Engineering, Construction (AEC) industry, digital tools are increasingly adopted for designing, constructing and operating buildings and infrastructure assets. However, the continuous use of digital information along the entire process chain falls significantly behind other industry domains. All too often, valuable information is lost because information is still predominantly handed over in the form of drawings, either as physical printed plots on paper or in a digital but limited format. Such disruptions in the information flow occur across the entire lifecycle of a built facility: in its design, construction and operation phases as well as in the very important handovers between these phases. The planning and realization of built facilities is a complex undertaking involving a wide range of stakeholders from different fields of expertise. For a successful construction project, a continuous reconciliation and intense exchange of information among these stakeholders is necessary. Currently, this typically involves the handover of technical drawings of the construction project in graphical manner in the form of horizontal and vertical sections, views and detail drawings. The software used to create these drawings imitate the centuries-old way of working using a drawing board. However, line drawings cannot be comprehensively understood by computers. 
The information they contain can only be partially interpreted and processed by computational methods. Basing the information flow on drawings alone therefore fails to harness the great potential of information technology for supporting project management and building operation. A key problem is that the consistency of the diverse technical drawings can only be checked manually. This is a potentially massive source of errors, particularly if we take into account that the drawings are typically created by experts from different design disciplines and across multiple companies. Design changes are particularly challenging: if they are not continuously tracked and relayed to all related plans, inconsistencies can easily arise and often remain undiscovered until the actual construction – where they then incur significant extra costs for ad-hoc solutions on site. In conventional practice, design changes are marked only by means of revision clouds in the drawings, which can be hard to detect and ambiguous. The limited information depth of technical drawings also has a significant drawback in that information on the building design cannot be directly used by downstream applications for any kind of analysis, calculation and simulation, but must be re-entered manually which again requires unnecessary additional work and is a further source of errors. The same holds true for the information handover to the building owner after the construction is finished. He must invest considerable effort into extracting the required information for operating the building from the drawings and documents and enter it into a facility management system. At each of these information exchange points, data that was once available in digital form is lost and has to be laboriously re-created (Fig. 1.1, which illustrates the loss of information caused by disruptions in the digital information flow; based on Eastman et al. 2008). This is where Building Information Modeling comes into play. By applying the BIM method, a much more profound use of computer technology in the design, engineering, construction and operation of built facilities is realized. Instead of recording information in drawings, BIM stores, maintains and exchanges information using comprehensive digital representations: the building information models. This approach dramatically improves the coordination of the design activities, the integration of simulations, the setup and control of the construction process, as well as the handover of building information to the operator. By reducing the manual re-entering of data to a minimum and enabling the consequent re-use of digital information, laborious and error-prone work is avoided, which in turn results in an increase in productivity and quality in construction projects. Other industry sectors, such as the automotive industry, have already undergone the transition to digitized, model-based product development and manufacturing which allowed them to achieve significant efficiency gains (Kagermann 2015).
The Architecture Engineering and Construction (AEC) industry, however, has its own particularly challenging boundary conditions: first and foremost, the process and value creation chain is not controlled by one company, but is dispersed across a large number of enterprises including architectural offices, engineering consultancies, and construction firms. These typically cooperate only for the duration of an individual construction project and not for a longer period of time. Consequently, there are a large number of interfaces in the ad-hoc network of companies where digital information has to be handed over. As these information flows must be supervised and controlled by a central instance, the onus is on the building owner to specify and enforce the use of Building Information Modeling. 1.2 Building Information Modeling: What? A Building Information Model is a comprehensive digital representation of a built facility with great information depth. It typically includes the three-dimensional geometry of the building components at a defined level of detail. In addition, it also comprises non-physical objects, such as spaces and zones, a hierarchical project structure, or schedules. Objects are typically associated with a well-defined set of semantic information, such as the component type, materials, technical properties, or costs, as well as the relationships between the components and other physical or logical entities (Fig. 1.2, which shows that a BIM model comprises both the 3D geometry of each building element and a rich set of semantic information provided by attributes and relationships). The term Building Information Modeling (BIM) consequently describes both the process of creating such digital building models as well as the process of maintaining, using and exchanging them throughout the entire lifetime of the built facility (Fig. 1.3, which depicts BIM use cases such as cost estimation, design options, progress monitoring, simulations and analyses, logistics, process simulation, coordination, visualization, the spatial program, and facility management, maintenance and repair across the lifecycle phases of conceptual design, detailed design, construction, operation, modification and demolition). The US National Building Information Modeling Standard defines BIM as follows (NIBS 2012): Building Information Modeling (BIM) is a digital representation of physical and functional characteristics of a facility. A BIM is a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle; defined as existing from earliest conception to demolition. A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of that stakeholder.",
"title": ""
},
{
"docid": "486417082d921eba9320172a349ee28f",
"text": "Circulating tumor cells (CTCs) are a popular topic in cancer research because they can be obtained by liquid biopsy, a minimally invasive procedure with more sample accessibility than tissue biopsy, to monitor a patient's condition. Over the past decades, CTC research has covered a wide variety of topics such as enumeration, profiling, and correlation between CTC number and patient overall survival. It is important to isolate and enrich CTCs before performing CTC analysis because CTCs in the blood stream are very rare (0⁻10 CTCs/mL of blood). Among the various approaches to separating CTCs, here, we review the research trends in the isolation and analysis of CTCs using microfluidics. Microfluidics provides many attractive advantages for CTC studies such as continuous sample processing to reduce target cell loss and easy integration of various functions into a chip, making \"do-everything-on-a-chip\" possible. However, tumor cells obtained from different sites within a tumor exhibit heterogenetic features. Thus, heterogeneous CTC profiling should be conducted at a single-cell level after isolation to guide the optimal therapeutic path. We describe the studies on single-CTC analysis based on microfluidic devices. Additionally, as a critical concern in CTC studies, we explain the use of CTCs in cancer research, despite their rarity and heterogeneity, compared with other currently emerging circulating biomarkers, including exosomes and cell-free DNA (cfDNA). Finally, the commercialization of products for CTC separation and analysis is discussed.",
"title": ""
},
{
"docid": "6a922e97c878c4d1769e1101f5026cf9",
"text": "Human activities create waste, and it is the way these wastes are handled, stored, collected and disposed of, which can pose risks to the environment and to public health. Where intense human activities concentrate, such as in urban centres, appropriate and safe solid waste management (SWM) are of utmost importance to allow healthy living conditions for the population. This fact has been acknowledged by most governments, however many municipalities are struggling to provide even the most basic services. Typically one to two thirds of the solid waste generated is not collected (World Resources Institute, et al., 1996). As a result, the uncollected waste, which is often also mixed with human and animal excreta, is dumped indiscriminately in the streets and in drains, so contributing to flooding, breeding of insect and rodent vectors and the spread of diseases (UNEP-IETC, 1996). Most of the municipal solid waste in low-income Asian countries which is collected is dumped on land in a more or less uncontrolled manner. Such inadequate waste disposal creates serious environmental problems that affect health of humans and animals and cause serious economic and other welfare losses. The environmental degradation caused by inadequate disposal of waste can be expressed by the contamination of surface and ground water through leachate, soil contamination through direct waste contact or leachate, air pollution by burning of wastes, spreading of diseases by different vectors like birds, insects and rodents, or uncontrolled release of methan by anaerobic decomposition of waste",
"title": ""
},
{
"docid": "0c509f98c65a48c31d32c0c510b4c13f",
"text": "An EM based straight forward design and pattern synthesis technique for series fed microstrip patch array antennas is proposed. An optimization of each antenna element (λ/4-transmission line, λ/2-patch, λ/4-transmission line) of the array is performed separately. By introducing an equivalent circuit along with an EM parameter extraction method, each antenna element can be optimized for its resonance frequency and taper amplitude, so to shape the aperture distribution for the cascaded elements. It will be shown that the array design based on the multiplication of element factor and array factor fails in case of patch width tapering, due to the inconsistency of the element patterns. To overcome this problem a line width tapering is suggested which keeps the element patterns nearly constant while still providing a broad amplitude taper range. A symmetric 10 element antenna array with a Chebyshev tapering (-20dB side lobe level) operating at 5.8 GHz has been designed, compared for the two tapering methods and validated with measurement.",
"title": ""
},
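For the -20 dB Chebyshev amplitude taper mentioned above, SciPy's Dolph-Chebyshev window gives element excitation amplitudes directly; the sketch below prints them for the 10-element case. Mapping these amplitudes onto patch or line widths is the design step the paper addresses and is not reproduced here.
```python
import numpy as np
from scipy.signal.windows import chebwin

n_elements = 10
sidelobe_db = 20  # attenuation of the Dolph-Chebyshev taper in dB

# Excitation amplitudes for a -20 dB side lobe level, normalized to the peak element.
amplitudes = chebwin(n_elements, at=sidelobe_db)
amplitudes /= amplitudes.max()
print(np.round(amplitudes, 3))
```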
{
"docid": "7a8a98b91680cbc63594cd898c3052c8",
"text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than current state of practice approach that supports policy-based access control.",
"title": ""
},
{
"docid": "c8ae7431c6be27e9b427fd022db03a53",
"text": "Deep learning systems have dramatically improved the accuracy of speech recognition, and various deep architectures and learning methods have been developed with distinct strengths and weaknesses in recent years. How can ensemble learning be applied to these varying deep learning systems to achieve greater recognition accuracy is the focus of this paper. We develop and report linear and log-linear stacking methods for ensemble learning with applications specifically to speechclass posterior probabilities as computed by the convolutional, recurrent, and fully-connected deep neural networks. Convex optimization problems are formulated and solved, with analytical formulas derived for training the ensemble-learning parameters. Experimental results demonstrate a significant increase in phone recognition accuracy after stacking the deep learning subsystems that use different mechanisms for computing high-level, hierarchical features from the raw acoustic signals in speech.",
"title": ""
}
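A minimal version of the log-linear stacking described above is sketched below: per-system weights are applied to log posteriors and the result is renormalized. Learning the weights by convex optimization, as the paper does, is replaced here by fixed illustrative weights.
```python
import numpy as np

def log_linear_stack(posteriors, weights):
    """Combine per-system class posteriors (systems x frames x classes) log-linearly."""
    log_p = np.log(np.clip(posteriors, 1e-12, 1.0))
    combined = np.tensordot(weights, log_p, axes=1)      # weighted sum of log posteriors
    combined -= combined.max(axis=-1, keepdims=True)     # stabilize before exponentiating
    p = np.exp(combined)
    return p / p.sum(axis=-1, keepdims=True)

# Two hypothetical subsystems (e.g. CNN and RNN), 3 frames, 4 phone classes.
cnn = np.array([[0.7, 0.1, 0.1, 0.1], [0.2, 0.5, 0.2, 0.1], [0.25, 0.25, 0.25, 0.25]])
rnn = np.array([[0.6, 0.2, 0.1, 0.1], [0.1, 0.6, 0.2, 0.1], [0.4, 0.3, 0.2, 0.1]])
stacked = log_linear_stack(np.stack([cnn, rnn]), weights=np.array([0.6, 0.4]))
print(stacked.argmax(axis=-1))  # per-frame phone decisions
```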
] |
scidocsrr
|
d91a21842162444aca9a7924048a8291
|
Comparing Extant Story Classifiers: Results & New Directions
|
[
{
"docid": "f042dd6b78c65541e657c48452a1e0e4",
"text": "We present a general framework for semantic role labeling. The framework combines a machine-learning technique with an integer linear programming-based inference procedure, which incorporates linguistic and structural constraints into a global decision process. Within this framework, we study the role of syntactic parsing information in semantic role labeling. We show that full syntactic parsing information is, by far, most relevant in identifying the argument, especially, in the very first stagethe pruning stage. Surprisingly, the quality of the pruning stage cannot be solely determined based on its recall and precision. Instead, it depends on the characteristics of the output candidates that determine the difficulty of the downstream problems. Motivated by this observation, we propose an effective and simple approach of combining different semantic role labeling systems through joint inference, which significantly improves its performance. Our system has been evaluated in the CoNLL-2005 shared task on semantic role labeling, and achieves the highest F1 score among 19 participants.",
"title": ""
},
{
"docid": "75321b85809e5954e78675c8827fefd5",
"text": "Text annotations are of great use to researchers in the language sciences, and much effort has been invested in creating annotated corpora for an wide variety of purposes. Unfortunately, software support for these corpora tends to be quite limited: it is usually ad-hoc, poorly designed and documented, or not released for public use. I describe an annotation tool, the Story Workbench, which provides a generic platform for text annotation. It is free, open-source, cross-platform, and user friendly. It provides a number of common text annotation operations, including representations (e.g., tokens, sentences, parts of speech), functions (e.g., generation of initial annotations by algorithm, checking annotation validity by rule, fully manual manipulation of annotations) and tools (e.g., distributing texts to annotators via version control, merging doubly-annotated texts into a single file). The tool is extensible at many different levels, admitting new representations, algorithm, and tools. I enumerate ten important features and illustrate how they support the annotation process at three levels: (1) annotation of individual texts by a single annotator, (2) double-annotation of texts by two annotators and an adjudicator, and (3) annotation scheme development. The Story Workbench is scheduled for public release in March 2012. Text annotations are of great use to researchers in the language sciences: a large fraction of that work relies on annotated data to build, train, or test their systems. Good examples are the Penn Treebank, which catalyzed work in developing statistical syntactic parsers, and PropBank, which did the same for semantic role labeling. It is not an exaggeration to say that annotated corpora are a central resource for these fields, and are only growing in importance. Work on narrative shares many of the same problems, and as a consequence has much to gain from advances in language annotation tools and techniques. Despite the importance of annotated data, there remains a missing link: software support is not given nearly the same amount of attention as the annotations themselves. Researchers usually release only the data; if they release any tools at all, they are usually ad-hoc, poorly designed and Copyright c © 2011, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. documented, or just not released for public use. Tools do not build on one another. The language sciences need to move to a standard where, if annotated data is released, software for accessing and creating the data are released as a matter of course. Researchers should prepare for it, reviewers should demand it, and readers should expect it. One way of facilitating this is to lower the barrier for creating tools. Many of the phases of the annotation cycle are the same no matter what sort of annotation you are doing a freely available tool, or suite of tools, to support these phases would go a long way. I describe the Story Workbench (Finlayson 2008), a major step toward just such a tool suite. The Story Workbench is free, open-source, extensible, cross-platform, and user friendly. It is a working piece of software, having been in beta testing for over three years, with a public release scheduled for March 2012. It has been used by more than 12 annotators to annotate over 100k words across 17 representations. 
Two corpora have been created so far with it: the UMIREC corpus (Hervas and Finlayson 2010) comprising 25k words of news and folktales annotated for referring expression structure, and 18k words of Russian folktales annotated in all 17 different representations. The Story Workbench is especially interesting to researchers working on narrative. Understanding a narrative requires not just one representation, not just two, but a dozen or more. The Story Workbench was created specifically to overcome that problem, but is now finding application beyond the realm of narrative research. In particular, in the next section I describe three phases of the annotation process; many, if not most, annotation studies move through these phases. In the next section I enumerate some of the more important features of the Story Workbench, and show how these support the phases. Three Loops of the Annotation Process Conceptually, the process of producing a gold-standard annotated corpus can be split into at least three nested loops. In the widest, top-most loop the researchers design and vet the annotation scheme and annotation tool; embedded therein is the middle loop, where annotation teams produce gold-annotated texts; embedded within that is the loop of the individual annotator working on individual texts. These nested loops are illustrated in Figure 1.",
"title": ""
},
{
"docid": "48cdea9a78353111d236f6d0f822dc3a",
"text": "Support vector machines (SVMs) with the gaussian (RBF) kernel have been popular for practical use. Model selection in this class of SVMs involves two hyper parameters: the penalty parameter C and the kernel width . This letter analyzes the behavior of the SVM classifier when these hyper parameters take very small or very large values. Our results help in understanding the hyperparameter space that leads to an efficient heuristic method of searching for hyperparameter values with small generalization errors. The analysis also indicates that if complete model selection using the gaussian kernel has been conducted, there is no need to consider linear SVM.",
"title": ""
},
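In practice the two hyperparameters discussed above are usually searched on a logarithmic grid, and the asymptotic analysis in the passage motivates restricting that grid. A standard scikit-learn sketch (synthetic data, illustrative grid values) is shown below; note that scikit-learn parameterizes the RBF kernel by gamma rather than by the width directly.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Logarithmic grid over the penalty C and the RBF kernel parameter gamma.
param_grid = {"C": np.logspace(-2, 3, 6), "gamma": np.logspace(-4, 1, 6)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```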
{
"docid": "1ec9b98f0f7509088e7af987af2f51a2",
"text": "In this paper, we describe an automated learning approach to text categorization based on perception learning and a new feature selection metric, called correlation coefficient. Our approach has been teated on the standard Reuters text categorization collection. Empirical results indicate that our approach outperforms the best published results on this % uters collection. In particular, our new feature selection method yields comiderable improvement. We also investigate the usability of our automated hxu-n~ approach by actually developing a system that categorizes texts into a treeof categories. We compare tbe accuracy of our learning approach to a rrddmsed, expert system ap preach that uses a text categorization shell built by Cams gie Group. Although our automated learning approach still gives a lower accuracy, by appropriately inmrporating a set of manually chosen worda to use as f~ures, the combined, semi-automated approach yields accuracy close to the * baaed approach.",
"title": ""
},
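The pipeline in the passage above (select features with a correlation-based metric, then train a perceptron for the category) can be sketched as below. Plain Pearson correlation between term presence and the category label stands in for the paper's correlation coefficient metric, and scikit-learn's Perceptron replaces the original implementation; the tiny corpus is made up for illustration.
```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Perceptron

docs = ["grain exports rise", "wheat grain harvest", "bank raises interest rates",
        "central bank policy", "grain prices fall", "interest rate decision"]
labels = np.array([1, 1, 0, 0, 1, 0])   # toy binary category: "grain" vs. not

X = (CountVectorizer().fit_transform(docs).toarray() > 0).astype(float)  # term presence

def top_k_features(X, y, k=5):
    """Score each term by |correlation| with the label and keep the top-k terms."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) if X[:, j].std() > 0 else 0.0
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

selected = top_k_features(X, labels, k=5)
clf = Perceptron(max_iter=100).fit(X[:, selected], labels)
print(clf.score(X[:, selected], labels))
```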
{
"docid": "104c9ef558234250d56ef941f09d6a7c",
"text": "The first of these questions is in the province of sensory physiology, and is the only one for which appreciable understanding has been achieved. This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory. With regard to the second question, two alternative positions have been maintained. The first suggests that storage of sensory information is in the form of coded representations or images, with some sort of one-to-one mapping between the sensory stimulus",
"title": ""
}
] |
[
{
"docid": "5f513e3d58a10d2748983bfa06c11df2",
"text": "AIM\nThe aim of this study is to report a clinical case of oral nevus.\n\n\nBACKGROUND\nNevus is a congenital or acquired benign neoplasia that can be observed in the skin or mucous membranes. It is an uncommon condition in the oral mucosa. When it does occur, the preferred location is on the palate, followed by the cheek mucosa, lip and tongue.\n\n\nCASE REPORT\nIn this case study, we relate the diagnosis and treatment of a 23-year-old female patient with an irregular, pigmented lesion of the oral mucosa that underwent excisional biopsy resulting in a diagnosis of intramucosal nevus.\n\n\nCONCLUSION\nNevus can appear in the oral mucosa and should be removed.\n\n\nCLINICAL SIGNIFICANCE\nIt is important for dental professionals to adequately categorize and treat pigmented lesions in the mouth.",
"title": ""
},
{
"docid": "c50cf41ef8cc85be0558f9132c60b1f5",
"text": "A System Architecture for Context-Aware Mobile Computing William Noah Schilit Computer applications traditionally expect a static execution environment. However, this precondition is generally not possible for mobile systems, where the world around an application is constantly changing. This thesis explores how to support and also exploit the dynamic configurations and social settings characteristic of mobile systems. More specifically, it advances the following goals: (1) enabling seamless interaction across devices; (2) creating physical spaces that are responsive to users; and (3) and building applications that are aware of the context of their use. Examples of these goals are: continuing in your office a program started at home; using a PDA to control someone else’s windowing UI; automatically canceling phone forwarding upon return to your office; having an airport overheaddisplay highlight the flight information viewers are likely to be interested in; easily locating and using the nearest printer or fax machine; and automatically turning off a PDA’s audible e-mail notification when in a meeting. The contribution of this thesis is an architecture to support context-aware computing; that is, application adaptation triggered by such things as the location of use, the collection of nearby people, the presence of accessible devices and other kinds of objects, as well as changes to all these things over time. Three key issues are addressed: (1) the information needs of applications, (2) where applications get various pieces of information and (3) how information can be efficiently distributed. A dynamic environment communication model is introduced as a general mechanism for quickly and efficiently learning about changes occurring in the environment in a fault tolerant manner. For purposes of scalability, multiple dynamic environment servers store user, device, and, for each geographic region, context information. In order to efficiently disseminate information from these components to applications, a dynamic collection of multicast groups is employed. The thesis also describes a demonstration system based on the Xerox PARCTAB, a wireless palmtop computer.",
"title": ""
},
{
"docid": "6c106d560d8894d941851386d96afe2b",
"text": "Cooperative vehicular networks require the exchange of positioning and basic status information between neighboring nodes to support higher layer protocols and applications, including active safety applications. The information exchange is based on the periodic transmission/reception of 1-hop broadcast messages on the so called control channel. The dynamic adaptation of the transmission parameters of such messages will be key for the reliable and efficient operation of the system. On one hand, congestion control protocols need to be applied to control the channel load, typically through the adaptation of the transmission parameters based on certain channel load metrics. On the other hand, awareness control protocols are also required to adequately support cooperative vehicular applications. Such protocols typically adapt the transmission parameters of periodic broadcast messages to ensure each vehicle's capacity to detect, and possibly communicate, with the relevant vehicles and infrastructure nodes present in its local neighborhood. To date, congestion and awareness control protocols have been normally designed and evaluated separately, although both will be required for the reliable and efficient operation of the system. To this aim, this paper proposes and evaluates INTERN, a new control protocol that integrates two congestion and awareness control processes. The simulation results obtained demonstrate that INTERN is able to satisfy the application's requirements of all vehicles, while effectively controlling the channel load.",
"title": ""
},
{
"docid": "4d1be9aebf7534cce625b95bde4696c6",
"text": "BlockChain (BC) has attracted tremendous attention due to its immutable nature and the associated security and privacy benefits. BC has the potential to overcome security and privacy challenges of Internet of Things (IoT). However, BC is computationally expensive, has limited scalability and incurs significant bandwidth overheads and delays which are not suited to the IoT context. We propose a tiered Lightweight Scalable BC (LSB) that is optimized for IoT requirements. We explore LSB in a smart home setting as a representative example for broader IoT applications. Low resource devices in a smart home benefit from a centralized manager that establishes shared keys for communication and processes all incoming and outgoing requests. LSB achieves decentralization by forming an overlay network where high resource devices jointly manage a public BC that ensures end-to-end privacy and security. The overlay is organized as distinct clusters to reduce overheads and the cluster heads are responsible for managing the public BC. LSB incorporates several optimizations which include algorithms for lightweight consensus, distributed trust and throughput management. Qualitative arguments demonstrate that LSB is resilient to several security attacks. Extensive simulations show that LSB decreases packet overhead and delay and increases BC scalability compared to relevant baselines.",
"title": ""
},
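The LSB abstract above relies on the basic immutability property of hash-linked blocks. Purely as a rough illustration, here is a minimal Python sketch of such a chain; the block fields, the make_block/verify_chain helpers and the sample smart-home transactions are hypothetical, and none of LSB's actual mechanisms (lightweight consensus, distributed trust, throughput management, overlay clustering) are modeled.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Hash the canonical JSON encoding of the block body.
    encoded = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(encoded).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    block = {
        "prev_hash": prev_hash,        # link to the previous block -> immutability
        "timestamp": time.time(),
        "transactions": transactions,
    }
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    return block

def verify_chain(chain: list) -> bool:
    # A tampered block changes its own hash and breaks every later prev_hash link.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("0" * 64, [{"device": "smart-lock", "op": "store-key"}])
chain = [genesis]
chain.append(make_block(chain[-1]["hash"], [{"device": "thermostat", "op": "access"}]))
print(verify_chain(chain))  # True
```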
{
"docid": "417ce84b9a4359ac3fb59b6c6497b7db",
"text": "OBJECTIVE\nWe describe a novel human-machine interface for the control of a two-dimensional (2D) computer cursor using four inertial measurement units (IMUs) placed on the user's upper-body.\n\n\nAPPROACH\nA calibration paradigm where human subjects follow a cursor with their body as if they were controlling it with their shoulders generates a map between shoulder motions and cursor kinematics. This map is used in a Kalman filter to estimate the desired cursor coordinates from upper-body motions. We compared cursor control performance in a centre-out reaching task performed by subjects using different amounts of information from the IMUs to control the 2D cursor.\n\n\nMAIN RESULTS\nOur results indicate that taking advantage of the redundancy of the signals from the IMUs improved overall performance. Our work also demonstrates the potential of non-invasive IMU-based body-machine interface systems as an alternative or complement to brain-machine interfaces for accomplishing cursor control in 2D space.\n\n\nSIGNIFICANCE\nThe present study may serve as a platform for people with high-tetraplegia to control assistive devices such as powered wheelchairs using a joystick.",
"title": ""
},
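The body-machine interface abstract above maps IMU signals to 2D cursor kinematics through a Kalman filter. The sketch below shows a generic linear Kalman filter predict/update cycle with an assumed constant-velocity cursor state; the state layout, update rate, noise covariances and the stand-in observations are illustrative assumptions, not the calibrated shoulder-to-cursor map described in the paper.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x : state estimate (here a cursor state [px, py, vx, vy])
    z : observation derived from the IMU features for this time step
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

dt = 1.0 / 50.0                                    # assumed 50 Hz update rate
F = np.array([[1, 0, dt, 0],                       # constant-velocity cursor model
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                        # observe position only (placeholder
              [0, 1, 0, 0]], dtype=float)          # for the calibrated IMU-to-cursor map)
Q = 1e-3 * np.eye(4)
R = 1e-2 * np.eye(2)

x, P = np.zeros(4), np.eye(4)
for z in np.random.randn(100, 2).cumsum(axis=0):   # stand-in for IMU-derived targets
    x, P = kalman_step(x, P, z, F, H, Q, R)
```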
{
"docid": "5c2b7f85bba45905c324f7d6a10e5e53",
"text": "We use the Sum of Squares method to develop new efficient algorithms for learning well-separated mixtures of Gaussians and robust mean estimation, both in high dimensions, that substantially improve upon the statistical guarantees achieved by previous efficient algorithms. Our contributions are: \n Mixture models with separated means: We study mixtures of poly(<i>k</i>)-many <i>k</i>-dimensional distributions where the means of every pair of distributions are separated by at least <i>k</i><sup>ε</sup>. In the special case of spherical Gaussian mixtures, we give a <i>k</i><sup><i>O</i>(1/ε)</sup>-time algorithm that learns the means assuming separation at least <i>k</i><sup>ε</sup>, for any ε> 0. This is the first algorithm to improve on greedy (“single-linkage”) and spectral clustering, breaking a long-standing barrier for efficient algorithms at separation <i>k</i><sup>1/4</sup>. \n Robust estimation: When an unknown (1−ε)-fraction of <i>X</i><sub>1</sub>,…,<i>X</i><sub><i>n</i></sub> are chosen from a sub-Gaussian distribution with mean µ but the remaining points are chosen adversarially, we give an algorithm recovering µ to error ε<sup>1−1/<i>t</i></sup> in time <i>k</i><sup><i>O</i>(<i>t</i>)</sup>, so long as sub-Gaussian-ness up to <i>O</i>(<i>t</i>) moments can be certified by a Sum of Squares proof. This is the first polynomial-time algorithm with guarantees approaching the information-theoretic limit for non-Gaussian distributions. Previous algorithms could not achieve error better than ε<sup>1/2</sup>. As a corollary, we achieve similar results for robust covariance estimation. \n Both of these results are based on a unified technique. Inspired by recent algorithms of Diakonikolas et al. in robust statistics, we devise an SDP based on the Sum of Squares method for the following setting: given <i>X</i><sub>1</sub>,…,<i>X</i><sub><i>n</i></sub> ∈ ℝ<sup><i>k</i></sup> for large <i>k</i> and <i>n</i> = poly(<i>k</i>) with the promise that a subset of <i>X</i><sub>1</sub>,…,<i>X</i><sub><i>n</i></sub> were sampled from a probability distribution with bounded moments, recover some information about that distribution.",
"title": ""
},
{
"docid": "443191f41aba37614c895ba3533f80ed",
"text": "De novo engineering of gene circuits inside cells is extremely difficult, and efforts to realize predictable and robust performance must deal with noise in gene expression and variation in phenotypes between cells. Here we demonstrate that by coupling gene expression to cell survival and death using cell–cell communication, we can programme the dynamics of a population despite variability in the behaviour of individual cells. Specifically, we have built and characterized a ‘population control’ circuit that autonomously regulates the density of an Escherichia coli population. The cell density is broadcasted and detected by elements from a bacterial quorum-sensing system, which in turn regulate the death rate. As predicted by a simple mathematical model, the circuit can set a stable steady state in terms of cell density and gene expression that is easily tunable by varying the stability of the cell–cell communication signal. This circuit incorporates a mechanism for programmed death in response to changes in the environment, and allows us to probe the design principles of its more complex natural counterparts.",
"title": ""
},
{
"docid": "b5e762a71f0b65c099410e081865d8cb",
"text": "In this paper we discuss a notation to describe task models, which can specify a wide range of temporal relationships among tasks. It is a compact and graphical notation, immediate both to use and understand. Its logical structure and the related automatic tool make it suitable for designing even large sized applications.",
"title": ""
},
{
"docid": "4ddad3c97359faf4b927167800fe77be",
"text": "Micro-expressions are facial expressions which are fleeting and reveal genuine emotions that people try to conceal. These are important clues for detecting lies and dangerous behaviors and therefore have potential applications in various fields such as the clinical field and national security. However, recognition through the naked eye is very difficult. Therefore, researchers in the field of computer vision have tried to develop micro-expression detection and recognition algorithms but lack spontaneous micro-expression databases. In this study, we attempted to create a database of spontaneous micro-expressions which were elicited from neutralized faces. Based on previous psychological studies, we designed an effective procedure in lab situations to elicit spontaneous micro-expressions and analyzed the video data with care to offer valid and reliable codings. From 1500 elicited facial movements filmed under 60fps, 195 micro-expressions were selected. These samples were coded so that the first, peak and last frames were tagged. Action units (AUs) were marked to give an objective and accurate description of the facial movements. Emotions were labeled based on psychological studies and participants' self-report to enhance the validity.",
"title": ""
},
{
"docid": "bc1f7e30b8dcef97c1d8de2db801c4f6",
"text": "In this paper a novel method is introduced based on the use of an unsupervised version of kernel least mean square (KLMS) algorithm for solving ordinary differential equations (ODEs). The algorithm is unsupervised because here no desired signal needs to be determined by user and the output of the model is generated by iterating the algorithm progressively. However, there are several new implementation, fast convergence and also little error. Furthermore, it is also a KLMS with obvious characteristics. In this paper the ability of KLMS is used to estimate the answer of ODE. First a trial solution of ODE is written as a sum of two parts, the first part satisfies the initial condition and the second part is trained using the KLMS algorithm so as the trial solution solves the ODE. The accuracy of the method is illustrated by solving several problems. Also the sensitivity of the convergence is analyzed by changing the step size parameters and kernel functions. Finally, the proposed method is compared with neuro-fuzzy [21] approach. Crown Copyright & 2011 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
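The ODE abstract above builds on the kernel least-mean-square update. The sketch below shows only the supervised KLMS building block, fitting a function online; in the paper's unsupervised setting the error term would instead be the residual of a trial solution of the form y_t(x) = A + x·N(x), which already satisfies the initial condition y_t(0) = A. The KLMS class, kernel width and learning rate here are illustrative choices, not the paper's settings.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=0.5):
    return np.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))

class KLMS:
    """Kernel least-mean-square: f(x) = sum_i alpha_i * k(c_i, x)."""

    def __init__(self, eta=0.2, sigma=0.5):
        self.eta, self.sigma = eta, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        if not self.centers:
            return 0.0
        k = gaussian_kernel(np.array(self.centers), x, self.sigma)
        return float(np.dot(self.alphas, k))

    def update(self, x, error):
        # Each update adds one kernel unit centred at the new input.
        self.centers.append(x)
        self.alphas.append(self.eta * error)

# Supervised demo: learn sin(x) online from noisy samples.
model = KLMS()
rng = np.random.default_rng(0)
for _ in range(2000):
    x = rng.uniform(0.0, 2.0 * np.pi)
    d = np.sin(x) + 0.05 * rng.standard_normal()
    e = d - model.predict(x)           # in the ODE setting, the error would be
    model.update(x, e)                 # the residual of the trial solution instead

print(abs(model.predict(np.pi / 2) - 1.0))  # should be small
```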
{
"docid": "7b239e83dea095bad2229d66596982c5",
"text": "In this paper, we discuss the application of concept of data quality to big data by highlighting how much complex is to define it in a general way. Already data quality is a multidimensional concept, difficult to characterize in precise definitions even in the case of well-structured data. Big data add two further dimensions of complexity: (i) being “very” source specific, and for this we adopt the interesting UNECE classification, and (ii) being highly unstructured and schema-less, often without golden standards to refer to or very difficult to access. After providing a tutorial on data quality in traditional contexts, we analyze big data by providing insights into the UNECE classification, and then, for each type of data source, we choose a specific instance of such a type (notably deep Web data, sensor-generated data, and Twitters/short texts) and discuss how quality dimensions can be defined in these cases. The overall aim of the paper is therefore to identify further research directions in the area of big data quality, by providing at the same time an up-to-date state of the art on data quality.",
"title": ""
},
{
"docid": "0aaed4206e4155c1535357a98a3d5119",
"text": "OBJECTIVES\nTo investigate the appearance, location and morphology of mandibular lingual foramina (MLF) in the Chinese Han population using cone beam computed tomography (CBCT).\n\n\nMETHODS\nCBCT images of the mandibular body in 200 patients (103 female patients and 97 male patients, age range 10-70 years) were retrospectively analysed to identify MLF. The canal number, location and direction were assessed. Additionally, the diameter of the lingual foramen, the distance between the alveolar crest and the lingual foramen, the distance between the tooth apex and the lingual foramen and the distance from the mandibular border to the lingual foramen were examined to describe the MLF characteristics. Gender and age differences with respect to foramina were also studied.\n\n\nRESULTS\nCBCT can be utilized to visualise lingual foramina. In this study, 683 lingual foramina were detected in 200 CBCT scans, with 538 (78.77%) being ≤1 mm in diameter and 145 (21.23%) being >1 mm. In total, 85.07% of MLF are median lingual canals (MLC) and 14.93% are lateral lingual canals (LLC). Two typical types of lingual foramina were identified according to their relationship with the tooth apex. Most lingual foramina (74.08%) were found below the tooth apex, and those above the tooth apex were much smaller in diameter. Male patients had statistically larger lingual foramina. The distance between the lingual foramen and the tooth apex changed with increasing age.\n\n\nCONCLUSIONS\nDetermination of the presence, position and size of lingual foramina is important before performing a surgical procedure. Careful implant-prosthetic treatment planning is particularly important in male and/or elderly patients because of the structural characteristics of their lingual foramina.",
"title": ""
},
{
"docid": "bc269e27e99f8532c7bd41b9ad45ac9a",
"text": "There are millions of users who tag multimedia content, generating a large vocabulary of tags. Some tags are frequent, while other tags are rarely used following a long tail distribution. For frequent tags, most of the multimedia methods that aim to automatically understand audio-visual content, give excellent results. It is not clear, however, how these methods will perform on rare tags. In this paper we investigate what social tags constitute the long tail and how they perform on two multimedia retrieval scenarios, tag relevance and detector learning. We show common valuable tags within the long tail, and by augmenting them with semantic knowledge, the performance of tag relevance and detector learning improves substantially.",
"title": ""
},
{
"docid": "770ec09c3a1da31ca983ad4398a7d5d0",
"text": "Plant growth and productivity are often limited by high root-zone temperatures (RZT) which restricts the growth of subtropical and temperate crops in the tropics. High RZT temperature coupled with low growth irradiances during cloudy days which mainly lead to poor root development and thus causes negative impact on the mineral uptake and assimilation. However, certain subtropical and temperate crops have successfully been grown aeroponically in the tropics by simply cooling their roots while their aerial portions are subjected to hot fluctuating ambient temperatures. This review first discusses the effects of RZT and growth irradiance on root morphology and its biomass, the effect of RZT on uptake and transport of several macro nutrients such as N [nitrogen, mainly nitrate, (NO3 )], P (H2PO4 , phosphate), K (potassium) and Ca (calcium), and micro nutrient Fe (iron) under different growth irradiances. The impact of RZT and growth irradiance on the assimilation of NO3 (the form of N nutrient given to the aeroponically grown plants) and the site of NO3 assimilation are also addressed. _____________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "6b4b33878553d4b36a583b56c9b13c02",
"text": "BACKGROUND\nIn this study we investigated gastrointestinal (GI) bleeding and its relationship to arteriovenous malformations (AVMs) in patients with the continuous-flow HeartMate II (HMII) left ventricular assist device (LVAD).\n\n\nMETHODS\nThe records of 172 patients who received HMII support between November 2003 and June 2010 were reviewed. Patients were considered to have GI bleeding if they had 1 or more of the following symptoms: guaiac-positive stool; hematemesis; melena; active bleeding at the time of endoscopy or colonoscopy; and blood within the stomach at endoscopy or colonoscopy. The symptom(s) had to be accompanied by a decrease of >1 g/dl in the patient's hemoglobin level. The location of the bleeding was identified as upper GI tract, lower GI tract or both according to esophagogastroduodenoscopy, colonoscopy, small-bowel enteroscopy or mesenteric angiography. Post-LVAD implantation anti-coagulation therapy consisted of warfarin, aspirin and dipyridamole.\n\n\nRESULTS\nThirty-two of the 172 patients (19%) had GI bleeding after 63 ± 62 (range 8 to 241) days of HMII support. Ten patients had GI bleeding from an AVM; these included 3 patients who had 2 bleeding episodes and 2 patients who had 5 episodes each. Sixteen patients had upper GI bleeding (10 hemorrhagic gastritis, 4 gastric AVM, 2 Mallory-Weiss syndrome), 15 had lower GI bleeding (6 diverticulosis, 6 jejunal AVM, 1 drive-line erosion of the colon, 1 sigmoid polyp, 1 ischemic colitis) and 1 had upper and lower GI bleeding (1 colocutaneous and gastrocutaneous fistula). All GI bleeding episodes were successfully managed medically.\n\n\nCONCLUSIONS\nArteriovenous malformations can cause GI bleeding in patients with continuous-flow LVADs. In all cases in this series, GI bleeding was successfully managed without the need for surgical intervention.",
"title": ""
},
{
"docid": "5b31efe9dc8e79d975a488c2b9084aea",
"text": "Person identification in TV series has been a popular research topic over the last decade. In this area, most approaches either use manually annotated data or extract character supervision from a combination of subtitles and transcripts. However, both approaches have key drawbacks that hinder application of these methods at a large scale - manual annotation is expensive and transcripts are often hard to obtain. We investigate the topic of automatically labeling all character appearances in TV series using information obtained solely from subtitles. This task is extremely difficult as the dialogs between characters provide very sparse and weakly supervised data. We address these challenges by exploiting recent advances in face descriptors and Multiple Instance Learning methods. We propose methods to create MIL bags and evaluate and discuss several MIL techniques. The best combination achieves an average precision over 80% on three diverse TV series. We demonstrate that only using subtitles provides good results on identifying characters in TV series and wish to encourage the community towards this problem.",
"title": ""
},
{
"docid": "1b27922ab1693a15d230301c3a868afd",
"text": "Model based iterative reconstruction (MBIR) algorithms for low-dose X-ray CT are computationally complex because of the repeated use of the forward and backward projection. Inspired by this success of deep learning in computer vision applications, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won the second place in 2016 AAPM Low-Dose CT Grand Challenge. However, some of the texture are not fully recovered, which was unfamiliar to the radiologists. To cope with this problem, here we propose a direct residual learning approach on directional wavelet domain to solve this problem and to improve the performance against previous work. In particular, the new network estimates the noise of each input wavelet transform, and then the de-noised wavelet coefficients are obtained by subtracting the noise from the input wavelet transform bands. The experimental results confirm that the proposed network has significantly improved performance, preserving the detail texture of the original images.",
"title": ""
},
{
"docid": "bb7ba369cd3baf1f5ba26aef7b5574fb",
"text": "Static computer vision techniques enable non-intrusive observation and analysis of biometrics such as eye blinks. However, ambiguous eye behaviors such as partial blinks and asymmetric eyelid movements present problems that computer vision techniques relying on static appearance alone cannot solve reliably. Image flow analysis enables reliable and efficient interpretation of these ambiguous eye blink behaviors. In this paper we present a method for using image flow analysis to compute problematic eye blink parameters. The flow analysis produces the magnitude and direction of the eyelid movement. A deterministic finite state machine uses the eyelid movement data to compute blink parameters (e.g., blink count, blink rate, and other transitional statistics) for use in human computer interaction applications across a wide range of disciplines. We conducted extensive experiments employing this method on approximately 750K color video frames of five subjects",
"title": ""
},
{
"docid": "95a376ec68ac3c4bd6b0fd236dca5bcd",
"text": "Long-term suppression of postprandial glucose concentration is an important dietary strategy for the prevention and treatment of type 2 diabetes. Because previous reports have suggested that seaweed may exert anti-diabetic effects in animals, the effects of Wakame or Mekabu intake with 200 g white rice, 50 g boiled soybeans, 60 g potatoes, and 40 g broccoli on postprandial glucose, insulin and free fatty acid levels were investigated in healthy subjects. Plasma glucose levels at 30 min and glucose area under the curve (AUC) at 0-30 min after the Mekabu meal were significantly lower than that after the control meal. Plasma glucose and glucose AUC were not different between the Wakame and control meals. Postprandial serum insulin and its AUC and free fatty acid concentration were not different among the three meals. In addition, fullness, satisfaction, and wellness scores were not different among the three meals. Thus, consumption of 70 g Mekabu with a white rice-based breakfast reduces postprandial glucose concentration.",
"title": ""
},
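The study above reports glucose area under the curve (AUC) over 0-30 min. Purely as a worked example of how such an incremental AUC is commonly computed (trapezoidal rule on baseline-corrected values, which is one common convention assumed here, not necessarily the study's exact method), a short sketch with made-up numbers:

```python
import numpy as np

def incremental_auc(times_min, values):
    """Trapezoidal area under the curve above the baseline (t = 0) value."""
    t = np.asarray(times_min, dtype=float)
    g = np.asarray(values, dtype=float)
    g = g - g[0]                          # incremental AUC: area above the baseline
    widths = np.diff(t)
    heights = (g[:-1] + g[1:]) / 2.0
    return float(np.sum(widths * heights))

# Hypothetical 0-30 min postprandial glucose samples (mmol/L); not data from the study.
t = [0, 15, 30]
control_meal = [5.0, 6.8, 7.4]
mekabu_meal = [5.0, 6.2, 6.6]
print(incremental_auc(t, control_meal), incremental_auc(t, mekabu_meal))
```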
{
"docid": "de6cd32ceadfd5f4ddd0a20cc4ce36e1",
"text": "This paper presents a novel time-domain algorithm for detecting and attenuating the acoustic effect of wind noise in speech signals originating from mobile terminals. The detection part makes use of metrics that exploits the properties of the spectral envelop of wind noise as well as its non-periodic and non-harmonic nature. LPC analyses of various orders are carried out and the results used to distinguish between wind and speech frames and to estimate the magnitude and location of the wind noise ‘resonance’. The suppression part entails constructing a parameterized postfilter of an appropriate order having a ‘null’ where the wind noise ‘resonance’ is. Wind-only frames are used to estimate the wind noise energy, from which the emphasis parameters of the post-filter are adjusted to provide an appropriate attenuation. The proposed scheme may be combined with background-noise suppression algorithms, or with speech-formant-enhancing post-filters in the context of a speech codec.",
"title": ""
}
] |
scidocsrr
|
b1079c497e8765bc1dbab6256b95a62f
|
CERN: Confidence-Energy Recurrent Network for Group Activity Recognition
|
[
{
"docid": "05a788c8387e58e59e8345f343b4412a",
"text": "We deal with the problem of recognizing social roles played by people in an event. Social roles are governed by human interactions, and form a fundamental component of human event description. We focus on a weakly supervised setting, where we are provided different videos belonging to an event class, without training role labels. Since social roles are described by the interaction between people in an event, we propose a Conditional Random Field to model the inter-role interactions, along with person specific social descriptors. We develop tractable variational inference to simultaneously infer model weights, as well as role assignment to all people in the videos. We also present a novel YouTube social roles dataset with ground truth role annotations, and introduce annotations on a subset of videos from the TRECVID-MED11 [1] event kits for evaluation purposes. The performance of the model is compared against different baseline methods on these datasets.",
"title": ""
},
{
"docid": "5f77e21de8f68cba79fc85e8c0e7725e",
"text": "We introduce structured prediction energy networks (SPENs), a flexible framework for structured prediction. A deep architecture is used to define an energy function of candidate labels, and then predictions are produced by using backpropagation to iteratively optimize the energy with respect to the labels. This deep architecture captures dependencies between labels that would lead to intractable graphical models, and performs structure learning by automatically learning discriminative features of the structured output. One natural application of our technique is multi-label classification, which traditionally has required strict prior assumptions about the interactions between labels to ensure tractable learning and prediction problems. We are able to apply SPENs to multi-label problems with substantially larger label sets than previous applications of structured prediction, while modeling high-order interactions using minimal structural assumptions. Overall, deep learning provides remarkable tools for learning features of the inputs to a prediction problem, and this work extends these techniques to learning features of structured outputs. Our experiments provide impressive performance on a variety of benchmark multi-label classification tasks, demonstrate that our technique can be used to provide interpretable structure learning, and illuminate fundamental trade-offs between feed-forward and iterative structured prediction techniques.",
"title": ""
}
] |
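The SPEN abstract above predicts by optimizing an energy with respect to the labels rather than by a single forward pass. The toy sketch below performs that kind of inference with projected gradient descent on a hand-written bilinear energy; the energy form, dimensions and weights are illustrative stand-ins, not the deep energy network of the paper, which optimizes via backpropagation.

```python
import numpy as np

def spen_inference(x, W, A, steps=100, lr=0.1, lam=0.5):
    """Minimise a toy energy E(x, y) = -y.(Wx) + lam * y^T A y over relaxed labels y in [0,1]^L.

    Instead of a feed-forward pass that outputs y directly, the prediction is the
    (approximate) minimiser of the energy with respect to y, found here by projected
    gradient descent.
    """
    local_scores = W @ x                        # unary, input-dependent term
    y = np.full(W.shape[0], 0.5)                # start from the uninformative labelling
    for _ in range(steps):
        grad = -local_scores + lam * (A + A.T) @ y
        y = np.clip(y - lr * grad, 0.0, 1.0)    # project back onto the box [0,1]^L
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(8)                      # input features
W = rng.standard_normal((4, 8))                 # 4 labels
A = rng.standard_normal((4, 4)) * 0.1           # pairwise label-interaction weights
print(np.round(spen_inference(x, W, A), 2))
```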
[
{
"docid": "acd93c6b041a975dcf52c7bafaf05b16",
"text": "Patients with carcinoma of the tongue including the base of the tongue who underwent total glossectomy in a period of just over ten years since January 1979 have been reviewed. Total glossectomy may be indicated as salvage surgery or as a primary procedure. The larynx may be preserved or may have to be sacrificed depending upon the site of the lesion. When the larynx is preserved the use of laryngeal suspension facilitates early rehabilitation and preserves the quality of life to a large extent. Cricopharyngeal myotomy seems unnecessary.",
"title": ""
},
{
"docid": "62d9add3a14100d57fc9d1c1342029e3",
"text": "A multidimensional access method offering significant performance increases by intelligently partitioning the query space is applied to relational database management systems (RDBMS). We introduce a formal model for multidimensional partitioned relations and discuss several typical query patterns. The model identifies the significance of multidimensional range queries and sort operations. The discussion of current access methods gives rise to the need for a multidimensional partitioning of relations. A detailed analysis of space partitioning focussing especially on Z-ordering illustrates the principle benefits of multidimensional indexes. After describing the UB-Tree and its standard algorithms for insertion, deletion, point queries, and range queries, we introduce the spiral algorithm for nearest neighbor queries with UB-Trees and the Tetris algorithm for efficient access to a table in arbitrary sort order. We then describe the complexity of the involved algorithms and give solutions to selected algorithmic problems for a prototype implementation of UB-Trees on top of several RDBMSs. A cost model for sort operations with and without range restrictions is used both for analyzing our algorithms and for comparing UB-Trees with state-of-the-art query processing. Performance comparisons with traditional access methods practically confirm the theoretically expected superiority of UB-Trees and our algorithms over traditional access methods: Query processing in RDBMS is accelerated by several orders of magnitude, while the resource requirements in main memory space and disk space are substantially reduced. Benchmarks on some queries of the TPC-D benchmark as well as the data warehousing scenario of a fruit juice company illustrate the potential impact of our work on relational algebra, SQL, and commercial applications. The results of this thesis were developed by the author managing the MISTRAL project, a joint research and development project with SAP AG (Germany), Teijin Systems Technology Ltd. (Japan), NEC (Japan), Hitachi (Japan), Gesellschaft für Konsumforschung (Germany), and TransAction Software GmbH (Germany).",
"title": ""
},
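The UB-Tree discussion above hinges on Z-ordering (bit interleaving) to linearize a multidimensional space. A minimal sketch of the Morton address computation, assuming unsigned integer coordinates with a fixed, hypothetical bit width:

```python
def z_order(coords, bits=8):
    """Interleave the coordinate bits to obtain the Z-order (Morton) address.

    Points that are close in the multidimensional space tend to be close on the
    resulting one-dimensional Z-curve, which is what a UB-Tree exploits to map
    multidimensional range queries onto ranges of a one-dimensional index.
    """
    code = 0
    for bit in range(bits - 1, -1, -1):      # most significant bit first
        for value in coords:
            code = (code << 1) | ((value >> bit) & 1)
    return code

# Two nearby points get nearby addresses; a distant point generally does not.
print(z_order((3, 5)), z_order((3, 6)), z_order((200, 17)))
```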
{
"docid": "7f5b31d805d4519688bcd9b8581f0f3a",
"text": "Special features such as ridges, valleys and silhouettes, of a polygonal scene are usually displayed by explicitly identifying and then rendering `edges' for the corresponding geometry. The candidate edges are identified using the connectivity information, which requires preprocessing of the data. We present a non-obvious but surprisingly simple to implement technique to render such features without connectivity information or preprocessing. At the hardware level, based only on the vertices of a given flat polygon, we introduce new polygons, with appropriate color, shape and orientation, so that they eventually appear as special features.",
"title": ""
},
{
"docid": "05c82f9599b431baa584dd1e6d7dfc3e",
"text": "It is a common conception that CS1 is a very difficult course and that failure rates are high. However, until now there has only been anecdotal evidence for this claim. This article reports on a survey among institutions around the world regarding failure rates in introductory programming courses. The article describes the design of the survey and the results. The number of institutions answering the call for data was unfortunately rather low, so it is difficult to make firm conclusions. It is our hope that this article can be the starting point for a systematic collection of data in order to find solid proof of the actual failure and pass rates of CS1.",
"title": ""
},
{
"docid": "337b03633afacc96b443880ad996f013",
"text": "Mobile security becomes a hot topic recently, especially in mobile payment and privacy data fields. Traditional solution can't keep a good balance between convenience and security. Against this background, a dual OS security solution named Trusted Execution Environment (TEE) is proposed and implemented by many institutions and companies. However, it raised TEE fragmentation and control problem. Addressing this issue, a mobile security infrastructure named Trusted Execution Environment Integration (TEEI) is presented to integrate multiple different TEEs. By using Trusted Virtual Machine (TVM) tech-nology, TEEI allows multiple TEEs running on one secure world on one mobile device at the same time and isolates them safely. Furthermore, a Virtual Network protocol is proposed to enable communication and cooperation among TEEs which includes TEE on TVM and TEE on SE. At last, a SOA-like Internal Trusted Service (ITS) framework is given to facilitate the development and maintenance of TEEs.",
"title": ""
},
{
"docid": "f3ec87229acd0ec98c044ad42fd9fec1",
"text": "Increasingly, Internet users trade privacy for service. Facebook, Google, and others mine personal information to target advertising. This paper presents a preliminary and partial answer to the general question \"Can users retain their privacy while still benefiting from these web services?\". We propose NOYB, a novel approach that provides privacy while preserving some of the functionality provided by online services. We apply our approach to the Facebook online social networking website. Through a proof-of-concept implementation we demonstrate that NOYB is practical and incrementally deployable, requires no changes to or cooperation from an existing online service, and indeed can be non-trivial for the online service to detect.",
"title": ""
},
{
"docid": "1e59d0a96b5b652a9a1f9bec77aac29e",
"text": "BACKGROUND\n2015 was the target year for malaria goals set by the World Health Assembly and other international institutions to reduce malaria incidence and mortality. A review of progress indicates that malaria programme financing and coverage have been transformed since the beginning of the millennium, and have contributed to substantial reductions in the burden of disease.\n\n\nFINDINGS\nInvestments in malaria programmes increased by more than 2.5 times between 2005 and 2014 from US$ 960 million to US$ 2.5 billion, allowing an expansion in malaria prevention, diagnostic testing and treatment programmes. In 2015 more than half of the population of sub-Saharan Africa slept under insecticide-treated mosquito nets, compared to just 2 % in 2000. Increased availability of rapid diagnostic tests and antimalarial medicines has allowed many more people to access timely and appropriate treatment. Malaria incidence rates have decreased by 37 % globally and mortality rates by 60 % since 2000. It is estimated that 70 % of the reductions in numbers of cases in sub-Saharan Africa can be attributed to malaria interventions.\n\n\nCONCLUSIONS\nReductions in malaria incidence and mortality rates have been made in every WHO region and almost every country. However, decreases in malaria case incidence and mortality rates were slowest in countries that had the largest numbers of malaria cases and deaths in 2000; reductions in incidence need to be greatly accelerated in these countries to achieve future malaria targets. Progress is made challenging because malaria is concentrated in countries and areas with the least resourced health systems and the least ability to pay for system improvements. Malaria interventions are nevertheless highly cost-effective and have not only led to significant reductions in the incidence of the disease but are estimated to have saved about US$ 900 million in malaria case management costs to public providers in sub-Saharan Africa between 2000 and 2014. Investments in malaria programmes can not only reduce malaria morbidity and mortality, thereby contributing to the health targets of the Sustainable Development Goals, but they can also transform the well-being and livelihood of some of the poorest communities across the globe.",
"title": ""
},
{
"docid": "bffd767503e0ab9627fc8637ca3b2efb",
"text": "Automatically searching for optimal hyperparameter configurations is of crucial importance for applying deep learning algorithms in practice. Recently, Bayesian optimization has been proposed for optimizing hyperparameters of various machine learning algorithms. Those methods adopt probabilistic surrogate models like Gaussian processes to approximate and minimize the validation error function of hyperparameter values. However, probabilistic surrogates require accurate estimates of sufficient statistics (e.g., covariance) of the error distribution and thus need many function evaluations with a sizeable number of hyperparameters. This makes them inefficient for optimizing hyperparameters of deep learning algorithms, which are highly expensive to evaluate. In this work, we propose a new deterministic and efficient hyperparameter optimization method that employs radial basis functions as error surrogates. The proposed mixed integer algorithm, called HORD, searches the surrogate for the most promising hyperparameter values through dynamic coordinate search and requires many fewer function evaluations. HORD does well in low dimensions but it is exceptionally better in higher dimensions. Extensive evaluations on MNIST and CIFAR-10 for four deep neural networks demonstrate HORD significantly outperforms the well-established Bayesian optimization methods such as GP, SMAC and TPE. For instance, on average, HORD is more than 6 times faster than GP-EI in obtaining the best configuration of 19 hyperparameters.",
"title": ""
},
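The HORD abstract above replaces probabilistic surrogates with radial basis functions. The sketch below fits a Gaussian RBF interpolant to evaluated configurations and picks new candidates that minimize it; the candidate-generation step is a simplified random-perturbation stand-in for HORD's dynamic coordinate search, and the toy objective, kernel width and regularization are assumptions.

```python
import numpy as np

def fit_rbf(X, y, eps=1.0):
    """Fit a Gaussian RBF interpolant through the evaluated configurations."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * d) ** 2)
    w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)
    return lambda q: np.exp(-(eps * np.linalg.norm(X - q, axis=-1)) ** 2) @ w

rng = np.random.default_rng(0)
objective = lambda h: (h[0] - 0.3) ** 2 + (h[1] - 0.7) ** 2   # stand-in for validation error

# Evaluate a few initial hyperparameter configurations (expensive in reality).
X = rng.uniform(0, 1, size=(6, 2))
y = np.array([objective(h) for h in X])

for _ in range(20):
    surrogate = fit_rbf(X, y)
    # Candidate generation: random perturbations around the incumbent
    # (a simplified stand-in for HORD's dynamic coordinate search).
    best = X[np.argmin(y)]
    candidates = np.clip(best + 0.1 * rng.standard_normal((100, 2)), 0, 1)
    pick = candidates[np.argmin([surrogate(c) for c in candidates])]
    X = np.vstack([X, pick])
    y = np.append(y, objective(pick))

print(X[np.argmin(y)], y.min())
```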
{
"docid": "c93c0966ef744722d58bbc9170e9a8ab",
"text": "Past research has generated mixed support among social scientists for the utility of social norms in accounting for human behavior. We argue that norms do have a substantial impact on human action; however, the impact can only be properly recognized when researchers (a) separate 2 types of norms that at times act antagonistically in a situation—injunctive norms (what most others approve or disapprove) and descriptive norms (what most others do)—and (b) focus Ss' attention principally on the type of norm being studied. In 5 natural settings, focusing Ss on either the descriptive norms or the injunctive norms regarding littering caused the Ss* littering decisions to change only in accord with the dictates of the then more salient type of norm.",
"title": ""
},
{
"docid": "75b075bb5f125031d30361f07dbafb65",
"text": "Real world prediction problems often involve the simultaneous prediction of multiple target variables using the same set of predictive variables. When the target variables are binary, the prediction task is called multi-label classification while when the target variables are realvalued the task is called multi-target regression. Although multi-target regression attracted the attention of the research community prior to multi-label classification, the recent advances in this field motivate a study of whether newer state-of-the-art algorithms developed for multilabel classification are applicable and equally successful in the domain of multi-target regression. In this paper we introduce two new multitarget regression algorithms: multi-target stacking (MTS) and ensemble of regressor chains (ERC), inspired by two popular multi-label classification approaches that are based on a single-target decomposition of the multi-target problem and the idea of treating the other prediction targets as additional input variables that augment the input space. Furthermore, we detect an important shortcoming on both methods related to the methodology used to create the additional input variables and develop modified versions of the algorithms (MTSC and ERCC) to tackle it. All methods are empirically evaluated on 12 real-world multi-target regression datasets, 8 of which are first introduced in this paper and are made publicly available for future benchmarks. The experimental results show that ERCC performs significantly better than both a strong baseline that learns a single model for each target using bagging of regression trees and the state-of-the-art multi-objective random forest approach. Also, the proposed modification results in significant performance gains for both MTS and ERC.",
"title": ""
},
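The multi-target abstract above describes stacking with target predictions used as extra inputs. Below is a rough sketch of that idea in scikit-learn, using out-of-fold stage-1 predictions as meta-features (in the spirit of the corrected MTSC variant); the Ridge base learner, fold count and synthetic data are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def multi_target_stacking(X, Y, base=Ridge):
    """Two-stage multi-target stacking.

    Stage 1 learns one single-target model per output; stage 2 re-learns each target
    with the inputs augmented by the (cross-validated) stage-1 predictions of all
    targets, so that inter-target dependencies can be exploited.
    """
    n_targets = Y.shape[1]
    stage1 = [base().fit(X, Y[:, j]) for j in range(n_targets)]
    # Out-of-fold predictions avoid leaking the training targets into stage 2.
    meta = np.column_stack([
        cross_val_predict(base(), X, Y[:, j], cv=5) for j in range(n_targets)
    ])
    X_aug = np.hstack([X, meta])
    stage2 = [base().fit(X_aug, Y[:, j]) for j in range(n_targets)]

    def predict(X_new):
        first = np.column_stack([m.predict(X_new) for m in stage1])
        return np.column_stack([m.predict(np.hstack([X_new, first])) for m in stage2])

    return predict

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
Y = np.column_stack([X @ rng.standard_normal(5), X @ rng.standard_normal(5)])
predict = multi_target_stacking(X, Y)
print(predict(X[:3]).shape)   # (3, 2)
```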
{
"docid": "ae3dda04efd601d8be361c6a32ec7bcc",
"text": "Many large-scale machine learning (ML) applications use iterative algorithms to converge on parameter values that make the chosen model fit the input data. Often, this approach results in the same sequence of accesses to parameters repeating each iteration. This paper shows that these repeating patterns can and should be exploited to improve the efficiency of the parallel and distributed ML applications that will be a mainstay in cloud computing environments. Focusing on the increasingly popular \"parameter server\" approach to sharing model parameters among worker threads, we describe and demonstrate how the repeating patterns can be exploited. Examples include replacing dynamic cache and server structures with static pre-serialized structures, informing prefetch and partitioning decisions, and determining which data should be cached at each thread to avoid both contention and slow accesses to memory banks attached to other sockets. Experiments show that such exploitation reduces per-iteration time by 33--98%, for three real ML workloads, and that these improvements are robust to variation in the patterns over time.",
"title": ""
},
{
"docid": "08260ba76f242725b8a08cbd8e4ec507",
"text": "Vocal singing (singing with lyrics) shares features common to music and language but it is not clear to what extent they use the same brain systems, particularly at the higher cortical level, and how this varies with expertise. Twenty-six participants of varying singing ability performed two functional imaging tasks. The first examined covert generative language using orthographic lexical retrieval while the second required covert vocal singing of a well-known song. The neural networks subserving covert vocal singing and language were found to be proximally located, and their extent of cortical overlap varied with singing expertise. Nonexpert singers showed greater engagement of their language network during vocal singing, likely accounting for their less tuneful performance. In contrast, expert singers showed a more unilateral pattern of activation associated with reduced engagement of the right frontal lobe. The findings indicate that singing expertise promotes independence from the language network with decoupling producing more tuneful performance. This means that the age-old singing practice of 'finding your singing voice' may be neurologically mediated by changing how strongly singing is coupled to the language system.",
"title": ""
},
{
"docid": "63fef6099108f7990da0a7687e422e14",
"text": "The IWSLT 2017 evaluation campaign has organised three tasks. The Multilingual task, which is about training machine translation systems handling many-to-many language directions, including so-called zero-shot directions. The Dialogue task, which calls for the integration of context information in machine translation, in order to resolve anaphoric references that typically occur in human-human dialogue turns. And, finally, the Lecture task, which offers the challenge of automatically transcribing and translating real-life university lectures. Following the tradition of these reports, we will described all tasks in detail and present the results of all runs submitted by their participants.",
"title": ""
},
{
"docid": "20d754528009ebce458eaa748312b2fe",
"text": "This poster provides a comparative study between Inverse Reinforcement Learning (IRL) and Apprenticeship Learning (AL). IRL and AL are two frameworks, using Markov Decision Processes (MDP), which are used for the imitation learning problem where an agent tries to learn from demonstrations of an expert. In the AL framework, the agent tries to learn the expert policy whereas in the IRL framework, the agent tries to learn a reward which can explain the behavior of the expert. This reward is then optimized to imitate the expert. One can wonder if it is worth estimating such a reward, or if estimating a policy is sufficient. This quite natural question has not really been addressed in the literature right now. We provide partial answers, both from a theoretical and empirical point of view.",
"title": ""
},
{
"docid": "f3a4f5bd47e978d3c74aa5dbfe93f9f9",
"text": "We study the problem of analyzing tweets with Universal Dependencies (UD; Nivre et al., 2016). We extend the UD guidelines to cover special constructions in tweets that affect tokenization, part-ofspeech tagging, and labeled dependencies. Using the extended guidelines, we create a new tweet treebank for English (TWEEBANK V2) that is four times larger than the (unlabeled) TWEEBANK V1 introduced by Kong et al. (2014). We characterize the disagreements between our annotators and show that it is challenging to deliver consistent annotation due to ambiguity in understanding and explaining tweets. Nonetheless, using the new treebank, we build a pipeline system to parse raw tweets into UD. To overcome annotation noise without sacrificing computational efficiency, we propose a new method to distill an ensemble of 20 transition-based parsers into a single one. Our parser achieves an improvement of 2.2 in LAS over the un-ensembled baseline and outperforms parsers that are state-ofthe-art on other treebanks in both accuracy and speed.",
"title": ""
},
{
"docid": "7d05958787d0f7a510aab1109c97b502",
"text": "The purpose of this review is to gain more insight in the neuropathology of pathological gambling and problem gambling, and to discuss challenges in this research area. Results from the reviewed PG studies show that PG is more than just an impulse control disorder. PG seems to fit very well with recent theoretical models of addiction, which stress the involvement of the ventral tegmental-orbito frontal cortex. Differentiating types of PG on game preferences (slot machines vs. casino games) seems to be useful because different PG groups show divergent results, suggesting different neurobiological pathways to PG. A framework for future studies is suggested, indicating the need for hypothesis driven pharmacological and functional imaging studies in PG and integration of knowledge from different research areas to further elucidate the neurobiological underpinnings of this disorder. Cognitive and neuroimaging findings in pathological gambling",
"title": ""
},
{
"docid": "981da4eddfc1c9fbbceef437f5f43439",
"text": "A significant number of schizophrenic patients show patterns of smooth pursuit eye-tracking patterns that differ strikingly from the generally smooth eye-tracking seen in normals and in nonschizophrenic patients. These deviations are probably referable not only to motivational or attentional factors, but also to oculomotor involvement that may have a critical relevance for perceptual dysfunction in schizophrenia.",
"title": ""
},
{
"docid": "5680f69d9f93c2def5f3a0cb5854b1d4",
"text": "Heart rate (HR) monitoring is necessary for daily healthcare. Wrist-type photoplethsmography (PPG) is a convenient and non-invasive technique for HR monitoring. However, motion artifacts (MA) caused by subjects' movements can extremely interfere the results of HR monitoring. In this paper, we propose a high accuracy method using motion decision, singular spectrum analysis (SSA) and spectral peak searching for daily HR estimation. The proposed approach was evaluated on 8 subjects under a series of different motion states. Compared with electrocardiogram (ECG) recorded simultaneously, the experimental results indicated that the averaged absolute estimation error was 2.33 beats per minute (BPM).",
"title": ""
},
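The PPG abstract above ends with a spectral peak search for the heart rate. The sketch below shows only that last stage on a synthetic signal, restricting the search to a plausible BPM band (and optionally around a previous estimate); the SSA de-noising and motion-decision stages are not reproduced, and the band limits, window length and sampling rate are assumptions.

```python
import numpy as np

def estimate_hr_bpm(ppg, fs, prev_bpm=None, search_bpm=15.0):
    """Estimate heart rate from a PPG window by spectral peak search.

    The peak is searched in a physiologically plausible band (40-180 BPM); if a
    previous estimate is available, the search is narrowed around it to reduce
    jumps caused by residual motion artifacts.
    """
    window = np.asarray(ppg, dtype=float)
    window = window - window.mean()
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs_bpm = np.fft.rfftfreq(len(window), d=1.0 / fs) * 60.0

    lo, hi = 40.0, 180.0
    if prev_bpm is not None:
        lo, hi = prev_bpm - search_bpm, prev_bpm + search_bpm
    band = (freqs_bpm >= lo) & (freqs_bpm <= hi)
    return freqs_bpm[band][np.argmax(spectrum[band])]

fs = 125.0                                         # assumed PPG sampling rate
t = np.arange(0, 8, 1.0 / fs)                      # 8-second analysis window
ppg = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(len(t))   # ~90 BPM
print(estimate_hr_bpm(ppg, fs))                     # close to 90
```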
{
"docid": "fcda8929585bc0e27e138070674dc455",
"text": "Also referred to as Gougerot-Carteaud syndrome, confluent and reticulated papillomatosis (CARP) is an acquired keratinization disorder of uncertain etiology. Clinically, it is typically characterized by symptomless, grayish-brown, scaly, flat papules that coalesce into larger patches with a reticular pattern at the edges. Sites most commonly affected include the anterior and/or posterior upper trunk region [1–3]. Although its clinical diagnosis is usually straightforward, the distinction from similar pigmentary dermatoses may sometimes be challenging, especially in case of lesions occurring in atypical locations [1–3]. In recent years, dermatoscopy has been shown to be useful in the clinical diagnosis of several “general” skin disorders, thus reducing the number of cases requiring biopsy [4–8]. The objective of the present study was to describe the dermatoscopic features of CARP in order to facilitate its noninvasive diagnosis. Eight individuals (3 women/5 men; mean age 29.2 years, range 18–51 years; mean disease duration 3 months, range 1–9 months) with CARP (diagnosed on the basis of histological findings and clinical criteria) [1] were included in the study. None of the patients had been using systemic or topical therapies for at least six weeks. In each patient, a handheld noncontact polarized dermatoscope (DermLite DL3 × 10; 3 Gen, San Juan Capistrano, CA, USA) equipped with a camera (Coolpix® 4500 Nikon Corporation, Melville, NY, USA) was used to take a dermatoscopic picture of a single target lesion (flat desquamative papule). All pictures were evaluated for the presence of specific morphological patterns by two of the authors (EE, GS). In all cases (100.0 %), we observed the same findings: fine whitish scaling as well as homogeneous, brownish, more or less defined, flat polygonal globules separated by whitish/ pale striae, thus creating a cobblestone pattern (Figure 1a, b). The shade of the flat globules was dark-brown (Figure 1a) in five (62.5 %) and light-brown (Figure 1b) in three (37.5 %) cases. To the best of our knowledge, there has only been one previous publication on dermatoscopy of CARP. In that particular study, findings included superficial white scales (likely due to parakeratosis and compact hyperkeratosis), brownish pigmentation with poorly defined borders (thought to correspond to hyperpigmentation of the basal layer), and a pattern of “sulci and gyri” (depressions and elevations, presumably as a result of papillomatosis) [9]. In the present study, we were able to confirm some of the aforementioned findings (white scaling and brownish pigmentation), however, the brownish areas in our patients consistently showed a cobblestone pattern (closely aggregated, squarish/polygonal, flat globules). This peculiar aspect could be due to the combination of basal hyperpigmentation, acanthosis, and papillomatosis, with relative sparing of the normal network of furrows of the skin surface. Accordingly, one might speculate that the different pattern of pigmentation found in the previous study might have resulted from the disruption of these physiological grooves due to more pronounced/irregular acanthosis/ papillomatosis. Remarkably, the detection of fine whitish scaling and brownish areas in a cobblestone or “sulci and gyri” pattern might be useful in distinguishing CARP from its differential diagnoses [10] (Table 1). These primarily include 1) tinea (pityriasis) versicolor, which is characterized by a pigmented network composed of brownish stripes and fine scales [11],",
"title": ""
},
{
"docid": "c9ff6e6c47b6362aaba5f827dd1b48f2",
"text": "IEC 62056 for upper-layer protocols and IEEE 802.15.4g for communication infrastructure are promising means of advanced metering infrastructure (AMI) in Japan. However, since the characteristics of a communication system based on these combined technologies have yet to be identified, this paper gives the communication failure rates and latency acquired by calculations. In addition, the calculation results suggest some adequate AMI configurations, and show its extensibility in consideration of the usage environment.",
"title": ""
}
] |
scidocsrr
|
bc021196c2f478256d62607529102dec
|
Micro-Expression Recognition Using Robust Principal Component Analysis and Local Spatiotemporal Directional Features
|
[
{
"docid": "1451c145b1ed5586755a2c89517a582f",
"text": "A robust automatic micro-expression recognition system would have broad applications in national safety, police interrogation, and clinical diagnosis. Developing such a system requires high quality databases with sufficient training samples which are currently not available. We reviewed the previously developed micro-expression databases and built an improved one (CASME II), with higher temporal resolution (200 fps) and spatial resolution (about 280×340 pixels on facial area). We elicited participants' facial expressions in a well-controlled laboratory environment and proper illumination (such as removing light flickering). Among nearly 3000 facial movements, 247 micro-expressions were selected for the database with action units (AUs) and emotions labeled. For baseline evaluation, LBP-TOP and SVM were employed respectively for feature extraction and classifier with the leave-one-subject-out cross-validation method. The best performance is 63.41% for 5-class classification.",
"title": ""
}
] |
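The CASME II baseline above uses LBP-TOP features with an SVM under leave-one-subject-out cross-validation. The sketch below illustrates the evaluation protocol only, with random placeholder descriptors and labels standing in for real LBP-TOP vectors; the kernel, C value and subject count are assumptions.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

def loso_accuracy(features, labels, subject_ids):
    """Leave-one-subject-out evaluation of an SVM classifier.

    `features` stands in for spatiotemporal descriptors (e.g. LBP-TOP vectors)
    extracted from each micro-expression clip; the descriptor itself is not
    computed here.
    """
    logo = LeaveOneGroupOut()
    scores = []
    for train_idx, test_idx in logo.split(features, labels, groups=subject_ids):
        clf = SVC(kernel="linear", C=1.0)
        clf.fit(features[train_idx], labels[train_idx])
        scores.append(clf.score(features[test_idx], labels[test_idx]))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
X = rng.standard_normal((247, 64))            # 247 samples, 64-d placeholder descriptors
y = rng.integers(0, 5, size=247)              # 5 emotion classes, random placeholders
subjects = rng.integers(0, 26, size=247)      # placeholder subject identity per clip
print(loso_accuracy(X, y, subjects))
```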
[
{
"docid": "c97ffa009af202f324a57b0a06ab1900",
"text": "Decades of research and more than 20 randomized controlled trials show that Virtual Reality exposure therapy (VRET) is effective in reducing fear and anxiety. Unfortunately, few providers or patients have had access to the costly and technical equipment previously required. Recent technological advances in the form of consumer Virtual Reality (VR) systems (e.g. Oculus Rift and Samsung Gear), however, now make widespread use of VRET in clinical settings and as self-help applications possible. In this literature review, we detail the current state of VR technology and discuss important therapeutic considerations in designing self-help and clinician-led VRETs, such as platform choice, exposure progression design, inhibitory learning strategies, stimuli tailoring, gamification, virtual social learning and more. We illustrate how these therapeutic components can be incorporated and utilized in VRET applications, taking full advantage of the unique capabilities of virtual environments, and showcase some of these features by describing the development of a consumer-ready, gamified self-help VRET application for low-cost commercially available VR hardware. We also raise and discuss challenges in the planning, development, evaluation, and dissemination of VRET applications, including the need for more high-quality research. We conclude by discussing how new technology (e.g. eye-tracking) can be incorporated into future VRETs and how widespread use of VRET self-help applications will enable collection of naturalistic \"Big Data\" that promises to inform learning theory and behavioral therapy in general.",
"title": ""
},
{
"docid": "2fc8918896f02d248597b5950fc33857",
"text": "This paper investigates the design and implementation of a finger-like robotic structure capable of reproducing human hand gestural movements performed by a multi-fingered, hand-like structure. In this work, we present a pneumatic circuit and a closed-loop controller for a finger-like soft pneumatic actuator. Experimental results demonstrate the performance of the pneumatic and control systems of the soft pneumatic actuator, and its ability to track human movement trajectories with affective content.",
"title": ""
},
{
"docid": "ca865ac35d84f0f3c4a4dec15f6c6916",
"text": "As information and communicatio n technologies become a critical component of firms’ infrastructure s and information establishe s itself as a key business resource as well asdriver,peoplestart torealisethat there is more than the functionalit y of the new information systemsthat is significant. Business or organisational transactions over new media require stability , one factor of which is information security. Information systems development practices have changed in line with the evolution of technology offerings as well as the nature of systems developed. Nevertheless , as this paper establishes , most contemporary development practices do not accommodate sufficientl y security concerns. Beyond the literature evidence, reports on empirical study results indicating that practitioners deal with security issuesbyapplyingconventional risk analysis practices after the system is developed. Addresses the lack of a defined discipline for security concerns integration in systems development by using field study results recording development practicesthatarecurrently inuseto illustratetheir deficiencies ,to point to required enhancements of practice and to propose a list of desired features that contemporary development practices should incorporate to address security concerns. This work has been supported in part by the Ministry of Development, Hellenic Secretariat for Research and Development, through programme YPER97. standards such as the Capability Maturity Model (CMM), could strengthen the initiatives of organisations for IS assurance and security. Quality certification is not directly linked to security aspects, but there are strong relating requirements for: duty separation, job descriptions and document control; and validity and availability of documentation and forms. Fillery and Chantler (1994) view this as a problematic situation and argue that lack of quality procedures and assurance in IT production is the heart of the problem and dealing with this issue is essential for embedding worthy security features in them. Information security problems in contemporary product/component-oriented development practices could be resolved in the context of quality assurance, since each single product could be validated and assured properly. The validation of products along with process assurance is the heart of the holistic proposal from Eloff and Von Solms (2000), which exploits this trend and combines both system components and system processes. The previous proposals address the importance of information security in the modern world and the need to be integrated in systems development, but still, they do not refer explicitly to the changes that need to be introduced in development practices, actors and scenarios. Furthermore, assurance of high-quality development cannot by itself ensure security, as even the perfect product could be subject of misuse. Thus, this paper sets off to address the following questions: 1 What do practitioners and developers do when they face requirements for secure systems development, how do they integrate security concerns to their development processes? And in particular: Do they, and if so how do they, implement security solutions early in the development processes? How do the implemented solutions get integrated to the overall organisational structure and everyday life of a particular organisational environment? 2 What perceptions of security do the involved IS stakeholders have? 
What implications do those perceptions have in the development of secure systems? Information security is a field combining technical aspects as well as social, cultural and legal/regulatory concerns. Most of the attempts to resolve security problems are focused largely upon the technological component, ignoring important contextual `̀ soft’’ parameters, enlarging in this way the gap between real world’s requirements for secure systems and the means to achieve it. Development approaches In the past the information systems’ boundaries within the organisation were quite clear and limited. As business process support tools their essence and structure were stable, confined first to automation of basic transaction processing, to explode then in a multiplicity of forms of management support tools. The basic tenets of systems development theory and practice were that systems could be produced in a specific and (in theory) standardised way, proceeding linearly through well defined stages, including a substantial programming effort, intersected by inspection and feedback milestones. The systems engineering paradigm is encapsulated in most methodologies available (Avison and Fitzgerald, 1993). Nevertheless, the `̀ profile’’ of systems development approaches has undergone fundamental changes along with the evolution of technology, as well as the nature of systems found in today’s enterprises. In essence, the traditional approach, based on the life cycle concept for a systems project, cannot capture the extensive use of commercially available application platforms as a basis for new systems development. Moreover, the vast variety of commercially available software products that can be combined to reach the required functionality for a particular system makes component-based development a realistic option. In all, systems development `̀ from scratch’’ is far less practiced compared to ten years ago. The information systems literature, in which the methodologies movement flourished in the 1980s and early 1990s, has not addressed sufficiently the new norms of practice. In this paper, we introduce a rudimentary classification of contemporary systems development practices along the well-known `̀ make or buy’’ divide. Most systems projects are now anchored on the `̀ buy’’ maxim; there we introduce two development approaches, namely single product based and componentbased development. On the `̀ make’’ side we have proprietary development. We argue that each of these three approaches introduces different challenges to developers regarding security concerns. When the IS department could not allocate the requested human resources, or its [ 184] T. Tryfonas, E. Kiountouzis and A. Poulymenakou Embedding security practices in contemporary information systems development approaches Information Management & Computer Security 9/4 [2001] 183±197 resources did not qualify in terms of experience and know-how for a specific development project, a ready-made system could have been purchased. At such times a system could be considered an accounting package and the computer in which this resided. Therefore the scope of a ready-made solution could vary from a single package to a comprehensive IS. In the case that more than one application was chosen and eventually operating, it was practically very difficult to integrate them so as to produce a unified and comprehensive organisational information management solution. 
The composition of the technical infrastructure was made from ready packages in whose operational philosophy should the organisation fit-in. This is a major contradiction to in-house development of a solution, which leads to systems tailored to a particular organisation’s needs and character. But contemporary development practices managed to soft-pedal this contradiction. Single-product based (or configuration) development The evolution of the technological infrastructure, the knowledge that evolved about systems development and results from the successful, or not, application of a number of ready-made systems, led to the creation and institution of basic parts of technology solutions (core systems), that, deployed with a proper configuration, could fit-in to any organisational environment (Fitzgerald, 1998). This practice, termed as single-product based development or configuration development, has led to the development of information technologies that implement a core functionality of an organisation and are customisable (can be properly configured) depending on the environment. Examples of such systems are the enterprise resource planning (ERP) systems such as the SAP, BAAN and PeopleSoft product suites. Single-product systems enable the standardisation of business processes and also facilitate the difficult task of communication of horizontal applications within a vertical sector (Vasarhelyi and Kogan, 1999). Component-based development Market requirements for successful and on-time delivered systems that can face the rapid changing contemporary business and social reality led to a second popular practice of developing systems, the component-based development. Development efforts are allocated to relatively independent sectors/ sub-systems of a manageable size that could possibly be implemented by different and topologically distributed teams. Monolithic systems are abandoned as they fail to meet the needs of modern business processes and to be delivered on time. By following this practice, one can schedule earlier the delivery of the critical system’s components and in general, all components can be arranged and scheduled to be delivered in terms of their priority (Clements, 1995). In addition, basic user requirements can be met from the beginning of the development effort, so that an organisation could take advantage of the system before this is fully implemented and deployed. This principle empowers rapid application development (RAD) practices that substitute monolithic application development with modularised, componentbased systems, the components of which have been properly evaluated by the system’s endusers and domain experts. There is a general impression that the conventional linear lifecycle of monolithic systems cannot contribute anymore to the development of successful systems and their on-time delivery (Howard et al., 1999). Proprietary development Organisations’ informational needs were defined in the past by the relatively small requirements for automation of their processes. The main objective was compute",
"title": ""
},
{
"docid": "a9dfddc3812be19de67fc4ffbc2cad77",
"text": "Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents’ policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent’s action, while keeping the other agents’ actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actorcritic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.",
"title": ""
},
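The COMA record above centers on a counterfactual baseline that marginalises out a single agent's action under its own policy while the other agents' actions stay fixed. Purely as a rough, non-authoritative sketch (not the paper's implementation; the function name, array shapes and toy numbers are assumptions for illustration), the core advantage computation could look like this in Python:

```python
import numpy as np

def counterfactual_advantage(q_alt, policy, taken_action):
    """Sketch of a COMA-style counterfactual advantage for one agent.

    q_alt        : (n_actions,) critic estimates Q(s, (u_-a, u_a')) for every
                   alternative action u_a' of this agent, other agents fixed.
    policy       : (n_actions,) this agent's current action probabilities.
    taken_action : index of the action the agent actually executed.
    """
    baseline = float(np.dot(policy, q_alt))    # E_{u_a' ~ pi}[Q(s, (u_-a, u_a'))]
    return q_alt[taken_action] - baseline      # advantage that scales the policy gradient

# toy usage with made-up numbers
q_alt = np.array([1.0, 2.0, 0.5])
pi = np.array([0.2, 0.5, 0.3])
print(counterfactual_advantage(q_alt, pi, taken_action=1))  # 2.0 - 1.35 = 0.65
```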
{
"docid": "edeb56280e9645133b8ffbf40bcd9287",
"text": "The design, architecture and VLSI implementation of an image compression algorithm for high-frame rate, multi-view wireless endoscopy is presented. By operating directly on Bayer color filter array image the algorithm achieves both high overall energy efficiency and low implementation cost. It uses two-dimensional discrete cosine transform to decorrelate image values in each $$4\\times 4$$ 4 × 4 block. Resulting coefficients are encoded by a new low-complexity yet efficient entropy encoder. An adaptive deblocking filter on the decoder side removes blocking effects and tiling artifacts on very flat image, which enhance the final image quality. The proposed compressor, including a 4 KB FIFO, a parallel to serial converter and a forward error correction encoder, is implemented in 180 nm CMOS process. It consumes 1.32 mW at 50 frames per second (fps) and only 0.68 mW at 25 fps at 3 MHz clock. Low silicon area 1.1 mm $$\\times$$ × 1.1 mm, high energy efficiency (27 $$\\upmu$$ μ J/frame) and throughput offer excellent scalability to handle image processing tasks in new, emerging, multi-view, robotic capsules.",
"title": ""
},
{
"docid": "2a509254ce4f91646645b3eb0b745d3d",
"text": "According to attention restoration theory, directed attention can become fatigued and then be restored by spending time in a restorative environment. This study examined the restorative effects of nature on children’s executive functioning. Sevento 8-year-olds (school aged, n = 34) and 4to 5-year-olds (preschool, n = 33) participated in two sessions in which they completed an activity to fatigue attention, then walked along urban streets (urban walk) in one session and in a park-like area (nature walk) in another session, and finally completed assessments of working memory, inhibitory control, and attention. Children responded faster on the attention task after a nature walk than an urban walk. School-aged children performed significantly better on the attention task than preschoolers following the nature walk, but not urban walk. Walk type did not affect inhibitory control or verbal working memory. However, preschoolers’ spatial working memory remained more stable following the nature walk than the urban walk.",
"title": ""
},
{
"docid": "ab66d7e267072432d1015e36260c9866",
"text": "Deep Neural Networks (DNNs) are the current state of the art for various tasks such as object detection, natural language processing and semantic segmentation. These networks are massively parallel, hierarchical models with each level of hierarchy performing millions of operations on a single input. The enormous amount of parallel computation makes these DNNs suitable for custom acceleration. Custom accelerators can provide real time inference of DNNs at low power thus enabling widespread embedded deployment. In this paper, we present Snowflake, a high efficiency, low power accelerator for DNNs. Snowflake was designed to achieve optimum occupancy at low bandwidths and it is agnostic to the network architecture. Snowflake was implemented on the Xilinx Zynq XC7Z045 APSoC and achieves a peak performance of 128 G-ops/s. Snowflake is able to maintain a throughput of 98 FPS on AlexNet while averaging 1.2 GB/s of memory bandwidth.",
"title": ""
},
{
"docid": "a4a5c6b94d5b377d13b521e3dbbf0d16",
"text": "We present a large-scale dataset, ReCoRD, for machine reading comprehension requiring commonsense reasoning. Experiments on this dataset demonstrate that the performance of state-of-the-art MRC systems fall far behind human performance. ReCoRD represents a challenge for future research to bridge the gap between human and machine commonsense reading comprehension. ReCoRD is available at http://nlp.jhu.edu/record.",
"title": ""
},
{
"docid": "5f31121bf6b8412a84f8aa46763c4d40",
"text": "A novel Koch-like fractal curve is proposed to transform ultra-wideband (UWB) bow-tie into so called Koch-like sided fractal bow-tie dipole. A small isosceles triangle is cut off from center of each side of the initial isosceles triangle, then the procedure iterates along the sides like Koch curve does, forming the Koch-like fractal bow-tie geometry. The fractal bow-tie of each iterative is investigated without feedline in free space for fractal trait unveiling first, followed by detailed expansion upon the four-iterated pragmatic fractal bow-tie dipole fed by 50-Ω coaxial SMA connector through coplanar stripline (CPS) and comparison with Sierpinski gasket. The fractal bow-tie dipole can operate in multiband with moderate gain (3.5-7 dBi) and high efficiency (60%-80%), which is corresponding to certain shape parameters, such as notch ratio α, notch angle φ, and base angles θ of the isosceles triangle. Compared with conventional bow-tie dipole and Sierpinski gasket with the same size, this fractal-like antenna has almost the same operating properties in low frequency and better radiation pattern in high frequency in multi-band operation, which makes it a better candidate for applications of PCS, WLAN, WiFi, WiMAX, and other communication systems.",
"title": ""
},
{
"docid": "775cf5c9e160d8975b1652d404c590e0",
"text": "PURPOSE OF REVIEW\nWe provide an overview of the neurological condition known as visual snow syndrome. Patients affected by this chronic disorder suffer with a pan-field visual disturbance described as tiny flickering dots, which resemble the static noise of an untuned television.\n\n\nRECENT FINDINGS\nThe term 'visual snow' has only appeared in the medical literature very recently. The clinical features of the syndrome have now been reasonably described and the pathophysiology has begun to be explored. This review focuses on what is currently known about visual snow.\n\n\nSUMMARY\nRecent evidence suggests visual snow is a complex neurological syndrome characterized by debilitating visual symptoms. It is becoming better understood as it is systematically studied. Perhaps the most important unmet need for the condition is a sufficient understanding of it to generate and test hypotheses about treatment.",
"title": ""
},
{
"docid": "5b50e84437dc27f5b38b53d8613ae2c7",
"text": "We present a practical vision-based robotic bin-picking sy stem that performs detection and 3D pose estimation of objects in an unstr ctu ed bin using a novel camera design, picks up parts from the bin, and p erforms error detection and pose correction while the part is in the gri pper. Two main innovations enable our system to achieve real-time robust a nd accurate operation. First, we use a multi-flash camera that extracts rob ust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliabl y detect objects and estimate their poses. FDCM improves the accuracy of cham fer atching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges , a 3D distance transform, and directional integral images. We empiricall y show that these speedups, combined with the use of bounds in the spatial and h ypothesis domains, give the algorithm sublinear computational compl exity. We also apply our FDCM method to other applications in the context of deformable and articulated shape matching. In addition to significantl y improving upon the accuracy of previous chamfer matching methods in all of t he evaluated applications, FDCM is up to two orders of magnitude faster th an the previous methods.",
"title": ""
},
{
"docid": "fb15647d528df8b8613376066d9f5e68",
"text": "This article described the feature extraction methods of crop disease based on computer image processing technology in detail. Based on color, texture and shape feature extraction method in three aspects features and their respective problems were introduced start from the perspective of lesion leaves. Application research of image feature extraction in the filed of crop disease was reviewed in recent years. The results were analyzed that about feature extraction methods, and then the application of image feature extraction techniques in the future detection of crop diseases in the field of intelligent was prospected.",
"title": ""
},
{
"docid": "7346ce53235490f0eaf1ad97c7c23006",
"text": "With the growth in sociality and interaction around online news media, news sites are increasingly becoming places for communities to discuss and address common issues spurred by news articles. The quality of online news comments is of importance to news organizations that want to provide a valuable exchange of community ideas and maintain credibility within the community. In this work we examine the complex interplay between the needs and desires of news commenters with the functioning of different journalistic approaches toward managing comment quality. Drawing primarily on newsroom interviews and reader surveys, we characterize the comment discourse of SacBee.com, discuss the relationship of comment quality to both the consumption and production of news information, and provide a description of both readers' and writers' motivations for usage of news comments. We also examine newsroom strategies for dealing with comment quality as well as explore tensions and opportunities for value-sensitive innovation within such online communities.",
"title": ""
},
{
"docid": "76d1509549ba64157911e6b723f6ebc5",
"text": "A single-stage soft-switching converter is proposed for universal line voltage applications. A boost type of active-clamp circuit is used to achieve zero-voltage switching operation of the power switches. A simple DC-link voltage feedback scheme is applied to the proposed converter. A resonant voltage-doubler rectifier helps the output diodes to achieve zero-current switching operation. The reverse-recovery losses of the output diodes can be eliminated without any additional components. The DC-link capacitor voltage can be reduced, providing reduced voltage stresses of switching devices. Furthermore, power conversion efficiency can be improved by the soft-switching operation of switching devices. The performance of the proposed converter is evaluated on a 160-W (50 V/3.2 A) experimental prototype. The proposed converter complies with International Electrotechnical Commission (IEC) 1000-3-2 Class-D requirements for the light-emitting diode power supply of large-sized liquid crystal displays, maintaining the DC-link capacitor voltage within 400 V under the universal line voltage (90-265 Vrms).",
"title": ""
},
{
"docid": "3663d877d157c8ba589e4d699afc460f",
"text": "Studies of search habits reveal that people engage in many search tasks involving collaboration with others, such as travel planning, organizing social events, or working on a homework assignment. However, current Web search tools are designed for a single user, working alone. We introduce SearchTogether, a prototype that enables groups of remote users to synchronously or asynchronously collaborate when searching the Web. We describe an example usage scenario, and discuss the ways SearchTogether facilitates collaboration by supporting awareness, division of labor, and persistence. We then discuss the findings of our evaluation of SearchTogether, analyzing which aspects of its design enabled successful collaboration among study participants.",
"title": ""
},
{
"docid": "e4c2fc7244642b5858950f7c549e381e",
"text": "In this paper, we propose the Broadcasting Convolutional Network (BCN) that extracts key object features from the global field of an entire input image and recognizes their relationship with local features. BCN is a simple network module that collects effective spatial features, embeds location information and broadcasts them to the entire feature maps. We further introduce the Multi-Relational Network (multiRN) that improves the existing Relation Network (RN) by utilizing the BCN module. In pixel-based relation reasoning problems, with the help of BCN, multiRN extends the concept of ‘pairwise relations’ in conventional RNs to ‘multiwise relations’ by relating each object with multiple objects at once. This yields in O(n) complexity for n objects, which is a vast computational gain from RNs that take O(n). Through experiments, multiRN has achieved a state-of-the-art performance on CLEVR dataset, which proves the usability of BCN on relation reasoning problems.",
"title": ""
},
{
"docid": "3e113df3164468bd67060822de9a647c",
"text": "BACKGROUND\nPrevious estimates of the prevalence of geriatric depression have varied. There are few large population-based studies; most of these focused on individuals younger than 80 years. No US studies have been published since the advent of the newer antidepressant agents.\n\n\nMETHODS\nIn 1995 through 1996, as part of a large population study, we examined the current and lifetime prevalence of depressive disorders in 4,559 nondemented individuals aged 65 to 100 years. This sample represented 90% of the elderly population of Cache County, Utah. Using a modified version of the Diagnostic Interview Schedule, we ascertained past and present DSM-IV major depression, dysthymia, and subclinical depressive disorders. Medication use was determined through a structured interview and a \"medicine chest inventory.\"\n\n\nRESULTS\nPoint prevalence of major depression was estimated at 4.4% in women and 2.7% in men (P= .003). Other depressive syndromes were surprisingly uncommon (combined point prevalence, 1.6%). Among subjects with current major depression, 35.7% were taking an antidepressant (mostly selective serotonin reuptake inhibitors) and 27.4% a sedative/hypnotic. The current prevalence of major depression did not change appreciably with age. Estimated lifetime prevalence of major depression was 20.4% in women and 9.6% in men (P<.001), decreasing with age.\n\n\nCONCLUSIONS\nThese estimates for prevalence of major depression are higher than those reported previously in North American studies. Treatment with antidepressants was more common than reported previously, but was still lacking in most individuals with major depression. The prevalence of subsyndromal depressive symptoms was low, possibly because of unusual characteristics of the population.",
"title": ""
},
{
"docid": "102a97a997d0fb7f2d013434a9468e38",
"text": "Avocado plant (Persea americana), a plant belonging to the family of Lauraceae and genus, persea bears fruit known as avocado pear or alligator pear that contains the avocado pear seed. Reported uses of avocado pear seed include use in the management of hypertension, diabetes, cancer and inflammation [7-9]. The fruit is known as ube oyibo (loosely translated to ‘foreign pear’) in Ojoto and neighboring Igbo speaking communities south east Nigeria [10]. Different parts of avocado pear were used in traditional medications for various purposes including as an antimicrobial [11,12]. That not withstanding, the avocado pear seeds are essentially discarded as agro-food wastes hence underutilized. Exploring the possible dietary and therapeutic potentials of especially underutilized agro-food wastes will in addition reduce the possible environmental waste burden [1315]. Thus, this study was warranted and aimed at assessing the proximate, functional, anti-nutrients and antimicrobial properties of avocado pear seed to provide basis for its possible dietary use and justification for its ethno-medicinal use. The objectives set to achieving the study aim as stated were by determining the proximate, functional, antinutrient and antimicrobial properties of avocado pear (Persea americana) seeds using standard methods as in the study design.",
"title": ""
},
{
"docid": "87e8b5b75b5e83ebc52579e8bbae04f0",
"text": "A differential CMOS Logic family that is well suited to automated logic minimization and placement and routing techniques, yet has comparable performance to conventional CMOS, will be described. A CMOS circuit using 10,880 NMOS differential pairs has been developed using this approach.",
"title": ""
},
{
"docid": "43233ce6805a50ed931ce319245e4f6b",
"text": "Currently the use of three-phase induction machines is widespread in industrial applications due to several methods available to control the speed and torque of the motor. Many applications require that the same torque be available at all revolutions up to the nominal value. In this paper two control methods are compared: scalar control and vector control. Scalar control is a relatively simple method. The purpose of the technique is to control the magnitude of the chosen control quantities. At the induction motor the technique is used as Volts/Hertz constant control. Vector control is a more complex control technique, the evolution of which was inevitable, too, since scalar control cannot be applied for controlling systems with dynamic behaviour. The vector control technique works with vector quantities, controlling the desired values by using space phasors which contain all the three phase quantities in one phasor. It is also known as field-oriented control because in the course of implementation the identification of the field flux of the motor is required. This paper reports on the changing possibilities of the revolution – torque characteristic curve, and demonstrates the results of the two control methods with simulations. The simulations and the applied equivalent circuit parameters are based on real measurements done with no load, with direct current and with locked-rotor.",
"title": ""
}
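The record above contrasts scalar (Volts/Hertz constant) control with vector control of an induction motor. Purely as an illustrative sketch of the scalar side (the rated values and the low-speed boost below are assumed example figures, not taken from the paper or its simulations), a constant V/f voltage command could be computed like this:

```python
def volts_per_hertz(f_cmd_hz, v_rated=400.0, f_rated_hz=50.0, v_boost=10.0):
    """Constant V/f (scalar) voltage command: stator voltage grows in proportion
    to the commanded frequency, with a small boost at low speed and a clamp at
    the rated voltage above rated frequency."""
    v = v_boost + (v_rated - v_boost) * (f_cmd_hz / f_rated_hz)
    return min(max(v, 0.0), v_rated)

# toy usage: command voltages for a few stator frequencies
for f in (5.0, 25.0, 50.0, 60.0):
    print(f, round(volts_per_hertz(f), 1))
```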
] |
scidocsrr
|
55e2dc25b7119ad55fec5cb1fee9e87f
|
Co-analysis of RAS Log and Job Log on Blue Gene/P
|
[
{
"docid": "f910996af5983cf121b7912080c927d6",
"text": "In large-scale networked computing systems, component failures become norms instead of exceptions. Failure prediction is a crucial technique for self-managing resource burdens. Failure events in coalition systems exhibit strong correlations in time and space domain. In this paper, we develop a spherical covariance model with an adjustable timescale parameter to quantify the temporal correlation and a stochastic model to describe spatial correlation. We further utilize the information of application allocation to discover more correlations among failure instances. We cluster failure events based on their correlations and predict their future occurrences. We implemented a failure prediction framework, called PREdictor of Failure Events Correlated Temporal-Spatially (hPREFECTs), which explores correlations among failures and forecasts the time-between-failure of future instances. We evaluate the performance of hPREFECTs in both offline prediction of failure by using the Los Alamos HPC traces and online prediction in an institute-wide clusters coalition environment. Experimental results show the system achieves more than 76% accuracy in offline prediction and more than 70% accuracy in online prediction during the time from May 2006 to April 2007.",
"title": ""
}
] |
[
{
"docid": "4ce6063786afa258d8ae982c7f17a8b1",
"text": "This paper proposes a hybrid phase-shift-controlled three-level (TL) and LLC dc-dc converter. The TL dc-dc converter and LLC dc-dc converter have their own transformers. Compared with conventional half-bridge TL dc-dc converters, the proposed one has no additional switch at the primary side of the transformer, where the TL converter shares the lagging switches with the LLC converter. At the secondary side of the transformers, the TL and LLC converters are connected by an active switch. With the aid of the LLC converter, the zero voltage switching (ZVS) of the lagging switches can be achieved easily even under light load conditions. Wide ZVS range for all the switches can be ensured. Both the circulating current at the primary side and the output filter inductance are reduced. Furthermore, the efficiency of the converter is improved dramatically. The features of the proposed converter are analyzed, and the design guidelines are given in the paper. Finally, the performance of the converter is verified by a 1-kW experimental prototype.",
"title": ""
},
{
"docid": "2274f3d3dc25bec4b86988615d421f10",
"text": "Sepsis is a dangerous condition that is a leading cause of patient mortality. Treating sepsis is highly challenging, because individual patients respond very differently to medical interventions and there is no universally agreed-upon treatment for sepsis. In this work, we explore the use of continuous state-space model-based reinforcement learning (RL) to discover high-quality treatment policies for sepsis patients. Our quantitative evaluation reveals that by blending the treatment strategy discovered with RL with what clinicians follow, we can obtain improved policies, potentially allowing for better medical treatment for sepsis.",
"title": ""
},
{
"docid": "688bacdee25152e1de6bcc5005b75d9a",
"text": "Data Mining provides powerful techniques for various fields including education. The research in the educational field is rapidly increasing due to the massive amount of students’ data which can be used to discover valuable pattern pertaining students’ learning behaviour. This paper proposes a framework for predicting students’ academic performance of first year bachelor students in Computer Science course. The data were collected from 8 year period intakes from July 2006/2007 until July 2013/2014 that contains the students’ demographics, previous academic records, and family background information. Decision Tree, Naïve Bayes, and Rule Based classification techniques are applied to the students’ data in order to produce the best students’ academic performance prediction model. The experiment result shows the Rule Based is a best model among the other techniques by receiving the highest accuracy value of 71.3%. The extracted knowledge from prediction model will be used to identify and profile the student to determine the students’ level of success in the first semester.",
"title": ""
},
{
"docid": "8c0f20061bd09b328748d256d5ece7cc",
"text": "Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models? In this paper, we introduce a fully-automated approach for learning extensive models for a wide range of variations (e.g. actions, interactions, attributes and beyond) within any concept. Our approach leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models. Our approach organizes the visual knowledge about a concept in a convenient and useful way, enabling a variety of applications across vision and NLP. Our online system has been queried by users to learn models for several interesting concepts including breakfast, Gandhi, beautiful, etc. To date, our system has models available for over 50, 000 variations within 150 concepts, and has annotated more than 10 million images with bounding boxes.",
"title": ""
},
{
"docid": "d7d66f89e5f5f2d6507e0939933b3a17",
"text": "The discarded clam shell waste, fossil and edible oil as biolubricant feedstocks create environmental impacts and food chain dilemma, thus this work aims to circumvent these issues by using activated saltwater clam shell waste (SCSW) as solid catalyst for conversion of Jatropha curcas oil as non-edible sources to ester biolubricant. The characterization of solid catalyst was done by Differential Thermal Analysis-Thermo Gravimetric Analysis (DTATGA), X-Ray Fluorescence (XRF), X-Ray Diffraction (XRD), Brunauer-Emmett-Teller (BET), Field Emission Scanning Electron Microscopy (FESEM) and Fourier Transformed Infrared Spectroscopy (FTIR) analysis. The calcined catalyst was used in the transesterification of Jatropha oil to methyl ester as the first step, and the second stage was involved the reaction of Jatropha methyl ester (JME) with trimethylolpropane (TMP) based on the various process parameters. The formated biolubricant was analyzed using the capillary column (DB-5HT) equipped Gas Chromatography (GC). The conversion results of Jatropha oil to ester biolubricant can be found nearly 96.66%, and the maximum distribution composition mainly contains 72.3% of triester (TE). Keywords—Conversion, ester biolubricant, Jatropha curcas oil, solid catalyst.",
"title": ""
},
{
"docid": "00e5c92435378e4fdcee5f9fa58271b5",
"text": "Because the position transducers commonly used (optical encoders and electromagnetic resolvers) do not inherently produce a true, instantaneous velocity measurement, some signal processing techniques are generally used to estimate the velocity at each sample instant. This estimated signal is then used as the velocity feedback signal for the velocity loop control. An analysis is presented of the limitations of such approaches, and a technique which optimally estimates the velocity at each sample instant is presented. The method is shown to offer a significant improvement in command-driven systems and to reduce the effect of quantized angular resolution which limits the ultimate performance of all digital servo drives. The noise reduction is especially relevant for AC servo drives due to the high current loop bandwidths required for their correct operation. The method demonstrates improved measurement performance over a classical DC tachometer.<<ETX>>",
"title": ""
},
{
"docid": "b61c9f69a2fffcf2c3753e51a3bbfa14",
"text": "..............................................................................................................ix 1 Interoperability .............................................................................................1 1.",
"title": ""
},
{
"docid": "0660dc780eda869aabc1f856ec3f193f",
"text": "This paper provides a study of the smart grid projects realised in Europe and presents their technological solutions with a focus on smart metering Low Voltage (LV) applications. Special attention is given to the telecommunications technologies used. For this purpose, we present the telecommunication technologies chosen by several European utilities for the accomplishment of their smart meter national roll-outs. Further on, a study is performed based on the European Smart Grid Projects, highlighting their technological options. The range of the projects analysed covers the ones including smart metering implementation as well as those in which smart metering applications play a significant role in the overall project success. The survey reveals that various topics are directly or indirectly linked to smart metering applications, like smart home/building, energy management, grid monitoring and integration of Renewable Energy Sources (RES). Therefore, the technological options that lie behind such projects are pointed out. For reasons of completeness, we also present the main characteristics of the telecommunication technologies that are found to be used in practice for the LV grid.",
"title": ""
},
{
"docid": "e6a97c3365e16d77642a84f0a80863e2",
"text": "The current statuses and future promises of the Internet of Things (IoT), Internet of Everything (IoE) and Internet of Nano-Things (IoNT) are extensively reviewed and a summarized survey is presented. The analysis clearly distinguishes between IoT and IoE, which are wrongly considered to be the same by many commentators. After evaluating the current trends of advancement in the fields of IoT, IoE and IoNT, this paper identifies the 21 most significant current and future challenges as well as scenarios for the possible future expansion of their applications. Despite possible negative aspects of these developments, there are grounds for general optimism about the coming technologies. Certainly, many tedious tasks can be taken over by IoT devices. However, the dangers of criminal and other nefarious activities, plus those of hardware and software errors, pose major challenges that are a priority for further research. Major specific priority issues for research are identified.",
"title": ""
},
{
"docid": "4a3f7e89874c76f62aa97ef6a114d574",
"text": "A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the Net Lib library.",
"title": ""
},
{
"docid": "30a6a3df784c2a8cc69a1bd75ad1998b",
"text": "Traditional stock market prediction approaches commonly utilize the historical price-related data of the stocks to forecast their future trends. As the Web information grows, recently some works try to explore financial news to improve the prediction. Effective indicators, e.g., the events related to the stocks and the people’s sentiments towards the market and stocks, have been proved to play important roles in the stocks’ volatility, and are extracted to feed into the prediction models for improving the prediction accuracy. However, a major limitation of previous methods is that the indicators are obtained from only a single source whose reliability might be low, or from several data sources but their interactions and correlations among the multi-sourced data are largely ignored. In this work, we extract the events from Web news and the users’ sentiments from social media, and investigate their joint impacts on the stock price movements via a coupled matrix and tensor factorization framework. Specifically, a tensor is firstly constructed to fuse heterogeneous data and capture the intrinsic ∗Corresponding author Email addresses: zhangx@bupt.edu.cn (Xi Zhang), 2011213120@bupt.edu.cn (Yunjia Zhang), szwang@nuaa.edu.cn (Senzhang Wang), yaoyuntao@bupt.edu.cn (Yuntao Yao), fangbx@bupt.edu.cn (Binxing Fang), psyu@uic.edu (Philip S. Yu) Preprint submitted to Journal of LTEX Templates September 2, 2018 ar X iv :1 80 1. 00 58 8v 1 [ cs .S I] 2 J an 2 01 8 relations among the events and the investors’ sentiments. Due to the sparsity of the tensor, two auxiliary matrices, the stock quantitative feature matrix and the stock correlation matrix, are constructed and incorporated to assist the tensor decomposition. The intuition behind is that stocks that are highly correlated with each other tend to be affected by the same event. Thus, instead of conducting each stock prediction task separately and independently, we predict multiple correlated stocks simultaneously through their commonalities, which are enabled via sharing the collaboratively factorized low rank matrices between matrices and the tensor. Evaluations on the China A-share stock data and the HK stock data in the year 2015 demonstrate the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "741078742178d09f911ef9633befeb9b",
"text": "We introduce a novel kernel for comparing two text documents. The kernel is an inner product in the feature space consisting of all subsequences of length k. A subsequence is any ordered sequence of k characters occurring in the text though not necessarily contiguously. The subsequences are weighted by an exponentially decaying factor of their full length in the text, hence emphasising those occurrences which are close to contiguous. A direct computation of this feature vector would involve a prohibitive amount of computation even for modest values of k, since the dimension of the feature space grows exponentially with k. The paper describes how despite this fact the inner product can be efficiently evaluated by a dynamic programming technique. A preliminary experimental comparison of the performance of the kernel compared with a standard word feature space kernel [4] is made showing encouraging results.",
"title": ""
},
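The record above defines the subsequence kernel and notes that a dynamic programming technique makes its evaluation tractable. The sketch below implements only the plain recursive definition of K_k (fine for short strings), not the efficient dynamic programme the abstract refers to; the decay value and the example strings are arbitrary choices for illustration:

```python
from functools import lru_cache

def ssk(s, t, k, lam=0.5):
    """Naive string subsequence kernel K_k(s, t) with decay `lam`, following the
    standard recursive definition; exponentially decays non-contiguous matches."""
    if k < 1:
        raise ValueError("k must be >= 1")

    @lru_cache(maxsize=None)
    def k_prime(i, m, n):                 # K'_i over prefixes s[:m], t[:n]
        if i == 0:
            return 1.0
        if m < i or n < i:
            return 0.0
        x = s[m - 1]
        total = lam * k_prime(i, m - 1, n)
        for j in range(1, n + 1):
            if t[j - 1] == x:
                total += k_prime(i - 1, m - 1, j - 1) * lam ** (n - j + 2)
        return total

    def k_full(i, m, n):                  # K_i over prefixes s[:m], t[:n]
        if m < i or n < i:
            return 0.0
        x = s[m - 1]
        total = k_full(i, m - 1, n)
        for j in range(1, n + 1):
            if t[j - 1] == x:
                total += k_prime(i - 1, m - 1, j - 1) * lam ** 2
        return total

    return k_full(k, len(s), len(t))

# toy usage: shared length-2 subsequences of "cat" and "cart"
print(ssk("cat", "cart", k=2))   # lam**4 + lam**5 + lam**7 = 0.1015625 for lam=0.5
```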
{
"docid": "caf866341ad9f74b1ac1dc8572f6e95c",
"text": "One important but often overlooked aspect of human contexts of ubiquitous computing environment is human’s emotional status. And, there are no realistic and robust humancentric contents services so far, because there are few considers about combining context awareness computing with wearable computing for improving suitability of contents to each user’s needs. In this paper, we discuss combining context awareness computing with wearable computing to develop more effective personalized services. And we propose new algorithms to develop efficiently personalized emotion based content service system.",
"title": ""
},
{
"docid": "553a86035f5013595ef61c4c19997d7c",
"text": "This paper proposes a novel self-oscillating, boost-derived (SOBD) dc-dc converter with load regulation. This proposed topology utilizes saturable cores (SCs) to offer self-oscillating and output regulation capabilities. Conventionally, the self-oscillating dc transformer (SODT) type of scheme can be implemented in a very cost-effective manner. The ideal dc transformer provides both input and output currents as pure, ripple-free dc quantities. However, the structure of an SODT-type converter will not provide regulation, and its oscillating frequency will change in accordance with the load. The proposed converter with SCs will allow output-voltage regulation to be accomplished by varying only the control current between the transformers, as occurs in a pulse-width modulation (PWM) converter. A control network that combines PWM schemes with a regenerative function is used for this converter. The optimum duty cycle is implemented to achieve low levels of input- and output-current ripples, which are characteristic of an ideal dc transformer. The oscillating frequency will spontaneously be kept near-constant, regardless of the load, without adding any auxiliary or compensation circuits. The typical voltage waveforms of the transistors are found to be close to quasisquare. The switching surges are well suppressed, and the voltage stress of the component is well clamped. The turn-on/turn-off of the switch is zero-voltage switching (ZVS), and its resonant transition can occur over a wide range of load current levels. A prototype circuit of an SOBD converter shows 86% efficiency at 48-V input, with 12-V, 100-W output, and presents an operating frequency of 100 kHz.",
"title": ""
},
{
"docid": "563183ff51d1a218bf54db6400e25365",
"text": "In this paper wireless communication using white, high brightness LEDs (light emitting diodes) is considered. In particular, the use of OFDM (orthogonal frequency division multiplexing) for intensity modulation is investigated. The high peak-to-average ratio (PAR) in OFDM is usually considered a disadvantage in radio frequency transmission systems due to non-linearities of the power amplifier. It is demonstrated theoretically and by means of an experimental system that the high PAR in OFDM can be exploited constructively in visible light communication to intensity modulate LEDs. It is shown that the theoretical and the experimental results match very closely, and that it is possible to cover a distance of up to one meter using a single LED",
"title": ""
},
{
"docid": "c3bfe9b5231c5f9b4499ad38b6e8eac6",
"text": "As the World Wide Web has increasingly become a necessity in daily life, the acute need to safeguard user privacy and security has become manifestly apparent. After users realized that browser cookies could allow websites to track their actions without permission or notification, many have chosen to reject cookies in order to protect their privacy. However, more recently, methods of fingerprinting a web browser have become an increasingly common practice. In this paper, we classify web browser fingerprinting into four main categories: (1) Browser Specific, (2) Canvas, (3) JavaScript Engine, and (4) Cross-browser. We then summarize the privacy and security implications, discuss commercial fingerprinting techniques, and finally present some detection and prevention methods.",
"title": ""
},
{
"docid": "ff6b4840787027df75873f38fbb311b4",
"text": "Electronic healthcare (eHealth) systems have replaced paper-based medical systems due to the attractive features such as universal accessibility, high accuracy, and low cost. As a major component of eHealth systems, mobile healthcare (mHealth) applies mobile devices, such as smartphones and tablets, to enable patient-to-physician and patient-to-patient communications for better healthcare and quality of life (QoL). Unfortunately, patients' concerns on potential leakage of personal health records (PHRs) is the biggest stumbling block. In current eHealth/mHealth networks, patients' medical records are usually associated with a set of attributes like existing symptoms and undergoing treatments based on the information collected from portable devices. To guarantee the authenticity of those attributes, PHRs should be verifiable. However, due to the linkability between identities and PHRs, existing mHealth systems fail to preserve patient identity privacy while providing medical services. To solve this problem, we propose a decentralized system that leverages users' verifiable attributes to authenticate each other while preserving attribute and identity privacy. Moreover, we design authentication strategies with progressive privacy requirements in different interactions among participating entities. Finally, we have thoroughly evaluated the security and computational overheads for our proposed schemes via extensive simulations and experiments.",
"title": ""
},
{
"docid": "8cbe0ff905a58e575f2d84e4e663a857",
"text": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, there is only barely a few working on the privacy and security implications of this technology. is survey paper aims to put in to light these risks, and to look into the latest security and privacy work on MR. Specically, we list and review the dierent protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as numerous related work from the larger area of mobile devices, wearables, and Internet-of-ings (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.",
"title": ""
}
] |
scidocsrr
|
0c0dbdd3593239ff7941c8219d15c1bd
|
The topology of dark networks
|
[
{
"docid": "8afd1ab45198e9960e6a047091a2def8",
"text": "We study the response of complex networks subject to attacks on vertices and edges. Several existing complex network models as well as real-world networks of scientific collaborations and Internet traffic are numerically investigated, and the network performance is quantitatively measured by the average inverse geodesic length and the size of the largest connected subgraph. For each case of attacks on vertices and edges, four different attacking strategies are used: removals by the descending order of the degree and the betweenness centrality, calculated for either the initial network or the current network during the removal procedure. It is found that the removals by the recalculated degrees and betweenness centralities are often more harmful than the attack strategies based on the initial network, suggesting that the network structure changes as important vertices or edges are removed. Furthermore, the correlation between the betweenness centrality and the degree in complex networks is studied.",
"title": ""
}
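The passage above compares attack strategies based on initial versus recalculated degree and betweenness centralities and tracks the size of the largest connected subgraph. As a small illustrative sketch only (the graph model, removal fraction and use of networkx are my assumptions, not the paper's setup), the "recalculated degree" attack could be simulated like this:

```python
import networkx as nx

def attack_by_recalculated_degree(G, fraction=0.05):
    """Repeatedly remove the current highest-degree vertex, recomputing degrees
    after every removal, and record the size of the largest connected subgraph."""
    G = G.copy()
    n_remove = int(fraction * G.number_of_nodes())
    giant_sizes = []
    for _ in range(n_remove):
        v = max(G.degree, key=lambda nd: nd[1])[0]   # highest-degree node right now
        G.remove_node(v)
        giant_sizes.append(max((len(c) for c in nx.connected_components(G)), default=0))
    return giant_sizes

# toy usage on a scale-free-like graph
G = nx.barabasi_albert_graph(200, 2, seed=1)
print(attack_by_recalculated_degree(G))
```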
] |
[
{
"docid": "0bfad59874eb7a52c123bb6cd7bc1c16",
"text": "A 12-year-old patient sustained avulsions of both permanent maxillary central incisors. Subsequently, both teeth developed replacement resorption. The left incisor was extracted alio loco. The right incisor was treated by decoronation (removal of crown and pulp, but preservation of the root substance). Comparison of both sites demonstrated complete preservation of the height and width of the alveolar bone at the decoronation site, whereas the tooth extraction site showed considerable bone loss. In addition, some vertical bone apposition was found on top of the decoronated root. Decoronation is a simple and safe surgical procedure for preservation of alveolar bone prior to implant placement. It must be considered as a treatment option for teeth affected by replacement resorption if tooth transplantation is not feasible.",
"title": ""
},
{
"docid": "1ecf01e0c9aec4159312406368ceeff0",
"text": "Image phylogeny is the problem of reconstructing the structure that represents the history of generation of semantically similar images (e.g., near-duplicate images). Typical image phylogeny approaches break the problem into two steps: (1) estimating the dissimilarity between each pair of images and (2) reconstructing the phylogeny structure. Given that the dissimilarity calculation directly impacts the phylogeny reconstruction, in this paper, we propose new approaches to the standard formulation of the dissimilarity measure employed in image phylogeny, aiming at improving the reconstruction of the tree structure that represents the generational relationships between semantically similar images. These new formulations exploit a different method of color adjustment, local gradients to estimate pixel differences and mutual information as a similarity measure. The results obtained with the proposed formulation remarkably outperform the existing counterparts in the literature, allowing a much better analysis of the kinship relationships in a set of images, allowing for more accurate deployment of phylogeny solutions to tackle traitor tracing, copyright enforcement and digital forensics problems.",
"title": ""
},
{
"docid": "22881dd1a1a17441b3a914117e134a28",
"text": "Remote sensing of the reflectance photoplethysmogram using a video camera typically positioned 1 m away from the patient's face is a promising method for monitoring the vital signs of patients without attaching any electrodes or sensors to them. Most of the papers in the literature on non-contact vital sign monitoring report results on human volunteers in controlled environments. We have been able to obtain estimates of heart rate and respiratory rate and preliminary results on changes in oxygen saturation from double-monitored patients undergoing haemodialysis in the Oxford Kidney Unit. To achieve this, we have devised a novel method of cancelling out aliased frequency components caused by artificial light flicker, using auto-regressive (AR) modelling and pole cancellation. Secondly, we have been able to construct accurate maps of the spatial distribution of heart rate and respiratory rate information from the coefficients of the AR model. In stable sections with minimal patient motion, the mean absolute error between the camera-derived estimate of heart rate and the reference value from a pulse oximeter is similar to the mean absolute error between two pulse oximeter measurements at different sites (finger and earlobe). The activities of daily living affect the respiratory rate, but the camera-derived estimates of this parameter are at least as accurate as those derived from a thoracic expansion sensor (chest belt). During a period of obstructive sleep apnoea, we tracked changes in oxygen saturation using the ratio of normalized reflectance changes in two colour channels (red and blue), but this required calibration against the reference data from a pulse oximeter.",
"title": ""
},
{
"docid": "da988486b0a3e82ce5f7fb8aa5467779",
"text": "The benefits of Domain Specific Modeling Languages (DSML), for modeling and design of cyber physical systems, have been acknowledged in previous years. In contrast to general purpose modeling languages, such as Unified Modeling Language, DSML facilitates the modeling of domain specific concepts. The objective of this work is to develop a simple graphical DSML for cyber physical systems, which allow the unified modeling of the structural and behavioral aspects of a system in a single model, and provide model transformation and design verification support in future. The proposed DSML was defined in terms of its abstract and concrete syntax. The applicability of the proposed DSML was demonstrated by its application in two case studies: Traffic Signal and Arbiter case studies. The results showed that the proposed DSML produce simple and unified models with possible model transformation and verification support.",
"title": ""
},
{
"docid": "6851e4355ab4825b0eb27ac76be2329f",
"text": "Segmentation of novel or dynamic objects in a scene, often referred to as “background subtraction” or “foreground segmentation”, is a critical early in step in most computer vision applications in domains such as surveillance and human-computer interaction. All previously described, real-time methods fail to handle properly one or more common phenomena, such as global illumination changes, shadows, inter-reflections, similarity of foreground color to background, and non-static backgrounds (e.g. active video displays or trees waving in the wind). The recent advent of hardware and software for real-time computation of depth imagery makes better approaches possible. We propose a method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color. This combination in itself is novel, but we further improve it by introducing the ideas of 1) modulating the background model learning rate based on scene activity, and 2) making colorbased segmentation criteria dependent on depth observations. Our experiments show that the method possesses much greater robustness to problematic phenomena than the prior state-of-the-art, without sacrificing real-time performance, making it well-suited for a wide range of practical applications in video event detection and recognition.",
"title": ""
},
{
"docid": "a2223d57a866b0a0ef138e52fb515b84",
"text": "This paper is concerned with paraphrase detection, i.e., identifying sentences that are semantically identical. The ability to detect similar sentences written in natural language is crucial for several applications, such as text mining, text summarization, plagiarism detection, authorship authentication and question answering. Recognizing this importance, we study in particular how to address the challenges with detecting paraphrases in user generated short texts, such as Twitter, which often contain language irregularity and noise, and do not necessarily contain as much semantic information as longer clean texts. We propose a novel deep neural network-based approach that relies on coarse-grained sentence modelling using a convolutional neural network (CNN) and a recurrent neural network (RNN) model, combined with a specific fine-grained word-level similarity matching model. More specifically, we develop a new architecture, called DeepParaphrase, which enables to create an informative semantic representation of each sentence by (1) using CNN to extract the local region information in form of important n-grams from the sentence, and (2) applying RNN to capture the long-term dependency information. In addition, we perform a comparative study on stateof-the-art approaches within paraphrase detection. An important insight from this study is that existing paraphrase approaches perform well when applied on clean texts, but they do not necessarily deliver good performance against noisy texts, and vice versa. In contrast, our evaluation has shown that the proposed DeepParaphrase-based approach achieves good results in both types of texts, thus making it more robust and generic than the existing approaches.",
"title": ""
},
{
"docid": "6a541e92e92385c27ceec1e55a50b46e",
"text": "BACKGROUND\nWe retrospectively studied the outcome of Pavlik harness treatment in late-diagnosed hip dislocation in infants between 6 and 24 months of age (Graf type 3 and 4 or dislocated hips on radiographs) treated in our hospital between 1984 and 2004. The Pavlik harness was progressively applied to improve both flexion and abduction of the dislocated hip. In case of persistent adduction contracture, an abduction splint was added temporarily to improve the abduction.\n\n\nMETHODS\nWe included 24 patients (26 hips) between 6 and 24 months of age who presented with a dislocated hip and primarily treated by Pavlik harness in our hospital between 1984 and 2004. The mean age at diagnosis was 9 months (range 6 to 23 mo). The average follow-up was 6 years 6 months (2 to 12 y). Ultrasound images and radiographs were assessed at the time of diagnosis, one year after reposition and at last follow-up.\n\n\nRESULTS\nTwelve of the twenty-six hips (46%) were successfully reduced with Pavlik harness after an average treatment of 14 weeks (4 to 28 wk). One patient (9%) needed a secondary procedure 1 year 9 months after reposition because of residual dysplasia (Pelvis osteotomy). Seventeen of the 26 hips were primary diagnosed by Ultrasound according to the Graf classification. Ten had a Graf type 3 hip and 7 hips were classified as Graf type 4. The success rate was 60% for the type 3 hips and 0% for the type 4 hips. (P=0.035). None of the hips that were reduced with the Pavlik harness developed an avascular necrosis (AVN). Of the hips that failed the Pavlik harness treatment, three hips showed signs of AVN, 1 after closed reposition and 2 after open reposition.\n\n\nCONCLUSION\nThe use of a Pavlik harness in the late-diagnosed hip dislocation type Graf 3 can be a successful treatment option in the older infant. We have noticed few complications in these patients maybe due to progressive and gentle increase of abduction and flexion, with or without temporary use of an abduction splint. The treatment should be abandoned if the hips are not reduced after 6 weeks. None of the Graf 4 hips could be reduced successfully by Pavlik harness. This was significantly different from the success rate for the Graf type 3 hips.\n\n\nLEVEL OF EVIDENCE\nTherapeutic study, clinical case series: Level IV.",
"title": ""
},
{
"docid": "eb639439559f3e4e3540e3e98de7a741",
"text": "This paper presents a deformable model for automatically segmenting brain structures from volumetric magnetic resonance (MR) images and obtaining point correspondences, using geometric and statistical information in a hierarchical scheme. Geometric information is embedded into the model via a set of affine-invariant attribute vectors, each of which characterizes the geometric structure around a point of the model from a local to a global scale. The attribute vectors, in conjunction with the deformation mechanism of the model, warrant that the model not only deforms to nearby edges, as is customary in most deformable surface models, but also that it determines point correspondences based on geometric similarity at different scales. The proposed model is adaptive in that it initially focuses on the most reliable structures of interest, and gradually shifts focus to other structures as those become closer to their respective targets and, therefore, more reliable. The proposed techniques have been used to segment boundaries of the ventricles, the caudate nucleus, and the lenticular nucleus from volumetric MR images.",
"title": ""
},
{
"docid": "e4ade1f0baea7c50d0dff4470bbbfcd9",
"text": "Ad networks for mobile apps require inspection of the visual layout of their ads to detect certain types of placement frauds. Doing this manually is error prone, and does not scale to the sizes of today’s app stores. In this paper, we design a system called DECAF to automatically discover various placement frauds scalably and effectively. DECAF uses automated app navigation, together with optimizations to scan through a large number of visual elements within a limited time. It also includes a framework for efficiently detecting whether ads within an app violate an extensible set of rules that govern ad placement and display. We have implemented DECAF for Windows-based mobile platforms, and applied it to 1,150 tablet apps and 50,000 phone apps in order to characterize the prevalence of ad frauds. DECAF has been used by the ad fraud team in Microsoft and has helped find many instances of ad frauds.",
"title": ""
},
{
"docid": "1836f3cf9c6243b57fd23b8d84b859d1",
"text": "While most Reinforcement Learning work utilizes temporal discounting to evaluate performance, the reasons for this are unclear. Is it out of desire or necessity? We argue that it is not out of desire, and seek to dispel the notion that temporal discounting is necessary by proposing a framework for undiscounted optimization. We present a metric of undiscounted performance and an algorithm for finding action policies that maximize that measure. The technique, which we call Rlearning, is modelled after the popular Q-learning algorithm [17]. Initial experimental results are presented which attest to a great improvement over Q-learning in some simple cases.",
"title": ""
},
{
"docid": "c1d7990c2c94ffd3ed16cce5947e4e27",
"text": "The introduction of online social networks (OSN) has transformed the way people connect and interact with each other as well as share information. OSN have led to a tremendous explosion of network-centric data that could be harvested for better understanding of interesting phenomena such as sociological and behavioural aspects of individuals or groups. As a result, online social network service operators are compelled to publish the social network data for use by third party consumers such as researchers and advertisers. As social network data publication is vulnerable to a wide variety of reidentification and disclosure attacks, developing privacy preserving mechanisms are an active research area. This paper presents a comprehensive survey of the recent developments in social networks data publishing privacy risks, attacks, and privacy-preserving techniques. We survey and present various types of privacy attacks and information exploited by adversaries to perpetrate privacy attacks on anonymized social network data. We present an in-depth survey of the state-of-the-art privacy preserving techniques for social network data publishing, metrics for quantifying the anonymity level provided, and information loss as well as challenges and new research directions. The survey helps readers understand the threats, various privacy preserving mechanisms, and their vulnerabilities to privacy breach attacks in social network data publishing as well as observe common themes and future directions.",
"title": ""
},
{
"docid": "0d6d2413cbaaef5354cf2bcfc06115df",
"text": "Bibliometric and “tech mining” studies depend on a crucial foundation—the search strategy used to retrieve relevant research publication records. Database searches for emerging technologies can be problematic in many respects, for example the rapid evolution of terminology, the use of common phraseology, or the extent of “legacy technology” terminology. Searching on such legacy terms may or may not pick up R&D pertaining to the emerging technology of interest. A challenge is to assess the relevance of legacy terminology in building an effective search model. Common-usage phraseology additionally confounds certain domains in which broader managerial, public interest, or other considerations are prominent. In contrast, searching for highly technical topics is relatively straightforward. In setting forth to analyze “Big Data,” we confront all three challenges—emerging terminology, common usage phrasing, and intersecting legacy technologies. In response, we have devised a systematic methodology to help identify research relating to Big Data. This methodology uses complementary search approaches, starting with a Boolean search model and subsequently employs contingency term sets to further refine the selection. The four search approaches considered are: (1) core lexical query, (2) expanded lexical query, (3) specialized journal search, and (4) cited reference analysis. Of special note here is the use of a “Hit-Ratio” that helps distinguish Big Data elements from less relevant legacy technology terms. We believe that such a systematic search development positions us to do meaningful analyses of Big Data research patterns, connections, and trajectories. Moreover, we suggest that such a systematic search approach can help formulate more replicable searches with high recall and satisfactory precision for other emerging technology studies.",
"title": ""
},
{
"docid": "329343cec99c221e6f6ce8e3f1dbe83f",
"text": "Artificial Neural Networks (ANN) play a very vital role in making stock market predictions. As per the literature survey, various researchers have used various approaches to predict the prices of stock market. Some popular approaches used by researchers are Artificial Neural Networks, Genetic Algorithms, Fuzzy Logic, Auto Regressive Models and Support Vector Machines. This study presents ANN based computational approach for predicting the one day ahead closing prices of companies from the three different sectors:IT Sector (Wipro, TCS and Infosys), Automobile Sector (Maruti Suzuki Ltd.) and Banking Sector (ICICI Bank). Different types of artificial neural networks based models like Back Propagation Neural Network (BPNN), Radial Basis Function Neural Network (RBFNN), Generalized Regression Neural Network (GRNN) and Layer Recurrent Neural Network (LRNN) have been studied and used to forecast the short term and long term share prices of Wipro, TCS, Infosys, Maruti Suzuki and ICICI Bank. All the networks were trained with the 1100 days of trading data and predicted the prices up to next 6 months. Predicted output was generated through available historical data. Experimental results show that BPNN model gives minimum error (MSE) as compared to the RBFNN and GRNN models. GRNN model performs better as compared to RBFNN model. Forecasting performance of LRNN model is found to be much better than other three models. Keywordsartificial intelligence, back propagation, mean square error, artificial neural network.",
"title": ""
},
{
"docid": "b2e62194ce1eb63e0d13659a546db84b",
"text": "The rapid advance of mobile computing technology and wireless networking, there is a significant increase of mobile subscriptions. This drives a strong demand for mobile cloud applications and services for mobile device users. This brings out a great business and research opportunity in mobile cloud computing (MCC). This paper first discusses the market trend and related business driving forces and opportunities. Then it presents an overview of MCC in terms of its concepts, distinct features, research scope and motivations, as well as advantages and benefits. Moreover, it discusses its opportunities, issues and challenges. Furthermore, the paper highlights a research roadmap for MCC.",
"title": ""
},
{
"docid": "062f6ecc9d26310de82572f500cb5f05",
"text": "The processes underlying environmental, economic, and social unsustainability derive in part from the food system. Building sustainable food systems has become a predominating endeavor aiming to redirect our food systems and policies towards better-adjusted goals and improved societal welfare. Food systems are complex social-ecological systems involving multiple interactions between human and natural components. Policy needs to encourage public perception of humanity and nature as interdependent and interacting. The systemic nature of these interdependencies and interactions calls for systems approaches and integrated assessment tools. Identifying and modeling the intrinsic properties of the food system that will ensure its essential outcomes are maintained or enhanced over time and across generations, will help organizations and governmental institutions to track progress towards sustainability, and set policies that encourage positive transformations. This paper proposes a conceptual model that articulates crucial vulnerability and resilience factors to global environmental and socio-economic changes, postulating specific food and nutrition security issues as priority outcomes of food systems. By acknowledging the systemic nature of sustainability, this approach allows consideration of causal factor dynamics. In a stepwise approach, a logical application is schematized for three Mediterranean countries, namely Spain, France, and Italy.",
"title": ""
},
{
"docid": "3d8df2c8fcbdc994007104b8d21d7a06",
"text": "The purpose of this research was to analysis the efficiency of global strategies. This paper identified six key strategies necessary for firms to be successful when expanding globally. These strategies include differentiation, marketing, distribution, collaborative strategies, labor and management strategies, and diversification. Within this analysis, we chose to focus on the Coca-Cola Company because they have proven successful in their international operations and are one of the most recognized brands in the world. We performed an in-depth review of how effectively or ineffectively Coca-Cola has used each of the six strategies. The paper focused on Coca-Cola's operations in the United States, China, Belarus, Peru, and Morocco. The author used electronic journals from the various countries to determine how effective Coca-Cola was in these countries. The paper revealed that Coca-Cola was very successful in implementing strategies regardless of the country. However, the author learned that Coca-Cola did not effectively utilize all of the strategies in each country.",
"title": ""
},
{
"docid": "c7160083cc96253d305b127929e25107",
"text": "This paper considers the task of matching images and sentences. The challenge consists in discriminatively embedding the two modalities onto a shared visual-textual space. Existing work in this field largely uses Recurrent Neural Networks (RNN) for text feature learning and employs off-the-shelf Convolutional Neural Networks (CNN) for image feature extraction. Our system, in comparison, differs in two key aspects. Firstly, we build a convolutional network amenable for fine-tuning the visual and textual representations, where the entire network only contains four components, i.e., convolution layer, pooling layer, rectified linear unit function (ReLU), and batch normalisation. Endto-end learning allows the system to directly learn from the data and fully utilise the supervisions. Secondly, we propose instance loss according to viewing each multimodal data pair as a class. This works with a large margin objective to learn the inter-modal correspondence between images and their textual descriptions. Experiments on two generic retrieval datasets (Flickr30k and MSCOCO) demonstrate that our method yields competitive accuracy compared to state-of-the-art methods. Moreover, in language person retrieval, we improve the state of the art by a large margin. Code is available at https://github.com/layumi/ Image-Text-Embedding",
"title": ""
},
{
"docid": "e34b8fd3e1fba5306a88e4aac38c0632",
"text": "1 Jomo was an Assistant Secretary General in the United Nations system responsible for economic research during 2005-2015.; Chowdhury (Chief, Multi-Stakeholder Engagement & Outreach, Financing for Development Office, UN-DESA); Sharma (Senior Economic Affairs Officer, Financing for Development Office, UN-DESA); Platz (Economic Affairs Officer, Financing for Development Office, UN-DESA); corresponding author: Anis Chowdhury (chowdhury4@un.org; anis.z.chowdhury@gmail.com). Thanks to colleagues at the Financing for Development Office of UN-DESA and an anonymous referee for their helpful comments. Thanks also to Alexander Kucharski for his excellent support in gathering data and producing figure charts and to Jie Wei for drawing the flow charts. However, the usual caveats apply. ABSTRACT",
"title": ""
},
{
"docid": "5cbc93a9844fcd026a1705ee031c6530",
"text": "Accompanying the rapid urbanization, many developing countries are suffering from serious air pollution problem. The demand for predicting future air quality is becoming increasingly more important to government's policy-making and people's decision making. In this paper, we predict the air quality of next 48 hours for each monitoring station, considering air quality data, meteorology data, and weather forecast data. Based on the domain knowledge about air pollution, we propose a deep neural network (DNN)-based approach (entitled DeepAir), which consists of a spatial transformation component and a deep distributed fusion network. Considering air pollutants' spatial correlations, the former component converts the spatial sparse air quality data into a consistent input to simulate the pollutant sources. The latter network adopts a neural distributed architecture to fuse heterogeneous urban data for simultaneously capturing the factors affecting air quality, e.g. meteorological conditions. We deployed DeepAir in our AirPollutionPrediction system, providing fine-grained air quality forecasts for 300+ Chinese cities every hour. The experimental results on the data from three-year nine Chinese-city demonstrate the advantages of DeepAir beyond 10 baseline methods. Comparing with the previous online approach in AirPollutionPrediction system, we have 2.4%, 12.2%, 63.2% relative accuracy improvements on short-term, long-term and sudden changes prediction, respectively.",
"title": ""
},
{
"docid": "3512d0a45a764330c8a66afab325d03d",
"text": "Self-concept clarity (SCC) references a structural aspect oftbe self-concept: the extent to which selfbeliefs are clearly and confidently defined, internally consistent, and stable. This article reports the SCC Scale and examines (a) its correlations with self-esteem (SE), the Big Five dimensions, and self-focused attention (Study l ); (b) its criterion validity (Study 2); and (c) its cultural boundaries (Study 3 ). Low SCC was independently associated with high Neuroticism, low SE, low Conscientiousness, low Agreeableness, chronic self-analysis, low internal state awareness, and a ruminative form of self-focused attention. The SCC Scale predicted unique variance in 2 external criteria: the stability and consistency of self-descriptions. Consistent with theory on Eastern and Western selfconstruals, Japanese participants exhibited lower levels of SCC and lower correlations between SCC and SE than did Canadian participants.",
"title": ""
}
] |
scidocsrr
|
c27b18e4d89aafe7e8f93c466a7b757e
|
Ex Machina: Personal Attacks Seen at Scale
|
[
{
"docid": "e6cae5bec5bb4b82794caca85d3412a2",
"text": "Detection of abusive language in user generated online content has become an issue of increasing importance in recent years. Most current commercial methods make use of blacklists and regular expressions, however these measures fall short when contending with more subtle, less ham-fisted examples of hate speech. In this work, we develop a machine learning based method to detect hate speech on online user comments from two domains which outperforms a state-ofthe-art deep learning approach. We also develop a corpus of user comments annotated for abusive language, the first of its kind. Finally, we use our detection tool to analyze abusive language over time and in different settings to further enhance our knowledge of this behavior.",
"title": ""
},
{
"docid": "f6df133663ab4342222d95a20cd09996",
"text": "Web 2.0 has led to the development and evolution of web-based communities and applications. These communities provide places for information sharing and collaboration. They also open the door for inappropriate online activities, such as harassment, in which some users post messages in a virtual community that are intentionally offensive to other members of the community. It is a new and challenging task to detect online harassment; currently few systems attempt to solve this problem. In this paper, we use a supervised learning approach for detecting harassment. Our technique employs content features, sentiment features, and contextual features of documents. The experimental results described herein show that our method achieves significant improvements over several baselines, including Term FrequencyInverse Document Frequency (TFIDF) approaches. Identification of online harassment is feasible when TFIDF is supplemented with sentiment and contextual feature attributes.",
"title": ""
},
{
"docid": "8bb74088e1920a3bbf65b8429575b913",
"text": "Deliberative, argumentative discourse is an important component of opinion formation, belief revision, and knowledge discovery; it is a cornerstone of modern civil society. Argumentation is productively studied in branches ranging from theoretical artificial intelligence to political rhetoric, but empirical analysis has suffered from a lack of freely available, unscripted argumentative dialogs. This paper presents the Internet Argument Corpus (IAC), a set of 390, 704 posts in 11, 800 discussions extracted from the online debate site 4forums.com. A 2866 thread/130, 206 post extract of the corpus has been manually sided for topic of discussion, and subsets of this topic-labeled extract have been annotated for several dialogic and argumentative markers: degrees of agreement with a previous post, cordiality, audiencedirection, combativeness, assertiveness, emotionality of argumentation, and sarcasm. As an application of this resource, the paper closes with a discussion of the relationship between discourse marker pragmatics, agreement, emotionality, and sarcasm in the IAC corpus.",
"title": ""
}
] |
[
{
"docid": "4c9aa3eb2b84577cbe505668c2aec80f",
"text": "This paper extends existing word segmentation models to take non-linguistic context into account. It improves the token F-score of a top performing segmentation models by 2.5% on a 27k utterances dataset. We posit that word segmentation is easier in-context because the learner is not trying to access irrelevant lexical items. We use topics from a Latent Dirichlet Allocation model as a proxy for “activities” contexts, to label the Providence corpus. We present Adaptor Grammar models that use these context labels, and we study their performance with and without context annotations at test time.",
"title": ""
},
{
"docid": "eed9000c395f5a5fe327744c712e9b04",
"text": "A core challenge in Business Process Management is the continuous, bi-directional translation between (1) a business requirements view on the process space of an enterprise and (2) the actual process space of this enterprise, constituted by the multiplicity of IT systems, resources, and human labor. Semantic Business Process Management (SBPM) [HeLD'05] is a novel approach of increasing the level of automation in the translation between these two spheres, and is currently driven by major players from the ERP, BPM, and Semantic Web Services domain, namely SAP. One core paradigm of SPBM is to represent the two spheres and their parts using ontology languages and to employ machine reasoning for the automated or semi-automated translation. In this paper, we (1) outline the representational requirements of SBPM, (2) propose a set of ontologies and formalisms, and (3) define the scope of these ontologies by giving competency questions, which is a common technique in the ontology engineering process.",
"title": ""
},
{
"docid": "858f6840881ae7b284149402f279185e",
"text": "Voting in elections is the basis of democracy, but citizens may not be able or willing to go to polling stations to vote on election days. Remote e-voting via the Internet provides the convenience of voting on the voter's own computer or mobile device, but Internet voting systems are vulnerable to many common attacks, affecting the integrity of an election. Distributing the processing of votes over many web servers installed in tamper-resistant, secure environments can improve security: this is possible by using the Smart Card Web Server (SCWS) on a mobile phone Subscriber Identity Module (SIM). This paper proposes a generic model for a voting application installed in the SIM/SCWS, which uses standardised Mobile Network Operator (MNO) management procedures to communicate (via HTTPs) with a voting authority to vote. The generic SCWS voting model is then used with the e-voting system Prêt à Voter. A preliminary security analysis of the proposal is carried out, and further research areas are identified. As the SCWS voting application is used in a distributed processing architecture, e-voting security is enhanced because to compromise an election, an attacker must target many individual mobile devices rather than a centralised web server.",
"title": ""
},
{
"docid": "53371fac3b92afe5bc6c51dccd95fc4b",
"text": "Multi-frequency electrical impedance tomography (EIT) systems require stable voltage controlled current generators that will work over a wide frequency range and with a large variation in load impedance. In this paper we compare the performance of two commonly used designs: the first is a modified Howland circuit whilst the second is based on a current mirror. The output current and the output impedance of both circuits were determined through PSPICE simulation and through measurement. Both circuits were stable over the frequency ranges 1 kHz to 1 MHz. The maximum variation of output current with frequency for the modified Howland circuit was 2.0% and for the circuit based on a current mirror 1.6%. The output impedance for both circuits was greater than 100 kohms for frequencies up to 100 kHz. However, neither circuit achieved this output impedance at 1 MHz. Comparing the results from the two circuits suggests that there is little to choose between them in terms of a practical implementation.",
"title": ""
},
{
"docid": "9049805c56c9b7fc212fdb4c7f85dfe1",
"text": "Intentions (6) Do all the important errands",
"title": ""
},
{
"docid": "740c3b23904fb05384f0d58c680ea310",
"text": "Huge amount data on the internet are in unstructured texts can‟t simply be used for further processing by computer , therefore specific processing method and algorithm require to extract useful pattern. Text mining is process to extract information from the unstructured data. Text classification is task of automatically sorting set of document into categories from predefined set. A major difficulty of text classification is high dimensionality of feature space. Feature selection method used for dimension reduction. This paper describe about text classification process, compare various classifier and also discuss feature selection method for solving problem of high dimensional data and application of text classification.",
"title": ""
},
{
"docid": "b9720d1350bf89c8a94bb30276329ce2",
"text": "Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning. We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts whose data are sequences, structured data or graphs.",
"title": ""
},
{
"docid": "1e46143d47f5f221094d0bb09505be80",
"text": "Clinical Scenario: Patients who experience prolonged concussion symptoms can be diagnosed with postconcussion syndrome (PCS) when those symptoms persist longer than 4 weeks. Aerobic exercise protocols have been shown to be effective in improving physical and mental aspects of health. Emerging research suggests that aerobic exercise may be useful as a treatment for PCS, where exercise allows patients to feel less isolated and more active during the recovery process.\n\n\nCLINICAL QUESTION\nIs aerobic exercise more beneficial in reducing symptoms than current standard care in patients with prolonged symptoms or PCS lasting longer than 4 weeks? Summary of Key Findings: After a thorough literature search, 4 studies relevant to the clinical question were selected. Of the 4 studies, 1 study was a randomized control trial and 3 studies were case series. All 4 studies investigated aerobic exercise protocol as treatment for PCS. Three studies demonstrated a greater rate of symptom improvement from baseline assessment to follow-up after a controlled subsymptomatic aerobic exercise program. One study showed a decrease in symptoms in the aerobic exercise group compared with the full-body stretching group. Clinical Bottom Line: There is moderate evidence to support subsymptomatic aerobic exercise as a treatment of PCS; therefore, it should be considered as a clinical option for reducing PCS and prolonged concussion symptoms. A previously validated protocol, such as the Buffalo Concussion Treadmill test, Balke protocol, or rating of perceived exertion, as mentioned in this critically appraised topic, should be used to measure baseline values and treatment progression. Strength of Recommendation: Level C evidence exists that the aerobic exercise protocol is more effective than the current standard of care in treating PCS.",
"title": ""
},
{
"docid": "a8d7f6dcaf55ebd5ec580b2b4d104dd9",
"text": "In this paper we investigate social tags as a novel highvolume source of semantic metadata for music, using techniques from the fields of information retrieval and multivariate data analysis. We show that, despite the ad hoc and informal language of tagging, tags define a low-dimensional semantic space that is extremely well-behaved at the track level, in particular being highly organised by artist and musical genre. We introduce the use of Correspondence Analysis to visualise this semantic space, and show how it can be applied to create a browse-by-mood interface for a psychologically-motivated two-dimensional subspace rep resenting musical emotion.",
"title": ""
},
{
"docid": "f442354c5a99ece9571168648285f763",
"text": "A general closed-form subharmonic stability condition is derived for the buck converter with ripple-based constant on-time control and a feedback filter. The turn-on delay is included in the analysis. Three types of filters are considered: low-pass filter (LPF), phase-boost filter (PBF), and inductor current feedback (ICF) which changes the feedback loop frequency response like a filter. With the LPF, the stability region is reduced. With the PBF or ICF, the stability region is enlarged. Stability conditions are determined both for the case of a single output capacitor and for the case of two parallel-connected output capacitors having widely different time constants. The past research results related to the feedback filters become special cases. All theoretical predictions are verified by experiments.",
"title": ""
},
{
"docid": "0e3135a7846cee7f892b99dc4881b461",
"text": "OBJECTIVE: This study examined the relation among children's physical activity, sedentary behaviours, and body mass index (BMI), while controlling for sex, family structure, and socioeconomic status.DESIGN: Epidemiological study examining the relations among physical activity participation, sedentary behaviour (video game use and television (TV)/video watching), and BMI on a nationally representative sample of Canadian children.SUBJECTS: A representative sample of Canadian children aged 7–11 (N=7216) from the 1994 National Longitudinal Survey of Children and Youth was used in the analysis.MEASUREMENTS: Physical activity and sport participation, sedentary behaviour (video game use and TV/video watching), and BMI measured by parental report.RESULTS: Both organized and unorganized sport and physical activity are negatively associated with being overweight (10–24% reduced risk) or obese (23–43% reduced risk), while TV watching and video game use are risk factors for being overweight (17–44% increased risk) or obese (10–61% increased risk). Physical activity and sedentary behaviour partially account for the association of high socioeconomic status and two-parent family structure with the likelihood of being overweight or obese.CONCLUSION: This study provides evidence supporting the link between physical inactivity and obesity of Canadian children.",
"title": ""
},
{
"docid": "b10ad91ce374a772790666da5a79616c",
"text": "Photophobia is a common yet debilitating symptom seen in many ophthalmic and neurologic disorders. Despite its prevalence, it is poorly understood and difficult to treat. However, the past few years have seen significant advances in our understanding of this symptom. We review the clinical characteristics and disorders associated with photophobia, discuss the anatomy and physiology of this phenomenon, and conclude with a practical approach to diagnosis and treatment.",
"title": ""
},
{
"docid": "93a3895a03edcb50af74db901cb16b90",
"text": "OBJECT\nBecause lumbar magnetic resonance (MR) imaging fails to identify a treatable cause of chronic sciatica in nearly 1 million patients annually, the authors conducted MR neurography and interventional MR imaging in 239 consecutive patients with sciatica in whom standard diagnosis and treatment failed to effect improvement.\n\n\nMETHODS\nAfter performing MR neurography and interventional MR imaging, the final rediagnoses included the following: piriformis syndrome (67.8%), distal foraminal nerve root entrapment (6%), ischial tunnel syndrome (4.7%), discogenic pain with referred leg pain (3.4%), pudendal nerve entrapment with referred pain (3%), distal sciatic entrapment (2.1%), sciatic tumor (1.7%), lumbosacral plexus entrapment (1.3%), unappreciated lateral disc herniation (1.3%), nerve root injury due to spinal surgery (1.3%), inadequate spinal nerve root decompression (0.8%), lumbar stenosis (0.8%), sacroiliac joint inflammation (0.8%), lumbosacral plexus tumor (0.4%), sacral fracture (0.4%), and no diagnosis (4.2%). Open MR-guided Marcaine injection into the piriformis muscle produced the following results: no response (15.7%), relief of greater than 8 months (14.9%), relief lasting 2 to 4 months with continuing relief after second injection (7.5%), relief for 2 to 4 months with subsequent recurrence (36.6%), and relief for 1 to 14 days with full recurrence (25.4%). Piriformis surgery (62 operations; 3-cm incision, transgluteal approach, 55% outpatient; 40% with local or epidural anesthesia) resulted in excellent outcome in 58.5%, good outcome in 22.6%, limited benefit in 13.2%, no benefit in 3.8%, and worsened symptoms in 1.9%.\n\n\nCONCLUSIONS\nThis Class A quality evaluation of MR neurography's diagnostic efficacy revealed that piriformis muscle asymmetry and sciatic nerve hyperintensity at the sciatic notch exhibited a 93% specificity and 64% sensitivity in distinguishing patients with piriformis syndrome from those without who had similar symptoms (p < 0.01). Evaluation of the nerve beyond the proximal foramen provided eight additional diagnostic categories affecting 96% of these patients. More than 80% of the population good or excellent functional outcome was achieved.",
"title": ""
},
{
"docid": "b44f24b54e45974421f799527391a9db",
"text": "Dengue fever is a noncontagious infectious disease caused by dengue virus (DENV). DENV belongs to the family Flaviviridae, genus Flavivirus, and is classified into four antigenically distinct serotypes: DENV-1, DENV-2, DENV-3, and DENV-4. The number of nations and people affected has increased steadily and today is considered the most widely spread arbovirus (arthropod-borne viral disease) in the world. The absence of an appropriate animal model for studying the disease has hindered the understanding of dengue pathogenesis. In our study, we have found that immunocompetent C57BL/6 mice infected intraperitoneally with DENV-1 presented some signs of dengue disease such as thrombocytopenia, spleen hemorrhage, liver damage, and increase in production of IFNγ and TNFα cytokines. Moreover, the animals became viremic and the virus was detected in several organs by real-time RT-PCR. Thus, this animal model could be used to study mechanism of dengue virus infection, to test antiviral drugs, as well as to evaluate candidate vaccines.",
"title": ""
},
{
"docid": "63a0eda53c38e434002c561687cf5e10",
"text": "We propose a constructive control design for stabilization of non-periodic trajectories of underactuated robots. An important example of such a system is an underactuated “dynamic walking” biped robot traversing rough or uneven terrain. The stabilization problem is inherently challenging due to the nonlinearity, open-loop instability, hybrid (impact) dynamics, and target motions which are not known in advance. The proposed technique is to compute a transverse linearization about the desired motion: a linear impulsive system which locally represents “transversal” dynamics about a target trajectory. This system is then exponentially stabilized using a modified receding-horizon control design, providing exponential orbital stability of the target trajectory of the original nonlinear system. The proposed method is experimentally verified using a compass-gait walker: a two-degree-of-freedom biped with hip actuation but pointed stilt-like feet. The technique is, however, very general and can be applied to a wide variety of hybrid nonlinear systems.",
"title": ""
},
{
"docid": "ad1d572a7ee58c92df5d1547fefba1e8",
"text": "The primary source for the blood supply of the head of the femur is the deep branch of the medial femoral circumflex artery (MFCA). In posterior approaches to the hip and pelvis the short external rotators are often divided. This can damage the deep branch and interfere with perfusion of the head. We describe the anatomy of the MFCA and its branches based on dissections of 24 cadaver hips after injection of neoprene-latex into the femoral or internal iliac arteries. The course of the deep branch of the MFCA was constant in its extracapsular segment. In all cases there was a trochanteric branch at the proximal border of quadratus femoris spreading on to the lateral aspect of the greater trochanter. This branch marks the level of the tendon of obturator externus, which is crossed posteriorly by the deep branch of the MFCA. As the deep branch travels superiorly, it crosses anterior to the conjoint tendon of gemellus inferior, obturator internus and gemellus superior. It then perforates the joint capsule at the level of gemellus superior. In its intracapsular segment it runs along the posterosuperior aspect of the neck of the femur dividing into two to four subsynovial retinacular vessels. We demonstrated that obturator externus protected the deep branch of the MFCA from being disrupted or stretched during dislocation of the hip in any direction after serial release of all other soft-tissue attachments of the proximal femur, including a complete circumferential capsulotomy. Precise knowledge of the extracapsular anatomy of the MFCA and its surrounding structures will help to avoid iatrogenic avascular necrosis of the head of the femur in reconstructive surgery of the hip and fixation of acetabular fractures through the posterior approach.",
"title": ""
},
{
"docid": "8b3557219674c8441e63e9b0ab459c29",
"text": "his paper is focused on comparison of various decision tree classification algorithms using WEKA tool. Data mining tools such as classification, clustering, association and neural network solve large amount of problem. These are all open source tools, we directly communicate with each tool or by java code. In this paper we discuss on classification technique of data mining. In classification, various techniques are present such as bayes, functions, lazy, rules and tree etc. . Decision tree is one of the most frequently used classification algorithm. Decision tree classification with Waikato Environment for Knowledge Analysis (WEKA) is the simplest way to mining information from huge database. This work shows the process of WEKA analysis of file converts, step by step process of weka execution, selection of attributes to be mined and comparison with Knowledge Extraction of Evolutionary Learning . I took database [1] and execute in weka software. The conclusion of the paper shows the comparison among all type of decision tree algorithms by weka tool.",
"title": ""
},
{
"docid": "dbac70b623466c13d6033f6af5520910",
"text": "This paper first presents an improved trajectory-based algorithm for automatically detecting and tracking the ball in broadcast soccer video. Unlike the object-based algorithms, our algorithm does not evaluate whether a sole object is a ball. Instead, it evaluates whether a candidate trajectory, which is generated from the candidate feature image by a candidate verification procedure based on Kalman filter,, which is generated from the candidate feature image by a candidate verification procedure based on Kalman filter, is a ball trajectory. Secondly, a new approach for automatically analyzing broadcast soccer video is proposed, which is based on the ball trajectory. The algorithms in this approach not only improve play-break analysis and high-level semantic event detection, but also detect the basic actions and analyze team ball possession, which may not be analyzed based only on the low-level feature. Moreover, experimental results show that our ball detection and tracking algorithm can achieve above 96% accuracy for the video segments with the soccer field. Compared with the existing methods, a higher accuracy is achieved on goal detection and play-break segmentation. To the best of our knowledge, we present the first solution in detecting the basic actions such as touching and passing, and analyzing the team ball possession in broadcast soccer video.",
"title": ""
}
] |
scidocsrr
|
6983c21d0a12808f443e462b3ce3de13
|
Lucid dreaming treatment for nightmares: a pilot study.
|
[
{
"docid": "5bcccfe91c68d12b8bf78017a477c979",
"text": "SUMMARY\nThe occurrence of lucid dreaming (dreaming while being conscious that one is dreaming) has been verified for 5 selected subjects who signaled that they knew they were dreaming while continuing to dream during unequivocal REM sleep. The signals consisted of particular dream actions having observable concomitants and were performed in accordance with pre-sleep agreement. The ability of proficient lucid dreamers to signal in this manner makes possible a new approach to dream research--such subjects, while lucid, could carry out diverse dream experiments marking the exact time of particular dream events, allowing derivation of of precise psychophysiological correlations and methodical testing of hypotheses.",
"title": ""
}
] |
[
{
"docid": "4a6d231ce704e4acf9320ac3bd5ade14",
"text": "Despite recent advances in discourse parsing and causality detection, the automatic recognition of argumentation structure of authentic texts is still a very challenging task. To approach this problem, we collected a small corpus of German microtexts in a text generation experiment, resulting in texts that are authentic but of controlled linguistic and rhetoric complexity. We show that trained annotators can determine the argumentation structure on these microtexts reliably. We experiment with different machine learning approaches for automatic argumentation structure recognition on various levels of granularity of the scheme. Given the complex nature of such a discourse understanding tasks, the first results presented here are promising, but invite for further investigation.",
"title": ""
},
{
"docid": "e4cefd3932ea07682e4eef336dda278b",
"text": "Rubinstein-Taybi syndrome (RSTS) is a developmental disorder characterized by a typical face and distal limbs abnormalities, intellectual disability, and a vast number of other features. Two genes are known to cause RSTS, CREBBP in 60% and EP300 in 8-10% of clinically diagnosed cases. Both paralogs act in chromatin remodeling and encode for transcriptional co-activators interacting with >400 proteins. Up to now 26 individuals with an EP300 mutation have been published. Here, we describe the phenotype and genotype of 42 unpublished RSTS patients carrying EP300 mutations and intragenic deletions and offer an update on another 10 patients. We compare the data to 308 individuals with CREBBP mutations. We demonstrate that EP300 mutations cause a phenotype that typically resembles the classical RSTS phenotype due to CREBBP mutations to a great extent, although most facial signs are less marked with the exception of a low-hanging columella. The limb anomalies are more similar to those in CREBBP mutated individuals except for angulation of thumbs and halluces which is very uncommon in EP300 mutated individuals. The intellectual disability is variable but typically less marked whereas the microcephaly is more common. All types of mutations occur but truncating mutations and small rearrangements are most common (86%). Missense mutations in the HAT domain are associated with a classical RSTS phenotype but otherwise no genotype-phenotype correlation is detected. Pre-eclampsia occurs in 12/52 mothers of EP300 mutated individuals versus in 2/59 mothers of CREBBP mutated individuals, making pregnancy with an EP300 mutated fetus the strongest known predictor for pre-eclampsia. © 2016 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "d52c31b947ee6edf59a5ef416cbd0564",
"text": "Saliency detection for images has been studied for many years, for which a lot of methods have been designed. In saliency detection, background priors, which are often regarded as pseudo-background, are effective clues to find salient objects in images. Although image boundary is commonly used as background priors, it does not work well for images of complex scenes and videos. In this paper, we explore how to identify the background priors for a video and propose a saliency-based method to detect the visual objects by using the background priors. For a video, we integrate multiple pairs of scale-invariant feature transform flows from long-range frames, and a bidirectional consistency propagation is conducted to obtain the accurate and sufficient temporal background priors, which are combined with spatial background priors to generate spatiotemporal background priors. Next, a novel dual-graph-based structure using spatiotemporal background priors is put forward in the computation of saliency maps, fully taking advantage of appearance and motion information in videos. Experimental results on different challenging data sets show that the proposed method robustly and accurately detects the video objects in both simple and complex scenes and achieves better performance compared with other the state-of-the-art video saliency models.",
"title": ""
},
{
"docid": "c56daed0cc2320892fad3ac34ce90e09",
"text": "In this paper we describe the open source data analytics platform KNIME, focusing particularly on extensions and modules supporting fuzzy sets and fuzzy learning algorithms such as fuzzy clustering algorithms, rule induction methods, and interactive clustering tools. In addition we outline a number of experimental extensions, which are not yet part of the open source release and present two illustrative examples from real world applications to demonstrate the power of the KNIME extensions.",
"title": ""
},
{
"docid": "806ae85b278c98a9107adeb1f55b8808",
"text": "The present studies report the effects on neonatal rats of oral exposure to genistein during the period from birth to postnatal day (PND) 21 to generate data for use in assessing human risk following oral ingestion of genistein. Failure to demonstrate significant exposure of the newborn pups via the mothers milk led us to subcutaneously inject genistein into the pups over the period PND 1-7, followed by daily gavage dosing to PND 21. The targeted doses throughout were 4 mg/kg/day genistein (equivalent to the average exposure of infants to total isoflavones in soy milk) and a dose 10 times higher than this (40 mg/kg genistein). The dose used during the injection phase of the experiment was based on plasma determinations of genistein and its major metabolites. Diethylstilbestrol (DES) at 10 micro g/kg was used as a positive control agent for assessment of changes in the sexually dimorphic nucleus of the preoptic area (SDN-POA). Administration of 40 mg/kg genistein increased uterus weights at day 22, advanced the mean day of vaginal opening, and induced permanent estrus in the developing female pups. Progesterone concentrations were also decreased in the mature females. There were no effects in females dosed with 4 mg/kg genistein, the predicted exposure level for infants drinking soy-based infant formulas. There were no consistent effects on male offspring at either dose level of genistein. Although genistein is estrogenic at 40 mg/kg/day, as illustrated by the effects described above, this dose does not have the same repercussions as DES in terms of the organizational effects on the SDN-POA.",
"title": ""
},
{
"docid": "7df7377675ac0dfda5bcd22f2f5ba22b",
"text": "Background and Aim. Esthetic concerns in primary teeth have been studied mainly from the point of view of parents. The aim of this study was to study compare the opinions of children aged 5-8 years to have an opinion regarding the changes in appearance of their teeth due to dental caries and the materials used to restore those teeth. Methodology. A total of 107 children and both of their parents (n = 321), who were seeking dental treatment, were included in this study. A tool comprising a questionnaire and pictures of carious lesions and their treatment arranged in the form of a presentation was validated and tested on 20 children and their parents. The validated tool was then tested on all participants. Results. Children had acceptable validity statistics for the tool suggesting that they were able to make informed decisions regarding esthetic restorations. There was no difference between the responses of the children and their parents on most points. Zirconia crowns appeared to be the most acceptable full coverage restoration for primary anterior teeth among both children and their parents. Conclusion. Within the limitations of the study it can be concluded that children in their sixth year of life are capable of appreciating the esthetics of the restorations for their anterior teeth.",
"title": ""
},
{
"docid": "7926ab6b5cd5837a9b3f59f8a1b3f5ac",
"text": "Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration. However, as the depth grows, the longterm dependency problem is rarely realized for these very deep models, which results in the prior states/layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denosing, super-resolution and JPEG deblocking. Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the arts. Code is available at https://github.com/tyshiwo/MemNet.",
"title": ""
},
{
"docid": "bd24772c4f75f90fe51841aeb9632e4f",
"text": "Fifty years have passed since the publication of the first regression tree algorithm. New techniques have added capabilities that far surpass those of the early methods. Modern classification trees can partition the data with linear splits on subsets of variables and fit nearest neighbor, kernel density, and other models in the partitions. Regression trees can fit almost every kind of traditional statistical model, including least-squares, quantile, logistic, Poisson, and proportional hazards models, as well as models for longitudinal and multiresponse data. Greater availability and affordability of software (much of which is free) have played a significant role in helping the techniques gain acceptance and popularity in the broader scientific community. This article surveys the developments and briefly reviews the key ideas behind some of the major algorithms.",
"title": ""
},
{
"docid": "17598d7543d81dcf7ceb4cb354fb7c81",
"text": "Bitcoin is the first decentralized crypto-currency that is currently by far the most popular one in use. The bitcoin transaction syntax is expressive enough to setup digital contracts whose fund transfer can be enforced automatically. In this paper, we design protocols for the bitcoin voting problem, in which there are n voters, each of which wishes to fund exactly one of two candidates A and B. The winning candidate is determined by majority voting, while the privacy of individual vote is preserved. Moreover, the decision is irrevocable in the sense that once the outcome is revealed, the winning candidate is guaranteed to have the funding from all n voters. As in previous works, each voter is incentivized to follow the protocol by being required to put a deposit in the system, which will be used as compensation if he deviates from the protocol. Our solution is similar to previous protocols used for lottery, but needs an additional phase to distribute secret random numbers via zero-knowledge-proofs. Moreover, we have resolved a security issue in previous protocols that could prevent compensation from being paid.",
"title": ""
},
{
"docid": "6897a459e95ac14772de264545970726",
"text": "There is a need for a system which provides real-time local environmental data in rural crop fields for the detection and management of fungal diseases. This paper presents the design of an Internet of Things (IoT) system consisting of a device capable of sending real-time environmental data to cloud storage and a machine learning algorithm to predict environmental conditions for fungal detection and prevention. The stored environmental data on conditions such as air temperature, relative air humidity, wind speed, and rain fall is accessed and processed by a remote computer for analysis and management purposes. A machine learning algorithm using Support Vector Machine regression (SVMr) was developed to process the raw data and predict short-term (day-to-day) air temperature, relative air humidity, and wind speed values to assist in predicting the presence and spread of harmful fungal diseases through the local crop field. Together, the environmental data and environmental predictions made easily accessible by this IoT system will ultimately assist crop field managers by facilitating better management and prevention of fungal disease spread.",
"title": ""
},
{
"docid": "704bd445fd9ff34a2d71e8e5b196760c",
"text": "Convolutional neural nets (CNNs) have demonstrated remarkable performance in recent history. Such approaches tend to work in a “unidirectional” bottom-up feed-forward fashion. However, biological evidence suggests that feedback plays a crucial role, particularly for detailed spatial understanding tasks. This work introduces “bidirectional” architectures that also reason with top-down feedback: neural units are influenced by both lower and higher-level units. We do so by treating units as latent variables in a global energy function. We call our models convolutional latentvariable models (CLVMs). From a theoretical perspective, CLVMs unify several approaches for recognition, including CNNs, generative deep models (e.g., Boltzmann machines), and discriminative latent-variable models (e.g., DPMs). From a practical perspective, CLVMs are particularly well-suited for multi-task learning. We describe a single architecture that simultaneously achieves state-of-the-art accuracy for tasks spanning both high-level recognition (part detection/localization) and low-level grouping (pixel segmentation). Bidirectional reasoning is particularly helpful for detailed low-level tasks, since they can take advantage of top-down feedback. Our architectures are quite efficient, capable of processing an image in milliseconds. We present results on benchmark datasets with both part/keypoint labels and segmentation masks (such as PASCAL and LFW) that demonstrate a significant improvement over prior art, in both speed and accuracy.",
"title": ""
},
{
"docid": "5745ed6c874867ad2de84b040e40d336",
"text": "The chemokine (C-X-C motif) ligand 1 (CXCL1) regulates tumor-stromal interactions and tumor invasion. However, the precise role of CXCL1 on gastric tumor growth and patient survival remains unclear. In the current study, protein expressions of CXCL1, vascular endothelial growth factor (VEGF) and phospho-signal transducer and activator of transcription 3 (p-STAT3) in primary tumor tissues from 98 gastric cancer patients were measured by immunohistochemistry (IHC). CXCL1 overexpressed cell lines were constructed using Lipofectamine 2000 reagent or lentiviral vectors. Effects of CXCL1 on VEGF expression and local tumor growth were evaluated in vitro and in vivo. CXCL1 was positively expressed in 41.4% of patients and correlated with VEGF and p-STAT3 expression. Higher CXCL1 expression was associated with advanced tumor stage and poorer prognosis. In vitro studies in AGS and SGC-7901 cells revealed that CXCL1 increased cell migration but had little effect on cell proliferation. CXCL1 activated VEGF signaling in gastric cancer (GC) cells, which was inhibited by STAT3 or chemokine (C-X-C motif) receptor 2 (CXCR2) blockade. CXCL1 also increased p-STAT3 expression in GC cells. In vivo, CXCL1 increased xenograft local tumor growth, phospho-Janus kinase 2 (p-JAK2), p-STAT3 levels, VEGF expression and microvessel density. These results suggested that CXCL1 increased local tumor growth through activation of VEGF signaling which may have mechanistic implications for the observed inferior GC survival. The CXCL1/CXCR2 pathway might be potent to improve anti-angiogenic therapy for gastric cancer.",
"title": ""
},
{
"docid": "4737fe7f718f79c74595de40f8778da2",
"text": "In this paper we describe a method of procedurally generating maps using Markov chains. This method learns statistical patterns from human-authored maps, which are assumed to be of high quality. Our method then uses those learned patterns to generate new maps. We present a collection of strategies both for training the Markov chains, and for generating maps from such Markov chains. We then validate our approach using the game Super Mario Bros., by evaluating the quality of the produced maps based on different configurations for training and generation.",
"title": ""
},
{
"docid": "7f711c94920e0bfa8917ad1b5875813c",
"text": "With the increasing acceptance of Network Function Virtualization (NFV) and Software Defined Networking (SDN) technologies, a radical transformation is currently occurring inside network providers infrastructures. The trend of Software-based networks foreseen with the 5th Generation of Mobile Network (5G) is drastically changing requirements in terms of how networks are deployed and managed. One of the major changes requires the transaction towards a distributed infrastructure, in which nodes are built with standard commodity hardware. This rapid deployment of datacenters is paving the way towards a different type of environment in which the computational resources are deployed up to the edge of the network, referred to as Multi-access Edge Computing (MEC) nodes. However, MEC nodes do not usually provide enough resources for executing standard virtualization technologies typically used in large datacenters. For this reason, software containerization represents a lightweight and viable virtualization alternative for such scenarios. This paper presents an architecture based on the Open Baton Management and Orchestration (MANO) framework combining different infrastructural technologies supporting the deployment of container-based network services even at the edge of the network.",
"title": ""
},
{
"docid": "ba39b85859548caa2d3f1d51a7763482",
"text": "A new antenna structure of internal LTE/WWAN laptop computer antenna formed by a coupled-fed loop antenna connected with two branch radiators is presented. The two branch radiators consist of one longer strip and one shorter strip, both contributing multi-resonant modes to enhance the bandwidth of the antenna. The antenna's lower band is formed by a dual-resonant mode mainly contributed by the longer branch strip, while the upper band is formed by three resonant modes contributed respectively by one higher-order resonant mode of the longer branch strip, one resonant mode of the coupled-fed loop antenna alone, and one resonant mode of the shorter branch strip. The antenna's lower and upper bands can therefore cover the desired 698~960 and 1710~2690 MHz bands, respectively. The proposed antenna is suitable to be mounted at the top shielding metal wall of the display ground of the laptop computer and occupies a small volume of 4 × 10 × 75 mm3 above the top shielding metal wall, which makes it promising to be embedded inside the casing of the laptop computer as an internal antenna.",
"title": ""
},
{
"docid": "93a39df6ee080e359f50af46d02cdb71",
"text": "Mobile edge computing (MEC) providing information technology and cloud-computing capabilities within the radio access network is an emerging technique in fifth-generation networks. MEC can extend the computational capacity of smart mobile devices (SMDs) and economize SMDs’ energy consumption by migrating the computation-intensive task to the MEC server. In this paper, we consider a multi-mobile-users MEC system, where multiple SMDs ask for computation offloading to a MEC server. In order to minimize the energy consumption on SMDs, we jointly optimize the offloading selection, radio resource allocation, and computational resource allocation coordinately. We formulate the energy consumption minimization problem as a mixed interger nonlinear programming (MINLP) problem, which is subject to specific application latency constraints. In order to solve the problem, we propose a reformulation-linearization-technique-based Branch-and-Bound (RLTBB) method, which can obtain the optimal result or a suboptimal result by setting the solving accuracy. Considering the complexity of RTLBB cannot be guaranteed, we further design a Gini coefficient-based greedy heuristic (GCGH) to solve the MINLP problem in polynomial complexity by degrading the MINLP problem into the convex problem. Many simulation results demonstrate the energy saving enhancements of RLTBB and GCGH.",
"title": ""
},
{
"docid": "a1fed0bcce198ad333b45bfc5e0efa12",
"text": "Contemporary games are making significant strides towards offering complex, immersive experiences for players. We can now explore sprawling 3D virtual environments populated by beautifully rendered characters and objects with autonomous behavior, engage in highly visceral action-oriented experiences offering a variety of missions with multiple solutions, and interact in ever-expanding online worlds teeming with physically customizable player avatars.",
"title": ""
},
{
"docid": "fa62c54cf22c7d0822c7a4171a3d8bcd",
"text": "Interaction with robot systems for specification of manufacturing tasks and motions needs to be simple, to enable wide-spread use of robots in SMEs. In the best case, existing practices from manual work could be used, to smoothly let current employees start using robot technology as a natural part of their work. Our aim is to simplify the robot programming task by allowing the user to simply make technical drawings on a sheet of paper. Craftsman use paper and raw sketches for several situations; to share ideas, to get a better imagination or to remember the customer situation. Currently these sketches have either to be interpreted by the worker when producing the final product by hand, or transferred into CAD file using an according tool. The former means that no automation is included, the latter means extra work and much experience in using the CAD tool. Our approach is to use the digital pen and paper from Anoto as input devices for SME robotic tasks, thereby creating simpler and more user friendly alternatives for programming, parameterization and commanding actions. To this end, the basic technology has been investigated and fully working prototypes have been developed to explore the possibilities and limitation in the context of typical SME applications. Based on the encouraging experimental results, we believe that drawings on digital paper will, among other means of human-robot interaction, play an important role in manufacturing SMEs in the future. Index Terms — CAD, Human machine interfaces, Industrial Robots, Robot programming.",
"title": ""
},
{
"docid": "6f679c5678f1cc5fed0af517005cb6f5",
"text": "In today's world of globalization, there is a serious need of incorporating semantics in Education Domain which is very significant with an ultimate goal of providing an efficient, adaptive and personalized learning environment. An attempt towards this goal has been made to develop an Education based Ontology with some capability to describe a semantic web based sharable knowledge. So as a contribution, this paper presents a revisit towards amalgamating Semantics in Education. In this direction, an effort has been made to construct an Education based Ontology using Protege 5.2.0, where a hierarchy of classes and subclasses have been defined along with their properties, relations, and instances. Finally, at the end of this paper an implementation is also presented involving query retrieval using DLquery illustrations.",
"title": ""
},
{
"docid": "f5ce4a13a8d081243151e0b3f0362713",
"text": "Despite the growing popularity of digital imaging devices, the problem of accurately estimating the spatial frequency response or optical transfer function (OTF) of these devices has been largely neglected. Traditional methods for estimating OTFs were designed for film cameras and other devices that form continuous images. These traditional techniques do not provide accurate OTF estimates for typical digital image acquisition devices because they do not account for the fixed sampling grids of digital devices . This paper describes a simple method for accurately estimating the OTF of a digital image acquisition device. The method extends the traditional knife-edge technique''3 to account for sampling. One of the principal motivations for digital imaging systems is the utility of digital image processing algorithms, many of which require an estimate of the OTF. Algorithms for enhancement, spatial registration, geometric transformations, and other purposes involve restoration—removing the effects of the image acquisition device. Nearly all restoration algorithms (e.g., the",
"title": ""
}
] |
scidocsrr
|
65e66ad82fb578764ca436453dbc2756
|
User acceptance of a G2B system: a case of electronic procurement system in Malaysia
|
[
{
"docid": "a4197ab8a70142ac331599c506996bc9",
"text": "This paper presents the findings of two studies that replicate previous work by Fred Davis on the subject of perceived usefulness, ease of use, and usage of information technology. The two studies focus on evaluating the psychometric properties of the ease of use and usefulness scales, while examining the relationship between ease of use, usefulness, and system usage. Study 1 provides a strong assessment of the convergent validity of the two scales by examining heterogeneous user groups dealing with heterogeneous implementations of messaging technology. In addition, because one might expect users to share similar perspectives about voice and electronic mail, the study also represents a strong test of discriminant validity. In this study a total of 118 respondents from 10 different organizations were surveyed for their attitudes toward two messaging technologies: voice and electronic mail. Study 2 complements the approach taken in Study 1 by focusing on the ability to demonstrate discriminant validity. Three popular software applications (WordPerfect, Lotus 1-2-3, and Harvard Graphics) were examined based on the expectation that they would all be rated highly on both scales. In this study a total of 73 users rated the three packages in terms of ease of use and usefulness. The results of the studies demonstrate reliable and valid scales for measurement of perceived ease of use and usefulness. In addition, the paper tests the relationships between ease of use, usefulness, and usage using structural equation modelling. The results of this model are consistent with previous research for Study 1, suggesting that usefulness is an important determinant of system use. For Study 2 the results are somewhat mixed, but indicate the importance of both ease of use and usefulness. Differences in conditions of usage are explored to explain these findings.",
"title": ""
},
{
"docid": "669fcb6f51aa8883d037e1de18b1513f",
"text": "Purpose – The purpose of this paper is to present a multi-faceted summary and classification of the existing literature in the field of quality of service for e-government and outline the main components of a quality model for e-government services. Design/methodology/approach – Starting with fundamental quality principles the paper examines and analyzes 36 different quality approaches concerning public sector services, e-services in general and more specifically e-government services. Based on the dimensions measured by each approach the paper classifies the approaches and concludes on the basic factors needed for the development of a complete quality model of e-government services. Findings – Based on the classification of literature approaches, the paper provides information about the main components of a quality model that may be used for the continuous monitoring and measuring of public e-services’ quality. The classification forms the basis for answering questions that must be addressed by the quality model, such as: What to assess?; Who will perform the assessment? and How the assessment will be done? Practical implications – This model can be used by the management of public organizations in order to measure and monitor the quality of e-services delivered to citizens. Originality/value – The results of the work presented in this paper form the basis for the development of a quality model for e-government services.",
"title": ""
}
] |
[
{
"docid": "0ccf6d97ff8a6b664a73056ec8e39dc7",
"text": "1. Resilient healthcare This integrative review focuses on the methodological strategies employed by studies on resilient healthcare. Resilience engineering (RE), which involves the study of coping with complexity (Woods and Hollnagel, 2006) in modern socio-technical systems (Bergström et al., 2015); emerged in about 2000. The RE discipline is quickly developing, and it has been applied to healthcare, aviation, the petrochemical industry, nuclear power plants, railways, manufacturing, natural disasters and other fields (Righi et al., 2015). The term ‘resilient healthcare’ (RHC) refers to the application of the concepts and methods of RE in the healthcare field, specifically regarding patient safety (Hollnagel et al., 2013a). Instead of the traditional risk management approach based on retrospective analyses of errors, RHC focuses on ‘everyday clinical work’, specifically on the ways it unfolds in practice (Braithwaite et al., 2017). Wears et al. (2015) defined RHC as follows. The ability of the health care system (a clinic, a ward, a hospital, a county) to adjust its functioning prior to, during, or following events (changes, disturbances or opportunities), and thereby sustain required operations under both expected and unexpected conditions. (p. xxvii) After more than a decade of theoretical development in the field of resilience, scholars are beginning to identify its methodological challenges (Woods, 2015; Nemeth and Herrera, 2015). The lack of welldefined constructs to conceptualize resilience challenges the ability to operationalize those constructs in empirical research (Righi et al., 2015; Wiig and Fahlbruch, forthcoming). Further, studying complexity requires challenging methodological designs to obtain evidence about the tested constructs to inform and further develop theory (Bergström and Dekker, 2014). It is imperative to gather emerging knowledge on applied methodology in empirical RHC research to map and discuss the methodological strategies in the healthcare domain. The insights gained might create and refine methodological designs to enable further development of RHC concepts and theory. This study aimed to describe and synthesize the methodological strategies currently applied in https://doi.org/10.1016/j.ssci.2018.08.025 Received 10 October 2016; Received in revised form 13 August 2018; Accepted 27 August 2018 ⁎ Corresponding author. E-mail addresses: siv.hilde.berg@sus.no (S.H. Berg), Kristin.akerjordet@uis.no (K. Akerjordet), mirjam.ekstedt@lnu.se (M. Ekstedt), karina.aase@uis.no (K. Aase). Safety Science 110 (2018) 300–312 Available online 05 September 2018 0925-7535/ © 2018 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/BY-NC-ND/4.0/). T empirical RHC research in terms of the empirical fields, applied research designs, methods, analytical strategies, main topics and data collection sources at different systemic levels, and to assess the quality of those studies. We argue that one implication of studying sociotechnical systems is that multiple levels in a given system must be addressed, as proposed by, for example, Rasmussen (1997). As such, this study synthesized the ways that RHC studies have approached empirical data at various systemic levels. 2. Methodology in resilient healthcare research ‘Research methodology’ is a strategy or plan of action that shapes the choices and uses of various methods and links them to desired outcomes (Crotty, 1998). 
This study broadly used the term ‘methodological strategy’ to denote an observed study’s overall research design, data collection sources, data collection methods and analytical methods at different systemic levels. The methodological issues discussed in the RHC literature to date have concerned the methods used to study everyday clinical practice, healthcare complexity and the operationalization of the constructs measuring resilience. 2.1. Methods of studying healthcare complexity RE research is characterized by its study of complexities. In a review of the rationale behind resilience research, Bergström et al. (2015) found that RE researchers typically justified their research by referring to the complexity of modern socio-technical systems that makes them inherently risky. Additionally, in the healthcare field, references are made to the complex adaptive system (CAS) perspective (Braithwaite et al., 2013). CAS emerged from complexity theory, and it takes a dynamic approach to human and nonhuman agents (Urry, 2003). Healthcare is part of a complex socio-technical system and an example of a CAS comprising professionals, patients, managers, policymakers and technologies, all of which interact with and rely on trade-offs and adjustments to succeed in everyday clinical work (Braithwaite et al., 2013). Under complexity theory, complex systems are viewed as open systems that interact with their environments, implying a need to understand the systems’ environments before understanding the systems. Because these environments are complex, no standard methodology can provide a complete understanding (Bergström and Dekker, 2014), and the opportunities for experimental research are limited. Controlled studies might not be able to identify the complex interconnections and multiple variables that influence care; thus, non-linear methods are necessary to describe and understand those systems. Consequently, research on complexity imposes methodological challenges related to the development of valid evidence (Braithwaite et al., 2013). It has been argued that triangulation is necessary to study complex work settings in order to reveal actual phenomena and minimize bias leading to misinterpretation (Nemeth et al., 2011). Methodological triangulation has been suggested, as well as data triangulation, as a strategic way to increase the internal and external validity of RE/RHC research (Nemeth et al., 2011; Mendonca, 2008). Data triangulation involves collecting data from various sources, such as reports, policy documents, multiple professional groups and patient feedback, whereas methodological triangulation involves combining different qualitative methods or mixing qualitative and quantitative methods. Multiple methods have been suggested for research on everyday clinical practice and healthcare complexity. Hollnagel (2014) suggested qualitative methods, such as qualitative interviews, field observations and organizational development techniques (e.g. appreciative inquiry and cooperative inquiry). Nemeth and Herrera (2015) proposed observation in actual settings as a core value of the RE field of practice. Drawing on the methods of cognitive system engineering, Nemeth et al. (2011) described the uses of cognitive task analysis (CTA) to study resilience. CTA comprises numerous methods, one of which is the critical decision method (CDM). CDM is a retrospective interview in which subjects are asked about critical events and decisions. 
Other proposed methods for studying complex work settings were work domain analysis (WDA), process tracing, artefact analysis and rapid prototyping. System modelling, using methods such as trend analysis, cluster analysis, social network analysis and log linear modelling, has been proposed as a way to study resilience from a socio-technical/CAS perspective (Braithwaite et al., 2013; Anderson et al., 2013). The functional resonance analysis method (FRAM) has been employed to study interactions and dependencies as they develop in specific situations. FRAM is presented as a way to study how complex and dynamic sociotechnical systems work (Hollnagel, 2012). In addition, Leveson et al. (2006) suggested STAMP, a model of accident causation based on systems theory, as a method to analyse resilience. 2.2. Operationalization of resilience A vast amount of the RE literature has been devoted to developing theories on resilience, emphasizing that the domain is in a theory development stage (Righi et al., 2015). This process of theory development is reflected in the diverse definitions and indicators of resilience proposed over the past decade e.g. 3, (Woods, 2006, 2011; Wreathall, 2006). Numerous constructs have been developed, such as resilient abilities (Woods, 2011; Hollnagel, 2008, 2010; Nemeth et al., 2008; Hollnagel et al., 2013b), Safety-II (Hollnagel, 2014), Work-as-done (WAD) and Work-as-imagined (WAI) (Hollnagel et al., 2015), and performance variability (Hollnagel, 2014). The operationalization of these constructs has been a topic of discussion. According to Westrum (2013), one challenge to determining measures of resilience in healthcare relates to the characteristics of resilience as a family of related ideas rather than as a single construct. The applied definitions of ‘resilience’ in RE research have focused on a given system’s adaptive capacities and its abilities to adopt or absorb disturbing conditions. This conceptual understanding of resilience has been applied to RHC [6, p. xxvii]. By understanding resilience as a ‘system’s ability’, the healthcare system is perceived as a separate ontological category. The system is regarded as a unit that might have individual goals, actions or abilities not necessarily shared by its members. Therefore, RHC is greater than the sum of its members’ individual actions, which is a perspective found in methodological holism (Ylikoski, 2012). The challenge is to operationalize the study of ‘the system as a whole’. Some scholars have advocated on behalf of locating the empirical basis of resilience by studying individual performances and aggregating those data to develop a theory of resilience (Mendonca, 2008; Furniss et al., 2011). This approach uses the strategy of finding the properties of the whole (the healthcare system) within the parts at the micro level, which is found in methodological individualism. The WAD and performance variability constructs bring resilience closer to an empirical ground by fr",
"title": ""
},
{
"docid": "a86114aeee4c0bc1d6c9a761b50217d4",
"text": "OBJECTIVE\nThe purpose of this study was to investigate the effect of antidepressant treatment on hippocampal volumes in patients with major depression.\n\n\nMETHOD\nFor 38 female outpatients, the total time each had been in a depressive episode was divided into days during which the patient was receiving antidepressant medication and days during which no antidepressant treatment was received. Hippocampal gray matter volumes were determined by high resolution magnetic resonance imaging and unbiased stereological measurement.\n\n\nRESULTS\nLonger durations during which depressive episodes went untreated with antidepressant medication were associated with reductions in hippocampal volume. There was no significant relationship between hippocampal volume loss and time depressed while taking antidepressant medication or with lifetime exposure to antidepressants.\n\n\nCONCLUSIONS\nAntidepressants may have a neuroprotective effect during depression.",
"title": ""
},
{
"docid": "f033c98f752c8484dc616425ebb7ce5b",
"text": "Ethnography is the study of social interactions, behaviours, and perceptions that occur within groups, teams, organisations, and communities. Its roots canbe traced back to anthropological studies of small, rural (andoften remote) societies thatwereundertaken in the early 1900s, when researchers such as Bronislaw Malinowski and Alfred Radcliffe-Brown participated in these societies over long periods and documented their social arrangements and belief systems. This approach was later adopted by members of the Chicago School of Sociology (for example, Everett Hughes, Robert Park, Louis Wirth) and applied to a variety of urban settings in their studies of social life. The central aim of ethnography is to provide rich, holistic insights into people’s views and actions, as well as the nature (that is, sights, sounds) of the location they inhabit, through the collection of detailed observations and interviews. As Hammersley states, “The task [of ethnographers] is to document the culture, the perspectives and practices, of the people in these settings.The aim is to ‘get inside’ theway each groupof people sees theworld.” Box 1 outlines the key features of ethnographic research. Examples of ethnographic researchwithin thehealth services literature include Strauss’s study of achieving and maintaining order between managers, clinicians, and patients within psychiatric hospital settings; Taxis and Barber’s exploration of intravenous medication errors in acute care hospitals; Costello’s examination of death and dying in elderly care wards; and Østerlund’s work on doctors’ and nurses’ use of traditional and digital information systems in their clinical communications. Becker and colleagues’ Boys in White, an ethnographic study of medical education in the late 1950s, remains a classic in this field. Newer developments in ethnographic inquiry include auto-ethnography, in which researchers’ own thoughts andperspectives fromtheir social interactions form the central element of a study; meta-ethnography, in which qualitative research texts are analysed and synthesised to empirically create new insights and knowledge; and online (or virtual) ethnography, which extends traditional notions of ethnographic study from situated observation and face to face researcher-participant interaction to technologically mediated interactions in online networks and communities.",
"title": ""
},
{
"docid": "cc12a6ccdfbe2242eb4f9f72d5a17cd2",
"text": "Software is everywhere, from mission critical systems such as industrial power stations, pacemakers and even household appliances. This growing dependence on technology and the increasing complexity software has serious security implications as it means we are potentially surrounded by software that contain exploitable vulnerabilities. These challenges have made binary analysis an important area of research in computer science and has emphasized the need for building automated analysis systems that can operate at scale, speed and efficacy; all while performing with the skill of a human expert. Though great progress has been made in this area of research, there remains limitations and open challenges to be addressed. Recognizing this need, DARPA sponsored the Cyber Grand Challenge (CGC), a competition to showcase the current state of the art in systems that perform; automated vulnerability detection, exploit generation and software patching. This paper is a survey of the vulnerability detection and exploit generation techniques, underlying technologies and related works of two of the winning systems Mayhem and Mechanical Phish. Keywords—Cyber reasoning systems, automated binary analysis, automated exploit generation, dynamic symbolic execution, fuzzing",
"title": ""
},
{
"docid": "d76980f3a0b4e0dab21583b75ee16318",
"text": "We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present experiments on the use of this resource, both for training dependency parsers and for evaluating dependency parsers like the one included as part of the Stanford Parser. We show that training a dependency parser on a mix of newswire and web data improves performance on that type of data without greatly hurting performance on newswire text, and therefore gold standard annotations for non-canonical text can be valuable for parsing in general. Furthermore, the systematic annotation effort has informed both the SD formalism and its implementation in the Stanford Parser’s dependency converter. In response to the challenges encountered by annotators in the EWT corpus, we revised and extended the Stanford Dependencies standard, and improved the Stanford Parser’s dependency converter.",
"title": ""
},
{
"docid": "3af338a01d1419189b7706375feec0c2",
"text": "Like E. Paul Torrance, my colleagues and I have tried to understand the nature of creativity, to assess it, and to improve instruction by teaching for creativity as well as teaching students to think creatively. This article reviews our investment theory of creativity, propulsion theory of creative contributions, and some of the data we have collected with regard to creativity. It also describes the propulsion theory of creative contributions. Finally, it draws",
"title": ""
},
{
"docid": "1657df28bba01b18fb26bb8c823ad4b4",
"text": "Come with us to read a new book that is coming recently. Yeah, this is a new coming book that many people really want to read will you be one of them? Of course, you should be. It will not make you feel so hard to enjoy your life. Even some people think that reading is a hard to do, you must be sure that you can do it. Hard will be felt when you have no ideas about what kind of book to read. Or sometimes, your reading material is not interesting enough.",
"title": ""
},
{
"docid": "a9a7916c7cb3d2c56457b0cc5cb0471c",
"text": "In this paper, we propose a novel approach to integrating inertial sensor data into a pose-graph free dense mapping algorithm that we call GravityFusion. A range of dense mapping algorithms have recently been proposed, though few integrate inertial sensing. We build on ElasticFusion, a particularly elegant approach that fuses color and depth information directly into small surface patches called surfels. Traditional inertial integration happens at the level of camera motion, however, a pose graph is not available here. Instead, we present a novel approach that incorporates the gravity measurements directly into the map: Each surfel is annotated by a gravity measurement, and that measurement is updated with each new observation of the surfel. We use mesh deformation, the same mechanism used for loop closure in ElasticFusion, to enforce a consistent gravity direction among all the surfels. This eliminates drift in two degrees of freedom, avoiding the typical curving of maps that are particularly pronounced in long hallways, as we qualitatively show in the experimental evaluation.",
"title": ""
},
{
"docid": "585c589cdab52eaa63186a70ac81742d",
"text": "BACKGROUND\nThere has been a rapid increase in the use of technology-based activity trackers to promote behavior change. However, little is known about how individuals use these trackers on a day-to-day basis or how tracker use relates to increasing physical activity.\n\n\nOBJECTIVE\nThe aims were to use minute level data collected from a Fitbit tracker throughout a physical activity intervention to examine patterns of Fitbit use and activity and their relationships with success in the intervention based on ActiGraph-measured moderate to vigorous physical activity (MVPA).\n\n\nMETHODS\nParticipants included 42 female breast cancer survivors randomized to the physical activity intervention arm of a 12-week randomized controlled trial. The Fitbit One was worn daily throughout the 12-week intervention. ActiGraph GT3X+ accelerometer was worn for 7 days at baseline (prerandomization) and end of intervention (week 12). Self-reported frequency of looking at activity data on the Fitbit tracker and app or website was collected at week 12.\n\n\nRESULTS\nAdherence to wearing the Fitbit was high and stable, with a mean of 88.13% of valid days over 12 weeks (SD 14.49%). Greater adherence to wearing the Fitbit was associated with greater increases in ActiGraph-measured MVPA (binteraction=0.35, P<.001). Participants averaged 182.6 minutes/week (SD 143.9) of MVPA on the Fitbit, with significant variation in MVPA over the 12 weeks (F=1.91, P=.04). The majority (68%, 27/40) of participants reported looking at their tracker or looking at the Fitbit app or website once a day or more. Changes in Actigraph-measured MVPA were associated with frequency of looking at one's data on the tracker (b=-1.36, P=.07) but not significantly associated with frequency of looking at one's data on the app or website (P=.36).\n\n\nCONCLUSIONS\nThis is one of the first studies to explore the relationship between use of a commercially available activity tracker and success in a physical activity intervention. A deeper understanding of how individuals engage with technology-based trackers may enable us to more effectively use these types of trackers to promote behavior change.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02332876; https://clinicaltrials.gov/ct2/show/NCT02332876?term=NCT02332876 &rank=1 (Archived by WebCite at http://www.webcitation.org/6wplEeg8i).",
"title": ""
},
{
"docid": "2ce789863ff0d3359f741adddb09b9f1",
"text": "The largest source of sound events is web videos. Most videos lack sound event labels at segment level, however, a significant number of them do respond to text queries, from a match found using metadata by search engines. In this paper we explore the extent to which a search query can be used as the true label for detection of sound events in videos. We present a framework for large-scale sound event recognition on web videos. The framework crawls videos using search queries corresponding to 78 sound event labels drawn from three datasets. The datasets are used to train three classifiers, and we obtain a prediction on 3.7 million web video segments. We evaluated performance using the search query as true label and compare it with human labeling. Both types of ground truth exhibited close performance, to within 10%, and similar performance trend with increasing number of evaluated segments. Hence, our experiments show potential for using search query as a preliminary true label for sound event recognition in web videos.",
"title": ""
},
{
"docid": "38f19c7087d5529e2f6b84beca42de3a",
"text": "We investigate the design challenges of constructing effective and efficient neural sequence labeling systems, by reproducing twelve neural sequence labeling models, which include most of the state-of-the-art structures, and conduct a systematic model comparison on three benchmarks (i.e. NER, Chunking, and POS tagging). Misconceptions and inconsistent conclusions in existing literature are examined and clarified under statistical experiments. In the comparison and analysis process, we reach several practical conclusions which can be useful to practitioners.",
"title": ""
},
{
"docid": "7fd21ee95850fec1f1e00b766eebbc06",
"text": "HP’s StoreAll with Express Query is a scalable commercial file archiving product that offers sophisticated file metadata management and search capabilities [3]. A new REST API enables fast, efficient searching to find all files that meet a given set of metadata criteria and the ability to tag files with custom metadata fields. The product brings together two significant systems: a scale out file system and a metadata database based on LazyBase [10]. In designing and building the combined product, we identified several real-world issues in using a pipelined database system in a distributed environment, and overcame several interesting design challenges that were not contemplated by the original research prototype. This paper highlights our experiences.",
"title": ""
},
{
"docid": "3d9f1288235847f6c4e9b2c0966c51e9",
"text": "Over the past decade, many laboratories have begun to explore brain-computer interface (BCI) technology as a radically new communication option for those with neuromuscular impairments that prevent them from using conventional augmentative communication methods. BCI's provide these users with communication channels that do not depend on peripheral nerves and muscles. This article summarizes the first international meeting devoted to BCI research and development. Current BCI's use electroencephalographic (EEG) activity recorded at the scalp or single-unit activity recorded from within cortex to control cursor movement, select letters or icons, or operate a neuroprosthesis. The central element in each BCI is a translation algorithm that converts electrophysiological input from the user into output that controls external devices. BCI operation depends on effective interaction between two adaptive controllers, the user who encodes his or her commands in the electrophysiological input provided to the BCI, and the BCI which recognizes the commands contained in the input and expresses them in device control. Current BCI's have maximum information transfer rates of 5-25 b/min. Achievement of greater speed and accuracy depends on improvements in signal processing, translation algorithms, and user training. These improvements depend on increased interdisciplinary cooperation between neuroscientists, engineers, computer programmers, psychologists, and rehabilitation specialists, and on adoption and widespread application of objective methods for evaluating alternative methods. The practical use of BCI technology depends on the development of appropriate applications, identification of appropriate user groups, and careful attention to the needs and desires of individual users. BCI research and development will also benefit from greater emphasis on peer-reviewed publications, and from adoption of standard venues for presentations and discussion.",
"title": ""
},
{
"docid": "1d0a84f55e336175fa60d3fa9eec9664",
"text": "In this paper, we propose a novel method for image inpainting based on a Deep Convolutional Generative Adversarial Network (DCGAN). We define a loss function consisting of two parts: (1) a contextual loss that preserves similarity between the input corrupted image and the recovered image, and (2) a perceptual loss that ensures a perceptually realistic output image. Given a corrupted image with missing values, we use back-propagation on this loss to map the corrupted image to a smaller latent space. The mapped vector is then passed through the generative model to predict the missing content. The proposed framework is evaluated on the CelebA and SVHN datasets for two challenging inpainting tasks with random 80% corruption and large blocky corruption. Experiments show that our method can successfully predict semantic information in the missing region and achieve pixel-level photorealism, which is impossible by almost all existing methods.",
"title": ""
},
{
"docid": "1590742097219610170bd62eb3799590",
"text": "In this paper, we develop a vision-based system that employs a combined RGB and depth descriptor to classify hand gestures. The method is studied for a human-machine interface application in the car. Two interconnected modules are employed: one that detects a hand in the region of interaction and performs user classification, and another that performs gesture recognition. The feasibility of the system is demonstrated using a challenging RGBD hand gesture data set collected under settings of common illumination variation and occlusion.",
"title": ""
},
{
"docid": "36867b8478a8bd6be79902efd5e9d929",
"text": "Most state-of-the-art commercial storage virtualization systems focus only on one particular storage attribute, capacity. This paper describes the design, implementation and evaluation of a multi-dimensional storage virtualization system called Stonehenge, which is able to virtualize a cluster-based physical storage system along multiple dimensions, including bandwidth, capacity, and latency. As a result, Stonehenge is able to multiplex multiple virtual disks, each with a distinct bandwidth, capacity, and latency attribute, on a single physical storage system as if they are separate physical disks. A key enabling technology for Stonehenge is an efficiency-aware real-time disk scheduling algorithm called dual-queue disk scheduling, which maximizes disk utilization efficiency while providing Quality of Service (QoS) guarantees. To optimize disk utilization efficiency, Stonehenge exploits run-time measurements extensively, for admission control, computing latency-derived bandwidth requirement, and predicting disk service time.",
"title": ""
},
{
"docid": "c743c63848ca96f0eb47090ea648d897",
"text": "Cyber-Physical Systems (CPSs) are the future generation of highly connected embedded systems having applications in diverse domains including Oil and Gas. Employing Product Line Engineering (PLE) is believed to bring potential benefits with respect to reduced cost, higher productivity, higher quality, and faster time-to-market. However, relatively few industrial field studies are reported regarding the application of PLE to develop large-scale systems, and more specifically CPSs. In this paper, we report about our experiences and insights gained from investigating the application of model-based PLE at a large international organization developing subsea production systems (typical CPSs) to manage the exploitation of oil and gas production fields. We report in this paper 1) how two systematic domain analyses (on requirements engineering and product configuration/derivation) were conducted to elicit CPS PLE requirements and challenges, 2) key results of the domain analysis (commonly observed in other domains), and 3) our initial experience of developing and applying two Model Based System Engineering (MBSE) PLE solution to address some of the requirements and challenges elicited during the domain analyses.",
"title": ""
},
{
"docid": "cbf10563c5eb251f765b93be554b7439",
"text": "BACKGROUND\nAlthough fine-needle aspiration (FNA) is a safe and accurate diagnostic procedure for assessing thyroid nodules, it has limitations in diagnosing follicular neoplasms due to its relatively high false-positive rate. The purpose of the present study was to evaluate the diagnostic role of core-needle biopsy (CNB) for thyroid nodules with follicular neoplasm (FN) in comparison with FNA.\n\n\nMETHODS\nA series of 107 patients (24 men, 83 women; mean age, 47.4 years) from 231 FNAs and 107 patients (29 men, 78 women; mean age, 46.3 years) from 186 CNBs with FN readings, all of whom underwent surgery, from October 2008 to December 2013 were retrospectively analyzed. The false-positive rate, unnecessary surgery rate, and malignancy rate for the FNA and CNB patients according to the final diagnosis following surgery were evaluated.\n\n\nRESULTS\nThe CNB showed a significantly lower false-positive and unnecessary surgery rate than the FNA (4.7% versus 30.8%, 3.7% versus 26.2%, p < 0.001, respectively). In the FNA group, 33 patients (30.8%) had non-neoplasms, including nodular hyperplasia (n = 32) and chronic lymphocytic thyroiditis (n = 1). In the CNB group, 5 patients (4.7%) had non-neoplasms, all of which were nodular hyperplasia. Moreover, the CNB group showed a significantly higher malignancy rate than FNA (57.9% versus 28%, p < 0.001).\n\n\nCONCLUSIONS\nCNB showed a significantly lower false-positive rate and a higher malignancy rate than FNA in diagnosing FN. Therefore, CNB could minimize unnecessary surgery and provide diagnostic confidence when managing patients with FN to perform surgery.",
"title": ""
},
{
"docid": "1738a8ccb1860e5b85e2364f437d4058",
"text": "We describe a new algorithm for finding the hypothesis in a recognition lattice that is expected to minimize the word er ror rate (WER). Our approach thus overcomes the mismatch between the word-based performance metric and the standard MAP scoring paradigm that is sentence-based, and that can le ad to sub-optimal recognition results. To this end we first find a complete alignment of all words in the recognition lattice, identifying mutually supporting and competing word hypotheses . Finally, a new sentence hypothesis is formed by concatenating the words with maximal posterior probabilities. Experimental ly, this approach leads to a significant WER reduction in a large vocab ulary recognition task.",
"title": ""
},
{
"docid": "59e49a798fed8479df98435003f4647e",
"text": "The recent advancement of motion recognition using Microsoft Kinect stimulates many new ideas in motion capture and virtual reality applications. Utilizing a pattern recognition algorithm, Kinect can determine the positions of different body parts from the user. However, due to the use of a single-depth camera, recognition accuracy drops significantly when the parts are occluded. This hugely limits the usability of applications that involve interaction with external objects, such as sport training or exercising systems. The problem becomes more critical when Kinect incorrectly perceives body parts. This is because applications have limited information about the recognition correctness, and using those parts to synthesize body postures would result in serious visual artifacts. In this paper, we propose a new method to reconstruct valid movement from incomplete and noisy postures captured by Kinect. We first design a set of measurements that objectively evaluates the degree of reliability on each tracked body part. By incorporating the reliability estimation into a motion database query during run time, we obtain a set of similar postures that are kinematically valid. These postures are used to construct a latent space, which is known as the natural posture space in our system, with local principle component analysis. We finally apply frame-based optimization in the space to synthesize a new posture that closely resembles the true user posture while satisfying kinematic constraints. Experimental results show that our method can significantly improve the quality of the recognized posture under severely occluded environments, such as a person exercising with a basketball or moving in a small room.",
"title": ""
}
] |
scidocsrr
|
777bbe1278ca8be1d239feb3d34eceec
|
BSIF: Binarized statistical image features
|
[
{
"docid": "13cb08194cf7254932b49b7f7aff97d1",
"text": "When there are many people who don't need to expect something more than the benefits to take, we will suggest you to have willing to reach all benefits. Be sure and surely do to take this computer vision using local binary patterns that gives the best reasons to read. When you really need to get the reason why, this computer vision using local binary patterns book will probably make you feel curious.",
"title": ""
}
] |
[
{
"docid": "a9d516ede8966dde5e79ea1304bbedb9",
"text": "Successful implementation of Information Technology can be judged or predicted from the user acceptance. Technology acceptance model (TAM) is a model that is built to analyze and understand the factors that influence the acceptance of the use of technologies based on the user's perspective. In other words, TAM offers a powerful explanation related to acceptance of the technology and its behavior. TAM model has been applied widely to evaluate various information systems or information technology (IS/IT), but it is the lack of research related to the evaluation of the TAM model itself. This study aims to determine whether the model used TAM is still relevant today considering rapid development of information & communication technology (ICT). In other words, this study would like to test whether the TAM measurement indicators are valid and can represent each dimension of the model. The method used is quantitative method with factor analysis approach. The results showed that all indicators valid and can represent each dimension of TAM, those are perceived usefulness, perceived ease of use and behavioral intention to use. Thus the TAM model is still relevant used to measure the user acceptance of technology.",
"title": ""
},
{
"docid": "5aa14ba34672f4afa9c27f7f863d8c57",
"text": "Knowledge distillation is an effective approach to transferring knowledge from a teacher neural network to a student target network for satisfying the low-memory and fast running requirements in practice use. Whilst being able to create stronger target networks compared to the vanilla non-teacher based learning strategy, this scheme needs to train additionally a large teacher model with expensive computational cost. In this work, we present a Self-Referenced Deep Learning (SRDL) strategy. Unlike both vanilla optimisation and existing knowledge distillation, SRDL distils the knowledge discovered by the in-training target model back to itself to regularise the subsequent learning procedure therefore eliminating the need for training a large teacher model. SRDL improves the model generalisation performance compared to vanilla learning and conventional knowledge distillation approaches with negligible extra computational cost. Extensive evaluations show that a variety of deep networks benefit from SRDL resulting in enhanced deployment performance on both coarse-grained object categorisation tasks (CIFAR10, CIFAR100, Tiny ImageNet, and ImageNet) and fine-grained person instance identification tasks (Market-1501).",
"title": ""
},
{
"docid": "909ec68a644cfd1d338270ee67144c23",
"text": "We have constructed an optical tweezer using two lasers (830 nm and 1064 nm) combined with micropipette manipulation having sub-pN force sensitivity. Sample position is controlled within nanometer accuracy using XYZ piezo-electric stage. The position of the bead in the trap is monitored using single particle laser backscattering technique. The instrument is automated to operate in constant force, constant velocity or constant position measurement. We present data on single DNA force-extension, dynamics of DNA integration on membranes and optically trapped bead–cell interactions. A quantitative analysis of single DNA and protein mechanics, assembly and dynamics opens up new possibilities in nanobioscience.",
"title": ""
},
{
"docid": "cde1b5f21bdc05aa5a86aa819688d63c",
"text": "This paper presents two fuzzy portfolio selection models where the objective is to minimize the downside risk constrained by a given expected return. We assume that the rates of returns on securities are approximated as LR-fuzzy numbers of the same shape, and that the expected return and risk are evaluated by interval-valued means. We establish the relationship between those mean-interval definitions for a given fuzzy portfolio by using suitable ordering relations. Finally, we formulate the portfolio selection problem as a linear program when the returns on the assets are of trapezoidal form. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ea0e8b5bf62de6205bd993610f663f50",
"text": "Design Thinking has collected theories and best-practices to foster creativity and innovation in group processes. This is in particular valuable for sketchy and complex problems. Other disciplines can learn from this body-of-behaviors and values to tackle their complex problems. In this paper, using four Design Thinking qualities, we propose a framework to identify the level of Design Thinkingness in existing analytical software engineering tools: Q1) Iterative Creation Cycles, Q2) Human Integration in Design, Q3) Suitability for Heterogeneity, and Q4) Media Accessibility. We believe that our framework can also be used to transform tools in various engineering areas to support abductive and divergent thinking processes. We argue, based on insights gained from the successful transformation of classical business process modeling into tangible business process modeling. This was achieved by incorporating rapid prototyping, human integration, knowledge base heterogeneity and the media-models theory. The latter is given special attention as it allows us to break free from the limiting factors of the exiting analytic tools.",
"title": ""
},
{
"docid": "786a70f221a70038f930352e8022ae29",
"text": "We present IndoNet, a multilingual lexical knowledge base for Indian languages. It is a linked structure of wordnets of 18 different Indian languages, Universal Word dictionary and the Suggested Upper Merged Ontology (SUMO). We discuss various benefits of the network and challenges involved in the development. The system is encoded in Lexical Markup Framework (LMF) and we propose modifications in LMF to accommodate Universal Word Dictionary and SUMO. This standardized version of lexical knowledge base of Indian Languages can now easily be linked to similar global resources.",
"title": ""
},
{
"docid": "63c550438679c0353c2f175032a73369",
"text": "Large screens or projections in public and private settings have become part of our daily lives, as they enable the collaboration and presentation of information in many diverse ways. When discussing the shown information with other persons, we often point to a displayed object with our index finger or a laser pointer in order to talk about it. Although mobile phone-based interactions with remote screens have been investigated intensively in the last decade, none of them considered such direct pointing interactions for application in everyday tasks. In this paper, we present the concept and design space of PointerPhone which enables users to directly point at objects on a remote screen with their mobile phone and interact with them in a natural and seamless way. We detail the design space and distinguish three categories of interactions including low-level interactions using the mobile phone as a precise and fast pointing device, as well as an input and output device. We detail the category of widgetlevel interactions. Further, we demonstrate versatile high-level interaction techniques and show their application in a collaborative presentation scenario. Based on the results of a qualitative study, we provide design implications for application designs.",
"title": ""
},
{
"docid": "6d777bd24d9e869189c388af94384fa1",
"text": "OBJECTIVE\nThe aim of this study was to explore the efficacy of insulin-loaded trimethylchitosan nanoparticles on certain destructive effects of diabetes type one.\n\n\nMATERIALS AND METHODS\nTwenty-five male Wistar rats were randomly divided into three control groups (n=5) and two treatment groups (n=5). The control groups included normal diabetic rats without treatment and diabetic rats treated with the nanoparticles. The treatment groups included diabetic rats treated with the insulin-loaded trimethylchitosan nanoparticles and the diabetic rats treated with trade insulin. The experiment period was eight weeks and the rats were treated for the last two weeks.\n\n\nRESULT\nThe livers of the rats receiving both forms of insulin showed less severe microvascular steatosis and fatty degeneration, and ameliorated blood glucose, serum biomarkers, and oxidant/antioxidant parameters with no significant differences. The gene expression of pyruvate kinase could be compensated by both the treatment protocols and the new coated form of insulin could not significantly influence the gene expression of glucokinase (p<0.05). The result of the present study showed the potency of the nanoparticle form of insulin to attenuate hyperglycemia, oxidative stress, and inflammation in diabetes, which indicate the bioavailability of insulin-encapsulated trimethylchitosan nanoparticles.",
"title": ""
},
{
"docid": "376ea61271c36d1d8edbd869da910666",
"text": "Purpose – Many thought leaders are promoting information technology (IT) governance and its supporting practices as an approach to improve business/IT alignment. This paper aims to further explore this assumed positive relationship between IT governance practices and business/IT alignment. Design/methodology/approach – This paper explores the relationship between the use of IT governance practices and business/IT alignment, by creating a business/IT alignment maturity benchmark and qualitatively comparing the use of IT governance practices in the extreme cases. Findings – The main conclusion of the research is that all extreme case organisations are leveraging a broad set of IT governance practices, and that IT governance practices need to obtain at least a maturity level 2 (on a scale of 5) to positively influence business/IT alignment. Also, a list of 11 key enabling IT governance practices is identified. Research limitations/implications – This research adheres to the process theory, implying a limited definition of prediction. An important opportunity for future research lies in the domain of complementary statistical correlation research. Practical implications – This research identifies key IT governance practices that organisations can leverage to improve business/IT alignment. Originality/value – This research contributes to new theory building in the IT governance and alignment domain and provides practitioners with insight on how to implement IT governance in their organisations.",
"title": ""
},
{
"docid": "49d714c778b820fca5946b9a587d1e17",
"text": "The current Web of Data is producing increasingly large RDF datasets. Massive publication efforts of RDF data driven by initiatives like the Linked Open Data movement, and the need to exchange large datasets has unveiled the drawbacks of traditional RDF representations, inspired and designed by a documentcentric and human-readable Web. Among the main problems are high levels of verbosity/redundancy and weak machine-processable capabilities in the description of these datasets. This scenario calls for efficient formats for publication and exchange. This article presents a binary RDF representation addressing these issues. Based on a set of metrics that characterizes the skewed structure of real-world RDF data, we develop a proposal of an RDF representation that modularly partitions and efficiently represents three components of RDF datasets: Header information, a Dictionary, and the actual Triples structure (thus called HDT). Our experimental evaluation shows that datasets in HDT format can be compacted by more than fifteen times as compared to current naive representations, improving both parsing and processing while keeping a consistent publication scheme. Specific compression techniques over HDT further improve these compression rates and prove to outperform existing compression solutions for efficient RDF exchange. © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d657085072f829db812a2735d0e7f41c",
"text": "Recently, increasing attention has been drawn to training semantic segmentation models using synthetic data and computer-generated annotation. However, domain gap remains a major barrier and prevents models learned from synthetic data from generalizing well to real-world applications. In this work, we take the advantage of additional geometric information from synthetic data, a powerful yet largely neglected cue, to bridge the domain gap. Such geometric information can be generated easily from synthetic data, and is proven to be closely coupled with semantic information. With the geometric information, we propose a model to reduce domain shift on two levels: on the input level, we augment the traditional image translation network with the additional geometric information to translate synthetic images into realistic styles; on the output level, we build a task network which simultaneously performs depth estimation and semantic segmentation on the synthetic data. Meanwhile, we encourage the network to preserve the correlation between depth and semantics by adversarial training on the output space. We then validate our method on two pairs of synthetic to real dataset: Virtual KITTI→KITTI, and SYNTHIA→Cityscapes, where we achieve a significant performance gain compared to the non-adaptive baseline and methods without using geometric information. This demonstrates the usefulness of geometric information from synthetic data for cross-domain semantic segmentation.",
"title": ""
},
{
"docid": "6cf7a5286a03190b0910380830968351",
"text": "In this paper, the mechanical and aerodynamic design, carbon composite production, hierarchical control system design and vertical flight tests of a new unmanned aerial vehicle, which is capable of VTOL (vertical takeoff and landing) like a helicopter and long range horizontal flight like an airplane, are presented. Real flight tests show that the aerial vehicle can successfully operate in VTOL mode. Kalman filtering is employed to obtain accurate roll and pitch angle estimations.",
"title": ""
},
{
"docid": "5ed8f3b58ae1320411f15a4d7c0f5634",
"text": "With the advent of the ubiquitous era, context-based music recommendation has become one of rapidly emerging applications. Context-based music recommendation requires multidisciplinary efforts including low level feature extraction, music mood classification and human emotion prediction. Especially, in this paper, we focus on the implementation issues of context-based mood classification and music recommendation. For mood classification, we reformulate it into a regression problem based on support vector regression (SVR). Through the use of the SVR-based mood classifier, we achieved 87.8% accuracy. For music recommendation, we reason about the user's mood and situation using both collaborative filtering and ontology technology. We implement a prototype music recommendation system based on this scheme and report some of the results that we obtained.",
"title": ""
},
{
"docid": "bed9bdf4d4965610b85378f2fdbfab2a",
"text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is n o established vocabulary, leading to confusion when comparing research efforts. The t e r m W e b mining has been used in two distinct ways. T h e first, called Web content mining in this paper, is the process of information discovery f rom sources across the World Wide Web. The second, called Web m a g e mining, is the process of mining f o r user browsing and access patterns. I n this paper we define W e b mining and present an overview of the various research issues, techniques, and development e f forts . W e briefly describe W E B M I N E R , a system for Web usage mining, and conclude this paper by listing research issues.",
"title": ""
},
{
"docid": "1987ba476be524db448cce1835460a33",
"text": "We report on the main features of the IJCAI’07 program, including its theme, and its schedule and organization. In particular, we discuss an effective and novel presentation format at IJCAI in which oral and poster papers were presented in the same sessions categorized by topic area.",
"title": ""
},
{
"docid": "2343f238e92a74e3f456b2215b18ad20",
"text": "Nonlinear activation function is one of the main building blocks of artificial neural networks. Hyperbolic tangent and sigmoid are the most used nonlinear activation functions. Accurate implementation of these transfer functions in digital networks faces certain challenges. In this paper, an efficient approximation scheme for hyperbolic tangent function is proposed. The approximation is based on a mathematical analysis considering the maximum allowable error as design parameter. Hardware implementation of the proposed approximation scheme is presented, which shows that the proposed structure compares favorably with previous architectures in terms of area and delay. The proposed structure requires less output bits for the same maximum allowable error when compared to the state-of-the-art. The number of output bits of the activation function determines the bit width of multipliers and adders in the network. Therefore, the proposed activation function results in reduction in area, delay, and power in VLSI implementation of artificial neural networks with hyperbolic tangent activation function.",
"title": ""
},
{
"docid": "9960d17cb019350a279e4daccccb8e87",
"text": "Deep learning with neural networks is applied by an increasing number of people outside of classic research environments, due to the vast success of the methodology on a wide range of machine perception tasks. While this interest is fueled by beautiful success stories, practical work in deep learning on novel tasks without existing baselines remains challenging. This paper explores the specific challenges arising in the realm of real world tasks, based on case studies from research & development in conjunction with industry, and extracts lessons learned from them. It thus fills a gap between the publication of latest algorithmic and methodical developments, and the usually omitted nitty-gritty of how to make them work. Specifically, we give insight into deep learning projects on face matching, print media monitoring, industrial quality control, music scanning, strategy game playing, and automated machine learning, thereby providing best practices for deep learning in practice.",
"title": ""
},
{
"docid": "10706a3915da7a66696816af7bd1f638",
"text": "In this paper, we present a family of fluxgate magnetic sensors on printed circuit boards (PCBs), suitable for an electronic compass. This fabrication process is simple and inexpensive and uses commercially available thin ferromagnetic materials. We developed and analyzed the prototype sensors with software tools based on the finite-element method. We developed both singleand double-axis planar fluxgate magnetic sensors as well as front-end circuitry based on second-harmonic detection. Two amorphous magnetic materials, Vitrovac 6025X (25 mum thick) and Vitrovac 6025Z (20 mum thick), were used as the ferromagnetic core. We found that the same structures can be made with Metglas ferromagnetic core. The double-axis fluxgate magnetic sensor has a sensitivity of about 1.25 mV/muT with a linearity error of 1.5% full scale, which is suitable for detecting Earth's magnetic field (plusmn60 muT full-scale) in an electronic compass",
"title": ""
},
{
"docid": "8d9be82bfc32a4631f1b1f24e1d962a9",
"text": "Determine an optimal set of design parameter of PR whose DW fits a prescribed workspace as closely as possible is an important and foremost design task before manufacturing. In this paper, an optimal design method of a linear Delta robot (LDR) to obtain the prescribed cuboid dexterous workspace (PCDW) is proposed. The optical algorithms are based on the concept of performance chart. The performance chart shows the relationship between a criterion and design parameters graphically and globally. The kinematic problem is analyzed in brief to determine the design parameters and their relation. Two algorithms are designed to determine the maximal inscribed rectangle of dexterous workspace in the O-xy plane and plot the performance chart. As an applying example, a design result of the LDR with a prescribed cuboid dexterous workspace is presented. The optical results shown that every corresponding maximal inscribed rectangle can be obtained for every given RATE by the algorithm and the error of RATE is less than 0.05.The method and the results of this paper are very useful for the design and comparison of the parallel robot. Key-Words: Parallel Robot, Cuboid Dexterous Workspace, Optimal Design, performance chart ∗ This work is supported by Zhejiang Province Education Funded Grant #20051392.",
"title": ""
},
{
"docid": "ed34383cada585951e1dcc62445d08c2",
"text": "The increasing volume of e-mail and other technologically enabled communications are widely regarded as a growing source of stress in people’s lives. Yet research also suggests that new media afford people additional flexibility and control by enabling them to communicate from anywhere at any time. Using a combination of quantitative and qualitative data, this paper builds theory that unravels this apparent contradiction. As the literature would predict, we found that the more time people spent handling e-mail, the greater was their sense of being overloaded, and the more e-mail they processed, the greater their perceived ability to cope. Contrary to assumptions of prior studies, we found no evidence that time spent working mediates e-mail-related overload. Instead, e-mail’s material properties entwined with social norms and interpretations in a way that led informants to single out e-mail as a cultural symbol of the overload they experience in their lives. Moreover, by serving as a symbol, e-mail distracted people from recognizing other sources of overload in their work lives. Our study deepens our understanding of the impact of communication technologies on people’s lives and helps untangle those technologies’ seemingly contradictory influences.",
"title": ""
}
] |
scidocsrr
|
35ecb6181280a474aa2de6c410750227
|
Parallelizing Skip Lists for In-Memory Multi-Core Database Systems
|
[
{
"docid": "5ea65d6e878d2d6853237a74dbc5a894",
"text": "We study indexing techniques for main memory, including hash indexes, binary search trees, T-trees, B+-trees, interpolation search, and binary search on arrays. In a decision-support context, our primary concerns are the lookup time, and the space occupied by the index structure. Our goal is to provide faster lookup times than binary search by paying attention to reference locality and cache behavior, without using substantial extra space. We propose a new indexing technique called “Cache-Sensitive Search Trees” (CSS-trees). Our technique stores a directory structure on top of a sorted array. Nodes in this directory have size matching the cache-line size of the machine. We store the directory in an array and do not store internal-node pointers; child nodes can be found by performing arithmetic on array offsets. We compare the algorithms based on their time and space requirements. We have implemented all of the techniques, and present a performance study on two popular modern machines. We demonstrate that with ∗This research was supported by a David and Lucile Packard Foundation Fellowship in Science and Engineering, by an NSF Young Investigator Award, by NSF grant number IIS-98-12014, and by NSF CISE award CDA-9625374. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. a small space overhead, we can reduce the cost of binary search on the array by more than a factor of two. We also show that our technique dominates B+-trees, T-trees, and binary search trees in terms of both space and time. A cache simulation verifies that the gap is due largely to cache misses.",
"title": ""
},
{
"docid": "f10660b168700e38e24110a575b5aafa",
"text": "While the use of MapReduce systems (such as Hadoop) for large scale data analysis has been widely recognized and studied, we have recently seen an explosion in the number of systems developed for cloud data serving. These newer systems address \"cloud OLTP\" applications, though they typically do not support ACID transactions. Examples of systems proposed for cloud serving use include BigTable, PNUTS, Cassandra, HBase, Azure, CouchDB, SimpleDB, Voldemort, and many others. Further, they are being applied to a diverse range of applications that differ considerably from traditional (e.g., TPC-C like) serving workloads. The number of emerging cloud serving systems and the wide range of proposed applications, coupled with a lack of apples-to-apples performance comparisons, makes it difficult to understand the tradeoffs between systems and the workloads for which they are suited. We present the \"Yahoo! Cloud Serving Benchmark\" (YCSB) framework, with the goal of facilitating performance comparisons of the new generation of cloud data serving systems. We define a core set of benchmarks and report results for four widely used systems: Cassandra, HBase, Yahoo!'s PNUTS, and a simple sharded MySQL implementation. We also hope to foster the development of additional cloud benchmark suites that represent other classes of applications by making our benchmark tool available via open source. In this regard, a key feature of the YCSB framework/tool is that it is extensible--it supports easy definition of new workloads, in addition to making it easy to benchmark new systems.",
"title": ""
},
{
"docid": "00f88387c8539fcbed2f6ec4f953438d",
"text": "We present Masstree, a fast key-value database designed for SMP machines. Masstree keeps all data in memory. Its main data structure is a trie-like concatenation of B+-trees, each of which handles a fixed-length slice of a variable-length key. This structure effectively handles arbitrary-length possiblybinary keys, including keys with long shared prefixes. +-tree fanout was chosen to minimize total DRAM delay when descending the tree and prefetching each tree node. Lookups use optimistic concurrency control, a read-copy-update-like technique, and do not write shared data structures; updates lock only affected nodes. Logging and checkpointing provide consistency and durability. Though some of these ideas appear elsewhere, Masstree is the first to combine them. We discuss design variants and their consequences.\n On a 16-core machine, with logging enabled and queries arriving over a network, Masstree executes more than six million simple queries per second. This performance is comparable to that of memcached, a non-persistent hash table server, and higher (often much higher) than that of VoltDB, MongoDB, and Redis.",
"title": ""
},
{
"docid": "45c006e52bdb9cfa73fd4c0ebf692dfe",
"text": "Main memory capacities have grown up to a point where most databases fit into RAM. For main-memory database systems, index structure performance is a critical bottleneck. Traditional in-memory data structures like balanced binary search trees are not efficient on modern hardware, because they do not optimally utilize on-CPU caches. Hash tables, also often used for main-memory indexes, are fast but only support point queries. To overcome these shortcomings, we present ART, an adaptive radix tree (trie) for efficient indexing in main memory. Its lookup performance surpasses highly tuned, read-only search trees, while supporting very efficient insertions and deletions as well. At the same time, ART is very space efficient and solves the problem of excessive worst-case space consumption, which plagues most radix trees, by adaptively choosing compact and efficient data structures for internal nodes. Even though ART's performance is comparable to hash tables, it maintains the data in sorted order, which enables additional operations like range scan and prefix lookup.",
"title": ""
}
] |
[
{
"docid": "ddb36948e400c970309bd0886bfcfccb",
"text": "1 Introduction \"S pace\" and \"place\" are familiar words denoting common \"Sexperiences. We live in space. There is no space for an-< • / other building on the lot. The Great Plains look spacious. Place is security, space is freedom: we are attached to the one and long for the other. There is no place like home. What is home? It is the old homestead, the old neighborhood, home-town, or motherland. Geographers study places. Planners would like to evoke \"a sense of place.\" These are unexceptional ways of speaking. Space and place are basic components of the lived world; we take them for granted. When we think about them, however, they may assume unexpected meanings and raise questions we have not thought to ask. What is space? Let an episode in the life of the theologian Paul Tillich focus the question so that it bears on the meaning of space in experience. Tillich was born and brought up in a small town in eastern Germany before the turn of the century. The town was medieval in character. Surrounded by a wall and administered from a medieval town hall, it gave the impression of a small, protected, and self-contained world. To an imaginative child it felt narrow and restrictive. Every year, however young Tillich was able to escape with his family to the Baltic Sea. The flight to the limitless horizon and unrestricted space 3 4 Introduction of the seashore was a great event. Much later Tillich chose a place on the Atlantic Ocean for his days of retirement, a decision that undoubtedly owed much to those early experiences. As a boy Tillich was also able to escape from the narrowness of small-town life by making trips to Berlin. Visits to the big city curiously reminded him of the sea. Berlin, too, gave Tillich a feeling of openness, infinity, unrestricted space. 1 Experiences of this kind make us ponder anew the meaning of a word like \"space\" or \"spaciousness\" that we think we know well. What is a place? What gives a place its identity, its aura? These questions occurred to the physicists Niels Bohr and Werner Heisenberg when they visited Kronberg Castle in Denmark. Bohr said to Heisenberg: Isn't it strange how this castle changes as soon as one imagines that Hamlet lived here? As scientists we believe that a castle consists only of stones, and admire the way the …",
"title": ""
},
{
"docid": "a86dac3d0c47757ce8cad41499090b8e",
"text": "We propose a theory of regret regulation that distinguishes regret from related emotions, specifies the conditions under which regret is felt, the aspects of the decision that are regretted, and the behavioral implications. The theory incorporates hitherto scattered findings and ideas from psychology, economics, marketing, and related disciplines. By identifying strategies that consumers may employ to regulate anticipated and experienced regret, the theory identifies gaps in our current knowledge and thereby outlines opportunities for future research.",
"title": ""
},
{
"docid": "76cc47710ab6fa91446844368821c991",
"text": "Recommender systems (RSs) have been successfully applied to alleviate the problem of information overload and assist users' decision makings. Multi-criteria recommender systems is one of the RSs which utilizes users' multiple ratings on different aspects of the items (i.e., multi-criteria ratings) to predict user preferences. Traditional approaches simply treat these multi-criteria ratings as addons, and aggregate them together to serve for item recommendations. In this paper, we propose the novel approaches which treat criteria preferences as contextual situations. More specifically, we believe that part of multiple criteria preferences can be viewed as contexts, while others can be treated in the traditional way in multi-criteria recommender systems. We compare the recommendation performance among three settings: using all the criteria ratings in the traditional way, treating all the criteria preferences as contexts, and utilizing selected criteria ratings as contexts. Our experiments based on two real-world rating data sets reveal that treating criteria preferences as contexts can improve the performance of item recommendations, but they should be carefully selected. The hybrid model of using selected criteria preferences as contexts and the remaining ones in the traditional way is finally demonstrated as the overall winner in our experiments.",
"title": ""
},
{
"docid": "bdb41d1633c603f4b68dfe0191eb822b",
"text": "Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? A common philosophical position is that all concepts-even concepts about action and perception-are symbolic and abstract, and therefore must be implemented outside the brain's sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. Central to this picture are the neural theory of language and the theory of cogs, according to which, brain structures in the sensory-motor regions are exploited to characterise the so-called \"abstract\" concepts that constitute the meanings of grammatical constructions and general inference patterns.",
"title": ""
},
{
"docid": "3e9aa3bcc728f8d735f6b02e0d7f0502",
"text": "Linda Marion is a doctoral student at Drexel University. E-mail: Linda.Marion@drexel.edu. Abstract This exploratory study examined 250 online academic librarian employment ads posted during 2000 to determine current requirements for technologically oriented jobs. A content analysis software program was used to categorize the specific skills and characteristics listed in the ads. The results were analyzed using multivariate analysis (cluster analysis and multidimensional scaling). The results, displayed in a three-dimensional concept map, indicate 19 categories comprised of both computer related skills and behavioral characteristics that can be interpreted along three continua: (1) technical skills to people skills; (2) long-established technologies and behaviors to emerging trends; (3) technical service competencies to public service competencies. There was no identifiable “digital librarian” category.",
"title": ""
},
{
"docid": "66432ab91b459c3de8e867c8214029d8",
"text": "Distributional hypothesis lies in the root of most existing word representation models by inferring word meaning from its external contexts. However, distributional models cannot handle rare and morphologically complex words very well and fail to identify some finegrained linguistic regularity as they are ignoring the word forms. On the contrary, morphology points out that words are built from some basic units, i.e., morphemes. Therefore, the meaning and function of such rare words can be inferred from the words sharing the same morphemes, and many syntactic relations can be directly identified based on the word forms. However, the limitation of morphology is that it cannot infer the relationship between two words that do not share any morphemes. Considering the advantages and limitations of both approaches, we propose two novel models to build better word representations by modeling both external contexts and internal morphemes in a jointly predictive way, called BEING and SEING. These two models can also be extended to learn phrase representations according to the distributed morphology theory. We evaluate the proposed models on similarity tasks and analogy tasks. The results demonstrate that the proposed models can outperform state-of-the-art models significantly on both word and phrase representation learning.",
"title": ""
},
{
"docid": "44ff7fa960b3c91cd66c5fbceacfba3d",
"text": "God gifted sense of vision to the human being is an important aspect of our life. But there are some unfortunate people who lack the ability of visualizing things. The visually impaired have to face many challenges in their daily life. The problem gets worse when there is an obstacle in front of them. Blind stick is an innovative stick designed for visually disabled people for improved navigation. The paper presents a theoretical system concept to provide a smart ultrasonic aid for blind people. The system is intended to provide overall measures – Artificial vision and object detection. The aim of the overall system is to provide a low cost and efficient navigation aid for a visually impaired person who gets a sense of artificial vision by providing information about the environmental scenario of static and dynamic objects around them. Ultrasonic sensors are used to calculate distance of the obstacles around the blind person to guide the user towards the available path. Output is in the form of sequence of beep sound which the blind person can hear.",
"title": ""
},
{
"docid": "5a2be4e590d31b0cb553215f11776a15",
"text": "This paper presents a review of the state of the art and a discussion on vertical take-off and landing (VTOL) unmanned aerial vehicles (UAVs) applied to the inspection of power utility assets and other similar civil applications. The first part of the paper presents the authors' view on specific benefits and operation constraints associated with the use of UAVs in power industry applications. The second part cites more than 70 recent publications related to this field of application. Among them, some present complete technologies while others deal with specific subsystems relevant to the application of such mobile platforms to power line inspection. The authors close with a discussion of key factors for successful application of VTOL UAVs to power industry infrastructure inspection.",
"title": ""
},
{
"docid": "ebb8e498650191ea148ce1b97f443b21",
"text": "Many learning algorithms use a metric defined over the input s ace as a principal tool, and their performance critically depends on the quality of this metric. We address the problem of learning metrics using side-information in the form of equi valence constraints. Unlike labels, we demonstrate that this type of side-information can sometim es be automatically obtained without the need of human intervention. We show how such side-inform ation can be used to modify the representation of the data, leading to improved clustering and classification. Specifically, we present the Relevant Component Analysis (R CA) algorithm, which is a simple and efficient algorithm for learning a Mahalanobis metric. W e show that RCA is the solution of an interesting optimization problem, founded on an informa tion theoretic basis. If dimensionality reduction is allowed within RCA, we show that it is optimally accomplished by a version of Fisher’s linear discriminant that uses constraints. Moreover, unde r certain Gaussian assumptions, RCA can be viewed as a Maximum Likelihood estimation of the within cl ass covariance matrix. We conclude with extensive empirical evaluations of RCA, showing its ad v ntage over alternative methods.",
"title": ""
},
{
"docid": "08731e24a7ea5e8829b03d79ef801384",
"text": "A new power-rail ESD clamp circuit designed with PMOS as main ESD clamp device has been proposed and verified in a 65nm 1.2V CMOS process. The new proposed design with adjustable holding voltage controlled by the ESD detection circuit has better immunity against mis-trigger or transient-induced latch-on event. The layout area and the standby leakage current of this new proposed design are much superior to that of traditional RC-based power-rail ESD clamp circuit with NMOS as main ESD clamp device.",
"title": ""
},
{
"docid": "6b8329ef59c6811705688e48bf6c0c08",
"text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.",
"title": ""
},
{
"docid": "3cc84fda5e04ccd36f5b632d9da3a943",
"text": "We present a new algorithm, called marching cubes, that creates triangle models of constant density surfaces from 3D medical data. Using a divide-and-conquer approach to generate inter-slice connectivity, we create a case table that defines triangle topology. The algorithm processes the 3D medical data in scan-line order and calculates triangle vertices using linear interpolation. We find the gradient of the original data, normalize it, and use it as a basis for shading the models. The detail in images produced from the generated surface models is the result of maintaining the inter-slice connectivity, surface data, and gradient information present in the original 3D data. Results from computed tomography (CT), magnetic resonance (MR), and single-photon emission computed tomography (SPECT) illustrate the quality and functionality of marching cubes. We also discuss improvements that decrease processing time and add solid modeling capabilities.",
"title": ""
},
{
"docid": "c5dacb6e808c30b0e7c603c3ee93fe2b",
"text": "Deep learning presents many opportunities for image-based plant phenotyping. Here we consider the capability of deep convolutional neural networks to perform the leaf counting task. Deep learning techniques typically require large and diverse datasets to learn generalizable models without providing a priori an engineered algorithm for performing the task. This requirement is challenging, however, for applications in the plant phenotyping field, where available datasets are often small and the costs associated with generating new data are high. In this work we propose a new method for augmenting plant phenotyping datasets using rendered images of synthetic plants. We demonstrate that the use of high-quality 3D synthetic plants to augment a dataset can improve performance on the leaf counting task. We also show that the ability of the model to generate an arbitrary distribution of phenotypes mitigates the problem of dataset shift when training and testing on different datasets. Finally, we show that real and synthetic plants are significantly interchangeable when training a neural network on the leaf counting task.",
"title": ""
},
{
"docid": "62b8d1ecb04506794f81a47fccb63269",
"text": "This paper addresses the mode collapse for generative adversarial networks (GANs). We view modes as a geometric structure of data distribution in a metric space. Under this geometric lens, we embed subsamples of the dataset from an arbitrary metric space into the `2 space, while preserving their pairwise distance distribution. Not only does this metric embedding determine the dimensionality of the latent space automatically, it also enables us to construct a mixture of Gaussians to draw latent space random vectors. We use the Gaussian mixture model in tandem with a simple augmentation of the objective function to train GANs. Every major step of our method is supported by theoretical analysis, and our experiments on real and synthetic data confirm that the generator is able to produce samples spreading over most of the modes while avoiding unwanted samples, outperforming several recent GAN variants on a number of metrics and offering new features.",
"title": ""
},
{
"docid": "a5c58dbcbf2dc9c298f5fda2721f87a0",
"text": "The purpose of this study was to investigate how university students perceive their involvement in the cyberbullying phenomenon, and its impact on their well-being. Thus, this study presents a preliminary approach of how college students’ perceived involvement in acts of cyberbullying can be measured. Firstly, Exploratory Factor Analysis (N = 349) revealed a unidimensional structure of the four scales included in the Cyberbullying Inventory for College Students. Then, Item Response Theory (N = 170) was used to analyze the unidimensionality of each scale and the interactions between participants and items. Results revealed good item reliability and Cronbach’s a for each scale. Results also showed the potential of the instrument and how college students underrated their involvement in acts of cyberbullying. Additionally, aggression types, coping strategies and sources of help to deal with cyberbullying were identified and discussed. Lastly, age, gender and course-related issues were considered in the analysis. Implications for researchers and practitioners are discussed. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "54bf44e04920bdaa7388dbbbbd34a1a8",
"text": "TIDs have been detected using various measurement techniques, including HF sounders, incoherent scatter radars, in-situ measurements, and optical techniques. However, there is still much we do not yet know or understand about TIDs. Observations of TIDs have tended to be sparse, and there is a need for additional observations to provide new scientific insight into the geophysical source phenomenology and wave propagation physics. The dense network of GPS receivers around the globe offers a relatively new data source to observe and monitor TIDs. In this paper, we use Total Electron Content (TEC) measurements from 4000 GPS receivers throughout the continental United States to observe TIDs associated with the 11 March 2011 Tohoku tsunami. The tsunami propagated across the Pacific to the US west coast over several hours, and corresponding TIDs were observed over Hawaii, and via the network of GPS receivers in the US. The network of GPS receivers in effect provides a 2D spatial map of TEC perturbations, which can be used to calculate TID parameters, including horizontal wavelength, speed, and period. Well-formed, planar traveling ionospheric disturbances were detected over the west coast of the US ten hours after the earthquake. Fast Fourier transform analysis of the observed waveforms revealed that the period of the wave was 15.1 minutes with a horizontal wavelength of 194.8 km, phase velocity of 233.0 m/s, and an azimuth of 105.2 (propagating nearly due east in the direction of the tsunami wave). These results are consistent with TID observations in airglow measurements from Hawaii earlier in the day, and with other GPS TEC observations. The vertical wavelength of the TID was found to be 43.5 km. The TIDs moved at the same velocity as the tsunami itself. Much work is still needed in order to fully understand the ocean-atmosphere coupling mechanisms, which could lead to the development of effective tsunami detection/warning systems. The work presented in this paper demonstrates a technique for the study of ionospheric perturbations that can affect navigation, communications and surveillance systems.",
"title": ""
},
{
"docid": "adeebdc680819ca992f9d53e4866122a",
"text": "Large numbers of black kites (Milvus migrans govinda) forage with house crows (Corvus splendens) at garbage dumps in many Indian cities. Such aggregation of many individuals results in aggressiveness where adoption of a suitable behavioral approach is crucial. We studied foraging behavior of black kites in dumping sites adjoining two major corporation markets of Kolkata, India. Black kites used four different foraging tactics which varied and significantly influenced foraging attempts and their success rates. Kleptoparasitism was significantly higher than autonomous foraging events; interspecific kleptoparasitism was highest in occurrence with a low success rate, while ‘autonomous-ground’ was least adopted but had the highest success rate.",
"title": ""
},
{
"docid": "ecd144226fdb065c2325a0d3131fd802",
"text": "The unknown and the invisible exploit the unwary and the uninformed for illicit financial gain and reputation damage.",
"title": ""
},
{
"docid": "df27cb7c7ab82ef44aebfeb45d6c3cf1",
"text": "Nowadays, data is created by humans as well as automatically collected by physical things, which embed electronics, software, sensors and network connectivity. Together, these entities constitute the Internet of Things (IoT). The automated analysis of its data can provide insights into previously unknown relationships between things, their environment and their users, facilitating an optimization of their behavior. Especially the real-time analysis of data, embedded into physical systems, can enable new forms of autonomous control. These in turn may lead to more sustainable applications, reducing waste and saving resources. IoT’s distributed and dynamic nature, resource constraints of sensors and embedded devices as well as the amounts of generated data are challenging even the most advanced automated data analysis methods known today. In particular, the IoT requires a new generation of distributed analysis methods. Many existing surveys have strongly focused on the centralization of data in the cloud and big data analysis, which follows the paradigm of parallel high-performance computing. However, bandwidth and energy can be too limited for the transmission of raw data, or it is prohibited due to privacy constraints. Such communication-constrained scenarios require decentralized analysis algorithms which at least partly work directly on the generating devices. After listing data-driven IoT applications, in contrast to existing surveys, we highlight the differences between cloudbased and decentralized analysis from an algorithmic perspective. We present the opportunities and challenges of research on communication-efficient decentralized analysis algorithms. Here, the focus is on the difficult scenario of vertically partitioned data, which covers common IoT use cases. The comprehensive bibliography aims at providing readers with a good starting point for their own work.",
"title": ""
},
{
"docid": "731a3a94245b67df3e362ac80f41155f",
"text": "Opportunistic networking offers many appealing application perspectives from local social-networking applications to supporting communications in remote areas or in disaster and emergency situations. Yet, despite the increasing penetration of smartphones, opportunistic networking is not feasible with most popular mobile devices. There is still no support for WiFi Ad-Hoc and protocols such as Bluetooth have severe limitations (short range, pairing). We believe that WiFi Ad-Hoc communication will not be supported by most popular mobile OSes (i.e., iOS and Android) and that WiFi Direct will not bring the desired features. Instead, we propose WiFi-Opp, a realistic opportunistic setup relying on (i) open stationary APs and (ii) spontaneous mobile APs (i.e., smartphones in AP or tethering mode), a feature used to share Internet access, which we use to enable opportunistic communications. We compare WiFi-Opp to WiFi Ad-Hoc by replaying real-world contact traces and evaluate their performance in terms of capacity for content dissemination as well as energy consumption. While achieving comparable throughput, WiFi-Opp is up to 10 times more energy efficient than its Ad-Hoc counterpart. Eventually, a proof of concept demonstrates the feasibility of WiFi-Opp, which opens new perspectives for opportunistic networking.",
"title": ""
}
] |
scidocsrr
|
9711dfa77aaad4d6223d8ab145ad4f7f
|
Antenna-in-Package and Transmit–Receive Switch for Single-Chip Radio Transceivers of Differential Architecture
|
[
{
"docid": "6d70ac4457983c7df8896a9d31728015",
"text": "This brief presents a differential transmit-receive (T/R) switch integrated in a 0.18-mum standard CMOS technology for wireless applications up to 6 GHz. This switch design employs fully differential architecture to accommodate the design challenge of differential transceivers and improve the linearity performance. It exhibits less than 2-dB insertion loss, higher than 15-dB isolation, in a 60 mumtimes40 mum area. 15-dBm power at 1-dB compression point (P1dB) is achieved without using additional techniques to enhance the linearity. This switch is suitable for differential transceiver front-ends with a moderate power level. To the best of the authors' knowledge, this is the first reported differential T/R switch in CMOS for multistandard and wideband wireless applications",
"title": ""
},
{
"docid": "277919545c003c0c2a266ace0d70de03",
"text": "Two single-pole, double-throw transmit/receive switches were designed and fabricated with different substrate resistances using a 0.18-/spl mu/m p/sup $/substrate CMOS process. The switch with low substrate resistances exhibits 0.8-dB insertion loss and 17-dBm P/sub 1dB/ at 5.825 GHz, whereas the switch with high substrate resistances has 1-dB insertion loss and 18-dBm P/sub 1dB/. These results suggest that the optimal insertion loss can be achieved with low substrate resistances and 5.8-GHz T/R switches with excellent insertion loss and reasonable power handling capability can be implemented in a 0.18-/spl mu/m CMOS process.",
"title": ""
}
] |
[
{
"docid": "1fd0f4fd2d63ef3a71f8c56ce6a25fb5",
"text": "A new ‘growing’ maximum likelihood classification algorithm for small reservoir delineation has been developed and is tested with Radarsat-2 data for reservoirs in the semi-arid Upper East Region, Ghana. The delineation algorithm is able to find the land-water boundary from SAR imagery for different weather and environmental conditions. As such, the algorithm allows for remote sensed operational monitoring of small reservoirs.",
"title": ""
},
{
"docid": "005c1b8d6ca23a4ba2d315d2e541dba7",
"text": "This paper proposes a satellite receiver filter design using FIR digital filtering technique. We present various design methods like windowing, least squares and equiripple for satellite burst demodulator application and compare their performance. Various designs of FIR filter are compared from the view point of hardware complexity, frequency response characteristics and implementation strategies. The filter is designed for band pass of the frequency range of 100 MHz to 500 MHz suitable for the entire bandwidth of satellite transponder. The burst mode detector requires narrow passband to increase SNR for preamble portion. When acquisition phase is complete, the bandpass is increased to full bandwidth of the signal.",
"title": ""
},
{
"docid": "5318baa10a6db98a0f31c6c30fdf6104",
"text": "In image analysis, the images are often represented by multiple visual features (also known as multiview features), that aim to better interpret them for achieving remarkable performance of the learning. Since the processes of feature extraction on each view are separated, the multiple visual features of images may include overlap, noise, and redundancy. Thus, learning with all the derived views of the data could decrease the effectiveness. To address this, this paper simultaneously conducts a hierarchical feature selection and a multiview multilabel (MVML) learning for multiview image classification, via embedding a proposed a new block-row regularizer into the MVML framework. The block-row regularizer concatenating a Frobenius norm (F-norm) regularizer and an l2,1-norm regularizer is designed to conduct a hierarchical feature selection, in which the F-norm regularizer is used to conduct a high-level feature selection for selecting the informative views (i.e., discarding the uninformative views) and the 12,1-norm regularizer is then used to conduct a low-level feature selection on the informative views. The rationale of the use of a block-row regularizer is to avoid the issue of the over-fitting (via the block-row regularizer), to remove redundant views and to preserve the natural group structures of data (via the F-norm regularizer), and to remove noisy features (the 12,1-norm regularizer), respectively. We further devise a computationally efficient algorithm to optimize the derived objective function and also theoretically prove the convergence of the proposed optimization method. Finally, the results on real image datasets show that the proposed method outperforms two baseline algorithms and three state-of-the-art algorithms in terms of classification performance.",
"title": ""
},
{
"docid": "9c97262605b3505bbc33c64ff64cfcd5",
"text": "This essay focuses on possible nonhuman applications of CRISPR/Cas9 that are likely to be widely overlooked because they are unexpected and, in some cases, perhaps even \"frivolous.\" We look at five uses for \"CRISPR Critters\": wild de-extinction, domestic de-extinction, personal whim, art, and novel forms of disease prevention. We then discuss the current regulatory framework and its possible limitations in those contexts. We end with questions about some deeper issues raised by the increased human control over life on earth offered by genome editing.",
"title": ""
},
{
"docid": "c79be5b8b375a9bced1bfe5c3f9024ce",
"text": "Recent technological advances have enabled DNA methylation to be assayed at single-cell resolution. However, current protocols are limited by incomplete CpG coverage and hence methods to predict missing methylation states are critical to enable genome-wide analyses. We report DeepCpG, a computational approach based on deep neural networks to predict methylation states in single cells. We evaluate DeepCpG on single-cell methylation data from five cell types generated using alternative sequencing protocols. DeepCpG yields substantially more accurate predictions than previous methods. Additionally, we show that the model parameters can be interpreted, thereby providing insights into how sequence composition affects methylation variability.",
"title": ""
},
{
"docid": "1d72e3bbc8106a8f360c05bd0a638f0d",
"text": "Advancements in computer vision, natural language processing and deep learning techniques have resulted in the creation of intelligent systems that have achieved impressive results in the visually grounded tasks such as image captioning and visual question answering (VQA). VQA is a task that can be used to evaluate a system's capacity to understand an image. It requires an intelligent agent to answer a natural language question about an image. The agent must ground the question into the image and return a natural language answer. One of the latest techniques proposed to tackle this task is the attention mechanism. It allows the agent to focus on specific parts of the input in order to answer the question. In this paper we propose a novel long short-term memory (LSTM) architecture that uses dual attention to focus on specific question words and parts of the input image in order to generate the answer. We evaluate our solution on the recently proposed Visual 7W dataset and show that it performs better than state of the art. Additionally, we propose two new question types for this dataset in order to improve model evaluation. We also make a qualitative analysis of the results and show the strength and weakness of our agent.",
"title": ""
},
{
"docid": "38a4b3c515ee4285aa88418b30937c62",
"text": "Docker containers have recently become a popular approach to provision multiple applications over shared physical hosts in a more lightweight fashion than traditional virtual machines. This popularity has led to the creation of the Docker Hub registry, which distributes a large number of official and community images. In this paper, we study the state of security vulnerabilities in Docker Hub images. We create a scalable Docker image vulnerability analysis (DIVA) framework that automatically discovers, downloads, and analyzes both official and community images on Docker Hub. Using our framework, we have studied 356,218 images and made the following findings: (1) both official and community images contain more than 180 vulnerabilities on average when considering all versions; (2) many images have not been updated for hundreds of days; and (3) vulnerabilities commonly propagate from parent images to child images. These findings demonstrate a strong need for more automated and systematic methods of applying security updates to Docker images and our current Docker image analysis framework provides a good foundation for such automatic security update.",
"title": ""
},
{
"docid": "7c1fafba892be56bb81a59df996bd95f",
"text": "Cowper's gland syringocele is an uncommon, underdiagnosed cystic dilatation of Cowper's gland ducts showing various radiological patterns. Herein we report a rare case of giant Cowper's gland syringocele in an adult male patient, with description of MRI findings and management outcome.",
"title": ""
},
{
"docid": "109c5caa55d785f9f186958f58746882",
"text": "Apriori and Eclat are the best-known basic algorithms for mining frequent item sets in a set of transactions. In this paper I describe implementations of these two algorithms that use several optimizations to achieve maximum performance, w.r.t. both execution time and memory usage. The Apriori implementation is based on a prefix tree representation of the needed counters and uses a doubly recursive scheme to count the transactions. The Eclat implementation uses (sparse) bit matrices to represent transactions lists and to filter closed and maximal item sets.",
"title": ""
},
{
"docid": "5f330c46df15da0b0d932590a1a773a9",
"text": "During the past decade, the alexithymia construct has undergone theoretical refinement and empirical testing and has evolved into a potential new paradigm for understanding the influence of emotions and personality on physical illness and health. Like the traditional psychosomatic medicine paradigm, the alexithymia construct links susceptibility to disease with prolonged states of emotional arousal. But whereas the traditional paradigm emphasizes intrapsychic conflicts that are presumed to generate such emotional states, the alexithymia construct focuses attention on deficits in the cognitive processing of emotions, which remain undifferentiated and poorly regulated. This paper reviews the development and validation of the construct and discusses its clinical implications for psychosomatic medicine.",
"title": ""
},
{
"docid": "5c9ba6384b6983a26212e8161e502484",
"text": "The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples – ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.",
"title": ""
},
{
"docid": "1685d4c4a61bcb7f9f61d1e0d9fd1241",
"text": "A review of recent research addressed two questions: how common are problems of substance abuse in traumatic brain injury (TBI), and to what extent does alcohol and other drug use mediate outcome? Studies showed alcohol intoxication present in one third to one half of hospitalizations; data for other drug intoxication were not available. Nearly two thirds of rehabilitation patients may have a history of substance abuse that preceded their injuries. Intoxication was related to acute complications, longer hospital stays, and poorer discharge status; however, these relationships may have been caused by colinearity with history. History of substance abuse showed the same morbidity, and was further associated with higher mortality rates, poorer neuropsychological outcome, and greater likelihood of repeat injuries and late deterioration. The effect of history may be caused by subgroups with more severe substance abuse problems. Implications for rehabilitation are discussed, including the potential negative impact of untreated substance abuse on the ability to document efficacy of rehabilitation efforts.",
"title": ""
},
{
"docid": "7398b6a56fa55098e7cf36ca3f14db48",
"text": "The World Health Organization projects that the number of people living in cities will nearly double over the next few decades, so urban centers need to provide more sustainable solutions for smart living. New technologies— such as materials, sensors, wireless communications, and controls—will be necessary to manage living environments that proactively sense behavioral and health risks and provide situationaware responses t o emergencies or disasters. In addition, utility and transportation networks must adapt to dynamic usage, tra c conditions, and user behavior with a minimal carbon footprint; a clean and renewable energy grid must actuate localized energy and power control; and pervasive security is needed to detect and prevent potential threats. This vision is bold but critical to enabling smart living. Cloud-only models face serious challenges in latency, network bandwidth, geographic focus, reliability, and security. Fog computing reduces these challenges by providing a systemlevel horizontal architecture to distribute computing, storage, control, and networking resources and services from the cloud to connected devices (“things”). Think of fog computing as the cloud on the ground: it enables latency-sensitive computing to be performed in close proximity to the things it controls. Over time, fog and cloud computing will converge into uni ed end-to-end platforms o ering integrated services and applications along the continuum from the cloud to things. Applications developed and deployed for the cloud will be able to run in fog and vice versa.",
"title": ""
},
{
"docid": "d62c50e109195f483119ebe36350ff54",
"text": "We address the problem of inferring users’ interests from microblogging sites such as Twitter, based on their utterances and interactions in the social network. Inferring user interests is important for systems such as search and recommendation engines to provide information that is more attuned to the likes of its users. In this paper, we propose a probabilistic generative model of user utterances that encapsulates both user and network information. This model captures the complex interactions between varied interests of the users, his level of activeness in the network, and the information propagation from the neighbors. As exact probabilistic inference in this model is intractable, we propose an online variational inference algorithm that also takes into account evolving social graph, user and his neighbors? interests. We prove the optimality of the online inference with respect to an equivalent batch update. We present experimental results performed on the actual Twitter users, validating our approach. We also present extensive results showing inadequacy of using Mechanical Turk platform for large scale validation.",
"title": ""
},
{
"docid": "e3863f0dc86fd194342c050df45f6e95",
"text": "This paper opened the new area the information theory. Before this paper, most people believed that the only way to make the error probability of transmission as small as desired is to reduce the data rate (such as a long repetition scheme). However, surprisingly this paper revealed that it does not need to reduce the data rate for achieving that much of small errors. It proved that we can get some positive data rate that has the same small error probability and also there is an upper bound of the data rate, which means we cannot achieve the data rate with any encoding scheme that has small enough error probability over the upper bound.",
"title": ""
},
{
"docid": "c553ea1a03550bdc684dbacbb9bef385",
"text": "NeuCoin is a decentralized peer-to-peer cryptocurrency derived from Sunny King’s Peercoin, which itself was derived from Satoshi Nakamoto’s Bitcoin. As with Peercoin, proof-of-stake replaces proof-of-work as NeuCoin’s security model, effectively replacing the operating costs of Bitcoin miners (electricity, computers) with the capital costs of holding the currency. Proof-of-stake also avoids proof-of-work’s inherent tendency towards centralization resulting from competition for coinbase rewards among miners based on lowest cost electricity and hash power. NeuCoin increases security relative to Peercoin and other existing proof-of-stake currencies in numerous ways, including: (1) incentivizing nodes to continuously stake coins over time through substantially higher mining rewards and lower minimum stake age; (2) abandoning the use of coin age in the mining formula; (3) causing the stake modifier parameter to change over time for each stake; and (4) utilizing a client that punishes nodes that attempt to mine on multiple branches with duplicate stakes. This paper demonstrates how NeuCoin’s proof-of-stake implementation addresses all commonly raised “nothing at stake” objections to generic proof-of-stake systems. It also reviews many of the flaws of proof-of-work designs to highlight the potential for an alternate cryptocurrency that solves these flaws.",
"title": ""
},
{
"docid": "d97669811124f3c6f4cef5b2a144a46c",
"text": "Relational databases are queried using database query languages such as SQL. Natural language interfaces to databases (NLIDB) are systems that translate a natural language sentence into a database query. In this modern techno-crazy world, as more and more laymen access various systems and applications through their smart phones and tablets, the need for Natural Language Interfaces (NLIs) has increased manifold. The challenges in Natural language Query processing are interpreting the sentence correctly, removal of various ambiguity and mapping to the appropriate context. Natural language access problem is actually composed of two stages Linguistic processing and Database processing. NLIDB techniques encompass a wide variety of approaches. The approaches include traditional methods such as Pattern Matching, Syntactic Parsing and Semantic Grammar to modern systems such as Intermediate Query Generation, Machine Learning and Ontologies. In this report, various approaches to build NLIDB systems have been analyzed and compared along with their advantages, disadvantages and application areas. Also, a natural language interface to a flight reservation system has been implemented comprising of flight and booking inquiry systems.",
"title": ""
},
{
"docid": "1987ba476be524db448cce1835460a33",
"text": "We report on the main features of the IJCAI’07 program, including its theme, and its schedule and organization. In particular, we discuss an effective and novel presentation format at IJCAI in which oral and poster papers were presented in the same sessions categorized by topic area.",
"title": ""
},
{
"docid": "b33e896a23f27a81f04aaeaff2f2350c",
"text": "Nowadays it has become increasingly common for family members to be distributed in different time zones. These time differences pose specific challenges for communication within the family and result in different communication practices to cope with them. To gain an understanding of current challenges and practices, we interviewed people who regularly communicate with immediate family members living in other time zones. We report primary findings from the interviews, and identify design opportunities for improving the experience of cross time zone family communication.",
"title": ""
},
{
"docid": "61ae981007d9ad7ba5499c434b17c371",
"text": "of a dissertation at the University of Miami. Dissertation supervised by Professor Mei-Ling Shyu. No. of pages in text. (160) With the proliferation of digital photo-capture devices and the development of web technologies, the era of big data has arrived, which poses challenges to process and retrieve vast amounts of data with heterogeneous and diverse dimensionality. In the field of multimedia information retrieval, traditional keyword-based approaches perform well on text data, but it can hardly adapt to image and video due to the fact that a large proportion of this data nowadays is unorganized. This means the textual descriptions of images or videos, also known as metadata, could be unavailable, incomplete or even incorrect. Therefore, Content-Based Multimedia Information Retrieval (CBMIR) has emerged, which retrieves relevant images or videos by analyzing their visual content. Various data mining techniques such as feature selection, classification, clustering and filtering, have been utilized in CBMIR to solve issues involving data imbalance, data quality and size, limited ground truth, user subjectivity, etc. However, as an intrinsic problem of CBMIR, the semantic gap between low-level visual features and high-level semantics is still difficult to conquer. Now, with the rapid popularization of social media repositories, which allows users to upload images and videos, and assign tags to describe them, it has brought new directions as well as new challenges to the area of multimedia information retrieval. As suggested by the name, multimedia is a combination of different content forms that include text, audio, images, videos, etc. A series of research studies have been conducted to take advantage of one modality to compensate the other for",
"title": ""
}
] |
scidocsrr
|
7c864c37d20aa08948af106b46b42ca3
|
UA-DETRAC 2017 : Report of AVSS 2017 & IT 4 S Challenge on Advance Traffic Monitoring
|
[
{
"docid": "198311a68ad3b9ee8020b91d0b029a3c",
"text": "Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods.",
"title": ""
},
{
"docid": "a77eddf9436652d68093946fbe1d2ed0",
"text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.",
"title": ""
}
] |
[
{
"docid": "5a8d4bfb89468d432b7482062a0cbf2d",
"text": "While “no one size fits all” is a sound philosophy for system designers to follow, it poses multiple challenges for application developers and system administrators. It can be hard for an application developer to pick one system when the needs of her application match the features of multiple “one size” systems. The choice becomes considerably harder when different components of an application fit the features of different “one size” systems. Considerable manual effort goes into creating and tuning such multi-system applications. An application’s data and workload properties may change over time, often in unpredictable and bursty ways. Consequently, the “one size” system that is best for an application can change over time. Adapting to change can be hard when application development is coupled tightly with any individual “one size” system. In this paper, we make the case for developing a new breed of Database Management Systems that we term DBMS. A DBMS contains multiple “one size” systems internally. An application specifies its execution requirements on aspects like performance, availability, consistency, change, and cost to the DBMS declaratively. For all requests (e.g., queries) made by the application, the DBMS will select the execution plan that meets the application’s requirements best. A unique aspect of the execution plan in a DBMS is that the plan includes the selection of one or more “one size” systems. The plan is then deployed and managed automatically on the selected system(s). If application requirements change beyond what was planned for originally by the DBMS, then the application can be reoptimized and redeployed; usually with no additional effort required from the application developer. The DBMS approach has the potential to address the challenges that application developers and system administrators face from the vast and growing number of “one size” systems today. However, this approach poses many research challenges that we discuss in this paper. We are taking the DBMS approach in a platform, called Cyclops, that we are building for continuous query execution. We will use Cyclops throughout the paper to give concrete illustrations of the benefits and challenges of the DBMS approach. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 6 Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.",
"title": ""
},
{
"docid": "17e761f30e9f8cffa84a5a2c142e4665",
"text": "In this paper, a neural-dynamic optimization-based nonlinear model predictive control (NMPC) is developed for controlling leader-follower mobile robots formation. Consider obstacles in the environments, a control strategy is proposed for the formations which includes separation-bearing-orientation scheme (SBOS) for regular leader-follower formation and separation-distance scheme (SDS) for obstacle avoidance. During the formation motion, the leader robot shall track a desired trajectory and the desire leader-follower relationship can be maintained through SBOS method; meanwhile, the followers can avoid the collision by applying the SDS. The formation-error kinematics of both SBOS and SDS are derived and a constrained quadratic programming (QP) can be obtained by transforming the MPC method. Then, over a finite-receding horizon, the QP problem can be solved by utilizing the primal-dual neural network (PDNN) with parallel capability. The computation complexity can be greatly reduced by the implemented neural-dynamic optimization. Compared with other existing formation control approaches, the developed solution in this paper is rooted in NMPC techniques with input constraints and the novel QP problem formulation. Finally, experimental studies of the proposed formation control approach have been performed on several mobile robots to verify the effectiveness.",
"title": ""
},
{
"docid": "41c718697d19ee3ca0914255426a38ab",
"text": "Migraine is a debilitating neurological disorder that affects about 12% of the population. In the past decade, the role of the neuropeptide calcitonin gene-related peptide (CGRP) in migraine has been firmly established by clinical studies. CGRP administration can trigger migraines, and CGRP receptor antagonists ameliorate migraine. In this review, we will describe multifunctional activities of CGRP that could potentially contribute to migraine. These include roles in light aversion, neurogenic inflammation, peripheral and central sensitization of nociceptive pathways, cortical spreading depression, and regulation of nitric oxide production. Yet clearly there will be many other contributing genes that could act in concert with CGRP. One candidate is pituitary adenylate cyclase-activating peptide (PACAP), which shares some of the same actions as CGRP, including the ability to induce migraine in migraineurs and light aversive behavior in rodents. Interestingly, both CGRP and PACAP act on receptors that share an accessory subunit called receptor activity modifying protein-1 (RAMP1). Thus, comparisons between the actions of these two migraine-inducing neuropeptides, CGRP and PACAP, may provide new insights into migraine pathophysiology.",
"title": ""
},
{
"docid": "338efe667e608779f4f41d1cdb1839bb",
"text": "In ASP.NET, Programmers maybe use POST or GET to pass parameter's value. Two methods are easy to come true. But In ASP.NET, It is not easy to pass parameter's value. In ASP.NET, Programmers maybe use many methods to pass parameter's value, such as using Application, Session, Querying, Cookies, and Forms variables. In this paper, by way of pass value from WebForm1.aspx to WebForm2.aspx and show out the value on WebForm2. We can give and explain actually examples in ASP.NET language to introduce these methods.",
"title": ""
},
{
"docid": "643be78202e4d118e745149ed389b5ef",
"text": "Little clinical research exists on the contribution of the intrinsic foot muscles (IFM) to gait or on the specific clinical evaluation or retraining of these muscles. The purpose of this clinical paper is to review the potential functions of the IFM and their role in maintaining and dynamically controlling the medial longitudinal arch. Clinically applicable methods of evaluation and retraining of these muscles for the effective management of various foot and ankle pain syndromes are discussed.",
"title": ""
},
{
"docid": "8f0073815a64e4f5d3e4e8cb9290fa65",
"text": "In this paper, we investigate the benefits of applying a form of network coding known as random linear coding (RLC) to unicast applications in disruption-tolerant networks (DTNs). Under RLC, nodes store and forward random linear combinations of packets as they encounter each other. For the case of a single group of packets originating from the same source and destined for the same destination, we prove a lower bound on the probability that the RLC scheme achieves the minimum time to deliver the group of packets. Although RLC significantly reduces group delivery delays, it fares worse in terms of average packet delivery delay and network transmissions. When replication control is employed, RLC schemes reduce group delivery delays without increasing the number of transmissions. In general, the benefits achieved by RLC are more significant under stringent resource (bandwidth and buffer) constraints, limited signaling, highly dynamic networks, and when applied to packets in the same flow. For more practical settings with multiple continuous flows in the network, we show the importance of deploying RLC schemes with a carefully tuned replication control in order to achieve reduction in average delay, which is observed to be as large as 20% when buffer space is constrained.",
"title": ""
},
{
"docid": "21511302800cd18d21dbc410bec3cbb2",
"text": "We investigate theoretical and practical aspects of the design of far-field RF power extraction systems consisting of antennas, impedance matching networks and rectifiers. Fundamental physical relationships that link the operating bandwidth and range are related to technology dependent quantities like threshold voltage and parasitic capacitances. This allows us to design efficient planar antennas, coupled resonator impedance matching networks and low-power rectifiers in standard CMOS technologies (0.5-mum and 0.18-mum) and accurately predict their performance. Experimental results from a prototype power extraction system that operates around 950 MHz and integrates these components together are presented. Our measured RF power-up threshold (in 0.18-mum, at 1 muW load) was 6 muWplusmn10%, closely matching the predicted value of 5.2 muW.",
"title": ""
},
{
"docid": "9696e2f6ff6e16f378ae377798ee3332",
"text": "0957-4174/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.eswa.2008.06.054 * Corresponding author. Address: School of Compu ogy, Beijing Jiaotong University, Beijing 100044, Chin E-mail address: jnchen06@163.com (J. Chen). As an important preprocessing technology in text classification, feature selection can improve the scalability, efficiency and accuracy of a text classifier. In general, a good feature selection method should consider domain and algorithm characteristics. As the Naïve Bayesian classifier is very simple and efficient and highly sensitive to feature selection, so the research of feature selection specially for it is significant. This paper presents two feature evaluation metrics for the Naïve Bayesian classifier applied on multiclass text datasets: Multi-class Odds Ratio (MOR), and Class Discriminating Measure (CDM). Experiments of text classification with Naïve Bayesian classifiers were carried out on two multi-class texts collections. As the results indicate, CDM and MOR gain obviously better selecting effect than other feature selection approaches. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a7fe6b1ba27c13c95d1a48ca401e25fd",
"text": "BACKGROUND\nselecting the correct statistical test and data mining method depends highly on the measurement scale of data, type of variables, and purpose of the analysis. Different measurement scales are studied in details and statistical comparison, modeling, and data mining methods are studied based upon using several medical examples. We have presented two ordinal-variables clustering examples, as more challenging variable in analysis, using Wisconsin Breast Cancer Data (WBCD).\n\n\nORDINAL-TO-INTERVAL SCALE CONVERSION EXAMPLE\na breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests.\n\n\nRESULTS\nthe sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable.\n\n\nCONCLUSION\nby using appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is granted. Moreover, descriptive and inferential statistics in addition to modeling approach must be selected based on the scale of the variables.",
"title": ""
},
{
"docid": "8016e80e506dcbae5c85fdabf1304719",
"text": "We introduce globally normalized convolutional neural networks for joint entity classification and relation extraction. In particular, we propose a way to utilize a linear-chain conditional random field output layer for predicting entity types and relations between entities at the same time. Our experiments show that global normalization outperforms a locally normalized softmax layer on a benchmark dataset.",
"title": ""
},
{
"docid": "e457ab9e14f6fa104a15421d9263815a",
"text": "Many aquaculture systems generate high amounts of wastewater containing compounds such as suspended solids, total nitrogen and total phosphorus. Today, aquaculture is imperative because fish demand is increasing. However, the load of waste is directly proportional to the fish production. Therefore, it is necessary to develop more intensive fish culture with efficient systems for wastewater treatment. A number of physical, chemical and biological methods used in conventional wastewater treatment have been applied in aquaculture systems. Constructed wetlands technology is becoming more and more important in recirculating aquaculture systems (RAS) because wetlands have proven to be well-established and a cost-effective method for treating wastewater. This review gives an overview about possibilities to avoid the pollution of water resources; it focuses initially on the use of systems combining aquaculture and plants with a historical review of aquaculture and the treatment of its effluents. It discusses the present state, taking into account the load of pollutants in wastewater such as nitrates and phosphates, and finishes with recommendations to prevent or at least reduce the pollution of water resources in the future.",
"title": ""
},
{
"docid": "a2d699f3c600743c732b26071639038a",
"text": "A novel rectifying circuit topology is proposed for converting electromagnetic pulse waves (PWs), that are collected by a wideband antenna, into dc voltage. The typical incident signal considered in this paper consists of 10-ns pulses modulated around 2.4 GHz with a repetition period of 100 ns. The proposed rectifying circuit topology comprises a double-current architecture with inductances that collect the energy during the pulse delivery as well as an output capacitance that maintains the dc output voltage between the pulses. Experimental results show that the efficiency of the rectifier reaches 64% for a mean available incident power of 4 dBm. Similar performances are achieved when a wideband antenna is combined with the rectifier in order to realize a rectenna. By increasing the repetition period of the incident PWs to 400 ns, the rectifier still operates with an efficiency of 52% for a mean available incident pulse power of −8 dBm. Finally, the proposed PW rectenna is tested for a wireless energy transmission application in a low- $Q$ cavity. The time reversal technique is applied to focus PWs around the desired rectenna. Results show that the rectenna is still efficient when noisy PW is handled.",
"title": ""
},
{
"docid": "29d08d266bc84ba761283bb8ae827d0b",
"text": "Statistical classifiers typically build (parametric) probabilistic models of the training data, and compute the probability that an unknown sample belongs to each of the possible classes using these models. We utilize two established measures to compare the performance of statistical classifiers namely; classification accuracy (or error rate) and the area under ROC. Naïve Bayes has obtained much relevance in data classification for machine learning and datamining. In our work, a comparative analysis of the accuracy performance of statistical classifiers namely Naïve Bayes (NB), MDL discretized NB, 4 different variants of NB and 8 popular non-NB classifiers was carried out on 21 medical datasets using classification accuracy and true positive rate. Our results indicate that the classification accuracy of Naïve Bayes (MDL discretized) on the average is the best performer. The significance of this work through the results of the comparative analysis, we are of the opinion that medical datamining with generative methods like Naïve Bayes is computationally simple yet effective and are to be used whenever possible as the benchmark for statistical classifiers.",
"title": ""
},
{
"docid": "f18a19159e71e4d2a92a465217f93366",
"text": "Extra-linguistic factors influence language use, and are accounted for by speakers and listeners. Most natural language processing (NLP) tasks to date, however, treat language as uniform. This assumption can harm performance. We investigate the effect of including demographic information on performance in a variety of text-classification tasks. We find that by including age or gender information, we consistently and significantly improve performance over demographic-agnostic models. These results hold across three text-classification tasks in five languages.",
"title": ""
},
{
"docid": "eb083b4c46d49a6cc639a89b74b1f269",
"text": "ROC analyses generated low area under the curve (.695, 95% confidence interval (.637.752)) and cutoff scores with poor sensitivity/specificity balance. BDI-II. Because the distribution of BDI-II scores was not normal, percentile ranks for raw scores were provided for the total sample and separately by gender. symptoms two scales were used: The Beck Depression Inventory-II (BDIII) smokers and non smokers, we found that the mean scores on the BDI-II (9.21 vs.",
"title": ""
},
{
"docid": "4855ecd626160518339ee2caf8f9c2cf",
"text": "The Metamorphoses Greek myth includes a story about a woman raised as a male falling in love with another woman, and being transformed into a man prior to a wedding ceremony and staying with her. It is therefore considered that people who desire to live as though they have the opposite gender have existed since ancient times. People who express a sense of discomfort with their anatomical sex and related roles have been reported in the medical literature since the middle of the 19th century. However, homosexual, fetishism, gender identity disorder, and associated conditions were mixed together and regarded as types of sexual perversion that were considered ethically objectionable until the 1950s. The first performance of sex-reassignment surgery in 1952 attracted considerable attention, and the sexologist Harry Benjamin reported a case of 'a woman kept in the body of a man', which was called transsexualism. John William Money studied the sexual consciousness about disorders of sex development and advocated the concept of gender in 1957. Thereafter the disparity between anatomical sex and gender identity was referred to as the psychopathological condition of gender identity disorder, and this was used for its diagnostic name when it was introduced into DSM-III in 1980. However, gender identity disorder encompasses a spectrum of conditions, and DSM-III -R categorized it into three types: transsexualism, nontranssexualism, and not otherwise specified. The first two types were subsequently combined and standardized into the official diagnostic name of 'gender identity disorder' in DSM-IV. In contrast, gender identity disorder was categorized into four groups (including transsexualism and dual-role transvestism) in ICD-10. A draft proposal of DSM-5 has been submitted, in which the diagnostic name of gender identity disorder has been changed to gender dysphoria. Also, it refers to 'assigned gender' rather than to 'sex', and includes disorders of sexual development. Moreover, the subclassifications regarding sexual orientation have been deleted. The proposed DSM-5 reflects an attempt to include only a medical designation of people who have suffered due to the gender disparity, thereby respecting the concept of transgender in accepting the diversity of the role of gender. This indicates that transgender issues are now at a turning point.",
"title": ""
},
{
"docid": "f715f471118b169502941797d17ceac6",
"text": "Software is a knowledge intensive product, which can only evolve if there is effective and efficient information exchange between developers. Complying to coding conventions improves information exchange by improving the readability of source code. However, without some form of enforcement, compliance to coding conventions is limited. We look at the problem of information exchange in code and propose gamification as a way to motivate developers to invest in compliance. Our concept consists of a technical prototype and its integration into a Scrum environment. By means of two experiments with agile software teams and subsequent surveys, we show that gamification can effectively improve adherence to coding conventions.",
"title": ""
},
{
"docid": "7e8b58b88a1a139f9eb6642a69eb697a",
"text": "We present a fully convolutional autoencoder for light fields, which jointly encodes stacks of horizontal and vertical epipolar plane images through a deep network of residual layers. The complex structure of the light field is thus reduced to a comparatively low-dimensional representation, which can be decoded in a variety of ways. The different pathways of upconvolution we currently support are for disparity estimation and separation of the lightfield into diffuse and specular intrinsic components. The key idea is that we can jointly perform unsupervised training for the autoencoder path of the network, and supervised training for the other decoders. This way, we find features which are both tailored to the respective tasks and generalize well to datasets for which only example light fields are available. We provide an extensive evaluation on synthetic light field data, and show that the network yields good results on previously unseen real world data captured by a Lytro Illum camera and various gantries.",
"title": ""
},
{
"docid": "0cd2da131bf78526c890dae72514a8f0",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0b9dde7982cf2b99a979dbc0d6dfceba",
"text": "PURPOSE\nTo develop a reliable and valid questionnaire of bilingual language status with predictable relationships between self-reported and behavioral measures.\n\n\nMETHOD\nIn Study 1, the internal validity of the Language Experience and Proficiency Questionnaire (LEAP-Q) was established on the basis of self-reported data from 52 multilingual adult participants. In Study 2, criterion-based validity was established on the basis of standardized language tests and self-reported measures from 50 adult Spanish-English bilinguals. Reliability and validity of the questionnaire were established on healthy adults whose literacy levels were equivalent to that of someone with a high school education or higher.\n\n\nRESULTS\nFactor analyses revealed consistent factors across both studies and suggested that the LEAP-Q was internally valid. Multiple regression and correlation analyses established criterion-based validity and suggested that self-reports were reliable indicators of language performance. Self-reported reading proficiency was a more accurate predictor of first-language performance, and self-reported speaking proficiency was a more accurate predictor of second-language performance. Although global measures of self-reported proficiency were generally predictive of language ability, deriving a precise estimate of performance on a particular task required that specific aspects of language history be taken into account.\n\n\nCONCLUSION\nThe LEAP-Q is a valid, reliable, and efficient tool for assessing the language profiles of multilingual, neurologically intact adult populations in research settings.",
"title": ""
}
] |
scidocsrr
|
c159c06516b5e75bd8ea00789a521c43
|
A new posterolateral approach without fibula osteotomy for the treatment of tibial plateau fractures.
|
[
{
"docid": "f91007844639e431b2f332f6f32df33b",
"text": "Moore type II Entire Condyle fractures of the tibia plateau represent a rare and highly unstable fracture pattern that usually results from high impact traumas. Specific recommendations regarding the surgical treatment of these fractures are sparse. We present a series of Moore type II fractures treated by open reduction and internal fixation through a direct dorsal approach. Five patients (3 females, 2 males) with Entire Condyle fractures were retrospectively analyzed after a mean follow-up period of 39 months (range 12–61 months). Patient mean age at the time of operation was 36 years (range 26–43 years). Follow-up included clinical and radiological examination. Furthermore, all patient finished a SF36 and Lysholm knee score questionnaire. Average range of motion was 127/0/1° with all patients reaching full extension at the time of last follow up. Patients reached a mean Lysholm score of 81.2 points (range 61–100 points) and an average SF36 of 82.36 points (range 53.75–98.88 points). One patient sustained deep wound infection after elective implant removal 1 year after the initial surgery. Overall all patients were highly satisfied with the postoperative result. The direct dorsal approach to the tibial plateau represents an adequate method to enable direct fracture exposure, open reduction, and internal fixation in posterior shearing medial Entire Condyle fractures and is especially valuable when also the dorso-lateral plateau is depressed.",
"title": ""
}
] |
[
{
"docid": "ea5a07b07631248a2f5cbee80420924d",
"text": "Coordinating fleets of autonomous, non-holonomic vehicles is paramount to many industrial applications. While there exists solutions to efficiently calculate trajectories for individual vehicles, an effective methodology to coordinate their motions and to avoid deadlocks is still missing. Decoupled approaches, where motions are calculated independently for each vehicle and then centrally coordinated for execution, have the means to identify deadlocks, but not to solve all of them. We present a novel approach that overcomes this limitation and that can be used to complement the deficiencies of decoupled solutions with centralized coordination. Here, we formally define an extension of the framework of lattice-based motion planning to multi-robot systems and we validate it experimentally. Our approach can jointly plan for multiple vehicles and it generates kinematically feasible and deadlock-free motions.",
"title": ""
},
{
"docid": "34667babdde26a81244c7e1c929e7653",
"text": "Noise level estimation is crucial in many image processing applications, such as blind image denoising. In this paper, we propose a novel noise level estimation approach for natural images by jointly exploiting the piecewise stationarity and a regular property of the kurtosis in bandpass domains. We design a $K$ -means-based algorithm to adaptively partition an image into a series of non-overlapping regions, each of whose clean versions is assumed to be associated with a constant, but unknown kurtosis throughout scales. The noise level estimation is then cast into a problem to optimally fit this new kurtosis model. In addition, we develop a rectification scheme to further reduce the estimation bias through noise injection mechanism. Extensive experimental results show that our method can reliably estimate the noise level for a variety of noise types, and outperforms some state-of-the-art techniques, especially for non-Gaussian noises.",
"title": ""
},
{
"docid": "260c12152d9bd38bd0fde005e0394e17",
"text": "On the initiative of the World Health Organization, two meetings on the Standardization of Reporting Results of Cancer Treatment have been held with representatives and members of several organizations. Recommendations have been developed for standardized approaches to the recording of baseline data relating to the patient, the tumor, laboratory and radiologic data, the reporting of treatment, grading of acute and subacute toxicity, reporting of response, recurrence and disease-free interval, and reporting results of therapy. These recommendations, already endorsed by a number of organizations, are proposed for international acceptance and use to make it possible for investigators to compare validly their results with those of others.",
"title": ""
},
{
"docid": "c8d690eb4dd2831f28106c3cfca4552c",
"text": "While ASCII art is a worldwide popular art form, automatic generating structure-based ASCII art from natural photographs remains challenging. The major challenge lies on extracting the perception-sensitive structure from the natural photographs so that a more concise ASCII art reproduction can be produced based on the structure. However, due to excessive amount of texture in natural photos, extracting perception-sensitive structure is not easy, especially when the structure may be weak and within the texture region. Besides, to fit different target text resolutions, the amount of the extracted structure should also be controllable. To tackle these challenges, we introduce a visual perception mechanism of non-classical receptive field modulation (non-CRF modulation) from physiological findings to this ASCII art application, and propose a new model of non-CRF modulation which can better separate the weak structure from the crowded texture, and also better control the scale of texture suppression. Thanks to our non-CRF model, more sensible ASCII art reproduction can be obtained. In addition, to produce more visually appealing ASCII arts, we propose a novel optimization scheme to obtain the optimal placement of proportional-font characters. We apply our method on a rich variety of images, and visually appealing ASCII art can be obtained in all cases.",
"title": ""
},
{
"docid": "eec886c9c758e90acc4b97df85057b61",
"text": "A full-term male foal born in a farm holidays in Maremma (Tuscany, Italy) was euthanized shortly after birth due to the presence of several malformations. The rostral maxilla and the nasal septum were deviated to the right (wry nose), and a severe cervico-thoracic scoliosis and anus atresia were evident. Necropsy revealed ileum atresia and agenesis of the right kidney. The brain showed an incomplete separation of the hemispheres of the rostral third of the forebrain and the olfactory bulbs and tracts were absent (olfactory aplasia). A diagnosis of semilobar holoprosencephaly (HPE) was achieved. This is the first case of semilobar HPE associated with other organ anomalies in horses.",
"title": ""
},
{
"docid": "83709dc50533c28221d89490bcb3a5aa",
"text": "Hyperspectral image classification has attracted extensive research efforts in the recent decades. The main difficulty lies in the few labeled samples versus high dimensional features. The spectral-spatial classification method using Markov random field (MRF) has been shown to perform well in improving the classification performance. Moreover, active learning (AL), which iteratively selects the most informative unlabeled samples and enlarges the training set, has been widely studied and proven useful in remotely sensed data. In this paper, we focus on the combination of MRF and AL in the classification of hyperspectral images, and a new MRF model-based AL (MRF-AL) framework is proposed. In the proposed framework, the unlabeled samples whose predicted results vary before and after the MRF processing step is considered as uncertain. In this way, subset is firstly extracted from the entire unlabeled set, and AL process is then performed on the samples in the subset. Moreover, hybrid AL methods which combine the MRF-AL framework with either the passive random selection method or the existing AL methods are investigated. To evaluate and compare the proposed AL approaches with other state-of-the-art techniques, experiments were conducted on two hyperspectral data sets. Results demonstrated the effectiveness of the hybrid AL methods, as well as the advantage of the proposed MRF-AL framework.",
"title": ""
},
{
"docid": "a436bdc20d63dcf4f0647005bb3314a7",
"text": "The purpose of this study is to evaluate the feasibility of the integration of concept maps and tablet PCs in anti-phishing education for enhancing students’ learning motivation and achievement. The subjects were 155 students from grades 8 and 9. They were divided into an experimental group (77 students) and a control group (78 students). To begin with, the two groups received identical anti-phishing training: the teacher explained the concept of anti-phishing and asked the students questions; the students then used tablet PCs for polling and answering the teachers’ questions. Afterwards, the two groups performed different group activities: the experimental group was divided into smaller groups, which used tablet PCs to draw concept maps; the control group was also divided into groups which completed worksheets. The study found that the use of concept maps on tablet PCs during the anti-phishing education significantly enhanced the students’ learning motivation when their initial motivation was already high. For learners with low initial motivation or prior knowledge, the use of worksheets could increase their posttest achievement and motivation. This study therefore proposes that motivation and achievement in teaching the anti-phishing concept can be effectively enhanced if the curriculum is designed based on the students’ learning preferences or prior knowledge, in conjunction with the integration of mature and accessible technological media into the learning activities. The findings can also serve as a reference for anti-phishing educators and researchers.",
"title": ""
},
{
"docid": "cc3f821bd9617d31a8b303c4982e605f",
"text": "Body composition in older adults can be assessed using simple, convenient but less precise anthropometric methods to assess (regional) body fat and skeletal muscle, or more elaborate, precise and costly methods such as computed tomography and magnetic resonance imaging. Body weight and body fat percentage generally increase with aging due to an accumulation of body fat and a decline in skeletal muscle mass. Body weight and fatness plateau at age 75–80 years, followed by a gradual decline. However, individual weight patterns may differ and the periods of weight loss and weight (re)gain common in old age may affect body composition. Body fat redistributes with aging, with decreasing subcutaneous and appendicular fat and increasing visceral and ectopic fat. Skeletal muscle mass declines with aging, a process called sarcopenia. Obesity in old age is associated with a higher risk of mobility limitations, disability and mortality. A higher waist circumference and more visceral fat increase these risks, independent of overall body fatness, as do involuntary weight loss and weight cycling. The role of low skeletal muscle mass in the development of mobility limitations and disability remains controversial, but it is much smaller than the role of high body fat. Low muscle mass does not seem to increase mortality risk in older adults.",
"title": ""
},
{
"docid": "b134cf07e01f1568d127880777492770",
"text": "This paper addresses the problem of recovering 3D nonrigid shape models from image sequences. For example, given a video recording of a talking person, we would like to estimate a 3D model of the lips and the full face and its internal modes of variation. Many solutions that recover 3D shape from 2D image sequences have been proposed; these so-called structure-from-motion techniques usually assume that the 3D object is rigid. For example, Tomasi and Kanades’ factorization technique is based on a rigid shape matrix, which produces a tracking matrix of rank 3 under orthographic projection. We propose a novel technique based on a non-rigid model, where the 3D shape in each frame is a linear combination of a set of basis shapes. Under this model, the tracking matrix is of higher rank, and can be factored in a three-step process to yield pose, configuration and shape. To the best of our knowledge, this is the first model free approach that can recover from single-view video sequences nonrigid shape models. We demonstrate this new algorithm on several video sequences. We were able to recover 3D non-rigid human face and animal models with high accuracy.",
"title": ""
},
{
"docid": "87eb54a981fca96475b73b3dfa99b224",
"text": "Cost-Sensitive Learning is a type of learning in data mining that takes the misclassification costs (and possibly other types of cost) into consideration. The goal of this type of learning is to minimize the total cost. The key difference between cost-sensitive learning and cost-insensitive learning is that cost-sensitive learning treats the different misclassifications differently. Costinsensitive learning does not take the misclassification costs into consideration. The goal of this type of learning is to pursue a high accuracy of classifying examples into a set of known classes.",
"title": ""
},
{
"docid": "7f7e7f7ddcbb4d98270c0ba50a3f7a25",
"text": "Workflow management systems are traditionally centralized, creating a single point of failure and a scalability bottleneck. In collaboration with Cybermation, Inc., we have developed a content-based publish/subscribe platform, called PADRES, which is a distributed middleware platform with features inspired by the requirements of workflow management and business process execution. These features constitute original additions to publish/subscribe systems and include an expressive subscription language, composite subscription processing, a rulebased matching and routing mechanism, historc, query-based data access, and the support for the decentralized execution of business process specified in XML. PADRES constitutes the basis for the next generation of enterprise management systems developed by Cybermation, Inc., including business process automation, monitoring, and execution applications.",
"title": ""
},
{
"docid": "914b38c4a5911a481bf9088f75adef30",
"text": "This paper presents a mixed-integer LP approach to the solution of the long-term transmission expansion planning problem. In general, this problem is large-scale, mixed-integer, nonlinear, and nonconvex. We derive a mixed-integer linear formulation that considers losses and guarantees convergence to optimality using existing optimization software. The proposed model is applied to Garver’s 6-bus system, the IEEE Reliability Test System, and a realistic Brazilian system. Simulation results show the accuracy as well as the efficiency of the proposed solution technique.",
"title": ""
},
{
"docid": "dae2ef494ca779e701288414e1cbf0ef",
"text": "API example code search is an important applicationin software engineering. Traditional approaches to API codesearch are based on information retrieval. Recent advance inWord2Vec has been applied to support the retrieval of APIexamples. In this work, we perform a preliminary study thatcombining traditional IR with Word2Vec achieves better retrievalaccuracy. More experiments need to be done to study differenttypes of combination among two lines of approaches.",
"title": ""
},
{
"docid": "a2253bf241f7e5f60e889258e4c0f40c",
"text": "BACKGROUND-Software Process Improvement (SPI) is a systematic approach to increase the efficiency and effectiveness of a software development organization and to enhance software products. OBJECTIVE-This paper aims to identify and characterize evaluation strategies and measurements used to assess the impact of different SPI initiatives. METHOD-The systematic literature review includes 148 papers published between 1991 and 2008. The selected papers were classified according to SPI initiative, applied evaluation strategies, and measurement perspectives. Potential confounding factors interfering with the evaluation of the improvement effort were assessed. RESULTS-Seven distinct evaluation strategies were identified, wherein the most common one, “Pre-Post Comparison,” was applied in 49 percent of the inspected papers. Quality was the most measured attribute (62 percent), followed by Cost (41 percent), and Schedule (18 percent). Looking at measurement perspectives, “Project” represents the majority with 66 percent. CONCLUSION-The evaluation validity of SPI initiatives is challenged by the scarce consideration of potential confounding factors, particularly given that “Pre-Post Comparison” was identified as the most common evaluation strategy, and the inaccurate descriptions of the evaluation context. Measurements to assess the short and mid-term impact of SPI initiatives prevail, whereas long-term measurements in terms of customer satisfaction and return on investment tend to be less used.",
"title": ""
},
{
"docid": "e584549afba4c444c32dfe67ee178a84",
"text": "Bayesian networks (BNs) provide a means for representing, displaying, and making available in a usable form the knowledge of experts in a given Weld. In this paper, we look at the performance of an expert constructed BN compared with other machine learning (ML) techniques for predicting the outcome (win, lose, or draw) of matches played by Tottenham Hotspur Football Club. The period under study was 1995–1997 – the expert BN was constructed at the start of that period, based almost exclusively on subjective judgement. Our objective was to determine retrospectively the comparative accuracy of the expert BN compared to some alternative ML models that were built using data from the two-year period. The additional ML techniques considered were: MC4, a decision tree learner; Naive Bayesian learner; Data Driven Bayesian (a BN whose structure and node probability tables are learnt entirely from data); and a K-nearest neighbour learner. The results show that the expert BN is generally superior to the other techniques for this domain in predictive accuracy. The results are even more impressive for BNs given that, in a number of key respects, the study assumptions place them at a disadvantage. For example, we have assumed that the BN prediction is ‘incorrect’ if a BN predicts more than one outcome as equally most likely (whereas, in fact, such a prediction would prove valuable to somebody who could place an ‘each way’ bet on the outcome). Although the expert BN has now long been irrelevant (since it contains variables relating to key players who have retired or left the club) the results here tend to conWrm the excellent potential of BNs when they are built by a reliable domain expert. The ability to provide accurate predictions without requiring much learning data are an obvious bonus in any domain where data are scarce. Moreover, the BN was relatively simple for the expert to build and its structure could be used again in this and similar types of problems. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d8fc5a8bc075343b2e70a9b441ecf6e5",
"text": "With the explosive increase in mobile apps, more and more threats migrate from traditional PC client to mobile device. Compared with traditional Win+Intel alliance in PC, Android+ARM alliance dominates in Mobile Internet, the apps replace the PC client software as the major target of malicious usage. In this paper, to improve the security status of current mobile apps, we propose a methodology to evaluate mobile apps based on cloud computing platform and data mining. We also present a prototype system named MobSafe to identify the mobile app’s virulence or benignancy. Compared with traditional method, such as permission pattern based method, MobSafe combines the dynamic and static analysis methods to comprehensively evaluate an Android app. In the implementation, we adopt Android Security Evaluation Framework (ASEF) and Static Android Analysis Framework (SAAF), the two representative dynamic and static analysis methods, to evaluate the Android apps and estimate the total time needed to evaluate all the apps stored in one mobile app market. Based on the real trace from a commercial mobile app market called AppChina, we can collect the statistics of the number of active Android apps, the average number apps installed in one Android device, and the expanding ratio of mobile apps. As mobile app market serves as the main line of defence against mobile malwares, our evaluation results show that it is practical to use cloud computing platform and data mining to verify all stored apps routinely to filter out malware apps from mobile app markets. As the future work, MobSafe can extensively use machine learning to conduct automotive forensic analysis of mobile apps based on the generated multifaceted data in this stage.",
"title": ""
},
{
"docid": "0056d305c7689d45e7cd9f4b87cac79e",
"text": "A method is presented that uses a vectorial multiscale feature image for wave front propagation between two or more user defined points to retrieve the central axis of tubular objects in digital images. Its implicit scale selection mechanism makes the method more robust to overlap and to the presence of adjacent structures than conventional techniques that propagate a wave front over a scalar image representing the maximum of a range of filters. The method is shown to retain its potential to cope with severe stenoses or imaging artifacts and objects with varying widths in simulated and actual two-dimensional angiographic images.",
"title": ""
},
{
"docid": "844dcf80b2feba89fced99a0f8cbe9bf",
"text": "Communication could potentially be an effective way for multi-agent cooperation. However, information sharing among all agents or in predefined communication architectures that existing methods adopt can be problematic. When there is a large number of agents, agents cannot differentiate valuable information that helps cooperative decision making from globally shared information. Therefore, communication barely helps, and could even impair the learning of multi-agent cooperation. Predefined communication architectures, on the other hand, restrict communication among agents and thus restrain potential cooperation. To tackle these difficulties, in this paper, we propose an attentional communication model that learns when communication is needed and how to integrate shared information for cooperative decision making. Our model leads to efficient and effective communication for large-scale multi-agent cooperation. Empirically, we show the strength of our model in a variety of cooperative scenarios, where agents are able to develop more coordinated and sophisticated strategies than existing methods.",
"title": ""
},
{
"docid": "f4fb632268bbbf76878472183c511b05",
"text": "Mid-way through the 2007 DARPA Urban Challenge, MIT’s autonomous Land Rover LR3 ‘Talos’ and Team Cornell’s autonomous Chevrolet Tahoe ‘Skynet’ collided in a low-speed accident, one of the first well-documented collisions between two full-size autonomous vehicles. This collaborative study between MIT and Cornell examines the root causes of the collision, which are identified in both teams’ system designs. Systems-level descriptions of both autonomous vehicles are given, and additional detail is provided on sub-systems and algorithms implicated in the collision. A brief summary of robot–robot interactions during the race is presented, followed by an in-depth analysis of both robots’ behaviors leading up to and during the Skynet–Talos collision. Data logs from the vehicles are used to show the gulf between autonomous and human-driven vehicle behavior at low speeds and close proximities. Contributing factors are shown to be: (1) difficulties in sensor data association leading to phantom obstacles and an inability to detect slow moving vehicles, (2) failure to anticipate vehicle intent, and (3) an over emphasis on lane constraints versus vehicle proximity in motion planning. Eye contact between human road users is a crucial communications channel for slow-moving close encounters between vehicles. Inter-vehicle communication may play a similar role for autonomous vehicles; however, there are availability and denial-of-service issues to be addressed.",
"title": ""
},
{
"docid": "6ed4d5ae29eef70f5aae76ebed76b8ca",
"text": "Web services that thrive on mining user interaction data such as search engines can currently track clicks and mouse cursor activity on their Web pages. Cursor interaction mining has been shown to assist in user modeling and search result relevance, and is becoming another source of rich information that data scientists and search engineers can tap into. Due to the growing popularity of touch-enabled mobile devices, search systems may turn to tracking touch interactions in place of cursor interactions. However, unlike cursor interactions, touch interactions are difficult to record reliably and their coordinates have not been shown to relate to regions of user interest. A better approach may be to track the viewport coordinates instead, which the user must manipulate to view the content on a mobile device. These recorded viewport coordinates can potentially reveal what regions of the page interest users and to what degree. Using this information, search system can then improve the design of their pages or use this information in click models or learning to rank systems. In this position paper, we discuss some of the challenges faced in mining interaction data for new modes of interaction, and future research directions in this field.",
"title": ""
}
] |
scidocsrr
|