query_id (string, length 32) | query (string, 5–5.38k chars) | positive_passages (list, 1–23 items) | negative_passages (list, 4–100 items) | subset (string, 7 classes)
---|---|---|---|---
39ec7fb96995c0800bc415c55d78a670 | Variables associated with achievement in higher education: A systematic review of meta-analyses. |
[
{
"docid": "4147b26531ca1ec165735688481d2684",
"text": "Problem-based approaches to learning have a long history of advocating experience-based education. Psychological research and theory suggests that by having students learn through the experience of solving problems, they can learn both content and thinking strategies. Problem-based learning (PBL) is an instructional method in which students learn through facilitated problem solving. In PBL, student learning centers on a complex problem that does not have a single correct answer. Students work in collaborative groups to identify what they need to learn in order to solve a problem. They engage in self-directed learning (SDL) and then apply their new knowledge to the problem and reflect on what they learned and the effectiveness of the strategies employed. The teacher acts to facilitate the learning process rather than to provide knowledge. The goals of PBL include helping students develop 1) flexible knowledge, 2) effective problem-solving skills, 3) SDL skills, 4) effective collaboration skills, and 5) intrinsic motivation. This article discusses the nature of learning in PBL and examines the empirical evidence supporting it. There is considerable research on the first 3 goals of PBL but little on the last 2. Moreover, minimal research has been conducted outside medical and gifted education. Understanding how these goals are achieved with less skilled learners is an important part of a research agenda for PBL. The evidence suggests that PBL is an instructional approach that offers the potential to help students develop flexible understanding and lifelong learning skills.",
"title": ""
},
{
"docid": "83e4ee7cf7a82fcb8cb77f7865d67aa8",
"text": "A meta-analysis of the relationship between class attendance in college and college grades reveals that attendance has strong relationships with both class grades (k = 69, N = 21,195, r = .44) and GPA (k = 33, N = 9,243, r = .41). These relationships make class attendance a better predictor of college grades than any other known predictor of academic performance, including scores on standardized admissions tests such as the SAT, high school GPA, study habits, and study skills. Results also show that class attendance explains large amounts of unique variance in college grades because of its relative independence from SAT scores and high school GPA and weak relationship with student characteristics such as conscientiousness and motivation. Mandatory attendance policies appear to have a small positive impact on average grades (k = 3, N = 1,421, d = .21). Implications for theoretical frameworks of student academic performance and educational policy are discussed. Many college instructors exhort their students to attend class as frequently as possible, arguing that high levels of class attendance are likely to increase learning and improve student grades. Such arguments may hold intuitive appeal and are supported by findings linking class attendance to both learning (e.g., Jenne, 1973) and better grades (e.g., Moore et al., 2003), but both students and some educational researchers appear to be somewhat skeptical of the importance of class attendance. This skepticism is reflected in high class absenteeism rates ranging from 18. This article aims to help resolve the debate regarding the importance of class attendance by providing a quantitative review of the literature investigating the relationship of class attendance with both college grades and student characteristics that may influence attendance. 
At a theoretical level, class attendance fits well into frameworks that emphasize the joint role of cognitive ability and motivation in determining learning and work performance (e.g., Kanfer & Ackerman, 1989). Specifically, cognitive ability and motivation influence academic outcomes via two largely distinct mechanisms— one mechanism related to information processing and the other mechanism being behavioral in nature. Cognitive ability influences the degree to which students are able to process, integrate, and remember material presented to them (Humphreys, 1979), a mechanism that explains the substantial predictive validity of SAT scores for college grades (e. & Ervin, 2000). Noncognitive attributes such as conscientiousness and achievement motivation are thought to influence grades via their influence on behaviors that facilitate the understanding and …",
"title": ""
}
] |
[
{
"docid": "2d9d6dbe1d841b9a87284c6a736bcb0c",
"text": "The loosely defined terms hard fork and soft fork have established themselves as descriptors of different classes of upgrade mechanisms for the underlying consensus rules of (proof-of-work) blockchains. Recently, a novel approach termed velvet fork, which expands upon the concept of a soft fork, was outlined in [22]. Specifically, velvet forks intend to avoid the possibility of disagreement by a change of rules through rendering modifications to the protocol backward compatible and inclusive to legacy blocks. We present an overview and definitions of these different upgrade mechanisms and outline their relationships. Hereby, we expose examples where velvet forks or similar constructions are already actively employed in Bitcoin and other cryptocurrencies. Furthermore, we expand upon the concept of velvet forks by proposing possible applications and discuss potentially arising security implications.",
"title": ""
},
{
"docid": "fcbc3b91c6cd501ddbfed2f93e65e73d",
"text": "Question answering is an important and difficult task in the natural language processing domain, because many basic natural language processing tasks can be cast into a question answering task. Several deep neural network architectures have been developed recently, which employ memory and inference components to memorize and reason over text information, and generate answers to questions. However, a major drawback of many such models is that they are capable of only generating single-word answers. In addition, they require large amount of training data to generate accurate answers. In this paper, we introduce the LongTerm Memory Network (LTMN), which incorporates both an external memory module and a Long Short-Term Memory (LSTM) module to comprehend the input data and generate multi-word answers. The LTMN model can be trained end-to-end using back-propagation and requires minimal supervision. We test our model on two synthetic data sets (based on Facebook’s bAbI data set) and the real-world Stanford question answering data set, and show that it can achieve state-of-the-art performance.",
"title": ""
},
{
"docid": "098625ba59c97d704ae85aa2e6776919",
"text": "A CDTA-based quadrature oscillator circuit is proposed. The circuit employs two current-mode allpass sections in a loop, and provides high-frequency sinusoidal oscillations in quadrature at high impedance output terminals of the CDTAs. The circuit has no floating capacitors, which is advantageous from the integrated circuit manufacturing point of view. Moreover, the oscillation frequency of this configuration can be made adjustable by using voltage controlled elements (MOSFETs), since the resistors in the circuit are either grounded or virtually grounded.",
"title": ""
},
{
"docid": "30a5bfd8afce6ba1f8259a51773c8be7",
"text": "Objectives The aim of this audit was to monitor the outcome of composite restorations placed at an increased vertical dimension in patients with severe tooth wear.Methods This convenience sample of patients were treated by 11 specialist trainees in prosthodontics, and restored with direct composites. Exclusion criteria included bruxism, poor medical health and a preference for monitoring rather than intervention. The restorations were placed between 2012 and 2016 and were placed over more than one appointment and the outcome monitored for up to 14 months. Failure was assessed at a binary level, either success or failure (minor or major).Results A total of 35 patients with a mean age of 45 years (range 24–86), 27 of whom were male, received 251 restorations placed from November 2012 to November 2016. The patients had a mean of 11.51 (range 4 to 16) occluding pairs of teeth. There was a total of 40 restoration failures (17%) which was an 83% success rate based on the total number of restorations. For the patient-based data, 14 patients (39%) had no chips or bulk fractures while 22 (61%) patients had failures, of which 60% were chips and 40% bulk fractures.Conclusion Restoration of worn teeth with composites is associated with a high incidence of fractures.Clinical significance The restoration of worn teeth with composite can involve regular maintenance following fractures and patients need to be aware of this when giving consent.",
"title": ""
},
{
"docid": "984dc75b97243e448696f2bf0ba3c2aa",
"text": "Background: Predicting credit card payment default is critical for the successful business model of a credit card company. An accurate predictive model can help the company identify customers who might default their payment in the future so that the company can get involved earlier to manage risk and reduce loss. It is even better if a model can assist the company on credit card application approval to minimize the risk at upfront. However, credit card default prediction is never an easy task. It is dynamic. A customer who paid his/her payment on time in the last few months may suddenly default his/her next payment. It is also unbalanced given the fact that default payment is rare compared to non-default payments. Unbalanced dataset will easily fail using most machine learning techniques if the dataset is not treated properly.",
"title": ""
},
{
"docid": "55b88b38dbde4d57fddb18d487099fc6",
"text": "The evaluation of algorithms and techniques to implement intrusion detection systems heavily rely on the existence of well designed datasets. In the last years, a lot of efforts have been done toward building these datasets. Yet, there is still room to improve. In this paper, a comprehensive review of existing datasets is first done, making emphasis on their main shortcomings. Then, we present a new dataset that is built with real traffic and up-to-date attacks. The main advantage of this dataset over previous ones is its usefulness for evaluating IDSs that consider long-term evolution and traffic periodicity. Models that consider differences in daytime/nighttime or weekdays/weekends can also be trained and evaluated with it. We discuss all the requirements for a modern IDS evaluation dataset and analyze how the one presented here meets the different needs. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "626c274978a575cd06831370a6590722",
"text": "The honeypot has emerged as an effective tool to provide insights into new attacks and exploitation trends. However, a single honeypot or multiple independently operated honeypots only provide limited local views of network attacks. Coordinated deployment of honeypots in different network domains not only provides broader views, but also create opportunities of early network anomaly detection, attack correlation, and global network status inference. Unfortunately, coordinated honeypot operation require close collaboration and uniform security expertise across participating network domains. The conflict between distributed presence and uniform management poses a major challenge in honeypot deployment and operation. To address this challenge, we present Collapsar, a virtual machine-based architecture for network attack capture and detention. A Collapsar center hosts and manages a large number of high-interaction virtual honeypots in a local dedicated network. To attackers, these honeypots appear as real systems in their respective production networks. Decentralized logical presence of honeypots provides a wide diverse view of network attacks, while the centralized operation enables dedicated administration and convenient event correlation, eliminating the need for honeypot expertise in every production network domain. Collapsar realizes the traditional honeyfarm vision as well as our new reverse honeyfarm vision, where honeypots act as vulnerable clients exploited by real-world malicious servers. We present the design, implementation, and evaluation of a Collapsar prototype. Our experiments with a number of real-world attacks demonstrate the effectiveness and practicality of Collapsar. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "f835074be8ff74361f1ea700ae737ace",
"text": "Exploring community is fundamental for uncovering the connections between structure and function of complex networks and for practical applications in many disciplines such as biology and sociology. In this paper, we propose a TTR-LDA-Community model which combines the Latent Dirichlet Allocation model (LDA) and the Girvan-Newman community detection algorithm with an inference mechanism. The model is then applied to data from Delicious, a popular social tagging system, over the time period of 2005-2008. Our results show that 1) users in the same community tend to be interested in similar set of topics in all time periods; and 2) topics may divide into several sub-topics and scatter into different communities over time. We evaluate the effectiveness of our model and show that the TTR-LDA-Community model is meaningful for understanding communities and outperforms TTR-LDA and LDA models in tag prediction.",
"title": ""
},
{
"docid": "629c6c7ca3db9e7cad2572c319ec52f0",
"text": "Recent research on pornography suggests that perception of addiction predicts negative outcomes above and beyond pornography use. Research has also suggested that religious individuals are more likely to perceive themselves to be addicted to pornography, regardless of how often they are actually using pornography. Using a sample of 686 unmarried adults, this study reconciles and expands on previous research by testing perceived addiction to pornography as a mediator between religiosity and relationship anxiety surrounding pornography. Results revealed that pornography use and religiosity were weakly associated with higher relationship anxiety surrounding pornography use, whereas perception of pornography addiction was highly associated with relationship anxiety surrounding pornography use. However, when perception of pornography addiction was inserted as a mediator in a structural equation model, pornography use had a small indirect effect on relationship anxiety surrounding pornography use, and perception of pornography addiction partially mediated the association between religiosity and relationship anxiety surrounding pornography use. By understanding how pornography use, religiosity, and perceived pornography addiction connect to relationship anxiety surrounding pornography use in the early relationship formation stages, we hope to improve the chances of couples successfully addressing the subject of pornography and mitigate difficulties in romantic relationships.",
"title": ""
},
{
"docid": "00dbe58bcb7d4415c01a07255ab7f365",
"text": "The paper deals with a time varying vehicle-to-vehicle channel measurement in the 60 GHz millimeter wave (MMW) band using a unique time-domain channel sounder built from off-the-shelf components and standard measurement devices and employing Golay complementary sequences as the excitation signal. The aim of this work is to describe the sounder architecture, primary data processing technique, achievable system parameters, and preliminary measurement results. We measured the signal propagation between two passing vehicles and characterized the signal reflected by a car driving on a highway. The proper operation of the channel sounder is verified by a reference measurement performed with an MMW vector network analyzer in a rugged stationary office environment. The goal of the paper is to show the measurement capability of the sounder and its superior features like 8 GHz measuring bandwidth enabling high time resolution or good dynamic range allowing an analysis of weak multipath components.",
"title": ""
},
{
"docid": "da7058526e9b76988e20dae598124c53",
"text": "53BP1 is known as a mediator in DNA damage response and a regulator of DNA double-stranded breaks (DSBs) repair. 53BP1 was recently reported to be a centrosomal protein and a binding partner of mitotic polo-like kinase 1 (Plk1). The stability of 53BP1, in response to DSBs, is regulated by its phosphorylation, deubiquitination, and ubiquitination. During mitosis, 53BP1 is stabilized by phosphorylation at S380, a putative binding region with polo-box domain of Plk1, and deubiquitination by ubiquitin-specific protease 7 (USP7). In the absence of DSBs, 53BP1 is abundant in the nucleoplasm; DSB formation results in its rapid localization to the damaged chromatin. Mitotic 53BP1 is also localized at the centrosome and spindle pole. 53BP1 depletion induces mitotic defects such as disorientation of spindle poles attributed to extra centrosomes or mispositioning of centrosomes, leading to phenotypes similar to those in USP7-deficient cells. Here, we discuss how 53BP1 controls the centrosomal integrity through its interaction with USP7 and centromere protein F by regulation of its stability and its physiology in response to DNA damage.",
"title": ""
},
{
"docid": "225b834e820b616e0ccfed7259499fd6",
"text": "Introduction: Actinic cheilitis (AC) is a lesion potentially malignant that affects the lips after prolonged exposure to solar ultraviolet (UV) radiation. The present study aimed to assess and describe the proliferative cell activity, using silver-stained nucleolar organizer region (AgNOR) quantification proteins, and to investigate the potential associations between AgNORs and the clinical aspects of AC lesions. Materials and methods: Cases diagnosed with AC were selected and reviewed from Center of Histopathological Diagnosis of the Institute of Biological Sciences, Passo Fundo University, Brazil. Clinical data including clinical presentation of the patients affected with AC were collected. The AgNOR techniques were performed in all recovered cases. The different microscopic areas of interest were printed with magnification of *1000, and in each case, 200 epithelial cell nuclei were randomly selected. The mean quantity in each nucleus for NORs was recorded. One-way analysis of variance was used for statistical analysis. Results: A total of 22 cases of AC were diagnosed. The patients were aged between 46 and 75 years (mean age: 55 years). Most of the patients affected were males presenting asymptomatic white plaque lesions in the lower lip. The mean value quantified for AgNORs was 2.4 ± 0.63, ranging between 1.49 and 3.82. No statistically significant difference was observed associating the quantity of AgNORs with the clinical aspects collected from the patients (p > 0.05). Conclusion: The present study reports the lack of association between the proliferative cell activity and the clinical aspects observed in patients affected by AC through the quantification of AgNORs. Clinical significance: Knowing the potential relation between the clinical aspects of AC and the proliferative cell activity quantified by AgNORs could play a significant role toward the early diagnosis of malignant lesions in the clinical practice. 
Keywords: Actinic cheilitis, Proliferative cell activity, Silver-stained nucleolar organizer regions.",
"title": ""
},
{
"docid": "a41bc49e1207460facc5a43190849dca",
"text": "Date The final copy of this thesis has been examined by the signatories, and we find that both the content and the form meet acceptable presentation standards of scholarly work in the above mentioned discipline. Humans often describe their experiences through the event, temporal and causal structures they perceive. These structures are often expressed in textual forms, for example in timelines, where text is summarized by aligning events with the times at which they occurred. These same sorts of temporal-causal structures are also useful for a variety of computational tasks, like summarization and question answering. However, to reason over such structures they must first be extracted from their textual representations and organized into a machine readable form. This work demonstrates that various important parts of the event, temporal and causal structure of a text can be extracted automatically using machine learning methods. Events, which serve as the basic anchors of temporal and causal relations, can be extracted with F-measures in the 70s and 80s using a word-chunking approach. Temporal relations between adjacent events in some common syntactic constructions can be identified with almost 90% accuracy using pair-wise classification. Causal relations are much more difficult, but initial work suggests that even this task may become tractable to machine learning methods. Analyses of the various tasks lead to several conclusions about how best to approach the automatic extraction of temporal-causal structure. Tasks with little linguistic motivation had low agreement between humans and low machine learning model performance. Tasks with clear annotation guidelines based on known linguistic constructions had much higher inter-annotator agreement and much better model performance. Thus, future progress will depend on careful task selection guided by linguistic knowledge about how event, temporal and causal relations are expressed in text. 
Acknowledgements My deepest thanks to: My family, for a variety of emotional support, and for putting up with research terms that all too frequently slipped into otherwise pleasant conversations. Jim Martin, who has offered not just advice, but an opportunity to develop ideas together. I can't even count the times that I walked into his office with only a vague idea in my head, and walked out with a plan and several experiments to run. Matthew Woitaszek, for always being available for a random conversation about research problems, for serving as a great board to bounce ideas off of, and for being the thankless maintainer of the cluster on which many …",
"title": ""
},
{
"docid": "c590b5f84b08720b36622a0256565613",
"text": "Attempto Controlled English (ACE) allows domain specialists to interactively formulate requirements specifications in domain concepts. ACE can be accurately and efficiently processed by a computer, but is expressive enough to allow natural usage. The Attempto system translates specification texts in ACE into discourse representation structures and optionally into Prolog. Translated specification texts are incrementally added to a knowledge base. This knowledge base can be queried in ACE for verification, and it can be executed for simulation, prototyping and validation of the specification.",
"title": ""
},
{
"docid": "94ce7e37a8a1cdfb73b7f3b5b4a4bbdf",
"text": "Thermal protection limits are equally important as mechanical specifications when designing electric drivetrains. However, properties of motor drives like mass/length of copper winding or heat dissipation factor are not available in producers’ catalogs. The lack of this essential data prevents the effective selection of drivetrain components and makes it necessary to consult critical design decisions with equipment's suppliers. Therefore, in this paper, the popular loadability curves that are available in catalogs become a basis to formulate a method that allows to estimate temperature rise of motor drives. The current technique allows for evaluating a temperature rise of a motor drive for any overload magnitude, duty cycle, and ambient temperature, contrary to using a discrete set of permissible overload conditions that are provided by manufacturers. The proposed approach is based on industrially adopted practices, greatly improves flexibility of a design process, and facilitates communication in a supplier–customer dialog.",
"title": ""
},
{
"docid": "b15793d40986da868efde0074d5fbfc9",
"text": "Recently, cellular operators have started migrating to IPv6 in response to the increasing demand for IP addresses. With the introduction of IPv6, cellular middleboxes, such as firewalls for preventing malicious traffic from the Internet and stateful NAT64 boxes for providing backward compatibility with legacy IPv4 services, have become crucial to maintain stability of cellular networks. This paper presents security problems of the currently deployed IPv6 middleboxes of five major operators. To this end, we first investigate several key features of the current IPv6 deployment that can harm the safety of a cellular network as well as its customers. These features combined with the currently deployed IPv6 middlebox allow an adversary to launch six different attacks. First, firewalls in IPv6 cellular networks fail to block incoming packets properly. Thus, an adversary could fingerprint cellular devices with scanning, and further, she could launch denial-of-service or over-billing attacks. Second, vulnerabilities in the stateful NAT64 box, a middlebox that maps an IPv6 address to an IPv4 address (and vice versa), allow an adversary to launch three different attacks: 1) NAT overflow attack that allows an adversary to overflow the NAT resources, 2) NAT wiping attack that removes active NAT mappings by exploiting the lack of TCP sequence number verification of firewalls, and 3) NAT bricking attack that targets services adopting IP-based blacklisting by preventing the shared external IPv4 address from accessing the service. We confirmed the feasibility of these attacks with an empirical analysis. We also propose effective countermeasures for each attack.",
"title": ""
},
{
"docid": "168f2c2b4e8bc52debf81eb800860cae",
"text": "Optimal reconfigurable hardware implementations may require the use of arbitrary floating-point formats that do not necessarily conform to IEEE specified sizes. We present a variable precision floating-point library (VFloat) that supports general floating-point formats including IEEE standard formats. Most previously published floating-point formats for use with reconfigurable hardware are subsets of our format. Custom datapaths with optimal bitwidths for each operation can be built using the variable precision hardware modules in the VFloat library, enabling a higher level of parallelism. The VFloat library includes three types of hardware modules for format control, arithmetic operations, and conversions between fixed-point and floating-point formats. The format conversions allow for hybrid fixed- and floating-point operations in a single design. This gives the designer control over a large number of design possibilities including format as well as number range within the same application. In this article, we give an overview of the components in the VFloat library and demonstrate their use in an implementation of the K-means clustering algorithm applied to multispectral satellite images.",
"title": ""
},
{
"docid": "ae0d63126ff55961533dc817554bcb82",
"text": "This paper presents a novel bipedal robot concept and prototype that takes inspiration from humanoids but features fundamental differences that drastically improve its agility and stability while reducing its complexity and cost. This Non-Anthropomorphic Bipedal Robotic System (NABiRoS) modifies the traditional bipedal form by aligning the legs in the sagittal plane and adding a compliance to the feet. The platform is comparable in height to a human, but weighs much less because of its lightweight architecture and novel leg configuration. The inclusion of the compliant element showed immense improvements in the stability and robustness of walking gaits on the prototype, allowing the robot to remain stable during locomotion without any inertial feedback control. NABiRoS was able to achieve walking speeds of up to 0.75km/h (0.21m/s) using a simple pre-processed ZMP based gait and a positioning accuracy of +/- 0.04m with a preprocessed quasi-static algorithm.",
"title": ""
},
{
"docid": "529ee26c337908488a5912835cc966c3",
"text": "Nucleic acids have emerged as powerful biological and nanotechnological tools. In biological and nanotechnological experiments, methods of extracting and purifying nucleic acids from various types of cells and their storage are critical for obtaining reproducible experimental results. In nanotechnological experiments, methods for regulating the conformational polymorphism of nucleic acids and increasing sequence selectivity for base pairing of nucleic acids are important for developing nucleic acid-based nanomaterials. However, dearth of media that foster favourable behaviour of nucleic acids has been a bottleneck for promoting the biology and nanotechnology using the nucleic acids. Ionic liquids (ILs) are solvents that may be potentially used for controlling the properties of the nucleic acids. Here, we review researches regarding the behaviour of nucleic acids in ILs. The efficiency of extraction and purification of nucleic acids from biological samples is increased by IL addition. Moreover, nucleic acids in ILs show long-term stability, which maintains their structures and enhances nuclease resistance. Nucleic acids in ILs can be used directly in polymerase chain reaction and gene expression analysis with high efficiency. Moreover, the stabilities of the nucleic acids for duplex, triplex, and quadruplex (G-quadruplex and i-motif) structures change drastically with IL cation-nucleic acid interactions. Highly sensitive DNA sensors have been developed based on the unique changes in the stability of nucleic acids in ILs. The behaviours of nucleic acids in ILs detailed here should be useful in the design of nucleic acids to use as biological and nanotechnological tools.",
"title": ""
},
{
"docid": "8adf698c03f01dced7d021cc103d51a4",
"text": "Real world data, especially in the domain of robotics, is notoriously costly to collect. One way to circumvent this can be to leverage the power of simulation in order to produce large amounts of labelled data. However, training models on simulated images does not readily transfer to real-world ones. Using domain adaptation methods to cross this “reality gap” requires at best a large amount of unlabelled real-world data, whilst domain randomization alone can waste modeling power, rendering certain reinforcement learning (RL) methods unable to learn the task of interest. In this paper, we present Randomized-to-Canonical Adaptation Networks (RCANs), a novel approach to crossing the visual reality gap that uses no real-world data. Our method learns to translate randomized rendered images into their equivalent non-randomized, canonical versions. This in turn allows for real images to also be translated into canonical sim images. We demonstrate the effectiveness of this sim-to-real approach by training a vision-based closed-loop grasping reinforcement learning agent in simulation, and then transferring it to the real world to attain 70% zero-shot grasp success on unseen objects, a result that almost doubles the success of learning the same task directly on domain randomization alone. Additionally, by joint finetuning in the real-world with only 5,000 real-world grasps, our method achieves 91%, outperforming a state-of-the-art system trained with 580,000 real-world grasps, resulting in a reduction of real-world data by more than 99%.",
"title": ""
}
] |
scidocsrr |
6c4862cfa183d0dbb0e5ae84cd089947 | On the Unfairness of Blockchain |
[
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
}
] |
[
{
"docid": "5e58638e766904eb84380b53cae60df2",
"text": "BACKGROUND\nAneurysmal subarachnoid hemorrhage (SAH) accounts for 5% of strokes and carries a poor prognosis. It affects around 6 cases per 100,000 patient years occurring at a relatively young age.\n\n\nMETHODS\nCommon risk factors are the same as for stroke, and only in a minority of the cases, genetic factors can be found. The overall mortality ranges from 32% to 67%, with 10-20% of patients with long-term dependence due to brain damage. An explosive headache is the most common reported symptom, although a wide spectrum of clinical disturbances can be the presenting symptoms. Brain computed tomography (CT) allows the diagnosis of SAH. The subsequent CT angiography (CTA) or digital subtraction angiography (DSA) can detect vascular malformations such as aneurysms. Non-aneurysmal SAH is observed in 10% of the cases. In patients surviving the initial aneurysmal bleeding, re-hemorrhage and acute hydrocephalus can affect the prognosis.\n\n\nRESULTS\nAlthough occlusion of an aneurysm by surgical clipping or endovascular procedure effectively prevents rebleeding, cerebral vasospasm and the resulting cerebral ischemia occurring after SAH are still responsible for the considerable morbidity and mortality related to such a pathology. A significant amount of experimental and clinical research has been conducted to find ways in preventing these complications without sound results.\n\n\nCONCLUSIONS\nEven though no single pharmacological agent or treatment protocol has been identified, the main therapeutic interventions remain ineffective and limited to the manipulation of systemic blood pressure, alteration of blood volume or viscosity, and control of arterial carbon dioxide tension.",
"title": ""
},
{
"docid": "af63f1e1efbb15f2f41a91deb6ec1e32",
"text": "OBJECTIVES\n: A systematic review of the literature to determine the ability of dynamic changes in arterial waveform-derived variables to predict fluid responsiveness and compare these with static indices of fluid responsiveness. The assessment of a patient's intravascular volume is one of the most difficult tasks in critical care medicine. Conventional static hemodynamic variables have proven unreliable as predictors of volume responsiveness. Dynamic changes in systolic pressure, pulse pressure, and stroke volume in patients undergoing mechanical ventilation have emerged as useful techniques to assess volume responsiveness.\n\n\nDATA SOURCES\n: MEDLINE, EMBASE, Cochrane Register of Controlled Trials and citation review of relevant primary and review articles.\n\n\nSTUDY SELECTION\n: Clinical studies that evaluated the association between systolic pressure variation, pulse pressure variation, and/or stroke volume variation and the change in stroke volume/cardiac index after a fluid or positive end-expiratory pressure challenge.\n\n\nDATA EXTRACTION AND SYNTHESIS\n: Data were abstracted on study design, study size, study setting, patient population, and the correlation coefficient and/or receiver operating characteristic between the baseline systolic pressure variation, stroke volume variation, and/or pulse pressure variation and the change in stroke index/cardiac index after a fluid challenge. When reported, the receiver operating characteristic of the central venous pressure, global end-diastolic volume index, and left ventricular end-diastolic area index were also recorded. Meta-analytic techniques were used to summarize the data. Twenty-nine studies (which enrolled 685 patients) met our inclusion criteria. Overall, 56% of patients responded to a fluid challenge. 
The pooled correlation coefficients between the baseline pulse pressure variation, stroke volume variation, systolic pressure variation, and the change in stroke/cardiac index were 0.78, 0.72, and 0.72, respectively. The area under the receiver operating characteristic curves were 0.94, 0.84, and 0.86, respectively, compared with 0.55 for the central venous pressure, 0.56 for the global end-diastolic volume index, and 0.64 for the left ventricular end-diastolic area index. The mean threshold values were 12.5 +/- 1.6% for the pulse pressure variation and 11.6 +/- 1.9% for the stroke volume variation. The sensitivity, specificity, and diagnostic odds ratio were 0.89, 0.88, and 59.86 for the pulse pressure variation and 0.82, 0.86, and 27.34 for the stroke volume variation, respectively.\n\n\nCONCLUSIONS\n: Dynamic changes of arterial waveform-derived variables during mechanical ventilation are highly accurate in predicting volume responsiveness in critically ill patients with an accuracy greater than that of traditional static indices of volume responsiveness. This technique, however, is limited to patients who receive controlled ventilation and who are not breathing spontaneously.",
"title": ""
},
{
"docid": "37e561a8dd29299dee5de2cb7781c5a3",
"text": "The management of knowledge and experience are key means by which systematic software development and process improvement occur. Within the domain of software engineering (SE), quality continues to remain an issue of concern. Although remedies such as fourth generation programming languages, structured techniques and object-oriented technology have been promoted, a \"silver bullet\" has yet to be found. Knowledge management (KM) gives organisations the opportunity to appreciate the challenges and complexities inherent in software development. We report on two case studies that investigate KM in SE at two IT organisations. Structured interviews were conducted, with the assistance of a qualitative questionnaire. The results were used to describe current practices for KM in SE, to investigate the nature of KM activities in these organisations, and to explain the impact of leadership, technology, culture and measurement as enablers of the KM process for SE.",
"title": ""
},
{
"docid": "a1fef597312118f53e6b1468084a9300",
"text": "The design of highly emissive and stable blue emitters for organic light emitting diodes (OLEDs) is still a challenge, justifying the intense research activity of the scientific community in this field. Recently, a great deal of interest has been devoted to the elaboration of emitters exhibiting a thermally activated delayed fluorescence (TADF). By a specific molecular design consisting in a minimal overlap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) due to a spatial separation of the electron-donating and the electron-accepting parts, luminescent materials exhibiting small S1-T1 energy splitting could be obtained, enabling to thermally upconvert the electrons from the triplet to the singlet excited states by reverse intersystem crossing (RISC). By harvesting both singlet and triplet excitons for light emission, OLEDs competing and sometimes overcoming the performance of phosphorescence-based OLEDs could be fabricated, justifying the interest for this new family of materials massively popularized by Chihaya Adachi since 2012. In this review, we propose to focus on the recent advances in the molecular design of blue TADF emitters for OLEDs during the last few years.",
"title": ""
},
{
"docid": "979b0feaadefcf8494af4667cfe9a1ff",
"text": "We study fairness within the stochastic, multi-armed bandit (MAB) decision making framework. We adapt the fairness framework of “treating similar individuals similarly” [5] to this setting. Here, an ‘individual’ corresponds to an arm and two arms are ‘similar’ if they have a similar quality distribution. First, we adopt a smoothness constraint that if two arms have a similar quality distribution then the probability of selecting each arm should be similar. In addition, we define the fairness regret, which corresponds to the degree to which an algorithm is not calibrated, where perfect calibration requires that the probability of selecting an arm is equal to the probability with which the arm has the best quality realization. We show that a variation on Thompson sampling satisfies smooth fairness for total variation distance, and give an Õ((kT)^(2/3)) bound on fairness regret. This complements prior work [12], which protects an on-average better arm from being less favored. We also explain how to extend our algorithm to the dueling bandit setting. ACM Reference format: Yang Liu, Goran Radanovic, Christos Dimitrakakis, Debmalya Mandal, and David C. Parkes. 2017. Calibrated Fairness in Bandits. In Proceedings of FAT-ML, Calibrated Fairness in Bandits, September 2017 (FAT-ML17), 7 pages. DOI: 10.1145/nnnnnnn.nnnnnnn",
"title": ""
},
{
"docid": "e83622a6c195b63f9a20306af8aade18",
"text": "BACKGROUND\nPelvic floor muscle training is the most commonly recommended physical therapy treatment for women with stress leakage of urine. It is also used in the treatment of women with mixed incontinence, and less commonly for urge incontinence. Adjuncts, such as biofeedback or electrical stimulation, are also commonly used with pelvic floor muscle training. The content of pelvic floor muscle training programmes is highly variable.\n\n\nOBJECTIVES\nTo determine the effects of pelvic floor muscle training for women with symptoms or urodynamic diagnoses of stress, urge and mixed incontinence, in comparison to no treatment or other treatment options.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Incontinence Group trials register (May 2000), Medline (1980 to 1998), Embase (1980 to 1998), the database of the Dutch National Institute of Allied Health Professions (to 1998), the database of the Cochrane Rehabilitation and Related Therapies Field (to 1998), Physiotherapy Index (to 1998) and the reference lists of relevant articles. We handsearched the proceedings of the International Continence Society (1980 to 2000). We contacted investigators in the field to locate studies. Date of the most recent searches: May 2000.\n\n\nSELECTION CRITERIA\nRandomised trials in women with symptoms or urodynamic diagnoses of stress, urge or mixed incontinence that included pelvic floor muscle training in at least one arm of the trial.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo reviewers assessed all trials for inclusion/exclusion and methodological quality. Data were extracted by the lead reviewer onto a standard form and cross checked by another. Disagreements were resolved by discussion. Data were processed as described in the Cochrane Handbook. Sensitivity analysis on the basis of diagnosis was planned and undertaken where appropriate.\n\n\nMAIN RESULTS\nForty-three trials met the inclusion criteria. 
The primary or only reference for 15 of these was a conference abstract. The pelvic floor muscle training programs, and comparison interventions, varied markedly. Outcome measures differed between trials, and methods of data reporting varied, making the data difficult to combine. Many of the trials were small. Allocation concealment was adequate in five trials, and nine trials used assessors masked to group allocation. Thirteen trials reported that there were no losses to follow up, seven trials had dropout rates of less than 10%, but in the remaining trials the proportion of dropouts ranged from 12% to 41%. Pelvic floor muscle training was better than no treatment or placebo treatments for women with stress or mixed incontinence. 'Intensive' appeared to be better than 'standard' pelvic floor muscle training. PFMT may be more effective than some types of electrical stimulation but there were problems in combining the data from these trials. There is insufficient evidence to determine if pelvic floor muscle training is better or worse than other treatments. The effect of adding pelvic floor muscle training to other treatments (e.g. electrical stimulation, behavioural training) is not clear due to the limited amount of evidence available. Evidence of the effect of adding other adjunctive treatments to PFMT (e.g. vaginal cones, intravaginal resistance) is equally limited. The effectiveness of biofeedback assisted PFMT is not clear, but on the basis of the evidence available there did not appear to be any benefit over PFMT alone at post treatment assessment. Long-term outcomes of pelvic floor muscle training are unclear. Side effects of pelvic floor muscle training were uncommon and reversible. 
A number of the formal comparisons should be viewed with caution due to statistical heterogeneity, lack of statistical independence, and the possibility of spurious confidence intervals in some instances.\n\n\nREVIEWER'S CONCLUSIONS\nPelvic floor muscle training appeared to be an effective treatment for adult women with stress or mixed incontinence. Pelvic floor muscle training was better than no treatment or placebo treatments. The limitations of the evidence available mean that it is difficult to judge if pelvic floor muscle training was better or worse than other treatments. Most trials to date have studied the effect of treatment in younger, premenopausal women. The role of pelvic floor muscle training for women with urge incontinence alone remains unclear. Many of the trials were small with poor reporting of allocation concealment and masking of outcome assessors. In addition there was a lack of consistency in the choice and reporting of outcome measures that made data difficult to combine. Methodological problems limit the confidence that can be placed in the findings of the review. Further, large, high quality trials are necessary.",
"title": ""
},
{
"docid": "3202cd03c9af446bd6bc2ca0b384c2ac",
"text": "ABSTRACT\nSurgical correction for nonsyndromic craniosynostosis has continued to evolve over the last century. The criterion standard has remained open correction of the cranial deformities, and many techniques have been described that yield satisfactory results. However, technology has allowed for minimally invasive techniques to be developed with the aid of endoscopic visualization. With proper patient selection and the aid of postoperative helmet therapy, there is increasing evidence that supports these techniques' safety and efficacy. In this article, our purpose was to describe our algorithm for treating nonsyndromic craniosynostosis at Rady Children's Hospital.",
"title": ""
},
{
"docid": "0dac38edf20c2a89a9eb46cd1300162c",
"text": "Common software weaknesses, such as improper input validation, integer overflow, can harm system security directly or indirectly, causing adverse effects such as denial-of-service, execution of unauthorized code. Common Weakness Enumeration (CWE) maintains a standard list and classification of common software weaknesses. Although CWE contains rich information about software weaknesses, including textual descriptions, common sequences and relations between software weaknesses, the current data representation, i.e., hyperlinked documents, does not support advanced reasoning tasks on software weaknesses, such as prediction of missing relations and common consequences of CWEs. Such reasoning tasks become critical to managing and analyzing large numbers of common software weaknesses and their relations. In this paper, we propose to represent common software weaknesses and their relations as a knowledge graph, and develop a translation-based, description-embodied knowledge representation learning method to embed both software weaknesses and their relations in the knowledge graph into a semantic vector space. The vector representations (i.e., embeddings) of software weaknesses and their relations can be exploited for knowledge acquisition and inference. We conduct extensive experiments to evaluate the performance of software weakness and relation embeddings in three reasoning tasks, including CWE link prediction, CWE triple classification, and common consequence prediction. Our knowledge graph embedding approach outperforms other description- and/or structure-based representation learning methods.",
"title": ""
},
{
"docid": "cf4089c8c3b8408e2d2966e3abd8af09",
"text": "The deployment of wireless sensor networks and mobile ad-hoc networks in applications such as emergency services, warfare and health monitoring poses the threat of various cyber hazards, intrusions and attacks as a consequence of these networks’ openness. Among the most significant research difficulties in such networks' safety is intrusion detection, whose target is to distinguish between misuse and abnormal behavior so as to ensure secure, reliable network operations and services. Intrusion detection is best delivered by multi-agent system technologies and advanced computing techniques. To date, diverse soft computing and machine learning techniques in terms of computational intelligence have been utilized to create Intrusion Detection and Prevention Systems (IDPS), yet the literature does not report any state-of-the-art reviews investigating the performance and consequences of such techniques solving wireless environment intrusion recognition issues as they gain entry into cloud computing. The principal contribution of this paper is a review and categorization of existing IDPS schemes in terms of traditional artificial computational intelligence with a multi-agent support. The significance of the techniques and methodologies and their performance and limitations are additionally analyzed in this study, and the limitations are addressed as challenges to obtain a set of requirements for IDPS in establishing a collaborative-based wireless IDPS (Co-WIDPS) architectural design. It amalgamates a fuzzy reinforcement learning knowledge management by creating a far superior technological platform that is far more accurate in detecting attacks. In conclusion, we elaborate on several key future research topics with the potential to accelerate the progress and deployment of computational intelligence based Co-WIDPSs.",
"title": ""
},
{
"docid": "91eac59a625914805a22643c6fe79ad1",
"text": "Channel state information at the transmitter (CSIT) is essential for frequency-division duplexing (FDD) massive MIMO systems, but conventional solutions involve overwhelming overhead both for downlink channel training and uplink channel feedback. In this letter, we propose a joint CSIT acquisition scheme to reduce the overhead. Particularly, unlike conventional schemes where each user individually estimates its own channel and then feed it back to the base station (BS), we propose that all scheduled users directly feed back the pilot observation to the BS, and then joint CSIT recovery can be realized at the BS. We further formulate the joint CSIT recovery problem as a low-rank matrix completion problem by utilizing the low-rank property of the massive MIMO channel matrix, which is caused by the correlation among users. Finally, we propose a hybrid low-rank matrix completion algorithm based on the singular value projection to solve this problem. Simulations demonstrate that the proposed scheme can provide accurate CSIT with lower overhead than conventional schemes.",
"title": ""
},
{
"docid": "60c8a335245e28f2a9ac24edd73eee5a",
"text": "Papulopustular rosacea (PPR) is a common facial skin disease, characterized by erythema, telangiectasia, papules and pustules. Its physiopathology is still being discussed, but recently several molecular features of its inflammatory process have been identified: an overproduction of Toll-Like receptors 2, of a serine protease, and of abnormal forms of cathelicidin. The two factors which stimulate the Toll-like receptors to induce cathelicidin expression are skin infection and cutaneous barrier disruption: these two conditions are, at least theoretically, fulfilled by Demodex, which is present in high density in PPR and creates epithelial breaches by eating cells. So, the major pathogenic mechanisms of Demodex and its role in PPR are reviewed here in the context of these recent discoveries. In this review, the inflammatory process of PPR appears to be a consequence of the proliferation of Demodex, and strongly supports the hypothesis that: (1) in the first stage a specific (innate or acquired) immune defect against Demodex allows the proliferation of the mite; (2) in the second stage, probably when some mites penetrate into the dermis, the immune system is suddenly stimulated and gives rise to an exaggerated immune response against the Demodex, resulting in the papules and the pustules of the rosacea. In this context, it would be very interesting to study the immune molecular features of this first stage, named \"pityriasis folliculorum\", where the Demodex proliferate profusely with no, or a low immune reaction from the host: this entity appears to be a missing link in the understanding of rosacea.",
"title": ""
},
{
"docid": "06b43bbf61791a76c3455cb4d591d71e",
"text": "We present a feature-based framework that combines spatial feature clustering, guided sampling for pose generation, and model updating for 3D object recognition and pose estimation. Existing methods fail in case of repeated patterns or multiple instances of the same object, as they rely only on feature discriminability for matching and on the estimator capabilities for outlier rejection. We propose to spatially separate the features before matching to create smaller clusters containing the object. Then, hypothesis generation is guided by exploiting cues collected off- and on-line, such as feature repeatability, 3D geometric constraints, and feature occurrence frequency. Finally, while previous methods overload the model with synthetic features for wide baseline matching, we claim that continuously updating the model representation is a lighter yet reliable strategy. The evaluation of our algorithm on challenging video sequences shows the improvement provided by our contribution.",
"title": ""
},
{
"docid": "d488d9d754c360efb3910c83e3175756",
"text": "The most common question asked by patients with inflammatory bowel disease (IBD) is, \"Doctor, what should I eat?\" Findings from epidemiology studies have indicated that diets high in animal fat and low in fruits and vegetables are the most common pattern associated with an increased risk of IBD. Low levels of vitamin D also appear to be a risk factor for IBD. In murine models, diets high in fat, especially saturated animal fats, also increase inflammation, whereas supplementation with omega 3 long-chain fatty acids protect against intestinal inflammation. Unfortunately, omega 3 supplements have not been shown to decrease the risk of relapse in patients with Crohn's disease. Dietary intervention studies have shown that enteral therapy, with defined formula diets, helps children with Crohn's disease and reduces inflammation and dysbiosis. Although fiber supplements have not been shown definitively to benefit patients with IBD, soluble fiber is the best way to generate short-chain fatty acids such as butyrate, which has anti-inflammatory effects. Addition of vitamin D and curcumin has been shown to increase the efficacy of IBD therapy. There is compelling evidence from animal models that emulsifiers in processed foods increase risk for IBD. We discuss current knowledge about popular diets, including the specific carbohydrate diet and diet low in fermentable oligo-, di-, and monosaccharides and polyols. We present findings from clinical and basic science studies to help gastroenterologists navigate diet as it relates to the management of IBD.",
"title": ""
},
{
"docid": "20c2aea79b80c93783aa3f82a8aa2625",
"text": "The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.",
"title": ""
},
{
"docid": "009543f9b54e116f379c95fe255e7e03",
"text": "With technology migration into nano and molecular scales several hybrid CMOS/nano logic and memory architectures have been proposed that aim to achieve high device density with low power consumption. The discovery of the memristor has further enabled the realization of denser nanoscale logic and memory systems by facilitating the implementation of multilevel logic. This work describes the design of such a multilevel nonvolatile memristor memory system, and the design constraints imposed in the realization of such a memory. In particular, the limitations on load, bank size, number of bits achievable per device, placed by the required noise margin for accurately reading and writing the data stored in a device are analyzed. Also analyzed are the nondisruptive read and write methodologies for the hybrid multilevel memristor memory to program and read the memristive information without corrupting it. This work showcases two write methodologies that leverage the best traits of memristors when used in either linear (low power) or nonlinear drift (fast speeds) modes. The system can therefore be tailored depending on the required performance parameters of a given application for a fast memory or a slower but very energy-efficient system. We propose for the first time, a hybrid memory that aims to incorporate the area advantage provided by the utilization of multilevel logic and nanoscale memristive devices in conjunction with CMOS for the realization of a high density nonvolatile multilevel memory.",
"title": ""
},
{
"docid": "3436b24142bfce01eadd6f7a1d6f1dd1",
"text": "Partial discharge (PD) detection has been widely applied to high voltage cable systems for several decades. In this paper, three kinds of insulation defects in XLPE cables are designed and tested at step-wise DC voltage. The PD developing progress of each defect cable is divided into two stages based on the severity degree of PDs. Based on the compressed sensing (CS) theory, a novel method used for recognizing PD patterns at DC voltage is proposed. Firstly, both the statistical features of PD repetition rate and the norm characteristics of time domain features are extracted to create a high-dimensional feature space. Then each test sample from the feature space is sparsely represented as linear combinations of training samples, and the sufficiently sparse one is obtained via 1-norm minimization. Finally, the PD pattern can be recognized by minimizing the residuals between the test sample and the recovered one. The experimental data is analyzed by the proposed method, and the results show that the patterns of both PD source and PD stage are recognized precisely, when the combination solution of features and the 1-norm minimization algorithm are determined appropriately.",
"title": ""
},
{
"docid": "7e720290d507c3370fc50782df3e90c4",
"text": "Photobacterium damselae subsp. piscicida is the causative agent of pasteurellosis in wild and farmed marine fish worldwide. Although serologically homogeneous, recent molecular advances have led to the discovery of distinct genetic clades, depending on geographical origin. Further details of the strategies for host colonisation have arisen including information on the role of capsule, susceptibility to oxidative stress, confirmation of intracellular survival in host epithelial cells, and induced apoptosis of host macrophages. This improved understanding has given rise to new ideas and advances in vaccine technologies, which are reviewed in this paper.",
"title": ""
},
{
"docid": "241a1589619c2db686675327cab1e8da",
"text": "This paper describes a simple computational model of joint torque and impedance in human arm movements that can be used to simulate three-dimensional movements of the (redundant) arm or leg and to design the control of robots and human-machine interfaces. This model, based on recent physiological findings, assumes that (1) the central nervous system learns the force and impedance to perform a task successfully in a given stable or unstable dynamic environment and (2) stiffness is linearly related to the magnitude of the joint torque and increased to compensate for environment instability. Comparison with existing data shows that this simple model is able to predict impedance geometry well.",
"title": ""
},
{
"docid": "02f97b35b014a55b4a36e22981877784",
"text": "BACKGROUND\nCough is an extremely common problem in pediatrics, mostly triggered and perpetuated by inflammatory processes or mechanical irritation leading to viscous mucous production and increased sensitivity of the cough receptors. Protecting the mucosa might be very useful in limiting the contact with micro-organisms and irritants thus decreasing the inflammation and mucus production. Natural molecular complexes can act as a mechanical barrier limiting cough stimuli with a non pharmacological approach but with an indirect anti-inflammatory action.\n\n\nOBJECTIVE\nAim of the study was to assess the efficacy of a medical device containing natural functional components in the treatment of cough persisting more than 7 days.\n\n\nMETHODS\nIn this randomized, parallel groups, double-blind vs. placebo study, children with cough persisting more than 7 days were enrolled. The clinical efficacy of the study product was assessed evaluating changes in day- and night-time cough scores after 4 and 8 days (t4 and t8) of product administration.\n\n\nRESULTS\nIn the inter-group analysis, in the study product group compared with the placebo group, a significant difference (t4 study treatment vs. t4 placebo, p = 0.03) was observed at t4 in night-time cough score.Considering the intra-group analysis, only the study product group registered a significant improvement from t0 to t4 in both day-time (t0 vs. t4, p = 0.04) and night-time (t0 vs. t4, p = 0.003) cough scores.A significant difference, considering the study product, was also found in the following intra-group analyses: day-time scores at t4 vs. t8 (p =0.01) and at t0 vs. t8 (p = 0.001); night-time scores at t4 vs. t8 (p = 0.05), and at t0 vs. t8 (p = 0.005). Considering a subgroup of patients with higher cough (≥ 3) scores, 92.9% of them in the study product group improved at t0 vs. 
t4 day-time.\n\n\nCONCLUSIONS\nGrintuss® pediatric syrup was shown to possess an interesting profile of efficacy and safety in the treatment of cough persisting more than 7 days.",
"title": ""
},
{
"docid": "06113aca54d87ade86127f2844df6bfd",
"text": "A growing number of people use social networking sites to foster social relationships among each other. While the advantages of the provided services are obvious, drawbacks on users' privacy and arising implications are often neglected. In this paper we introduce a novel attack called automated social engineering which illustrates how social networking sites can be used for social engineering. Our approach takes classical social engineering one step further by automating tasks which formerly were very time-intensive. In order to evaluate our proposed attack cycle and our prototypical implementation (ASE bot), we conducted two experiments. Within the first experiment we examine the information gathering capabilities of our bot. The second evaluation of our prototype performs a Turing test. The promising results of the evaluation highlight the possibility to efficiently and effectively perform social engineering attacks by applying automated social engineering bots.",
"title": ""
}
] |
scidocsrr
|
7cba0142816a59dcc680d63002323d0a
|
RFID Transponders
|
[
{
"docid": "e43a39af20f2e905d0bdb306235c622a",
"text": "This paper presents a fully integrated remotely powered and addressable radio frequency identification (RFID) transponder working at 2.45 GHz. The achieved operating range at 4 W effective isotropically radiated power (EIRP) base-station transmit power is 12 m. The integrated circuit (IC) is implemented in a 0.5 /spl mu/m silicon-on-sapphire technology. A state-of-the-art rectifier design achieving 37% of global efficiency is embedded to supply energy to the transponder. The necessary input power to operate the transponder is about 2.7 /spl mu/W. Reader to transponder communication is obtained using on-off keying (OOK) modulation while transponder to reader communication is ensured using the amplitude shift keying (ASK) backscattering modulation technique. Inductive matching between the antenna and the transponder IC is used to further optimize the operating range.",
"title": ""
},
{
"docid": "3c7154162996f3fecbedd2aa79555ca4",
"text": "This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-/spl mu/m 1M/2P N-epi BiCMOS, and the AMI 1.5-/spl mu/m 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm/sup 2/ in the above processes and they are capable of delivering >25mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in 0.1-10-MHz range.",
"title": ""
}
] |
[
{
"docid": "5509b4a8e0a4b98795c2fc561f18d9c4",
"text": "A low-power variable-gain amplifier (VGA) based on transconductance (gm)-ratioed amplification is analyzed and designed with improved linearity. The VGA has the merits of continuous gain tuning, low power consumption and small chip area. However, the linearity performance of the gm-ratioed amplifier is usually poor. We analyze distortion in gm-ratioed amplifiers and propose to improve the output linearity by applying load degeneration technique. It is found that theoretically the output linearity can be improved by 8.5 dB at the same power consumption. We also analyze gain, bandwidth and noise performance of the gm-ratioed amplifiers. Two VGAs based on gm-ratioed amplification are designed and fabricated in a 0.18-μm CMOS process-one with load degeneration only and the other with both input and load degeneration. The VGA with load degeneration only achieves gain of -20 to 41 dB, bandwidth of 121 to 211 MHz, and input and output P1dB up to - 17 dBm and 0.65 dBm, respectively. The VGA with both input and load degeneration achieves gain of -37 to 28 dB, bandwidth of 76 to 809 MHz, and input and output P1dB up to - 2.63 dBm and 2.29 dBm, respectively. The two VGAs consume a similar amount of power, which is about 3 to 5 mW from a 1.8-V supply. For the same bias condition, the proposed load degeneration improves the output linearity by more than 15 dB.",
"title": ""
},
{
"docid": "8db733045dd0689e21f35035f4545eff",
"text": "An important research area of Spectrum-Based Fault Localization (SBFL) is the effectiveness of risk evaluation formulas. Most previous studies have adopted an empirical approach, which can hardly be considered as sufficiently comprehensive because of the huge number of combinations of various factors in SBFL. Though some studies aimed at overcoming the limitations of the empirical approach, none of them has provided a completely satisfactory solution. Therefore, we provide a theoretical investigation on the effectiveness of risk evaluation formulas. We define two types of relations between formulas, namely, equivalent and better. To identify the relations between formulas, we develop an innovative framework for the theoretical investigation. Our framework is based on the concept that the determinant for the effectiveness of a formula is the number of statements with risk values higher than the risk value of the faulty statement. We group all program statements into three disjoint sets with risk values higher than, equal to, and lower than the risk value of the faulty statement, respectively. For different formulas, the sizes of their sets are compared using the notion of subset. We use this framework to identify the maximal formulas which should be the only formulas to be used in SBFL.",
"title": ""
},
{
"docid": "ba5cd7dcf8d7e9225df1d9dc69c95c11",
"text": "e eective of information retrieval (IR) systems have become more important than ever. Deep IR models have gained increasing aention for its ability to automatically learning features from raw text; thus, many deep IR models have been proposed recently. However, the learning process of these deep IR models resemble a black box. erefore, it is necessary to identify the dierence between automatically learned features by deep IR models and hand-craed features used in traditional learning to rank approaches. Furthermore, it is valuable to investigate the dierences between these deep IR models. is paper aims to conduct a deep investigation on deep IR models. Specically, we conduct an extensive empirical study on two dierent datasets, including Robust and LETOR4.0. We rst compared the automatically learned features and handcraed features on the respects of query term coverage, document length, embeddings and robustness. It reveals a number of disadvantages compared with hand-craed features. erefore, we establish guidelines for improving existing deep IR models. Furthermore, we compare two dierent categories of deep IR models, i.e. representation-focused models and interaction-focused models. It is shown that two types of deep IR models focus on dierent categories of words, including topic-related words and query-related words.",
"title": ""
},
{
"docid": "72fb6765b43f47abc129c073bfdcdba5",
"text": "The General Data Protection Regulation (GDPR) is a European Union regulation that will replace the existing Data Protection Directive on 25 May 2018. The most significant change is a huge increase in the maximum fine that can be levied for breaches of the regulation. Yet fewer than half of UK companies are fully aware of GDPR—and a number of those who were preparing for it stopped doing so when the Brexit vote was announced. A last-minute rush to become compliant is therefore expected, and numerous companies are starting to offer advice, checklists and consultancy on how to comply with GDPR. In such an environment, artificial intelligence technologies ought to be able to assist by providing best advice; asking all and only the relevant questions; monitoring activities; and carrying out assessments. The paper considers four areas of GDPR compliance where rule based technologies and/or machine learning techniques may be relevant: Following compliance checklists and codes of conduct; Supporting risk assessments; Complying with the new regulations regarding technologies that perform automatic profiling; Complying with the new regulations concerning recognising and reporting breaches of security. It concludes that AI technology can support each of these four areas. The requirements that GDPR (or organisations that need to comply with GDPR) state for explanation and justification of reasoning imply that rule-based approaches are likely to be more helpful than machine learning approaches. However, there may be good business reasons to take a different approach in some circumstances.",
"title": ""
},
{
"docid": "00f88387c8539fcbed2f6ec4f953438d",
"text": "We present Masstree, a fast key-value database designed for SMP machines. Masstree keeps all data in memory. Its main data structure is a trie-like concatenation of B+-trees, each of which handles a fixed-length slice of a variable-length key. This structure effectively handles arbitrary-length possiblybinary keys, including keys with long shared prefixes. +-tree fanout was chosen to minimize total DRAM delay when descending the tree and prefetching each tree node. Lookups use optimistic concurrency control, a read-copy-update-like technique, and do not write shared data structures; updates lock only affected nodes. Logging and checkpointing provide consistency and durability. Though some of these ideas appear elsewhere, Masstree is the first to combine them. We discuss design variants and their consequences.\n On a 16-core machine, with logging enabled and queries arriving over a network, Masstree executes more than six million simple queries per second. This performance is comparable to that of memcached, a non-persistent hash table server, and higher (often much higher) than that of VoltDB, MongoDB, and Redis.",
"title": ""
},
{
"docid": "b1e1d8dcd0fcd2a88b29f31c60b11a11",
"text": "Ergativity refers to patterning in a language whereby the subject of a transitive clause behaves differently to the subject of an intransitive clause, which behaves like the object of a transitive clause. Ergativity can be manifested in morphology, lexicon, syntax, and discourse organisation. This article overviews what is known about ergativity in the world’s languages, with a particular focus on one type of morphological ergativity, namely in case-marking. While languages are rarely entirely consistent in ergative case-marking, and the inconsistencies vary considerably across languages, they are nevertheless not random. Thus splits in casemarking, in which ergative patterning is restricted to certain domains, follow (with few exceptions) universal tendencies. So also are there striking cross-linguistic commonalities among systems in which ergative case-marking is optional, although systematic investigation of this domain is quite recent. Recent work on the diachrony of ergative systems and case-markers is overviewed, and issues for further research are identified.",
"title": ""
},
{
"docid": "626c274978a575cd06831370a6590722",
"text": "The honeypot has emerged as an effective tool to provide insights into new attacks and exploitation trends. However, a single honeypot or multiple independently operated honeypots only provide limited local views of network attacks. Coordinated deployment of honeypots in different network domains not only provides broader views, but also create opportunities of early network anomaly detection, attack correlation, and global network status inference. Unfortunately, coordinated honeypot operation require close collaboration and uniform security expertise across participating network domains. The conflict between distributed presence and uniform management poses a major challenge in honeypot deployment and operation. To address this challenge, we present Collapsar, a virtual machine-based architecture for network attack capture and detention. A Collapsar center hosts and manages a large number of high-interaction virtual honeypots in a local dedicated network. To attackers, these honeypots appear as real systems in their respective production networks. Decentralized logical presence of honeypots provides a wide diverse view of network attacks, while the centralized operation enables dedicated administration and convenient event correlation, eliminating the need for honeypot expertise in every production network domain. Collapsar realizes the traditional honeyfarm vision as well as our new reverse honeyfarm vision, where honeypots act as vulnerable clients exploited by real-world malicious servers. We present the design, implementation, and evaluation of a Collapsar prototype. Our experiments with a number of real-world attacks demonstrate the effectiveness and practicality of Collapsar. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "5d2e0bd9c691af163e0e2221b9406c82",
"text": "Many high-throughput experimental technologies have been developed to assess the effects of large numbers of mutations (variation) on phenotypes. However, designing functional assays for these methods is challenging, and systematic testing of all combinations is impossible, so robust methods to predict the effects of genetic variation are needed. Most prediction methods exploit evolutionary sequence conservation but do not consider the interdependencies of residues or bases. We present EVmutation, an unsupervised statistical method for predicting the effects of mutations that explicitly captures residue dependencies between positions. We validate EVmutation by comparing its predictions with outcomes of high-throughput mutagenesis experiments and measurements of human disease mutations and show that it outperforms methods that do not account for epistasis. EVmutation can be used to assess the quantitative effects of mutations in genes of any organism. We provide pre-computed predictions for ∼7,000 human proteins at http://evmutation.org/.",
"title": ""
},
{
"docid": "be32478371831fd1aa3c8f9d53fa400f",
"text": "Railway transport is one of the most important mass transportation media in the worldwide. With the development of trains speed, safety and comfort levels of railways is getting more importance day by day. Besides high level of security requirement, detection of anomaly for rail and road shall be early identified for decreasing operation and maintenance expenditures. The pantograph-catenary system has an important role for collecting the current in electrical railways. The problem occurred in this system will affect the current collection performance of electrified trains. In this paper, a new image processing based technique is proposed to detect the arcing faults occurred between catenary and pantograph contact. The proposed method takes one frame from the digital camera and then the edge detection algorithm extracts the edges of pantograph. The arcing between contact wire and pantograph is detected by examining the position of contact wire of pantograph's edge.",
"title": ""
},
{
"docid": "10117f9d3b8b4720ea37cbf36073c130",
"text": "This biomechanical study was performed to measure tissue pressure in the infrapatellar fat pad and the volume changes of the anterior knee compartment during knee flexion–extension motion. Knee motion from 120° of flexion to full extension was simulated on ten fresh frozen human knee specimens (six from males, four from females, average age 44 years) using a hydraulic kinematic simulator (30, 40, and 50 Nm extension moment). Infrapatellar tissue pressure was measured using a closed cell sensor. Infrapatellar volume change in the anterior knee compartment was evaluated subsequent to removal of the fat pad using a water-filled bladder. We found a significant increase of the infrapatellar tissue pressure during knee flexion, at flexion angles of <20° and >100°. The average tissue pressure ranged from 343 (±223) mbar at 0° to 60 (±64) mbar at 60° of flexion. The smallest volume in the anterior knee compartment was measured at full extension and 120° of flexion, whereas the maximum volume was observed at 50° of flexion. In conclusion, the data suggest a biomechanical function of the infrapatellar fat pad at flexion angles of <20° and >100°, which suggests a role of the infrapatellar fat pad in stabilizing the patella in the extremes of knee motion.",
"title": ""
},
{
"docid": "671952f18fb9041e7335f205666bf1f5",
"text": "This new handbook is an efficient way to keep up with the continuing advances in antenna technology and applications. The handbook is uniformly well written, up-to-date, and filled with a wealth of practical information. This makes it a useful reference for most antenna engineers and graduate students.",
"title": ""
},
{
"docid": "c7e22f53b86959c1bad9cbf405f6bd01",
"text": "The use of an electromechanical valve actuator (EMVA) formed by two magnets and two balanced springs is a promising tool to implement innovative engine management strategies. This actuator needs to be properly controlled to reduce impact velocities during engine valve operations, but the use of a position sensor for each valve is not possible for cost reasons. It is therefore essential to find sensorless solutions based on increasingly predictive models of such a mechatronic actuator. To address this task, in this paper, we present an in-depth lumped parameter model of an EMVA based on a hybrid analytical-finite-element method (FEM) approach. The idea is to develop a model of EMVA embedding the well-known predictive behavior of FEM models. All FEM data are then fitted to a smooth curve that renders unknown magnetic quantities in analytical form. In this regard, we select a single-wise function that is able to describe global magnetic quantities as the flux linkage and force both for linear and saturation working regions of the materials. The model intrinsically describes all mutual effects between two magnets. The goodness of the dynamic behavior of the model is finally tested on a series of transient FEM simulations of the actuator in different working conditions.",
"title": ""
},
{
"docid": "f0532446a19fb2fa28a7a01cddca7e37",
"text": "The use of rumble strips on roads can provide drivers lane departure warning (LDW). However, rumble strips require an infrastructure and do not exist on a majority of roadways. Therefore, it is very desirable to have an effective in-vehicle LDW system to detect when the driver is in danger of departing the road and then triggers an alarm to warn the driver early enough to take corrective action. This paper presents the development of an image-based LDW system using the Lucas-Kanade (L-K) optical flow and the Hough transform methods. Our approach integrates both techniques to establish an operation algorithm to determine whether a warning signal should be issued based on the status of the vehicle deviating from its heading lane. The L-K optical flow tracking is used when the lane boundaries cannot be detected, while the lane detection technique is used when they become available. Even though both techniques are used in the system, only one method is activated at any given time because each technique has its own advantages and also disadvantages. The developed LDW system was road tested on several rural highways and also one section of the interstate I35 freeway. Overall, the system operates correctly as expected with a false alarm occurred only roughly about 1.18% of the operation time. This paper presents the system implementation together with our findings. Key-Words: Lane departure warning, Lucas-Kanade optical flow, Hough transform.",
"title": ""
},
{
"docid": "ee9cb495280dc6e252db80c23f2f8c2b",
"text": "Due to the dramatical increase in popularity of mobile devices in the last decade, more sensitive user information is stored and accessed on these devices everyday. However, most existing technologies for user authentication only cover the login stage or only work in restricted controlled environments or GUIs in the post login stage. In this work, we present TIPS, a Touch based Identity Protection Service that implicitly and unobtrusively authenticates users in the background by continuously analyzing touch screen gestures in the context of a running application. To the best of our knowledge, this is the first work to incorporate contextual app information to improve user authentication. We evaluate TIPS over data collected from 23 phone owners and deployed it to 13 of them with 100 guest users. TIPS can achieve over 90% accuracy in real-life naturalistic conditions within a small amount of computational overhead and 6% of battery usage.",
"title": ""
},
{
"docid": "4a2e268b26ecf09d990de8cb3579091f",
"text": "[PDF] [Full Text] [Abstract] , November 1, 2007; 103 (5): 1815-1823. J Appl Physiol M. J. Lyon, L. M. Steer and L. T. Malmgren human posterior cricoarytenoid muscle Stereological estimates indicate that aging does not alter the capillary length density in the [PDF] [Full Text] [Abstract] , July 1, 2008; 295 (1): C288-C292. Am J Physiol Cell Physiol T. Akimoto, P. Li and Z. Yan exercise by in vivo imaging Functional interaction of regulatory factors with the Pgc-1{alpha} promoter in response to [PDF] [Full Text] [Abstract] , December 15, 2008; 586 (24): 6021-6035. J. Physiol. K. A. Zwetsloot, L. M. Westerkamp, B. F. Holmes and T. P. Gavin necessary for the angiogenic response to exercise AMPK regulates basal skeletal muscle capillarization and VEGF expression, but is not [PDF] [Full Text] [Abstract] , May 1, 2009; 106 (5): 1660-1667. J Appl Physiol L. E. Wong, T. Garland Jr., S. L. Rowan and R. T. Hepple mice Anatomic capillarization is elevated in the medial gastrocnemius muscle of mighty mini [PDF] [Full Text] [Abstract] , June 1, 2009; 94 (6): 749-760. Exp Physiol M. H. Malek and I. M. Olfert exercise capacity in mice Global deletion of thrombospondin-1 increases cardiac and skeletal muscle capillarity and",
"title": ""
},
{
"docid": "748926afd2efcae529a58fbfa3996884",
"text": "The purpose of this research was to investigate preservice teachers’ perceptions about using m-phones and laptops in education as mobile learning tools. A total of 1087 preservice teachers participated in the study. The results indicated that preservice teachers perceived laptops potentially stronger than m-phones as m-learning tools. In terms of limitations the situation was balanced for laptops and m-phones. Generally, the attitudes towards using laptops in education were not exceedingly positive but significantly more positive than m-phones. It was also found that such variables as program/department, grade, gender and possessing a laptop are neutral in causing a practically significant difference in preservice teachers’ views. The results imply an urgent need to grow awareness among participating student teachers towards the concept of m-learning, especially m-learning through m-phones. Introduction The world is becoming a mobigital virtual space where people can learn and teach digitally anywhere and anytime. Today, when timely access to information is vital, mobile devices such as cellular phones, smartphones, mp3 and mp4 players, iPods, digital cameras, data-travelers, personal digital assistance devices (PDAs), netbooks, laptops, tablets, iPads, e-readers such as the Kindle, Nook, etc have spread very rapidly and become common (El-Hussein & Cronje, 2010; Franklin, 2011; Kalinic, Arsovski, Stefanovic, Arsovski & Rankovic, 2011). Mobile devices are especially very popular among young population (Kalinic et al, 2011), particularly among university students (Cheon, Lee, Crooks & Song, 2012; Park, Nam & Cha, 2012). Thus, the idea of learning through mobile devices has gradually become a trend in the field of digital learning (Jeng, Wu, Huang, Tan & Yang, 2010). This is because learning with mobile devices promises “new opportunities and could improve the learning process” (Kalinic et al, 2011, p. 
1345) and learning with mobile devices can help achieving educational goals if used through appropriate learning strategies (Jeng et al, 2010). As a matter of fact, from a technological point of view, mobile devices are getting more capable of performing all of the functions necessary in learning design (El-Hussein & Cronje, 2010). This and similar ideas have brought about the concept of mobile learning or m-learning. British Journal of Educational Technology Vol 45 No 4 2014 606–618 doi:10.1111/bjet.12064 © 2013 British Educational Research Association Although mobile learning applications are at their early days, there inevitably emerges a natural pressure by students on educators to integrate m-learning (Franklin, 2011) and so a great deal of attention has been drawn in these applications in the USA, Europe and Asia (Wang & Shen, 2012). Several universities including University of Glasgow, University of Sussex and University of Regensburg have been trying to explore and include the concept of m-learning in their learning systems (Kalinic et al, 2011). Yet, the success of m-learning integration requires some degree of awareness and positive attitudes by students towards m-learning. In this respect, in-service or preservice teachers’ perceptions about m-learning become more of an issue, since their attitudes are decisive in successful integration of m-learning (Cheon et al, 2012). Then it becomes critical whether the teachers, in-service or preservice, have favorable perceptions and attitudinal representations regarding m-learning. Theoretical framework M-learning M-learning has a recent history. When developed as the next phase of e-learning in early 2000s (Peng, Su, Chou & Tsai, 2009), its potential for education could not be envisaged (Attewell, 2005). 
However, recent developments in mobile and wireless technologies facilitated the departure from traditional learning models with time and space constraints, replacing them with Practitioner Notes What is already known about this topic • Mobile devices are very popular among young population, especially among university students. • Though it has a recent history, m-learning (ie, learning through mobile devices) has gradually become a trend. • M-learning brings new opportunities and can improve the learning process. Previous research on m-learning mostly presents positive outcomes in general besides some drawbacks. • The success of integrating m-learning in teaching practice requires some degree of awareness and positive attitudes by students towards m-learning. What this paper adds • Since teachers’ attitudes are decisive in successful integration of m-learning in teaching, the present paper attempts to understand whether preservice teachers have favorable perceptions and attitudes regarding m-learning. • Unlike much of the previous research on m-learning that handle perceptions about m-learning in a general sense, the present paper takes a more specific approach to distinguish and compare the perceptions about two most common m-learning tools: m-phones and laptops. • It also attempts to find out the variables that cause differences in preservice teachers’ perceptions about using these m-learning devices. Implications for practice and/or policy • Results imply an urgent need to grow awareness and further positive attitudes among participating student teachers towards m-learning, especially through m-phones. • Some action should be taken by the faculty and administration to pedagogically inform and raise awareness about m-learning among preservice teachers. 
Preservice teachers’ perceptions of M-learning tools 607 © 2013 British Educational Research Association models embedded into our everyday environment, and the paradigm of mobile learning emerged (Vavoula & Karagiannidis, 2005). Today it spreads rapidly and promises to be one of the efficient ways of education (El-Hussein & Cronje, 2010). Partly because it is a new concept, there is no common definition of m-learning in the literature yet (Peng et al, 2009). A good deal of literature defines m-learning as a derivation or extension of e-learning, which is performed using mobile devices such as PDA, mobile phones, laptops, etc (Jeng et al, 2010; Kalinic et al, 2011; Motiwalla, 2007; Riad & El-Ghareeb, 2008). Other definitions highlight certain characteristics of m-learning including portability through mobile devices, wireless Internet connection and ubiquity. For example, a common definition of m-learning in scholarly literature is “the use of portable devices with Internet connection capability in education contexts” (Kinash, Brand & Mathew, 2012, p. 639). In a similar vein, Park et al (2012, p. 592) defines m-learning as “any educational provision where the sole or dominant technologies are handheld or palmtop devices.” On the other hand, m-learning is likely to be simply defined stressing its property of ubiquity, referring to its ability to happen whenever and wherever needed (Peng et al, 2009). For example, Franklin (2011, p. 261) defines mobile learning as “learning that happens anywhere, anytime.” Though it is rather a new research topic and the effectiveness of m-learning in terms of learning achievements has not been fully investigated (Park et al, 2012), there is already an agreement that m-learning brings new opportunities and can improve the learning process (Kalinic et al, 2011). Moreover, the literature review by Wu et al (2012) notes that 86% of the 164 mobile learning studies present positive outcomes in general. 
Several perspectives of m-learning are attributed in the literature in association with these positive outcomes. The most outstanding among them is the feature of mobility. M-learning makes sense as an educational activity because the technology and its users are mobile (El-Hussein & Cronje, 2010). Hence, learning outside the classroom walls is possible (Nordin, Embi & Yunus, 2010; Şad, 2008; Saran, Seferoğlu & Çağıltay, 2009), enabling students to become an active participant, rather than a passive receiver of knowledge (Looi et al, 2010). This unique feature of m-learning brings about not only the possibility of learning anywhere without limits of classroom or library but also anytime (Çavuş & İbrahim, 2009; Hwang & Chang, 2011; Jeng et al, 2010; Kalinic et al, 2011; Motiwalla, 2007; Sha, Looi, Chen & Zhang, 2012; Sølvberg & Rismark, 2012). This especially offers learners a certain amount of “freedom and independence” (El-Hussein & Cronje, 2010, p. 19), as well as motivation and ability to “self-regulate their own learning” (Sha et al, 2012, p. 366). This idea of learning coincides with the principles of and meet the requirements of other popular paradigms in education including lifelong learning (Nordin et al, 2010), student-centeredness (Sha et al, 2012) and constructivism (Motiwalla, 2007). Beside the favorable properties referred in the m-learning literature, some drawbacks of m-learning are frequently criticized. The most pronounced one is the small screen sizes of the m-learning tools that makes learning activity difficult (El-Hussein & Cronje, 2010; Kalinic et al, 2011; Riad & El-Ghareeb, 2008; Suki & Suki, 2011). Another problem is the weight and limited battery lives of m-tools, particularly the laptops (Riad & El-Ghareeb, 2008). Lack of understanding or expertise with the technology also hinders nontechnical students’ active use of m-learning (Corbeil & Valdes-Corbeil, 2007; Franklin, 2011). 
Using mobile devices in classroom can cause distractions and interruptions (Cheon et al, 2012; Fried, 2008; Suki & Suki, 2011). Another concern seems to be about the challenged role of the teacher as the most learning activities take place outside the classroom (Sølvberg & Rismark, 2012). M-learning in higher education Mobile learning is becoming an increasingly promising way of delivering instruction in higher education (El-Hussein & Cronje, 2010). This is justified by the current statistics about the 608 British Journal of Educational Technology Vol 45 No 4 2014 © 2013 British Education",
"title": ""
},
{
"docid": "4078baf8302faafbbf22865152204b9a",
"text": "In emergency management for mass gatherings, the knowledge about crowd types can highly assist with providing timely response and effective resource allocation. Crowd monitoring can be achieved using computer vision based approaches and sensory data analysis. The emergence of social media platforms presents an opportunity to capture valuable information about how people feel and think. However, the literature shows that there are a limited number of studies that use social media in crowd monitoring and/or incorporate a unified crowd model for consistency and interoperability. This paper presents a novel framework for crowd monitoring using social media. It includes a standard crowd model to represent different types of crowds. The proposed framework considers the effect of emotion on crowd behaviour and uses the emotion analysis of social media to identify the crowd types in an event. An experiment using historical data to validate our framework is described.",
"title": ""
},
{
"docid": "be7a33cc59e8fb297c994d046c6874d9",
"text": "Purpose: Compressed sensing MRI (CS-MRI) from single and parallel coils is one of the powerful ways to reduce the scan time of MR imaging with a performance guarantee. However, the computational costs are usually expensive. This paper aims to propose a computationally fast and accurate deep learning algorithm for the reconstruction of MR images from highly down-sampled k-space data. Theory: Based on the topological analysis, we show that the data manifold of the aliasing artifact is easier to learn from a uniform subsampling pattern with additional low-frequency k-space data. Thus, we develop deep aliasing artifact learning networks for the magnitude and phase images to estimate and remove the aliasing artifacts from highly accelerated MR acquisition. Methods: The aliasing artifacts are directly estimated from the distorted magnitude and phase images reconstructed from subsampled k-space data, so that we can obtain an aliasing-free image by subtracting the estimated aliasing artifact from the corrupted inputs. Moreover, to deal with the globally distributed aliasing artifact, we develop a multi-scale deep neural network with a large receptive field. Results: The experimental results confirm that the proposed deep artifact learning network effectively estimates and removes the aliasing artifacts. Compared to existing CS methods from single and multi-coil data, the proposed network shows minimal errors by removing the coherent aliasing artifacts. Furthermore, the computational time is an order of magnitude faster. Conclusion: As the proposed deep artifact learning network immediately generates an accurate reconstruction, it has great potential for clinical applications.",
"title": ""
},
{
"docid": "ffbe9764c410651e17ed0f63fc68c743",
"text": "Antibiotics are among the most successful groups of pharmaceuticals used for human and veterinary therapy. However, large amounts of antibiotics are released into municipal wastewater due to incomplete metabolism in humans or due to disposal of unused antibiotics, which finally find their way into different natural environmental compartments. The emergence and rapid spread of antibiotic resistant bacteria (ARB) has led to increasing concern about the potential environmental and public health risks. ARB and antibiotic resistant genes (ARGs) have been detected extensively in wastewater samples. Available data show a significantly higher proportion of antibiotic resistant bacteria in raw and treated wastewater relative to surface water. According to these studies, the conditions in wastewater treatment plants (WWTPs) are favourable for the proliferation of ARB. Moreover, another concern with regard to the presence of ARB and ARGs is their effective removal from sewage. This review gives an overview of the available data on the occurrence of ARB and ARGs and their fate in WWTPs, on the biological methods dealing with the detection of bacterial populations and their resistance genes, and highlights areas in need of further research.",
"title": ""
},
{
"docid": "5df6731864165c2d6bf3b759b889553c",
"text": "The growing power of bloggers to influence their connected network has emerged as a new communication venue for brands. This study elaborates upon the role of bloggers in brand communication, and reveals how brands can engage with bloggers, currently considered as online opinion leaders, from the perspective of the two-step flow theory. Following clarification of the aims of the study, we report on in-depth interviews with 17 brand and digital agency representatives, selected because they regard communication with bloggers as an important strategy in increasing the influence of their brands among online communities. This exploratory study reflects current blogger communication implementations, and concludes with a discussion of seven major issues arising from the literature review and interviews (definition of bloggers, blogger selection criteria, digital integration, power of bloggers, long-term relationship building with bloggers, measurement, and budgetary issues in blogger communication). These areas represent relatively unexplored areas of blogger engagement from both an academic and managerial perspective. Based on the findings of the interviews, we propose a model which traces the influencer role of bloggers from the two-step flow theory perspective. This model is named as the brand communication through digital influencers model. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
4ae94e23fe7257f966cd85337a81a148
|
A Market in Your Social Network: The Effects of Extrinsic Rewards on Friendsourcing and Relationships
|
[
{
"docid": "1ee74e505f5efc99331d5b63565882cf",
"text": "Consumers shopping in \"brick-and-mortar\" (non-virtual) stores often use their mobile phones to consult with others about potential purchases. Via a survey (n = 200), we detail current practices in seeking remote shopping advice. We then consider how emerging social platforms, such as social networking sites and crowd labor markets, could offer rich next-generation remote shopping advice experiences. We conducted a field experiment in which shoppers shared photographs of potential purchases via MMS, Facebook, and Mechanical Turk. Paid crowdsourcing, in particular, proved surprisingly useful and influential as a means of augmenting in-store shopping. Based on our findings, we offer design suggestions for next-generation remote shopping advice systems.",
"title": ""
},
{
"docid": "d06a7c8379ba991385af5dc986537360",
"text": "Though social network site use is often treated as a monolithic activity, in which all time is equally social and its impact the same for all users, we examine how Facebook affects social capital depending upon: (1) types of site activities, contrasting one-on-one communication, broadcasts to wider audiences, and passive consumption of social news, and (2) individual differences among users, including social communication skill and self-esteem. Longitudinal surveys matched to server logs from 415 Facebook users reveal that receiving messages from friends is associated with increases in bridging social capital, but that other uses are not. However, using the site to passively consume news assists those with lower social fluency draw value from their connections. The results inform site designers seeking to increase social connectedness and the value of those connections.",
"title": ""
}
] |
[
{
"docid": "f35dc45e28f2483d5ac66271590b365d",
"text": "We present a vector space–based model for selectional preferences that predicts plausibility scores for argument headwords. It does not require any lexical resources (such as WordNet). It can be trained either on one corpus with syntactic annotation, or on a combination of a small semantically annotated primary corpus and a large, syntactically analyzed generalization corpus. Our model is able to predict inverse selectional preferences, that is, plausibility scores for predicates given argument heads. We evaluate our model on one NLP task (pseudo-disambiguation) and one cognitive task (prediction of human plausibility judgments), gauging the influence of different parameters and comparing our model against other model classes. We obtain consistent benefits from using the disambiguation and semantic role information provided by a semantically tagged primary corpus. As for parameters, we identify settings that yield good performance across a range of experimental conditions. However, frequency remains a major influence of prediction quality, and we also identify more robust parameter settings suitable for applications with many infrequent items.",
"title": ""
},
{
"docid": "26140dbe32672dc138c46e7fd6f39b1a",
"text": "The state of the art in probabilistic demand forecasting [40] minimizes Quantile Loss to predict the future demand quantiles for different horizons. However, since quantiles aren’t additive, in order to predict the total demand for any wider future interval all required intervals are usually appended to the target vector during model training. The separate optimization of these overlapping intervals can lead to inconsistent forecasts, i.e. forecasts which imply an invalid joint distribution between different horizons. As a result, inter-temporal decision making algorithms that depend on the joint or step-wise conditional distribution of future demand cannot utilize these forecasts. In this work, we address the problem by using sample paths to predict future demand quantiles in a consistent manner and propose several novel methodologies to solve this problem. Our work covers the use of covariance shrinkage methods, autoregressive models, generative adversarial networks and also touches on the use of variational autoencoders and Bayesian Dropout.",
"title": ""
},
{
"docid": "1c4930b976f35488e9df6ead74358878",
"text": "The covalently modified ureido-conjugated chitosan/TPP multifunctional nanoparticles have been developed as a targeted nanomedicine delivery system for eradication of Helicobacter pylori. H. pylori can specifically express the urea transport protein on its membrane to transport urea into the cytoplasm for urease to produce ammonia, which protects the bacterium in the acid milieu of the stomach. A clinically applicable topical antimicrobial agent is needed to eradicate H. pylori in the infected fundal area. In this study, we designed and synthesized two ureido-conjugated chitosan derivatives, UCCs-1 and UCCs-2, for preparation of multifunctional nanoparticles. The process was optimized in order to prepare UCCs/TPP nanoparticles for encapsulation of amoxicillin. The results showed that the amoxicillin-UCCs/TPP nanoparticles exhibited favorable pH-sensitive characteristics, which could delay the release of amoxicillin in gastric acid and enable the drug to be delivered and targeted to H. pylori at its survival region effectively. Compared with unmodified amoxicillin-chitosan/TPP nanoparticles, a more specific and effective H. pylori growth inhibition was observed for amoxicillin-UCCs/TPP nanoparticles. Drug uptake analysis by flow cytometry and confocal laser scanning microscopy verified that the uptake of FITC-UCCs-2/TPP nanoparticles was associated with the urea transport protein on the membrane of H. pylori and was reduced with the addition of urea as a competitive transport substrate. These findings suggest that the multifunctional amoxicillin-loaded nanoparticles have great potential for effective therapy of H. pylori infection. They may also serve as pharmacologically effective nanocarriers for oral targeted delivery of other therapeutic drugs to treat H. pylori.",
"title": ""
},
{
"docid": "7f43ad2fd344aa7260e3af33d3f69e32",
"text": "Charge pump circuits are used for obtaining higher voltages than normal power supply voltage in flash memories, DRAMs and low voltage designs. In this paper, we present a charge pump circuit in standard CMOS technology that is suited for low voltage operation. Our proposed charge pump uses a cross- connected NMOS cell as the basic element and PMOS switches are employed to connect one stage to the next. The simulated output voltages of the proposed 4 stage charge pump for input voltage of 0.9 V, 1.2 V, 1.5 V, 1.8 V and 2.1 V are 3.9 V, 5.1 V, 6.35 V, 7.51 V and 8.4 V respectively. This proposed charge pump is suitable for low power CMOS mixed-mode designs.",
"title": ""
},
{
"docid": "2bf70c7899f6a0263122bd3492b95590",
"text": "We present a hierarchical classification model that allows rare objects to borrow statistical strength from related objects that have many training examples. Unlike many of the existing object detection and recognition systems that treat different classes as unrelated entities, our model learns both a hierarchy for sharing visual appearance across 200 object categories and hierarchical parameters. Our experimental results on the challenging object localization and detection task demonstrate that the proposed model substantially improves the accuracy of the standard single object detectors that ignore hierarchical structure altogether.",
"title": ""
},
{
"docid": "ff685a2272377e3c8b3596ed92eaccd8",
"text": "The goal of control law design for haptic displays is to provide a safe and stable user interface while maximizing the operator’s sense of kinesthetic immersion in a virtual environment. This paper outlines a control design approach which guarantees the stability of a haptic interface when coupled to a broad class of human operators and virtual environments. Two-port absolute stability criteria are used to develop explicit control law design bounds for two different haptic display implementations: impedance display and admittance display. The strengths and weaknesses of each approach are illustrated through numerical and experimental results for a three degree-of-freedom device. The example highlights the ability of the proposed design procedure to handle some of the more difficult problems in control law synthesis for haptics, including structural flexibility and non-collocation of sensors and actuators. The authors are with the Department of Electrical Engineering University of Washington, Box 352500 Seattle, WA 98195-2500 * corresponding author submitted to IEEE Transactions on Control System Technology 9-7-99 2",
"title": ""
},
{
"docid": "9514201894e516d888c593dbade709bc",
"text": "Code obfuscation is a technique to transform a program into an equivalent one that is harder to be reverse engineered and understood. On Android, well-known obfuscation techniques are shrinking, optimization, renaming, string encryption, control flow transformation, etc. On the other hand, adversaries may also maliciously use obfuscation techniques to hide pirated or stolen software. If pirated software were obfuscated, it would be difficult to detect software theft. To detect illegal software transformed by code obfuscation, one possible approach is to measure software similarity between original and obfuscated programs and determine whether the obfuscated version is an illegal copy of the original version. In this paper, we analyze empirically the effects of code obfuscation on Android app similarity analysis. The empirical measurements were done on five different Android apps with DashO obfuscator. Experimental results show that similarity measures at bytecode level are more effective than those at source code level to analyze software similarity.",
"title": ""
},
{
"docid": "367ba3305217805d6068d6117a693a11",
"text": "Many efforts have been devoted to training generative latent variable models with autoregressive decoders, such as recurrent neural networks (RNN). Stochastic recurrent models have been successful in capturing the variability observed in natural sequential data such as speech. We unify successful ideas from recently proposed architectures into a stochastic recurrent model: each step in the sequence is associated with a latent variable that is used to condition the recurrent dynamics for future steps. Training is performed with amortized variational inference where the approximate posterior is augmented with a RNN that runs backward through the sequence. In addition to maximizing the variational lower bound, we ease training of the latent variables by adding an auxiliary cost which forces them to reconstruct the state of the backward recurrent network. This provides the latent variables with a task-independent objective that enhances the performance of the overall model. We found this strategy to perform better than alternative approaches such as KL annealing. Although being conceptually simple, our model achieves state-of-the-art results on standard speech benchmarks such as TIMIT and Blizzard and competitive performance on sequential MNIST. Finally, we apply our model to language modeling on the IMDB dataset where the auxiliary cost helps in learning interpretable latent variables.",
"title": ""
},
{
"docid": "1ddbe5990a1fc4fe22a9788c77307a9f",
"text": "The DENDRAL and Meta-DENDRAL programs are products of a large, interdisciplinary group of Stanford University scientists concerned with many and highly varied aspects of the mechanization of scientific reasoning and the formalization of scientific knowledge for this purpose. An early motivation for our work was to explore the power of existing AI methods, such as heuristic search, for reasoning in difficult scientific problems [7]. Another concern has been to exploit the AI methodology to understand better some fundamental questions in the philosophy of science, for example the processes by which explanatory hypotheses are discovered or judged adequate [18]. From the start, the project has had an applications dimension [9, 10, 27]. It has sought to develop \"expert level\" agents to assist in the solution of problems in their discipline that require complex symbolic reasoning. The over-all structure elucidation task is described below (Section 2) followed by a description of the role of the DENDRAL programs within that framework (Section 3). The applications dimension is the focus of this paper. In order to achieve high performance, the DENDRAL programs incorporate large amounts of knowledge about the area of science to which they are applied, structure elucidation in organic chemistry. A \"smart assistant\" for a chemist needs to be able to perform many tasks as well as an expert, but need not necessarily understand the domain at the same theoretical level as the expert. The Meta-DENDRAL programs (Section 4) use a weaker body of knowledge about the domain of mass spectrometry because their task is to formulate rules of mass spectrometry by induction from empirical data. A strong model of the domain would bias the rules unnecessarily.",
"title": ""
},
{
"docid": "f23ce789f76fe15e78a734caa5d2bc53",
"text": "The importance of location based services (LBS) is steadily increasing with progressive automation and interconnectedness of systems and processes. However, a comprehensive localization and navigation solution is still part of research. Especially for dynamic and harsh indoor environments, accurate and affordable localization and navigation remains a challenge. In this paper, we present a hybrid localization system providing position information and navigation aid to pedestrian in dynamic indoor environments, like construction sites, by combining an IMU and a spatial non-uniform UWB-network. The key contribution of this paper is a hybrid localization concept and experimental results, demonstrating in an application near scenario the enhancements introduced by the combination of an inertial navigation system (INS) and a spatial non-uniform UWB-network.",
"title": ""
},
{
"docid": "b9652cf6647d9c7c1f91a345021731db",
"text": "Context: The processes of estimating, planning and managing are crucial for software development projects, since the results must be related to several business strategies. The broad expansion of the Internet and the global and interconnected economy mean that Web development projects are often characterized by expressions like delivering as soon as possible, reducing time to market and adapting to undefined requirements. In this kind of environment, traditional methodologies based on predictive techniques sometimes do not offer very satisfactory results. The rise of Agile methodologies and practices has provided some useful tools that, combined with Web Engineering techniques, can help to establish a framework to estimate, manage and plan Web development projects. Objective: This paper presents a proposal for estimating, planning and managing Web projects, by combining some existing Agile techniques with Web Engineering principles, presenting them as a unified framework which uses business value to guide the delivery of features. Method: The proposal is analyzed by means of a case study, including a real-life project, in order to obtain relevant conclusions. Results: The results achieved after using the framework in a development project are presented, including interesting results on project planning and estimation, as well as on team productivity throughout the project. Conclusion: It is concluded that the framework can be useful for better managing Web-based projects, through a continuous value-based estimation and management process.",
"title": ""
},
{
"docid": "f2e30bbb95bc28a051128f92ee218156",
"text": "Silent sinus syndrome is a dysfunction of the maxillary sinus that induces a progressive and asymptomatic enophthalmos with prominent deep superior sulcus deformity. Two cases of silent sinus syndrome are reported, and the simultaneous management of both enophthalmos and superior sulcus deformity caused by this syndrome is discussed. The patients underwent surgical endoscopic maxillary meatotomy and transconjunctival subperiosteal implantation of porous polyethylene sheets. The treatment successfully corrected both the enophthalmos and the upper eyelid sulcus deformity. However, small degrees of vertical eye dystopia were observed. Silent sinus syndrome is a rare cause of enophthalmos and superior sulcus deformity. Orbital floor implants can be used to increase the volume of the orbital contents, but vertical eye dystopia is likely to be induced if this method of treatment is the only option chosen.",
"title": ""
},
{
"docid": "8eb84b8d29c8f9b71c92696508c9c580",
"text": "We introduce a novel in-ear sensor which satisfies key design requirements for wearable electroencephalography (EEG)-it is discreet, unobtrusive, and capable of capturing high-quality brain activity from the ear canal. Unlike our initial designs, which utilize custom earpieces and require a costly and time-consuming manufacturing process, we here introduce the generic earpieces to make ear-EEG suitable for immediate and widespread use. Our approach represents a departure from silicone earmoulds to provide a sensor based on a viscoelastic substrate and conductive cloth electrodes, both of which are shown to possess a number of desirable mechanical and electrical properties. Owing to its viscoelastic nature, such an earpiece exhibits good conformance to the shape of the ear canal, thus providing stable electrode-skin interface, while cloth electrodes require only saline solution to establish low impedance contact. The analysis highlights the distinguishing advantages compared with the current state-of-the-art in ear-EEG. We demonstrate that such a device can be readily used for the measurement of various EEG responses.",
"title": ""
},
{
"docid": "0dc1119bf47ffa6d032c464a54d5d173",
"text": "The use of an analogy from a semantically distant domain to guide the problemsolving process was investigated. The representation of analogy in memory and processes involved in the use of analogies were discussed theoretically and explored in five experiments. In Experiment I oral protocols were used to examine the processes involved in solving a problem by analogy. In all experiments subjects who first read a story about a military problem and its solution tended to generate analogous solutions to a medical problem (Duncker’s “radiation problem”), provided they were given a hint to use the story to help solve the problem. Transfer frequency was reduced when the problem presented in the military story was substantially disanalogous to the radiation problem, even though the solution illustrated in the story corresponded to an effective radiation solution (Experiment II). Subjects in Experiment III tended to generate analogous solutions to the radiation problem after providing their own solutions to the military problem. Subjects were able to retrieve the story from memory and use it to generate an analogous solution, even when the critical story had been memorized in the context of two distractor stories (Experiment IV). However, when no hint to consider the story was given, frequency of analogous solutions decreased markedly. This decrease in transfer occurred when the story analogy was presented in a recall task along with distractor stories (Experiment IV), when it was presented alone, and when it was presented in between two attempts to solve the problem (Experiment V). Component processes and strategic variations in analogical problem solving were discussed. Issues related to noticing analogies and accessing them in memory were also examined, as was the relationship of analogical reasoning to other cognitive tasks.",
"title": ""
},
{
"docid": "15c3ddb9c01d114ab7d09f010195465b",
"text": "In this paper we have described a solution for supporting independent living of the elderly by means of equipping their home with a simple sensor network to monitor their behaviour. Standard home automation sensors including movement sensors and door entry point sensors are used. By monitoring the sensor data, important information regarding any anomalous behaviour will be identified. Different ways of visualizing large sensor data sets and representing them in a format suitable for clustering the abnormalities are also investigated. In the latter part of this paper, recurrent neural networks are used to predict the future values of the activities for each sensor. The predicted values are used to inform the caregiver in case anomalous behaviour is predicted in the near future. Data collection, classification and prediction are investigated in real home environments with elderly occupants suffering from dementia.",
"title": ""
},
{
"docid": "c6035abd67504564fbf4b8c6015beb2e",
"text": "Intermediaries can choose between functioning as a marketplace (on which suppliers sell their products directly to buyers) or as a reseller (purchasing products from suppliers and selling them to buyers). We model this as a decision between whether control rights over a non-contractible decision variable (the choice of some marketing activity) are better held by suppliers (the marketplace-mode) or by the intermediary (the reseller-mode). Whether the marketplace or the reseller mode is preferred depends on whether independent suppliers or the intermediary have more important information relevant to the optimal tailoring of marketing activities for each specific product. We show that this tradeoff is shifted towards the reseller-mode when marketing activities create spillovers across products and when network effects lead to unfavorable expectations about supplier participation. If the reseller has a variable cost advantage (respectively, disadvantage) relative to the marketplace then the tradeoff is shifted towards the marketplace for long-tail (respectively, short-tail) products. We thus provide a theory of which products an intermediary should offer in each mode. We also provide some empirical evidence that supports our main results. JEL classification: D4, L1, L5",
"title": ""
},
{
"docid": "71f7f072ca5356927aab0112daf2b4f2",
"text": "In electrical power engineering, reinforcement learning algorithms can be used to model the strategies of electricity market participants. However, traditional value function based reinforcement learning algorithms suffer from convergence issues when used with value function approximators. Function approximation is required in this domain to capture the characteristics of the complex and continuous multivariate problem space. The contribution of this paper is the comparison of policy gradient reinforcement learning methods, using artificial neural networks for policy function approximation, with traditional value function based methods in simulations of electricity trade. The methods are compared using an AC optimal power flow based power exchange auction market model and a reference electric power system model.",
"title": ""
},
{
"docid": "7b755f9b49187e9a77efc4a2327c80ad",
"text": "In this paper, each document is represented by a weighted graph called a text relationship map. In the graph, each node represents a vector of nouns in a sentence, an undirected link connects two nodes if two sentences are semantically related, and a weight on the link is a value of the similarity between a pair of sentences. The vector similarity can be computed as the inner product between corresponding vector elements. The similarity is based on the word overlap between the corresponding sentences. The importance of a node on the map, called an aggregate similarity, is defined as the sum of weights on the links connecting it to other nodes on the map. In this paper, we present a Korean text summarization system using the aggregate similarity. To evaluate our system, we used two test collections: one collection (PAPER-InCon) consists of 100 papers in the domain of computer science; the other collection (NEWS) is composed of 105 articles in the newspapers. Under the compression rate of 20%, we achieved the recall of 46.6% (PAPER-InCon) and 30.5% (NEWS), and the precision of 76.9% (PAPER-InCon) and 42.3% (NEWS). Experiments show that our system outperforms two commercial systems.",
"title": ""
},
{
"docid": "7e6b6f603f18a60b50ac09d7ab8a3fc9",
"text": "We present a probabilistic language model for time-stamped text data which tracks the semantic evolution of individual words over time. The model represents words and contexts by latent trajectories in an embedding space. At each moment in time, the embedding vectors are inferred from a probabilistic version of word2vec (Mikolov et al., 2013b). These embedding vectors are connected in time through a latent diffusion process. We describe two scalable variational inference algorithms—skip-gram smoothing and skip-gram filtering—that allow us to train the model jointly over all times; thus learning on all data while simultaneously allowing word and context vectors to drift. Experimental results on three different corpora demonstrate that our dynamic model infers word embedding trajectories that are more interpretable and lead to higher predictive likelihoods than competing methods that are based on static models trained separately on time slices.",
"title": ""
}
] |
scidocsrr
|
cb1efc51cefd096489c5bf371add38d2
|
Maximizing Network Topology Lifetime Using Mobile Node Rotation
|
[
{
"docid": "8ba2b376995e3a6a02720a73012d590b",
"text": "This paper focuses on reducing the power consumption of wireless microsensor networks. Therefore, a communication protocol named LEACH (Low-Energy Adaptive Clustering Hierarchy) is modified. We extend LEACH’s stochastic clusterhead selection algorithm by a deterministic component. Depending on the network configuration an increase of network lifetime by about 30 % can be accomplished. Furthermore, we present a new approach to define lifetime of microsensor networks using three new metrics FND (First Node Dies), HNA (Half of the Nodes Alive), and LND (Last Node Dies).",
"title": ""
}
] |
[
{
"docid": "376f28143deecc7b95fe45d54dd16bb6",
"text": "We investigate the problem of lung nodule malignancy suspiciousness (the likelihood of nodule malignancy) classification using thoracic Computed Tomography (CT) images. Unlike traditional studies primarily relying on cautious nodule segmentation and time-consuming feature extraction, we tackle a more challenging task of directly modeling raw nodule patches and building an end-to-end machine-learning architecture for classifying lung nodule malignancy suspiciousness. We present a Multi-crop Convolutional Neural Network (MC-CNN) to automatically extract nodule salient information by employing a novel multi-crop pooling strategy which crops different regions from convolutional feature maps and then applies max-pooling different times. Extensive experimental results show that the proposed method not only achieves state-of-the-art nodule suspiciousness classification performance, but also effectively characterizes nodule semantic attributes (subtlety and margin) and nodule diameter which are potentially helpful in modeling nodule malignancy. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "32874ff6ff0a4556950281fb300198ed",
"text": "In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the expected per-round payoff of our algorithm approaches that of the best arm at the rate",
"title": ""
},
{
"docid": "9e3de4720dade2bb73d78502d7cccc8b",
"text": "Skeletonization is a way to reduce dimensionality of digital objects. Here, we present an algorithm that computes the curve skeleton of a surface-like object in a 3D image, i.e., an object that in one of the three dimensions is at most two voxels thick. A surface-like object consists of surfaces and curves crossing each other. Its curve skeleton is a 1D set centred within the surface-like object and with preserved topological properties. It can be useful to achieve a qualitative shape representation of the object with reduced dimensionality. The basic idea behind our algorithm is to detect the curves and the junctions between different surfaces and prevent their removal as they retain the most significant shape representation. © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "8e3b1f49ca8a5afe20a9b66e0088a56a",
"text": "Describing the contents of images is a challenging task for machines to achieve. It requires not only accurate recognition of objects and humans, but also their attributes and relationships as well as scene information. It would be even more challenging to extend this process to identify falls and hazardous objects to aid elderly or users in need of care. This research makes initial attempts to deal with the above challenges to produce multi-sentence natural language description of image contents. It employs a local region based approach to extract regional image details and combines multiple techniques including deep learning and attribute learning through the use of machine learned features to create high level labels that can generate detailed description of real-world images. The system contains the core functions of scene classification, object detection and classification, attribute learning, relationship detection and sentence generation. We have also further extended this process to deal with open-ended fall detection and hazard identification. In comparison to state-of-the-art related research, our system shows superior robustness and flexibility in dealing with test images from new, unrelated domains, which poses great challenges to many existing methods. Our system is evaluated on a subset from Flickr8k and Pascal VOC 2012 and achieves an impressive average BLEU score of 46 and outperforms related research by a significant margin of 10 BLEU score when evaluated with a small dataset of images containing falls and hazardous objects. It also shows impressive performance when evaluated using a subset of IAPR TC-12 dataset.",
"title": ""
},
{
"docid": "305a6b7cfcc560e1356fa7a44fee8de2",
"text": "Power MOSFET designs have been moving to higher performance, particularly in the medium-voltage area (60 V to 300 V). New designs require lower specific on-resistance (RSP), thus forcing designers to push the envelope of increasing the electric field stress on the shielding oxide, reducing the cell pitch, and increasing the epitaxial (epi) drift doping to reduce on-resistance. In doing so, time-dependent avalanche instabilities have become a concern for oxide charge balanced power MOSFETs. Avalanche instabilities can initiate in the active cell and/or the termination structures. These instabilities cause the avalanche breakdown to increase and/or decrease with increasing time in avalanche. They become a reliability risk when the drain-to-source breakdown voltage (BVdss) degrades below the operating voltage of the application circuit. This paper will explain a mechanism for these avalanche instabilities and propose an optimum design for the charge balance region. TCAD simulation was employed to give insight into the mechanism. Finally, measured data will be presented to substantiate the theory.",
"title": ""
},
{
"docid": "2dc69fff31223cd46a0fed60264b2de1",
"text": "The authors offer a framework for conceptualizing collective identity that aims to clarify and make distinctions among dimensions of identification that have not always been clearly articulated. Elements of collective identification included in this framework are self-categorization, evaluation, importance, attachment and sense of interdependence, social embeddedness, behavioral involvement, and content and meaning. For each element, the authors take note of different labels that have been used to identify what appear to be conceptually equivalent constructs, provide examples of studies that illustrate the concept, and suggest measurement approaches. Further, they discuss the potential links between elements and outcomes and how context moderates these relationships. The authors illustrate the utility of the multidimensional organizing framework by analyzing the different configuration of elements in 4 major theories of identification.",
"title": ""
},
{
"docid": "9aaae1995134469ffddea73baa7b911d",
"text": "We present probabilistic neural programs, a framework for program induction that permits flexible specification of both a computational model and inference algorithm while simultaneously enabling the use of deep neural networks. Probabilistic neural programs combine a computation graph for specifying a neural network with an operator for weighted nondeterministic choice. Thus, a program describes both a collection of decisions as well as the neural network architecture used to make each one. We evaluate our approach on a challenging diagram question answering task where probabilistic neural programs correctly execute nearly twice as many programs as a baseline model.",
"title": ""
},
{
"docid": "49fe73e28714721e6dc64a3bbeadecc5",
"text": "Fingerprint is the most popular biometric trait due to the perceived uniqueness and persistence of friction ridge pattern on human fingers [1]. Following the introduction of iPhone 5S with Touch ID fingerprint sensor in September 2013, most of the mobile phones, such as iPhone 5s/6/6+, Samsung Galaxy S5/S6, HTC One Max, Huawei Honor 7, Meizu MX4 Pro and others, now come with embedded fingerprint sensors for phone unlock. It has been forecasted that 50% of smartphones sold by 2019 will have an embedded fingerprint sensor [2]. With the introduction of Apple Pay, Samsung Pay and Android Pay, fingerprint recognition on mobile devices is leveraged for more than just for device unlock; it can also be used for secure mobile payment and other transactions.",
"title": ""
},
{
"docid": "cb33570878c6c66601fb0c73b148a6f3",
"text": "For the automated assessment of solutions to programming exercises, a multitude of grader programs for different programming languages have by now been developed. To give both learners and teachers access to as many graders as possible through their familiar LMS, the concept of a generic web service interface (Grappa) is presented, which was evaluated in the context of a university course.",
"title": ""
},
{
"docid": "5428ed5b458b8bae73d58c8069ad3cfd",
"text": "Software-Defined Radio (SDR) is a technique using software to make the radio functions hardware independent. SDR is starting to be the basis of advanced wireless communication systems such as Joint Tactical Radio System (JTRS). More interestingly, the adoption of SDR technology by JTRS program is followed by military satellite communications programs. In the development of the SDR implementation, GNU Radio emerged as an open source tool that provides functions to support SDR. Later, Universal Software Radio Peripheral (USRP) was developed as a low cost, high-speed SDR platform. USRP in conjunction with GNU Radio is a very powerful tool to develop SDR based wireless communication system. This paper discusses the employment of GNU Radio and USRP for developing software based wireless transmission system. Furthermore, retransmission scheme, buffering and Leaky Bucket Algorithm are implemented to solve the transmission error and environment interference problems found during the implementation.",
"title": ""
},
{
"docid": "1e838d80ecd8eba0f076c72d52feeb2d",
"text": "In this paper, we present an approach for graph signal representation of EEG toward deep learning-based modeling. In order to overcome the low dimensionality and spatial resolution of EEG, our approach divides the EEG signal into multiple frequency bands, builds an intra-band graph for each of them, and merges them with inter-band connectivity to obtain rich graph representation. The signal features on the vertices are also obtained from EEG. Finally, the graph signals are learned with graph convolutional neural networks. Experimental results on visual content identification using EEG are presented and various ways of defining intra-band and inter-band connections are examined.",
"title": ""
},
{
"docid": "461ee7b6a61a6d375a3ea268081f80f5",
"text": "In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as cloud computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine several representative applications of big data, including enterprise management, Internet of Things, online social networks, medical applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions.",
"title": ""
},
{
"docid": "ca344cb8317348f4b56d8f760784b999",
"text": "Variational autoencoders (VAE) combined with hierarchical RNNs have emerged as a powerful framework for conversation modeling. However, they suffer from the notorious degeneration problem, where the decoders learn to ignore latent variables and reduce to vanilla RNNs. We empirically show that this degeneracy occurs mostly due to two reasons. First, the expressive power of hierarchical RNN decoders is often high enough to model the data using only its decoding distributions without relying on the latent variables. Second, the conditional VAE structure whose generation process is conditioned on a context, makes the range of training targets very sparse; that is, the RNN decoders can easily overfit to the training data ignoring the latent variables. To solve the degeneration problem, we propose a novel model named Variational Hierarchical Conversation RNNs (VHCR), involving two key ideas of (1) using a hierarchical structure of latent variables, and (2) exploiting an utterance drop regularization. With evaluations on two datasets of Cornell Movie Dialog and Ubuntu Dialog Corpus, we show that our VHCR successfully utilizes latent variables and outperforms state-of-the-art models for conversation generation. Moreover, it can perform several new utterance control tasks, thanks to its hierarchical latent structure.",
"title": ""
},
{
"docid": "e4d098324a92421598035b64ff4d8392",
"text": "Critical care delivery is a complex, expensive, error prone, medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area.",
"title": ""
},
{
"docid": "b8e2bd6e7a852f3995813397920ababf",
"text": "Abstraction of the α-proton from acetyl-CoA by Asp375 creates the enolate form. It was long believed that this was converted into an enol (e.g., by proton transfer from His274), but several computer modeling studies (in particular using high-level QM/MM methods) indicated that the enolate is the true transient intermediate species in the enzyme reaction (e.g., Mulholland et al. 2000; Van der Kamp et al. 2010). The enolate form is stabilized by electrostatic interactions in the enzyme active site, which include conventional hydrogen bonds from His274 and a conserved water molecule to the enolate oxygen (Fig. 2); no “low-barrier hydrogen bonds” are involved (Mulholland et al. 2000). When the enolate intermediate is formed, the carbonyl carbon of oxaloacetate can undergo a nucleophilic attack. Citryl-CoA is formed as an intermediate, which requires proton donation. Initially, it was suggested that His320 donated the proton, but high-level QM/MM studies indicate that donation by Arg329 is most likely (Van der Kamp et al. 2008) (Fig. 2). This unusual role of an arginine as proton donor probably prevents overstabilization of the citryl-CoA intermediate and may trigger opening of the enzyme active site (vide supra), which is likely to be important for hydrolysis. Through its involvement in catalysis, Arg329 is thereby proposed to provide a mechanism for coupling condensation and hydrolysis in citrate synthase, and for coupling the chemical and conformational changes during the catalytic cycle (Van der Kamp et al. 2008). Citryl-CoA subsequently undergoes hydrolysis to form citrate and CoA-SH. Asp375 is implicated to play a role in this step, but the precise mechanism for this step is as yet unknown. The breaking of the thioester linkage is energetically very favorable, which helps to drive the reaction in the forward direction, making it possible for the citric acid cycle to turn over, even with the typically low concentration of oxaloacetate in vivo (Voet and Voet 2011).",
"title": ""
},
{
"docid": "d548f1b5593109d68c9f9167d18909ed",
"text": "Recently, the development of three-dimensional large-scale integration (3D-LSI) has been accelerated. Its stage has changed from the research level or limited production level to the investigation level with a view to mass production [1]–[10]. The 3D-LSI using through-silicon via (TSV) has the simplest structure and is expected to realize a high-performance, high-functionality, and high-density LSI cube. This paper describes the current and future 3D-LSI technologies with TSV.",
"title": ""
},
{
"docid": "cfde4e719601cf861addeac2b1ce2d81",
"text": "In the face of a growing workload and dwindling resources, the US National Library of Medicine (NLM) created the Indexing Initiative project in the mid-1990s. This cross-library team’s mission is to explore indexing methodologies that can help ensure that MEDLINE and other NLM document collections maintain their quality and currency and thereby contribute to NLM’s mission of maintaining quality access to the biomedical literature. The NLM Medical Text Indexer (MTI) is the main product of this project and has been providing indexing recommendations based on the Medical Subject Headings (MeSH) vocabulary since 2002. In 2011, NLM expanded MTI’s role by designating it as the first-line indexer (MTIFL) for a few journals; today the MTIFL workflow includes about 100 journals and continues to increase. Due to a close collaboration with the Index Section at NLM, MTI continues to grow and expand its ability to provide assistance to the indexers. This paper provides an overview of MTI’s functionality, performance, and its evolution over the years.",
"title": ""
},
{
"docid": "79d044e9d88a510d9ae547bb1048edc0",
"text": "TimeStream is a distributed system designed specifically for low-latency continuous processing of big streaming data on a large cluster of commodity machines. The unique characteristics of this emerging application domain have led to a significantly different design from the popular MapReduce-style batch data processing. In particular, we advocate a powerful new abstraction called resilient substitution that caters to the specific needs in this new computation model to handle failure recovery and dynamic reconfiguration in response to load changes. Several real-world applications running on our prototype have been shown to scale robustly with low latency while at the same time maintaining the simple and concise declarative programming model. TimeStream handles an on-line advertising aggregation pipeline at a rate of 700,000 URLs per second with a 2-second delay, while performing sentiment analysis of Twitter data at a peak rate close to 10,000 tweets per second, with approximately 2-second delay.",
"title": ""
},
{
"docid": "7fc6e701aacc7d014916b9b47b01be16",
"text": "We compare recent approaches to community structure identification in terms of sensitivity and computational cost. The recently proposed modularity measure is revisited and the performance of the methods as applied to ad hoc networks with known community structure, is compared. We find that the most accurate methods tend to be more computationally expensive, and that both aspects need to be considered when choosing a method for practical purposes. The work is intended as an introduction as well as a proposal for a standard benchmark test of community detection methods.",
"title": ""
},
{
"docid": "572cdf84eebfe5bf28d137ce5c4179d4",
"text": "Stock market decision making is a very challenging and difficult task of financial data prediction. Predicting stock market movements with high accuracy yields profit for investors. Because of the complexity of stock market financial data, developing efficient and accurate prediction models is very difficult. This study attempted to develop models for predicting the stock market and deciding whether to buy or hold a stock using data mining and machine learning techniques. The classification techniques used in these models are naive Bayes and random forest classification. Technical indicators are calculated from the stock prices based on time-series data and are used as inputs to the proposed prediction models. Ten years of stock market data have been used for prediction. Based on this data set, the models are capable of generating a buy/hold signal for the stock market as output. The main goal of this paper is to generate decisions according to user requirements, such as the amount to be invested, the investment duration, the minimum profit, and the maximum loss, using machine learning and data analysis techniques.",
"title": ""
}
] |
scidocsrr
|
73029a1266cec9efb2777e1f915c7c94
|
Predictive positioning and quality of service ridesharing for campus mobility on demand systems
|
[
{
"docid": "40f21a8702b9a0319410b716bda0a11e",
"text": "A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90's. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance. An important aspect of our study is the use of a variety of performance criteria to evaluate the learning methods.",
"title": ""
}
] |
[
{
"docid": "a75a8a6a149adf80f6ec65dea2b0ec0d",
"text": "This research addresses the role of lyrics in the music emotion recognition process. Our approach is based on several state-of-the-art features complemented by novel stylistic, structural and semantic features. To evaluate our approach, we created a ground truth dataset containing 180 song lyrics, according to Russell's emotion model. We conduct four types of experiments: regression and classification by quadrant, arousal and valence categories. Compared to the state-of-the-art features (n-grams, the baseline), adding other features, including novel features, improved the F-measure from 69.9, 82.7 and 85.6 percent to 80.1, 88.3 and 90 percent, respectively, for the three classification experiments. To study the relation between features and emotions (quadrants), we performed experiments to identify the best features that allow us to describe and discriminate each quadrant. To further validate these experiments, we built a validation set comprising 771 lyrics extracted from the AllMusic platform, having achieved 73.6 percent F-measure in the classification by quadrants. We also conducted experiments to identify interpretable rules that show the relation between features and emotions and the relation among features. Regarding regression, results show that, compared to similar studies for audio, we achieve a similar performance for arousal and a much better performance for valence.",
"title": ""
},
{
"docid": "387e02e65ff994691ae8ae95b7c7f69c",
"text": "Real-world data sets usually have many features, which increases the complexity of the data mining task. Feature selection, as a preprocessing step to data mining, has been shown to be very effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving comprehensibility. The aim of feature selection is to find the optimal feature subsets. Rough sets theory provides a mathematical approach to finding an optimal feature subset, but this approach is time consuming. In this paper, we propose a novel heuristic algorithm based on rough sets theory to find the feature subset. This algorithm employs the appearance frequency of attributes as heuristic information. Experimental results show that in most cases our algorithm can find the optimal feature subset quickly and efficiently.",
"title": ""
},
{
"docid": "209ff14abd0b16496af29c143b0fa274",
"text": "Automated text categorization is an important technique for many web applications, such as document indexing, document filtering, and cataloging web resources. Many different approaches have been proposed for the automated text categorization problem. Among them, centroid-based approaches have the advantages of short training time and testing time due to its computational efficiency. As a result, centroid-based classifiers have been widely used in many web applications. However, the accuracy of centroid-based classifiers is inferior to SVM, mainly because centroids found during construction are far from perfect locations.\n We design a fast Class-Feature-Centroid (CFC) classifier for multi-class, single-label text categorization. In CFC, a centroid is built from two important class distributions: inter-class term index and inner-class term index. CFC proposes a novel combination of these indices and employs a denormalized cosine measure to calculate the similarity score between a text vector and a centroid. Experiments on the Reuters-21578 corpus and 20-newsgroup email collection show that CFC consistently outperforms the state-of-the-art SVM classifiers on both micro-F1 and macro-F1 scores. Particularly, CFC is more effective and robust than SVM when data is sparse.",
"title": ""
},
{
"docid": "d54ad1a912a0b174d1f565582c6caf1c",
"text": "This paper presents a new novel design of a smart walker for rehabilitation purpose by patients in hospitals and rehabilitation centers. The design features a full frame walker that provides secured and stable support while being foldable and compact. It also has smart features such as telecommunication and patient activity monitoring.",
"title": ""
},
{
"docid": "a8f86ab8e448fe7e69e988de67668b96",
"text": "Batch Normalization (BN) has proven to be an effective algorithm for deep neural network training by normalizing the input to each neuron and reducing the internal covariate shift. The space of weight vectors in the BN layer can be naturally interpreted as a Riemannian manifold, which is invariant to linear scaling of weights. Following the intrinsic geometry of this manifold provides a new learning rule that is more efficient and easier to analyze. We also propose intuitive and effective gradient clipping and regularization methods for the proposed algorithm by utilizing the geometry of the manifold. The resulting algorithm consistently outperforms the original BN on various types of network architectures and datasets.",
"title": ""
},
{
"docid": "a7373d69f5ff9d894a630cc240350818",
"text": "The Capability Maturity Model for Software (CMM), developed by the Software Engineering Institute, and the ISO 9000 series of standards, developed by the International Standards Organization, share a common concern with quality and process management. The two are driven by similar concerns and intuitively correlated. The purpose of this report is to contrast the CMM and ISO 9001, showing both their differences and their similarities. The results of the analysis indicate that, although an ISO 9001-compliant organization would not necessarily satisfy all of the level 2 key process areas, it would satisfy most of the level 2 goals and many of the level 3 goals. Because there are practices in the CMM that are not addressed in ISO 9000, it is possible for a level 1 organization to receive ISO 9001 registration; similarly, there are areas addressed by ISO 9001 that are not addressed in the CMM. A level 3 organization would have little difficulty in obtaining ISO 9001 certification, and a level 2 organization would have significant advantages in obtaining certification.",
"title": ""
},
{
"docid": "b6aa2f8fcbddb651207b4207f676320d",
"text": "Test coverage prediction for board assemblies has an important function in, among others, test engineering, test cost modeling, test strategy definition and product quality estimation. Introducing a method that defines how this coverage is calculated can increase the value of such prediction across the electronics industry. There are three aspects to test coverage calculation: fault modeling, coverage-per-fault and total coverage. An abstraction level for fault categories is introduced, called MPS (material, placement, soldering), that enables us to compare coverage results using different fault models. Additionally, the rule-based fault coverage estimation and the weighted coverage calculation are discussed. This paper was submitted under the ITC Special Board and System Test Call-for-Papers that had an extended due date. As such, the full text of the paper was not available in time for inclusion in the general volume of the 2003 ITC Proceedings. The full text is available in the 2003 ITC Proceedings, Board and System Test Track.",
"title": ""
},
{
"docid": "ac1302f482309273d9e61fdf0f093e01",
"text": "Retinal vessel segmentation is an indispensable step for automatic detection of retinal diseases with fundoscopic images. Though many approaches have been proposed, existing methods tend to miss fine vessels or allow false positives at terminal branches. Beyond under-segmentation, over-segmentation is also problematic when quantitative studies need to measure the precise width of vessels. In this paper, we present a method that generates a precise map of retinal vessels using generative adversarial training. Our method achieves a dice coefficient of 0.829 on the DRIVE dataset and 0.834 on the STARE dataset, which is the state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "f355ed837561186cff4e7492470d6ae7",
"text": "Notions of Bayesian analysis are reviewed, with emphasis on Bayesian modeling and Bayesian calculation. A general hierarchical model for time series analysis is then presented and discussed. Both discrete-time and continuous-time formulations are discussed. A brief overview of generalizations of the fundamental hierarchical time series model concludes the article. Much of the Bayesian viewpoint can be argued (as by Jeffreys and Jaynes, for example) as direct application of the theory of probability. In this article the suggested approach for the construction of Bayesian time series models relies on probability theory to provide decompositions of complex joint probability distributions. Specifically, I refer to the familiar factorization of a joint density into an appropriate product of conditionals. Let x and y represent two random variables. I will not differentiate between random variables and their realizations. Also, I will use an increasingly popular generic notation for probability densities: [x] represents the density of x, [x|y] is the conditional density of x given y, and [x, y] denotes the joint density of x and y. In this notation we can write \"Bayes's Theorem\" as [y|x] = [x|y][y]/[x].",
"title": ""
},
{
"docid": "76262c43c175646d7a00e02a7a49ab81",
"text": "Self-compassion has been linked to higher levels of psychological well-being. The current study evaluated whether this effect also extends to a more adaptive food intake process. More specifically, this study investigated the relationship between self-compassion and intuitive eating among 322 college women. In order to further clarify the nature of this relationship this research additionally examined the indirect effects of self-compassion on intuitive eating through the pathways of distress tolerance and body image acceptance and action using both parametric and non-parametric bootstrap resampling analytic procedures. Results based on responses to the self-report measures of the constructs of interest indicated that individual differences in body image acceptance and action (β = .31, p < .001) but not distress tolerance (β = .00, p = .94) helped explain the relationship between self-compassion and intuitive eating. This effect was retained in a subsequent model adjusted for body mass index (BMI) and self-esteem (β = .19, p < .05). Results provide preliminary support for a complementary perspective on the role of acceptance in the context of intuitive eating to that of existing theory and research. The present findings also suggest the need for additional research as it relates to the development and fostering of self-compassion as well as the potential clinical implications of using acceptance-based interventions for college-aged women currently engaging in or who are at risk for disordered eating patterns.",
"title": ""
},
{
"docid": "e415deac22afd9221995385e681b7f63",
"text": "AIM & OBJECTIVES\nThe purpose of this in vitro study was to evaluate and compare the microleakage of pit and fissure sealants after using six different preparation techniques: (a) brush, (b) pumice slurry application, (c) bur, (d) air polishing, (e) air abrasion, and (f) longer etching time.\n\n\nMATERIAL & METHOD\nThe study was conducted on 60 caries-free first premolars extracted for orthodontic purpose. These teeth were randomly assigned to six groups of 10 teeth each. Teeth were prepared using one of six occlusal surface treatments prior to placement of Clinpro 3M ESPE light-cured sealant. The teeth were thermocycled for 500 cycles and stored in 0.9% normal saline. Teeth were sealed apically and coated with nail varnish 1 mm from the margin and stained in 1% methylene blue for 24 hours. Each tooth was divided buccolingually parallel to the long axis of the tooth, yielding two sections per tooth for analysis. The surfaces were scored from 0 to 2 for the extent of microleakage.\n\n\nSTATISTICAL ANALYSIS\nResults obtained for microleakage were analyzed by using t-tests at the sectional level and the chi-square test and analysis of variance (ANOVA) at the group level.\n\n\nRESULTS\nThe results of the round bur group were significantly superior when compared to all other groups. The application of air polishing and air abrasion showed better results than pumice slurry, bristle brush, and longer etching time. The round bur group was the most successful cleaning and preparing technique. Air polishing and air abrasion produced significantly less microleakage than traditional pumice slurry, bristle brush, and longer etching time.",
"title": ""
},
{
"docid": "3c999f3104ac98b010a2147c7b8ddaa0",
"text": "Many Big Data technologies were built to enable the processing of human generated data, setting aside the enormous amount of data generated from Machine-to-Machine (M2M) interactions. M2M interactions create real-time data streams that are much more structured, often in the form of series of event occurrences. In this paper, we provide an overview on the main research issues confronted by existing Complex Event Processing (CEP) techniques, as a starting point for Big Data applications that enable the monitoring of complex event occurrences in M2M interactions.",
"title": ""
},
{
"docid": "77a156afb22bbecd37d0db073ef06492",
"text": "Rhonda Farrell University of Fairfax, Vienna, VA ABSTRACT While acknowledging the many benefits that cloud computing solutions bring to the world, it is important to note that recent research and studies of these technologies have identified a myriad of potential governance, risk, and compliance (GRC) issues. While industry clearly acknowledges their existence and seeks to them as much as possible, timing-wise it is still well before the legal framework has been put in place to adequately protect and adequately respond to these new and differing global challenges. This paper seeks to inform the potential cloud adopter, not only of the perceived great technological benefit, but to also bring to light the potential security, privacy, and related GRC issues which will need to be prioritized, managed, and mitigated before full implementation occurs.",
"title": ""
},
{
"docid": "8308358ee1d9040b3f62b646edcc8578",
"text": "The application of GaN on SiC technology to wideband power amplifier MMICs is explored. The unique characteristics of GaN on SiC applied to reactively matched and distributed wideband circuit topologies are discussed, including comparison to GaAs technology. A 2 – 18 GHz 11W power amplifier MMIC is presented as an example.",
"title": ""
},
{
"docid": "29495e389441ff61d5efad10ad38e995",
"text": "The natural world is infinitely diverse, yet this diversity arises from a relatively small set of coherent properties and rules, such as the laws of physics or chemistry. We conjecture that biological intelligent systems are able to survive within their diverse environments by discovering the regularities that arise from these rules primarily through unsupervised experiences, and representing this knowledge as abstract concepts. Such representations possess useful properties of compositionality and hierarchical organisation, which allow intelligent agents to recombine a finite set of conceptual building blocks into an exponentially large set of useful new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such concepts in the visual domain. We first use the previously published β-VAE (Higgins et al., 2017a) architecture to learn a disentangled representation of the latent structure of the visual world, before training SCAN to extract abstract concepts grounded in such disentangled visual primitives through fast symbol association. Our approach requires very few pairings between symbols and images and makes no assumptions about the choice of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of compositional visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to invent and learn novel visual concepts through recombination of the few learnt concepts.",
"title": ""
},
{
"docid": "12344e450dbfba01476353e38f83358f",
"text": "This paper explores four issues that have emerged from the research on social, cognitive and teaching presence in an online community of inquiry. The early research in the area of online communities of inquiry has raised several issues with regard to the creation and maintenance of social, cognitive and teaching presence that require further research and analysis. The other overarching issue is the methodological validity associated with the community of inquiry framework. The first issue is about shifting social presence from socio-emotional support to a focus on group cohesion (from personal to purposeful relationships). The second issue concerns the progressive development of cognitive presence (inquiry) from exploration to resolution. That is, moving discussion beyond the exploration phase. The third issue has to do with how we conceive of teaching presence (design, facilitation, direct instruction). More specifically, is there an important distinction between facilitation and direct instruction? Finally, the methodological issue concerns qualitative transcript analysis and the validity of the coding protocol.",
"title": ""
},
{
"docid": "9b96a97426917b18dab401423e777b92",
"text": "Anatomical and biophysical modeling of left atrium (LA) and proximal pulmonary veins (PPVs) is important for clinical management of several cardiac diseases. Magnetic resonance imaging (MRI) allows qualitative assessment of LA and PPVs through visualization. However, there is a strong need for an advanced image segmentation method to be applied to cardiac MRI for quantitative analysis of LA and PPVs. In this study, we address this unmet clinical need by exploring a new deep learning-based segmentation strategy for quantification of LA and PPVs with high accuracy and heightened efficiency. Our approach is based on a multi-view convolutional neural network (CNN) with an adaptive fusion strategy and a new loss function that allows fast and more accurate convergence of the backpropagation based optimization. After training our network from scratch by using more than 60K 2D MRI images (slices), we have evaluated our segmentation strategy to the STACOM 2013 cardiac segmentation challenge benchmark. Qualitative and quantitative evaluations, obtained from the segmentation challenge, indicate that the proposed method achieved the state-of-the-art sensitivity (90%), specificity (99%), precision (94%), and efficiency levels (10 seconds in GPU, and 7.5 minutes in CPU).",
"title": ""
},
{
"docid": "0b12d6a973130f7317956326320ded03",
"text": "We present simple and computationally efficient nonparametric estimators of Rényi entropy and mutual information based on an i.i.d. sample drawn from an unknown, absolutely continuous distribution over R. The estimators are calculated as the sum of p-th powers of the Euclidean lengths of the edges of the ‘generalized nearest-neighbor’ graph of the sample and the empirical copula of the sample respectively. For the first time, we prove the almost sure consistency of these estimators and upper bounds on their rates of convergence, the latter of which under the assumption that the density underlying the sample is Lipschitz continuous. Experiments demonstrate their usefulness in independent subspace analysis.",
"title": ""
},
{
"docid": "e9ff17015d40f5c6dd5091557f336f43",
"text": "Web sites that accept and display content such as wiki articles or comments typically filter the content to prevent injected script code from running in browsers that view the site. The diversity of browser rendering algorithms and the desire to allow rich content make filtering quite difficult, however, and attacks such as the Samy and Yamanner worms have exploited filtering weaknesses. This paper proposes a simple alternative mechanism for preventing script injection called Browser-Enforced Embedded Policies (BEEP). The idea is that a web site can embed a policy in its pages that specifies which scripts are allowed to run. The browser, which knows exactly when it will run a script, can enforce this policy perfectly. We have added BEEP support to several browsers, and built tools to simplify adding policies to web applications. We found that supporting BEEP in browsers requires only small and localized modifications, modifying web applications requires minimal effort, and enforcing policies is generally lightweight.",
"title": ""
}
] |
scidocsrr
|
dfa37f61a1e9fd66981f5ad550705234
|
Visualizing Bitcoin Flows of Ransomware: WannaCry One Week Later
|
[
{
"docid": "32ca9711622abd30c7c94f41b91fa3f6",
"text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.",
"title": ""
}
] |
[
{
"docid": "8ebff9573757d0b79236b35e42a3a7c6",
"text": "Joint multichannel enhancement and acoustic modeling using neural networks has shown promise over the past few years. However, one shortcoming of previous work [1, 2, 3] is that the filters learned during training are fixed for decoding, potentially limiting the ability of these models to adapt to previously unseen or changing conditions. In this paper we explore a neural network adaptive beamforming (NAB) technique to address this issue. Specifically, we use LSTM layers to predict time domain beamforming filter coefficients at each input frame. These filters are convolved with the framed time domain input signal and summed across channels, essentially performing FIR filter-andsum beamforming using the dynamically adapted filter. The beamformer output is passed into a waveform CLDNN acoustic model [4] which is trained jointly with the filter prediction LSTM layers. We find that the proposed NAB model achieves a 12.7% relative improvement in WER over a single channel model [4] and reaches similar performance to a “factored” model architecture which utilizes several fixed spatial filters [3] on a 2,000-hour Voice Search task, with a 17.9% decrease in computational cost.",
"title": ""
},
{
"docid": "62b2daec701f43a3282076639d01e475",
"text": "Several hundred plant and herb species that have potential as novel antiviral agents have been studied, with surprisingly little overlap. A wide variety of active phytochemicals, including the flavonoids, terpenoids, lignans, sulphides, polyphenolics, coumarins, saponins, furyl compounds, alkaloids, polyines, thiophenes, proteins and peptides have been identified. Some volatile essential oils of commonly used culinary herbs, spices and herbal teas have also exhibited a high level of antiviral activity. However, given the few classes of compounds investigated, most of the pharmacopoeia of compounds in medicinal plants with antiviral activity is still not known. Several of these phytochemicals have complementary and overlapping mechanisms of action, including antiviral effects by either inhibiting the formation of viral DNA or RNA or inhibiting the activity of viral reproduction. Assay methods to determine antiviral activity include multiple-arm trials, randomized crossover studies, and more compromised designs such as nonrandomized crossovers and pre- and post-treatment analyses. Methods are needed to link antiviral efficacy/potency- and laboratory-based research. Nevertheless, the relative success achieved recently using medicinal plant/herb extracts of various species that are capable of acting therapeutically in various viral infections has raised optimism about the future of phyto-antiviral agents. As this review illustrates, there are innumerable potentially useful medicinal plants and herbs waiting to be evaluated and exploited for therapeutic applications against genetically and functionally diverse viruses families such as Retroviridae, Hepadnaviridae and Herpesviridae",
"title": ""
},
{
"docid": "b8b1c342a2978f74acd38bed493a77a5",
"text": "With the rapid growth of battery-powered portable electronics, an efficient power management solution is necessary for extending battery life. Generally, basic switching regulators, such as buck and boost converters, may not be capable of using the entire battery output voltage range (e.g., 2.5-4.7 V for Li-ion batteries) to provide a fixed output voltage (e.g., 3.3 V). In this paper, an average-current-mode noninverting buck-boost dc-dc converter is proposed. It is not only able to use the full output voltage range of a Li-ion battery, but it also features high power efficiency and excellent noise immunity. The die area of this chip is 2.14 × 1.92 mm2, fabricated by using TSMC 0.35 μm 2P4M 3.3 V/5 V mixed-signal polycide process. The input voltage of the converter may range from 2.3 to 5 V with its output voltage set to 3.3 V, and its switching frequency is 500 kHz. Moreover, it can provide up to 400-mA load current, and the maximal measured efficiency is 92.01%.",
"title": ""
},
{
"docid": "b72bc9ee1c32ec3d268abd1d3e51db25",
"text": "As a newly developing academic domain, researches on Mobile learning are still in their initial stage. Meanwhile, M-blackboard comes from Mobile learning. This study attempts to discover the factors impacting the intention to adopt mobile blackboard. Eleven selected model on the Mobile learning adoption were comprehensively reviewed. From the reviewed articles, the most factors are identified. Also, from the frequency analysis, the most frequent factors in the Mobile blackboard or Mobile learning adoption studies are performance expectancy, effort expectancy, perceived playfulness, facilitating conditions, self-management, cost and past experiences. The descriptive statistic was performed to gather the respondents’ demographic information. It also shows that the respondents agreed on nearly every statement item. Pearson correlation and regression analysis were also conducted.",
"title": ""
},
{
"docid": "e3537eb7ab5da891aea70306c548f8c6",
"text": "In recent era of ubiquitous computing the internet of things and sensor networks are researched widely. The deployment of the wireless sensor networks in the harsh environments ascends issues associated with delay clustering approaches, packet drop, delay, energy, link quality, mobility and coverage. Various research studies are proposing routing protocols clustering algorithm with research goal for reduction in terms of energy and delay. This paper focuses on delay and energy by introducing threshold based scheme. Furthermore energy and delay efficient routing protocol is proposed for cluster head selection in the heterogeneous wireless sensor networks. We have introduced delay and energy based adaptive threshold scheme in this paper to solve this problem. Furthermore this study presents new routing algorithm which contains energy and delay and velocity threshold based cluster-head election scheme. The cluster head is selected according to distance, velocity and energy where probability is set for the residual energy. The nodes are classified into normal, advanced and herculean levels. This paper presents new routing protocol named as energy and delay efficient routing protocol (EDERP). The MATLAB is used for simulation and comparison of the routing protocol with other protocols. The simulations results indicate that this protocol is effective regarding delay and energy.",
"title": ""
},
{
"docid": "ff72ade7fdfba55c0f6ab7b5f8b74eb7",
"text": "Automatic detection of facial features in an image is important stage for various facial image interpretation work, such as face recognition, facial expression recognition, 3Dface modeling and facial features tracking. Detection of facial features like eye, pupil, mouth, nose, nostrils, lip corners, eye corners etc., with different facial expression and illumination is a challenging task. In this paper, we presented different methods for fully automatic detection of facial features. Viola-Jones' object detector along with haar-like cascaded features are used to detect face, eyes and nose. Novel techniques using the basic concepts of facial geometry, are proposed to locate the mouth position, nose position and eyes position. The estimation of detection region for features like eye, nose and mouth enhanced the detection accuracy significantly. An algorithm, using the H-plane of the HSV color space is proposed for detecting eye pupil from the eye detected region. FEI database of frontal face images is mainly used to test the algorithm. Proposed algorithm is tested over 100 frontal face images with two different facial expression (neutral face and smiling face). The results obtained are found to be 100% accurate for lip, lip corners, nose and nostrils detection. The eye corners, and eye pupil detection is giving approximately 95% accurate results.",
"title": ""
},
{
"docid": "3d93c45e2374a7545c6dff7de0714352",
"text": "Building an interest model is the key to realize personalized text recommendation. Previous interest models neglect the fact that a user may have multiple angles of interest. Different angles of interest provide different requests and criteria for text recommendation. This paper proposes an interest model that consists of two kinds of angles: persistence and pattern, which can be combined to form complex angles. The model uses a new method to represent the long-term interest and the short-term interest, and distinguishes the interest in object and the interest in the link structure of objects. Experiments with news-scale text data show that the interest in object and the interest in link structure have real requirements, and it is effective to recommend texts according to the angles. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "425bbea2a6aff317c83e73738bca89ed",
"text": "Classical rate-distortion theory requires specifying a source distribution. Instead, we analyze rate-distortion properties of individual objects using the recently developed algorithmic rate-distortion theory. The latter is based on the noncomputable notion of Kolmogorov complexity. To apply the theory we approximate the Kolmogorov complexity by standard data compression techniques, and perform a number of experiments with lossy compression and denoising of objects from different domains. We also introduce a natural generalization to lossy compression with side information. To maintain full generality we need to address a difficult searching problem. While our solutions are therefore not time efficient, we do observe good denoising and compression performance.",
"title": ""
},
{
"docid": "dc41eb4913c47c4b64d3ca4c1dac6e8d",
"text": "Applied Geostatistics with SGeMS: A User's Guide PetraSim: A graphical user interface for the TOUGH2 family of multiphase flow and transport codes. Applied Geostatistics with SGeMS: A User's Guide · Certain Death in Sierra Treatise on Fungi as Experimental Systems for Basic and Applied Research. Baixe grátis o arquivo SGeMS User's Guide enviado para a disciplina de Applied Geostatistics with SGeMS: A Users' Guide · S-GeMS Tutorial Notes. Applied Geostatistics with SGeMS: A User's Guide · Certain Death in Sierra Leone: Introduction to Stochastic Calculus Applied to Finance, Second Edition. Build Native Cross-Platform Apps with Appcelerator: A beginner's guide for Web Developers Applied GeostAtistics with SGeMS: A User's guide (Repost).",
"title": ""
},
{
"docid": "4f686e9f37ec26070d0d280b98f78673",
"text": "State-of-the-art visual perception models for a wide range of tasks rely on supervised pretraining. ImageNet classification is the de facto pretraining task for these models. Yet, ImageNet is now nearly ten years old and is by modern standards “small”. Even so, relatively little is known about the behavior of pretraining with datasets that are multiple orders of magnitude larger. The reasons are obvious: such datasets are difficult to collect and annotate. In this paper, we present a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images. Our experiments demonstrate that training for large-scale hashtag prediction leads to excellent results. We show improvements on several image classification and object detection tasks, and report the highest ImageNet-1k single-crop, top-1 accuracy to date: 85.4% (97.6% top-5). We also perform extensive experiments that provide novel empirical data on the relationship between large-scale pretraining and transfer learning performance.",
"title": ""
},
{
"docid": "0958001e0a54cd0d3dc20864e65cf2a8",
"text": "Credit card fraud resulted in the loss of $3 billion to North American financial institutions in 2017. The rise of digital payments systems such as Apple Pay, Android Pay, and Venmo has meant that loss due to fraudulent activity is expected to increase. Deep Learning presents a promising solution to the problem of credit card fraud detection by enabling institutions to make optimal use of their historic customer data as well as real-time transaction details that are recorded at the time of the transaction. In 2017, a study found that a Deep Learning approach provided comparable results to prevailing fraud detection methods such as Gradient Boosted Trees and Logistic Regression. However, Deep Learning encompasses a number of topologies. Additionally, the various parameters used to construct the model (e.g. the number of neurons in the hidden layer of a neural network) also influence its results. In this paper, we evaluate a subsection of Deep Learning topologies — from the general artificial neural network to topologies with built-in time and memory components such as Long Short-term memory — and different parameters with regard to their efficacy in fraud detection on a dataset of nearly 80 million credit card transactions that have been pre-labeled as fraudulent and legitimate. We utilize a high performance, distributed cloud computing environment to navigate past common fraud detection problems such as class imbalance and scalability. Our analysis provides a comprehensive guide to sensitivity analysis of model parameters with regard to performance in fraud detection. We also present a framework for parameter tuning of Deep Learning topologies for credit card fraud detection to enable financial institutions to reduce losses by preventing fraudulent activity.",
"title": ""
},
{
"docid": "6c9f3107fbf14f5bef1b8edae1b9d059",
"text": "Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.",
"title": ""
},
{
"docid": "68489ec6e39ffd95d5df7d6817474cde",
"text": "Foster B-trees are a new variant of B-trees that combines advantages of prior B-tree variants optimized for many-core processors and modern memory hierarchies with flash storage and nonvolatile memory. Specific goals include: (i) minimal concurrency control requirements for the data structure, (ii) efficient migration of nodes to new storage locations, and (iii) support for continuous and comprehensive self-testing. Like Blink-trees, Foster B-trees optimize latching without imposing restrictions or specific designs on transactional locking, for example, key range locking. Like write-optimized B-trees, and unlike Blink-trees, Foster B-trees enable large writes on RAID and flash devices as well as wear leveling and efficient defragmentation. Finally, they support continuous and inexpensive yet comprehensive verification of all invariants, including all cross-node invariants of the B-tree structure. An implementation and a performance evaluation show that the Foster B-tree supports high concurrency and high update rates without compromising consistency, correctness, or read performance.",
"title": ""
},
{
"docid": "9f84630422777d869edd7167ff6da443",
"text": "Video surveillance, closed-circuit TV and IP-camera systems became virtually omnipresent and indispensable for many organizations, businesses, and users. Their main purpose is to provide physical security, increase safety, and prevent crime. They also became increasingly complex, comprising many communication means, embedded hardware and non-trivial firmware. However, most research to date focused mainly on the privacy aspects of such systems, and did not fully address their issues related to cyber-security in general, and visual layer (i.e., imagery semantics) attacks in particular. In this paper, we conduct a systematic review of existing and novel threats in video surveillance, closed-circuit TV and IP-camera systems based on publicly available data. The insights can then be used to better understand and identify the security and the privacy risks associated with the development, deployment and use of these systems. We study existing and novel threats, along with their existing or possible countermeasures, and summarize this knowledge into a comprehensive table that can be used in a practical way as a security checklist when assessing cyber-security level of existing or new CCTV designs and deployments. We also provide a set of recommendations and mitigations that can help improve the security and privacy levels provided by the hardware, the firmware, the network communications and the operation of video surveillance systems. We hope the findings in this paper will provide a valuable knowledge of the threat landscape that such systems are exposed to, as well as promote further research and widen the scope of this field beyond its current boundaries.",
"title": ""
},
{
"docid": "984dba43888e7a3572d16760eba6e9a5",
"text": "This study developed an integrated model to explore the antecedents and consequences of online word-of-mouth in the context of music-related communication. Based on survey data from college students, online word-of-mouth was measured with two components: online opinion leadership and online opinion seeking. The results identified innovativeness, Internet usage, and Internet social connection as significant predictors of online word-of-mouth, and online forwarding and online chatting as behavioral consequences of online word-of-mouth. Contrary to the original hypothesis, music involvement was found not to be significantly related to online word-of-mouth. Theoretical implications of the findings and future research directions are discussed.",
"title": ""
},
{
"docid": "4e0ff4875a4dff6863734c964db54540",
"text": "We present a personalized recommender system using neural network for recommending products, such as eBooks, audio-books (“Anonymous audio book service”), Mobile Apps, Video and Music. It produces recommendations based on user consumption history: purchases, listens or watches. Our key contribution is to formulate recommendation problem as a model that encodes historical behavior to predict the future behavior using soft data split, combining predictor and autoencoder models. We introduce convolutional layer for learning the importance (time decay) of the purchases depending on their purchase date and demonstrate that the shape of the time decay function can be well approximated by a parametrical function. We present offline experimental results showing that neural networks with two hidden layers can capture seasonality changes, and at the same time outperform other modeling techniques, including our recommender in production. Most importantly, we demonstrate that our model can be scaled to all digital categories. Finally, we show online A/B test results, discuss key improvements to the neural network model, and describe our production pipeline.",
"title": ""
},
{
"docid": "76c6ad5e97d5296a9be841c3d3552a27",
"text": "In fish as in mammals, virus infections induce changes in the expression of many host genes. Studies conducted during the last fifteen years revealed a major contribution of the interferon system in fish antiviral response. This review describes the screening methods applied to compare the impact of virus infections on the transcriptome in different fish species. These approaches identified a \"core\" set of genes that are strongly induced in most viral infections. The \"core\" interferon-induced genes (ISGs) are generally conserved in vertebrates, some of them inhibiting a wide range of viruses in mammals. A selection of ISGs -PKR, vig-1/viperin, Mx, ISG15 and finTRIMs - is further analyzed here to illustrate the diversity and complexity of the mechanisms involved in establishing an antiviral state. Most of the ISG-based pathways remain to be directly determined in fish. Fish ISGs are often duplicated and the functional specialization of multigenic families will be of particular interest for future studies.",
"title": ""
},
{
"docid": "e089c8d35bd77e1947d11207a7905617",
"text": "Real-time monitoring of groups and their rich contexts will be a key building block for futuristic, group-aware mobile services. In this paper, we propose GruMon, a fast and accurate group monitoring system for dense and complex urban spaces. GruMon meets the performance criteria of precise group detection at low latencies by overcoming two critical challenges of practical urban spaces, namely (a) the high density of crowds, and (b) the imprecise location information available indoors. Using a host of novel features extracted from commodity smartphone sensors, GruMon can detect over 80% of the groups, with 97% precision, using 10 minutes latency windows, even in venues with limited or no location information. Moreover, in venues where location information is available, GruMon improves the detection latency by up to 20% using semantic information and additional sensors to complement traditional spatio-temporal clustering approaches. We evaluated GruMon on data collected from 258 shopping episodes from 154 real participants, in two large shopping complexes in Korea and Singapore. We also tested GruMon on a large-scale dataset from an international airport (containing ≈37K+ unlabelled location traces per day) and a live deployment at our university, and showed both GruMon's potential performance at scale and various scalability challenges for real-world dense environment deployments.",
"title": ""
},
{
"docid": "9b4ffbbcd97e94524d2598cd862a400a",
"text": "Head pose monitoring is an important task for driver assistance systems, since it is a key indicator for human attention and behavior. However, current head pose datasets either lack complexity or do not adequately represent the conditions that occur while driving. Therefore, we introduce DriveAHead, a novel dataset designed to develop and evaluate head pose monitoring algorithms in real driving conditions. We provide frame-by-frame head pose labels obtained from a motion-capture system, as well as annotations about occlusions of the driver's face. To the best of our knowledge, DriveAHead is the largest publicly available driver head pose dataset, and also the only one that provides 2D and 3D data aligned at the pixel level using the Kinect v2. Existing performance metrics are based on the mean error without any consideration of the bias towards one position or another. Here, we suggest a new performance metric, named Balanced Mean Angular Error, that addresses the bias towards the forward looking position existing in driving datasets. Finally, we present the Head Pose Network, a deep learning model that achieves better performance than current state-of-the-art algorithms, and we analyze its performance when using our dataset.",
"title": ""
},
{
"docid": "5b92aa85d93c2fbb09df5a0b96fc9c1f",
"text": "Social networking services have been prevalent at many online communities such as Twitter.com and Weibo.com, where millions of users keep interacting with each other every day. One interesting and important problem in the social networking services is to rank users based on their vitality in a timely fashion. An accurate ranking list of user vitality could benefit many parties in social network services such as the ads providers and site operators. Although it is very promising to obtain a vitality-based ranking list of users, there are many technical challenges due to the large scale and dynamics of social networking data. In this paper, we propose a unique perspective to achieve this goal, which is quantifying user vitality by analyzing the dynamic interactions among users on social networks. Examples of social network include but are not limited to social networks in microblog sites and academical collaboration networks. Intuitively, if a user has many interactions with his friends within a time period and most of his friends do not have many interactions with their friends simultaneously, it is very likely that this user has high vitality. Based on this idea, we develop quantitative measurements for user vitality and propose our first algorithm for ranking users based vitality. Also, we further consider the mutual influence between users while computing the vitality measurements and propose the second ranking algorithm, which computes user vitality in an iterative way. Other than user vitality ranking, we also introduce a vitality prediction problem, which is also of great importance for many applications in social networking services. Along this line, we develop a customized prediction model to solve the vitality prediction problem. To evaluate the performance of our algorithms, we collect two dynamic social network data sets. The experimental results with both data sets clearly demonstrate the advantage of our ranking and prediction methods.",
"title": ""
}
] |
scidocsrr
|
c8cea0a6f9cba8d78b765f06fe6972be
|
Screening in New Credit Markets : Can Individual Lenders Infer Borrower Creditworthiness in Peer-to-Peer Lending ?
|
[
{
"docid": "abf845c459ed415ac77ba91615d7b674",
"text": "We study the online market for peer-to-peer (P2P) lending, in which individuals bid on unsecured microloans sought by other individual borrowers. Using a large sample of consummated and failed listings from the largest online P2P lending marketplace Prosper.com, we test whether social networks lead to better lending outcomes, focusing on the distinction between the structural and relational aspects of networks. While the structural aspects have limited to no significance, the relational aspects are consistently significant predictors of lending outcomes, with a striking gradation based on the verifiability and visibility of a borrower’s social capital. Stronger and more verifiable relational network measures are associated with a higher likelihood of a loan being funded, a lower risk of default, and lower interest rates. We discuss the implications of our findings for financial disintermediation and the design of decentralized electronic lending markets. This version: October 2009 ∗Decision, Operations and Information Technologies Department, **Finance Department. All the authors are at Robert H. Smith School of Business, University of Maryland, College Park, MD 20742. Mingfeng Lin can be reached at mingfeng@rhsmith.umd.edu. Prabhala can be reached at prabhala@rhsmith.umd.edu. Viswanathan can be reached at sviswana@rhsmith.umd.edu. The authors thank Ethan Cohen-Cole, Sanjiv Das, Jerry Hoberg, Dalida Kadyrzhanova, Nikunj Kapadia, De Liu, Vojislav Maksimovic, Gordon Phillips, Kislaya Prasad, Galit Shmueli, Kelly Shue, and seminar participants at Carnegie Mellon University, University of Utah, the 2008 Summer Doctoral Program of the Oxford Internet Institute, the 2008 INFORMS Annual Conference, the Workshop on Information Systems and Economics (Paris), and Western Finance Association for their valuable comments and suggestions. 
Mingfeng Lin also thanks the Ewing Marion Kauffman Foundation for the 2009 Dissertation Fellowship Award, and the Economic Club of Washington D.C. (2008) for their generous financial support. We also thank Prosper.com for making the data for the study available. The contents of this publication are the sole responsibility of the authors. Judging Borrowers By The Company They Keep: Social Networks and Adverse Selection in Online Peer-to-Peer Lending",
"title": ""
}
] |
[
{
"docid": "a71c73e4828cf506883a717c99949c93",
"text": "The application of machine learning for the detection of malicious network traffic has been well researched over the past several decades; it is particularly appealing when the traffic is encrypted because traditional pattern-matching approaches cannot be used. Unfortunately, the promise of machine learning has been slow to materialize in the network security domain. In this paper, we highlight two primary reasons why this is the case: inaccurate ground truth and a highly non-stationary data distribution. To demonstrate and understand the effect that these pitfalls have on popular machine learning algorithms, we design and carry out experiments that show how six common algorithms perform when confronted with real network data. With our experimental results, we identify the situations in which certain classes of algorithms underperform on the task of encrypted malware traffic classification. We offer concrete recommendations for practitioners given the real-world constraints outlined. From an algorithmic perspective, we find that the random forest ensemble method outperformed competing methods. More importantly, feature engineering was decisive; we found that iterating on the initial feature set, and including features suggested by domain experts, had a much greater impact on the performance of the classification system. For example, linear regression using the more expressive feature set easily outperformed the random forest method using a standard network traffic representation on all criteria considered. Our analysis is based on millions of TLS encrypted sessions collected over 12 months from a commercial malware sandbox and two geographically distinct, large enterprise networks.",
"title": ""
},
{
"docid": "481fdc6e27959922dd8d1508c7104c86",
"text": "This paper addresses load current sharing and circulating current issues of parallel-connected DC-DC converters in low-voltage DC microgrid. Droop control is the popular technique for load current sharing in DC microgrid. The main drawbacks of the conventional droop method are poor current sharing and drop in dc grid voltage due to the droop action. Circulating current issue will also arise due to mismatch in the converters output voltages. In this work, a figure of merit called droop index (DI) is introduced in order to improve the performance of dc microgrid, which is a function of normalized current sharing difference and losses in the output side of the converters. This proposed adaptive droop control method minimizes the circulating current and current sharing difference between the converters based on instantaneous virtual resistance Rdroop. Using Rdroop shifting, the proposed method also eliminates the tradeoff between current sharing difference and voltage regulation. The detailed analysis and design procedure are explained for two DC-DC boost converters connected in parallel. The effectiveness of the proposed method is verified by detailed simulation and experimental studies.",
"title": ""
},
{
"docid": "c253083ab44c842819059ad64781d51d",
"text": "RGB-D data is getting ever more interest from the research community as both cheap cameras appear in the market and the applications of this type of data become more common. A current trend in processing image data is the use of convolutional neural networks (CNNs) that have consistently beat competition in most benchmark data sets. In this paper we investigate the possibility of transferring knowledge between CNNs when processing RGB-D data with the goal of both improving accuracy and reducing training time. We present experiments that show that our proposed approach can achieve both these goals.",
"title": ""
},
{
"docid": "b672aa84da41b3887664562cc4334d56",
"text": "Wearable health monitoring systems have gained considerable interest in recent years owing to their tremendous promise for personal portable health watching and remote medical practices. The sensors with excellent flexibility and stretchability are crucial components that can provide health monitoring systems with the capability of continuously tracking physiological signals of human body without conspicuous uncomfortableness and invasiveness. The signals acquired by these sensors, such as body motion, heart rate, breath, skin temperature and metabolism parameter, are closely associated with personal health conditions. This review attempts to summarize the recent progress in flexible and stretchable sensors, concerning the detected health indicators, sensing mechanisms, functional materials, fabrication strategies, basic and desired features. The potential challenges and future perspectives of wearable health monitoring system are also briefly discussed.",
"title": ""
},
{
"docid": "4373b838d10ac77127c3a7021fe4534c",
"text": "Fine-grained recognition concerns categorization at sub-ordinate levels, where the distinction between object classes is highly local. Compared to basic level recognition, fine-grained categorization can be more challenging as there are in general less data and fewer discriminative features. This necessitates the use of stronger prior for feature selection. In this work, we include humans in the loop to help computers select discriminative features. We introduce a novel online game called \"Bubbles\" that reveals discriminative features humans use. The player's goal is to identify the category of a heavily blurred image. During the game, the player can choose to reveal full details of circular regions (\"bubbles\"), with a certain penalty. With proper setup the game generates discriminative bubbles with assured quality. We next propose the \"Bubble Bank\" algorithm that uses the human selected bubbles to improve machine recognition performance. Experiments demonstrate that our approach yields large improvements over the previous state of the art on challenging benchmarks.",
"title": ""
},
{
"docid": "f4ddf0aa308769b77cf2a581b4136573",
"text": "Cetacea (dolphins, porpoises, and whales) is a clade of aquatic species that includes the most massive, deepest diving, and largest brained mammals. Understanding the temporal pattern of diversification in the group as well as the evolution of cetacean anatomy and behavior requires a robust and well-resolved phylogenetic hypothesis. Although a large body of molecular data has accumulated over the past 20 years, DNA sequences of cetaceans have not been directly integrated with the rich, cetacean fossil record to reconcile discrepancies among molecular and morphological characters. We combined new nuclear DNA sequences, including segments of six genes (~2800 basepairs) from the functionally extinct Yangtze River dolphin, with an expanded morphological matrix and published genomic data. Diverse analyses of these data resolved the relationships of 74 taxa that represent all extant families and 11 extinct families of Cetacea. The resulting supermatrix (61,155 characters) and its sub-partitions were analyzed using parsimony methods. Bayesian and maximum likelihood (ML) searches were conducted on the molecular partition, and a molecular scaffold obtained from these searches was used to constrain a parsimony search of the morphological partition. Based on analysis of the supermatrix and model-based analyses of the molecular partition, we found overwhelming support for 15 extant clades. When extinct taxa are included, we recovered trees that are significantly correlated with the fossil record. These trees were used to reconstruct the timing of cetacean diversification and the evolution of characters shared by \"river dolphins,\" a non-monophyletic set of species according to all of our phylogenetic analyses. The parsimony analysis of the supermatrix and the analysis of morphology constrained to fit the ML/Bayesian molecular tree yielded broadly congruent phylogenetic hypotheses. 
In trees from both analyses, all Oligocene taxa included in our study fell outside crown Mysticeti and crown Odontoceti, suggesting that these two clades radiated in the late Oligocene or later, contra some recent molecular clock studies. Our trees also imply that many character states shared by river dolphins evolved in their oceanic ancestors, contradicting the hypothesis that these characters are convergent adaptations to fluvial habitats.",
"title": ""
},
{
"docid": "7706afde38a6445ef0b0858e8e500159",
"text": "Clustering is a problem of great practical importance in numerous applications. The problem of clustering becomes more challenging when the data is categorical, that is, when there is no inherent distance measure between data values. We introduce LIMBO, a scalable hierarchical categorical clustering algorithm that builds on the Information Bottleneck (IB) framework for quantifying the relevant information preserved when clustering. As a hierarchical algorithm, LIMBO has the advantage that it can produce clusterings of different sizes in a single execution. We use the IB framework to define a distance measure for categorical tuples and we also present a novel distance measure for categorical attribute values. We show how the LIMBO algorithm can be used to cluster both tuples and values. LIMBO handles large data sets by producing a memory bounded summary model for the data. We present an experimental evaluation of LIMBO, and we study how clustering quality compares to other categorical clustering algorithms. LIMBO supports a trade-off between efficiency (in terms of space and time) and quality. We quantify this trade-off and demonstrate that LIMBO allows for substantial improvements in efficiency with negligible decrease in quality.",
"title": ""
},
{
"docid": "36fdd31b04f53f7aef27b9d4af5f479f",
"text": "Smart meters have been deployed in many countries across the world since early 2000s. The smart meter as a key element for the smart grid is expected to provide economic, social, and environmental benefits for multiple stakeholders. There has been much debate over the real values of smart meters. One of the key factors that will determine the success of smart meters is smart meter data analytics, which deals with data acquisition, transmission, processing, and interpretation that bring benefits to all stakeholders. This paper presents a comprehensive survey of smart electricity meters and their utilization focusing on key aspects of the metering process, different stakeholder interests, and the technologies used to satisfy stakeholder interests. Furthermore, the paper highlights challenges as well as opportunities arising due to the advent of big data and the increasing popularity of cloud environments.",
"title": ""
},
{
"docid": "a845a36fb352f347224e9902087d9625",
"text": "Electroencephalography (EEG) is the most popular brain activity recording technique used in wide range of applications. One of the commonly faced problems in EEG recordings is the presence of artifacts that come from sources other than brain and contaminate the acquired signals significantly. Therefore, much research over the past 15 years has focused on identifying ways for handling such artifacts in the preprocessing stage. However, this is still an active area of research as no single existing artifact detection/removal method is complete or universal. This article presents an extensive review of the existing state-of-the-art artifact detection and removal methods from scalp EEG for all potential EEG-based applications and analyses the pros and cons of each method. First, a general overview of the different artifact types that are found in scalp EEG and their effect on particular applications are presented. In addition, the methods are compared based on their ability to remove certain types of artifacts and their suitability in relevant applications (only functional comparison is provided not performance evaluation of methods). Finally, the future direction and expected challenges of current research is discussed. Therefore, this review is expected to be helpful for interested researchers who will develop and/or apply artifact handling algorithm/technique in future for their applications as well as for those willing to improve the existing algorithms or propose a new solution in this particular area of research.",
"title": ""
},
{
"docid": "6bfcd3a40e8be718225d252dad8bf80a",
"text": "Twitter data offers an unprecedented opportunity to study demographic differences in public opinion across a virtually unlimited range of subjects. Whilst demographic attributes are often implied within user data, they are not always easily identified using computational methods. In this paper, we present a semi-automatic solution that combines automatic classification methods with a user interface designed to enable rapid resolution of ambiguous cases. TweetClass employs a two-step, interactive process to support the determination of gender and age attributes. At each step, the user is presented with feedback on the confidence levels of the automated analysis and can choose to refine ambiguous cases by examining key profile and content data. We describe how a user-centered design approach was used to optimise the interface and present the results of an evaluation which suggests that TweetClass can be used to rapidly boost demographic sample sizes in situations where high accuracy is required.",
"title": ""
},
{
"docid": "6082c0252dffe7903512e36f13da94eb",
"text": "Thousands of storage tanks in oil refineries have to be inspected manually to prevent leakage and/or any other potential catastrophe. A wall climbing robot with permanent magnet adhesion mechanism equipped with nondestructive sensor has been designed. The robot can be operated autonomously or manually. In autonomous mode the robot uses an ingenious coverage algorithm based on distance transform function to navigate itself over the tank surface in a back and forth motion to scan the external wall for the possible faults using sensors without any human intervention. In manual mode the robot can be navigated wirelessly from the ground station to any location of interest. Preliminary experiment has been carried out to test the prototype.",
"title": ""
},
{
"docid": "88862d86e43d491ec4368410a61c13fb",
"text": "With the proliferation of large, irregular, and sparse relational datasets, new storage and analysis platforms have arisen to fill gaps in performance and capability left by conventional approaches built on traditional database technologies and query languages. Many of these platforms apply graph structures and analysis techniques to enable users to ingest, update, query, and compute on the topological structure of the network represented as sets of edges relating sets of vertices. To store and process Facebook-scale datasets, software and algorithms must be able to support data sources with billions of edges, update rates of millions of updates per second, and complex analysis kernels. These platforms must provide intuitive interfaces that enable graph experts and novice programmers to write implementations of common graph algorithms. In this paper, we conduct a qualitative study and a performance comparison of 12 open source graph databases using four fundamental graph algorithms on networks containing up to 256 million edges.",
"title": ""
},
{
"docid": "17df62e187d630e0f11ae37924f334ee",
"text": "Supply chains can be seen as cyber-physical networks grounded on object identification and tracking. Conventional trust models featuring centralized information management architectures and simplistic things classification lend two of the most relevant limitations to current solutions. Blockchain introduces novel and a valuable trust approaches while semantic technologies better permit a things description. This paper introduces a semantic-enhanced blockchain platform allowing a flexible object discovery. It is based on validation by consensus of smart contracts and adopt a semantic matchmaking between queries and object annotations expressed w.r.t. ontology models. Early experiments assess the good behaviour of the proposed framework.",
"title": ""
},
{
"docid": "f47ba00cf0ca7e5c88e20785c1fd3859",
"text": "Photovoltaic maximum power point tracker (MPPT) systems are commonly employed to maximize the photovoltaic output power, since it is strongly affected in accordance to the incident solar radiation, surface temperature and load-type changes. Basically, a MPPT system consists on a dc-dc converter (hardware) controlled by a tracking algorithm (software) and the combination of both, hardware and software, defines the tracking efficiency. This paper shows that even when the most accurate algorithm is employed, the maximum power point cannot be found, since its imposition as operation point depends on the dc-dc converter static feature and the load-type connected to the system output. For validating the concept, the main dc-dc converters, i.e., Boost, Buck-Boost, Cuk, SEPIC and Zeta are analyzed considering two load-types: resistive voltage regulated dc bus. Simulation and experimental results are included for validating the theoretical analysis.",
"title": ""
},
{
"docid": "8ffda1c743d0e7cf35e205e3ae8570d7",
"text": "Many day-to-day situations involve decision making: for example, a taxi company has some transportation tasks to be carried out, a large firm has to distribute a lot of complicated tasks among its subdivisions or subcontractors, and an air-traffic controller has to assign time slots to planes that are landing or taking off. Intelligent agentscan aid in this decision-making process. Agents are often classified into two categories according to the techniques they employ in their decision making: reactiveagents (cf. (Ferber and Drogoul, 1992)) base their next decision solely on their current sensory input; planning agents, on the other hand, take into account anticipated future developments — for instance as a result of their own actions — to decide on the most favourable course of action. When an agent should plan and when it should be reactive depends on the particular situation it finds itself in. Consider the example where an agent has to plan a route from one place to another. A reactive agent might use a compass to plot its course, whereas a planning agent would consult a map. Clearly, the planning agent will come up with the shortest route in most cases, as it won’t be confounded by uncrossable rivers, one-way streets, and labyrinthine city layouts. On the other hand, there are also situations where a reactive agent can at least be equally effective, for instance if there are no maps to consult, for instance in a domain of (Mars) exploration rovers. Nevertheless, the ability to plan ahead is invaluable in many domains, so in this paper we will focus on plan ing agents. The general structure of a planning problem is easy to explain: (the relevant part of) the world is in a certain state, but managers or directors would like it to be in another state. The (abstract) problem of how one should get from the current state of the world through a sequence of actions to the desired goal state is a planning problem. 
Ideally, to solve such planning problems, we would like to have a general planning-problem solver. However, such an algorithm solving all planning problems can be proven not to exist. We therefore concentrate on a simplification of the general planning problem called the ‘classical planning problem’. Although not all realistic problems can be modeled as a classical planning problem, they can help to solve more",
"title": ""
},
{
"docid": "1858df61cf8cd4f81371cb15df1dc1a1",
"text": "This paper presents the design, fabrication, and characterization of a multimodal sensor with integrated stretchable meandered interconnects for uniaxial strain, pressure, and uniaxial shear stress measurements. It is designed based on a capacitive sensing principle for embedded deformable sensing applications. A photolithographic process is used along with laser machining and sheet metal forming technique to pattern sensor elements together with stretchable grid-based interconnects on a thin sheet of copper polyimide laminate as a base material in a single process. The structure is embedded in a soft stretchable Ecoflex and PDMS silicon rubber encapsulation. The strain, pressure, and shear stress sensors are characterized up to 9%, 25 kPa, and ±11 kPa of maximum loading, respectively. The strain sensor exhibits an almost linear response to stretching with an average sensitivity of −28.9 fF%−1. The pressure sensor, however, shows a nonlinear and significant hysteresis characteristic due to nonlinear and viscoelastic property of the silicon rubber encapsulation. An average best-fit straight line sensitivity of 30.9 fFkPa−1 was recorded. The sensitivity of shear stress sensor is found to be 8.1 fFkPa−1. The three sensing elements also demonstrate a good cross-sensitivity performance of 3.1% on average. This paper proves that a common flexible printed circuit board (PCB) base material could be transformed into stretchable circuits with integrated multimodal sensor using established PCB fabrication technique, laser machining, and sheet metal forming method.",
"title": ""
},
{
"docid": "f6ba57b277beb545ad9b396404cd56b9",
"text": "The orbitofrontal cortex contains the secondary taste cortex, in which the reward value of taste is represented. It also contains the secondary and tertiary olfactory cortical areas, in which information about the identity and also about the reward value of odours is represented. The orbitofrontal cortex also receives information about the sight of objects from the temporal lobe cortical visual areas, and neurons in it learn and reverse the visual stimulus to which they respond when the association of the visual stimulus with a primary reinforcing stimulus (such as taste) is reversed. This is an example of stimulus-reinforcement association learning, and is a type of stimulus-stimulus association learning. More generally, the stimulus might be a visual or olfactory stimulus, and the primary (unlearned) positive or negative reinforcer a taste or touch. A somatosensory input is revealed by neurons that respond to the texture of food in the mouth, including a population that responds to the mouth feel of fat. In complementary neuroimaging studies in humans, it is being found that areas of the orbitofrontal cortex are activated by pleasant touch, by painful touch, by taste, by smell, and by more abstract reinforcers such as winning or losing money. Damage to the orbitofrontal cortex can impair the learning and reversal of stimulus-reinforcement associations, and thus the correction of behavioural responses when there are no longer appropriate because previous reinforcement contingencies change. The information which reaches the orbitofrontal cortex for these functions includes information about faces, and damage to the orbitofrontal cortex can impair face (and voice) expression identification. 
This evidence thus shows that the orbitofrontal cortex is involved in decoding and representing some primary reinforcers such as taste and touch; in learning and reversing associations of visual and other stimuli to these primary reinforcers; and in controlling and correcting reward-related and punishment-related behavior, and thus in emotion. The approach described here is aimed at providing a fundamental understanding of how the orbitofrontal cortex actually functions, and thus in how it is involved in motivational behavior such as feeding and drinking, in emotional behavior, and in social behavior.",
"title": ""
},
{
"docid": "b55a314aea8914db8705cd3974c862bb",
"text": "This study examines the mediating effect of perceived usefulness on the relationship between tax service quality (correctness, response time, system support) and continuance usage intention of e-filing system in Malaysia. A total of 116 data was analysed using Partial Least Squared Method (PLS). The result showed that Perceived Usefulness has a partial mediating effect on the relationship between tax service quality (Correctness, Response Time) with the continuance usage intention and tax service quality (correctness) has significant positive relationship with continuance usage intention. Perceived usefulness was found to be the most important predictor of continuance usage intention.",
"title": ""
},
{
"docid": "c1438dc6c58e1b25827c4291b5ad35e3",
"text": "We proposed a Deep Self-Organizing Map (DSOM) algorithm which is completely different from the existing multi-layers SOM algorithms, such as SOINN. It consists of layers of alternating self-organizing map and sampling operator. The self-organizing layer is made up of certain numbers of SOMs, with each map only looking at a local region block on its input. The winning neuron's index value from every SOM in self-organizing layer is then organized in the sampling layer to generate another 2D map, which could then be fed to a second self-organizing layer. In this way, local information is gathered together, forming more global information in higher layers. The construction method of the DSOM is unique and will be introduced in this paper. Experiments were carried out to discuss how the DSOM architecture parameters affect the performance. We evaluate our proposed DSOM on MNIST and CASIA-HWDB1.1 dataset. Experimental results show that DSOM outperforms the original supervised SOM by 7:17% on MNIST and 7:25% on CASIA-HWDB1.1.",
"title": ""
},
{
"docid": "2c4fed71ee9d658516b017a924ad6589",
"text": "As the concept of Friction stir welding is relatively new, there are many areas, which need thorough investigation to optimize and make it commercially viable. In order to obtain the desired mechanical properties, certain process parameters, like rotational and translation speeds, tool tilt angle, tool geometry etc. are to be controlled. Aluminum alloys of 5xxx series and their welded joints show good resistance to corrosion in sea water. Here, a literature survey has been carried out for the friction stir welding of 5xxx series aluminum alloys.",
"title": ""
}
] |
scidocsrr
|
9e40dd86e0d6e534bfc5b0bd3dbb5b04
|
Personalized defect prediction
|
[
{
"docid": "dc66c80a5031c203c41c7b2908c941a3",
"text": "There has been a great deal of interest in defect prediction: using prediction models trained on historical data to help focus quality-control resources in ongoing development. Since most new projects don't have historical data, there is interest in cross-project prediction: using data from one project to predict defects in another. Sadly, results in this area have largely been disheartening. Most experiments in cross-project defect prediction report poor performance, using the standard measures of precision, recall and F-score. We argue that these IR-based measures, while broadly applicable, are not as well suited for the quality-control settings in which defect prediction models are used. Specifically, these measures are taken at specific threshold settings (typically thresholds of the predicted probability of defectiveness returned by a logistic regression model). However, in practice, software quality control processes choose from a range of time-and-cost vs quality tradeoffs: how many files shall we test? how many shall we inspect? Thus, we argue that measures based on a variety of tradeoffs, viz., 5%, 10% or 20% of files tested/inspected would be more suitable. We study cross-project defect prediction from this perspective. We find that cross-project prediction performance is no worse than within-project performance, and substantially better than random prediction!",
"title": ""
}
] |
[
{
"docid": "64d755d95353a66ec967c7f74aaf2232",
"text": "Purpose: Platinum-based drugs, in particular cisplatin (cis-diamminedichloridoplatinum(II), CDDP), are used for treatment of squamous cell carcinoma of the head and neck (SCCHN). Despite initial responses, CDDP treatment often results in chemoresistance, leading to therapeutic failure. The role of primary resistance at subclonal level and treatment-induced clonal selection in the development of CDDP resistance remains unknown.Experimental Design: By applying targeted next-generation sequencing, fluorescence in situ hybridization, microarray-based transcriptome, and mass spectrometry-based phosphoproteome analysis to the CDDP-sensitive SCCHN cell line FaDu, a CDDP-resistant subline, and single-cell derived subclones, the molecular basis of CDDP resistance was elucidated. The causal relationship between molecular features and resistant phenotypes was determined by siRNA-based gene silencing. The clinical relevance of molecular findings was validated in patients with SCCHN with recurrence after CDDP-based chemoradiation and the TCGA SCCHN dataset.Results: Evidence of primary resistance at clonal level and clonal selection by long-term CDDP treatment was established in the FaDu model. Resistance was associated with aneuploidy of chromosome 17, increased TP53 copy-numbers and overexpression of the gain-of-function (GOF) mutant variant p53R248L siRNA-mediated knockdown established a causal relationship between mutant p53R248L and CDDP resistance. Resistant clones were also characterized by increased activity of the PI3K-AKT-mTOR pathway. The poor prognostic value of GOF TP53 variants and mTOR pathway upregulation was confirmed in the TCGA SCCHN cohort.Conclusions: Our study demonstrates a link of intratumoral heterogeneity and clonal evolution as important mechanisms of drug resistance in SCCHN and establishes mutant GOF TP53 variants and the PI3K/mTOR pathway as molecular targets for treatment optimization. Clin Cancer Res; 24(1); 158-68. ©2017 AACR.",
"title": ""
},
{
"docid": "9d82ce8e6630a9432054ed97752c7ec6",
"text": "Development is the powerful process involving a genome in the transformation from one egg cell to a multicellular organism with many cell types. The dividing cells manage to organize and assign themselves special, differentiated roles in a reliable manner, creating a spatio-temporal pattern and division of labor. This despite the fact that little positional information may be available to them initially to guide this patterning. Inspired by a model of developmental biologist L. Wolpert, we simulate this situation in an evolutionary setting where individuals have to grow into “French flag” patterns. The cells in our model exist in a 2-layer Potts model physical environment. Controlled by continuous genetic regulatory networks, identical for all cells of one individual, the cells can individually differ in parameters including target volume, shape, orientation, and diffusion. Intercellular communication is possible via secretion and sensing of diffusing morphogens. Evolved individuals growing from a single cell can develop the French flag pattern by setting up and maintaining asymmetric morphogen gradients – a behavior predicted by several theoretical models.",
"title": ""
},
{
"docid": "fd15f98ad6f43f6c5ee53f68a3d2cdc0",
"text": "In this paper, a new approach for hand tracking and gesture recognition based on the Leap Motion device and surface electromyography (SEMG) is presented. The system is about to process the depth image information and the electrical activity produced by skeletal muscles on forearm. The purpose of such combination is enhancement in the gesture recognition rate. As a first we analyse the conventional approaches toward hand tracking and gesture recognition and present the results of various researches. Successive topic gives brief overview of depth-sensing cameras with focus on Leap motion device where we test its accuracy of fingers recognition. The vision-SEMG-based system is to be potentially applicable to many areas of human computer interaction.",
"title": ""
},
{
"docid": "8e1b10ebb48b86ce151ab44dc0473829",
"text": "─ Cuckoo Search (CS) is a new met heuristic algorithm. It is being used for solving optimization problem. It was developed in 2009 by XinShe Yang and Susah Deb. Uniqueness of this algorithm is the obligatory brood parasitism behavior of some cuckoo species along with the Levy Flight behavior of some birds and fruit flies. Cuckoo Hashing to Modified CS have also been discussed in this paper. CS is also validated using some test functions. After that CS performance is compared with those of GAs and PSO. It has been shown that CS is superior with respect to GAs and PSO. At last, the effect of the experimental results are discussed and proposed for future research. Index terms ─ Cuckoo search, Levy Flight, Obligatory brood parasitism, NP-hard problem, Markov Chain, Hill climbing, Heavy-tailed algorithm.",
"title": ""
},
{
"docid": "b2f66e8508978c392045b5f9e99362a1",
"text": "In this paper we have proposed a linguistically informed recursive neural network architecture for automatic extraction of cause-effect relations from text. These relations can be expressed in arbitrarily complex ways. The architecture uses word level embeddings and other linguistic features to detect causal events and their effects mentioned within a sentence. The extracted events and their relations are used to build a causal-graph after clustering and appropriate generalization, which is then used for predictive purposes. We have evaluated the performance of the proposed extraction model with respect to two baseline systems,one a rule-based classifier, and the other a conditional random field (CRF) based supervised model. We have also compared our results with related work reported in the past by other authors on SEMEVAL data set, and found that the proposed bidirectional LSTM model enhanced with an additional linguistic layer performs better. We have also worked extensively on creating new annotated datasets from publicly available data, which we are willing to share with the community.",
"title": ""
},
{
"docid": "485f7998056ef7a30551861fad33bef4",
"text": "Research has shown close connections between personality and subjective well-being (SWB), suggesting that personality traits predispose individuals to experience different levels of SWB. Moreover, numerous studies have shown that self-efficacy is related to both personality factors and SWB. Extending previous research, we show that general self-efficacy functionally connects personality factors and two components of SWB (life satisfaction and subjective happiness). Our results demonstrate the mediating role of self-efficacy in linking personality factors and SWB. Consistent with our expectations, the influence of neuroticism, extraversion, openness, and conscientiousness on life satisfaction was mediated by self-efficacy. Furthermore, self-efficacy mediated the influence of openness and conscientiousness, but not that of neuroticism and extraversion, on subjective happiness. Results highlight the importance of cognitive beliefs in functionally linking personality traits and SWB.",
"title": ""
},
{
"docid": "a1f93bedbddefb63cd7ab7d030b4f3ee",
"text": "This paper presents a novel fitness and preventive health care system with a flexible and easy to deploy platform. By using embedded wearable sensors in combination with a smartphone as an aggregator, both daily activities as well as specific gym exercises and their counts are recognized and logged. The detection is achieved with minimal impact on the system’s resources through the use of customized 3D inertial sensors embedded in fitness accessories with built-in pre-processing of the initial 100Hz data. It provides a flexible re-training of the classifiers on the phone which allows deploying the system swiftly. A set of evaluations shows a classification performance that is comparable to that of state of the art activity recognition, and that the whole setup is suitable for daily usage with minimal impact on the phone’s resources.",
"title": ""
},
{
"docid": "17192a9edb1e6eb3d9809d432d2d38bc",
"text": "Purpose This concept paper presents the process of constructing a language tailored to describing insider threat incidents, for the purposes of mitigating threats originating from legitimate users in an IT infrastructure. Various information security surveys indicate that misuse by legitimate (insider) users has serious implications for the health of IT environments. A brief discussion of survey data and insider threat concepts is followed by an overview of existing research efforts to mitigate this particular problem. None of the existing insider threat mitigation frameworks provide facilities for systematically describing the elements of misuse incidents, and thus all threat mitigation frameworks could benefit from the existence of a domain specific language for describing legitimate user actions. The paper presents a language development methodology which centres upon ways to abstract the insider threat domain and approaches to encode the abstracted information into language semantics. Due to lack of suitable insider case repositories, and the fact that most insider misuse frameworks have not been extensively implemented in practice, the aforementioned language construction methodology is based upon observed information security survey trends and the study of existing insider threat and intrusion specification frameworks. The development of a domain specific language goes through various stages of refinement that might eventually contradict these preliminary findings. Practical implications This paper summarizes the picture of the insider threat in IT infrastructures and provides a useful reference for insider threat modeling researchers by indicating ways to abstract insider threats. The problems of constructing insider threat signatures and utilizing them in insider threat models are also discussed.",
"title": ""
},
{
"docid": "911c101ed07b1c1aac05c3e8513c60c3",
"text": "The Modbus/TCP protocol is commonly used in SCADA systems for communications between a human–machine interface (HMI) and programmable logic controllers (PLCs). This paper presents a model-based intrusion detection system designed specifically for Modbus/TCP networks. The approach is based on the key observation that Modbus traffic to and from a specific PLC is highly periodic; as a result, each HMI-PLC channel can be modeled using its own unique deterministic finite automaton (DFA). An algorithm is presented that can automatically construct the DFA associated with an HMI-PLC channel based on about 100 captured messages. The resulting DFA-based intrusion detection system looks deep into Modbus/TCP packets and produces a very detailed traffic model. This approach is very sensitive and is able to flag anomalies such as a message appearing out of its position in the normal sequence or a message referring to a single unexpected bit. The intrusion detection approach is tested on a production Modbus system. Despite its high sensitivity, the system has a very low false positive rate—perfect matches of the model to the traffic were observed for five of the seven PLCs tested without a single false alarm over 111 h of operation. Furthermore, the intrusion detection system successfully flagged real anomalies that were caused by technicians who were troubleshooting the HMI system. The system also helped identify a PLC that was configured incorrectly. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d07da03cde15fe7276f857832ae637af",
"text": "In recent years there is a growing interest in the study of sparse representation for signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. In this paper we propose a novel algorithm – the K-SVD algorithm – generalizing the K-Means clustering process, for adapting dictionaries in order to achieve sparse signal representations. We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real data.",
"title": ""
},
{
"docid": "e7a9584974596768d888d1d065135554",
"text": "Footwear is an integral part of daily life. Embedding sensors and electronics in footwear for various different applications started more than two decades ago. This review article summarizes the developments in the field of footwear-based wearable sensors and systems. The electronics, sensing technologies, data transmission, and data processing methodologies of such wearable systems are all principally dependent on the target application. Hence, the article describes key application scenarios utilizing footwear-based systems with critical discussion on their merits. The reviewed application scenarios include gait monitoring, plantar pressure measurement, posture and activity classification, body weight and energy expenditure estimation, biofeedback, navigation, and fall risk applications. In addition, energy harvesting from the footwear is also considered for review. The article also attempts to shed light on some of the most recent developments in the field along with the future work required to advance the field.",
"title": ""
},
{
"docid": "7cce3ad08afe6c35046da014d82fc1ef",
"text": "The developmental histories of 32 players in the Australian Football League (AFL), independently classified as either expert or less skilled in their perceptual and decision-making skills, were collected through a structured interview process and their year-on-year involvement in structured and deliberate play activities retrospectively determined. Despite being drawn from the same elite level of competition, the expert decision-makers differed from the less skilled in having accrued, during their developing years, more hours of experience in structured activities of all types, in structured activities in invasion-type sports, in invasion-type deliberate play, and in invasion activities from sports other than Australian football. Accumulated hours invested in invasion-type activities differentiated between the groups, suggesting that it is the amount of invasion-type activity that is experienced and not necessarily intent (skill development or fun) or specificity that facilitates the development of perceptual and decision-making expertise in this team sport.",
"title": ""
},
{
"docid": "d04229d3d53dffe44efacdaa8bb7ffde",
"text": "Electrically non-contact ECG measurement system on a chair can be applied to a number of various fields for continuous health monitoring in daily life. However, the body is floated electrically for this system due to the capacitive electrodes and the floated body is very sensitive to the external noises or motion artifacts which affect the measurement system as the common mode noise. In this paper, the driven-seat-ground circuit similar to the driven-right-leg circuit is proposed to reduce the common mode noise. The analysis of this equivalent circuit is performed and the output signal waveforms are compared between with driven-seat-ground and with capacitive ground. As the results, the driven-seat-ground circuit improves significantly the properties of the fully capacitive ECG measurement system as the negative feedback",
"title": ""
},
{
"docid": "1ad5568fd516295e1726a6f5c0c7ff29",
"text": "Although animal flight has a history of 300 million years, serious thought about human flight has a history of a few hundred years, dating from Leonardo da Vinci, 1 and successful human flight has only been achieved during the last 110 years. This is summarized in the attached figures 7.1-7.4. To some extent, this parallels the history of computing. Serious thought about computing dates back to Pascal and Leibnitz. While there was a notable attempt by Babbage to build a working computer in the 19 th century, successful electronic computers were finally achieved in the 40s, almost exactly contemporaneously with the development of the first successful jet aircraft. The early history of computers is summarized in figures 7.5-7.8. Tables 7.1 and 7.2 summarize the more recent progress in the development of supercomputers and microprocessors. Although airplane design had reached quite an advanced level by the 30s, exemplified by aircraft such as the DC-3 (Douglas Commercial-3) and the Spitfire (figure 7.2), the design of high speed aircraft requires an entirely new level of sophistication. This has led to a fusion of engineering, mathematics and computing, as indicated in figure 7.9.",
"title": ""
},
{
"docid": "2b8cf99331158bd7aea2958b1b64f741",
"text": "Purpose – The purpose of this paper is to understand blog users’ negative emotional norm compliance decision-making in crises (blog users’-NNDC). Design/methodology/approach – A belief– desire–intention (BDI) model to evaluate the blog users’-NNDC (the BDI-NNDC model) was developed. This model was based on three social characteristics: self-interests, expectations and emotions. An experimental study was conducted to evaluate the efficiency of the BDI-NNDC model by using data retrieved from a popular Chinese social network called “Sina Weibo” about three major crises. Findings – The BDI-NNDC model strongly predicted the Blog users’-NNDC. The predictions were as follows: a self-interested blog user posted content that was targeting his own interests; a blogger with high expectations wrote and commented emotionally negative blogs on the condition that the numbers of negative posts increased, while he ignored the norm when there was relatively less negative emotional news; and an emotional blog user obeyed the norm based on the emotional intentions of the blogosphere in most of the cases. Research limitations/implications – The BDI-NNDC model can explain the diffusion of negative emotions by blog users during crises, and this paper shows a way to bridge the social norm modelling and the research of blog users’ activity and behaviour characteristics in the context of “real life” crises. However, the criterion for differentiating blog users according to social characteristics needs to be further revised, as the generalizability of the results is limited by the number of cases selected in this study. Practical implications – The current method could be applied to predict emotional trends of blog users who have different social characteristics and it could support government agencies to build strategic responses to crises. The authors thank Mr Jon Walker and Ms Celia Zazo Seco in this work for their dedication and time. 
This paper is supported by the Key project of National Social Science Foundation under contract No. 13&ZD174; National Natural Science Foundation of China under contract No. 71273132, 71303111, 71471089, 71403121, 71503124 and 71503126; National Social Science Foundation under contract No. 15BTQ063; “Fundamental Research Funds for the Central Universities”, No: 30920140111006; Jiangsu “Qinlan” project (2016); Priority Academic Program Development of Jiangsu Higher Education Institutions; and Hubei Collaborative Innovation Center for Early Warning and Emergency Response Research project under contract JD20150401. The current issue and full text archive of this journal is available on Emerald Insight at: www.emeraldinsight.com/0264-0473.htm",
"title": ""
},
{
"docid": "eccbc87e4b5ce2fe28308fd9f2a7baf3",
"text": "3",
"title": ""
},
{
"docid": "3d5eb503f837adffb4468548b3f76560",
"text": "Purpose This study investigates the impact of such contingency factors as top management support, business vision, and external expertise, on the one hand, and ERP system success, on the other. Design/methodology/approach A conceptual model was developed and relevant hypotheses formulated. Surveys were conducted in two Northern European countries and a structural equation modeling technique was used to analyze the data. Originality/value It is argued that ERP systems are different from other IT implementations; as such, there is a need to provide insights as to how the aforementioned factors play out in the context of ERP system success evaluations for adopting organizations. As was predicted, the results showed that the three contingency factors positively influence ERP system success. More importantly, the relative importance of quality external expertise over the other two factors for ERP initiatives was underscored. The implications of the findings for both practitioners and researchers are discussed.",
"title": ""
},
{
"docid": "fe24debb8aadf4e01e14679afc5249df",
"text": "Traditional machine learning techniques have shown promising results in automating the process of identifying useful information in crisis-related data posted through micro-blogging services such as Twitter. More recently, deep learning techniques have also shown promise in the area of disaster response. In this paper, we focus on understanding the e ectiveness of deep neural networks by comparison with the e ectiveness of standard classifiers that use carefully engineered features. Specifically, we design various feature sets (based on tweet content, user details and polarity clues) and use these feature sets individually or in various combinations, with Naïve Bayes classifiers. Furthermore, we develop neural models based on Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) with handcrafted architectures. We compare the two types of approaches in the context of identifying informative tweets posted during disasters, and show that the deep neural networks, in particular the CNN networks, are more e ective for the task considered.",
"title": ""
},
{
"docid": "2e31e38fe00d4de7897e544b9aeebd6e",
"text": "Many researchers have conceptualized smoking uptake behavior in adolescence as progressing through a sequence of developmental stages. Multiple social, psychological, and biological factors influence this process, and may play different functions at different points in the progression, and play different roles for different people. The major objective of this paper is to review empirical studies of predictors of transitions in stages of smoking progression, and identify similarities and differences related to predictors of stages and transitions across studies. While a number of factors related to stage of progression replicated across studies, few variables uniquely predicted a particular stage or transition in smoking behavior. Subsequently, theoretical considerations related to stage conceptualization and measurement, inter-individual differences in intra-individual change, and the staged or continuous nature of smoking progression are discussed.",
"title": ""
},
{
"docid": "9c98023ef208a8c15515bd46737b056e",
"text": "Web usage Mining is an area of web mining which deals with the extraction of interesting knowledge from logging information produced by web server. Different data mining techniques can be applied on web usage data to extract user access patterns and this knowledge can be used in variety of applications such as system improvement, web site modification, business intelligence etc. Web usage mining requires data abstraction for pattern discovery. This data abstraction is achieved through data preprocessing. In this paper we survey about the data preprocessing activities like data cleaning, data reduction and related algorithms.",
"title": ""
}
] |
scidocsrr
|
a534083a312c26decd7372dd878dbcf6
|
A 3D Dynamic Scene Analysis Framework for Development of Intelligent Transportation Systems
|
[
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
},
{
"docid": "f6647e82741dfe023ee5159bd6ac5be9",
"text": "3D scene understanding is important for robots to interact with the 3D world in a meaningful way. Most previous works on 3D scene understanding focus on recognizing geometrical or semantic properties of a scene independently. In this work, we introduce Data Associated Recurrent Neural Networks (DA-RNNs), a novel framework for joint 3D scene mapping and semantic labeling. DA-RNNs use a new recurrent neural network architecture for semantic labeling on RGB-D videos. The output of the network is integrated with mapping techniques such as KinectFusion in order to inject semantic information into the reconstructed 3D scene. Experiments conducted on real world and synthetic RGB-D videos demonstrate the superior performance of our method.",
"title": ""
}
] |
[
{
"docid": "691da5852aad20ace40be20bfeae3ea7",
"text": "Experimental manipulations of affect induced by a brief newspaper report of a tragic event produced a pervasive increase in subjects' estimates of the frequency of many risks and other undesirable events. Contrary to expectation, the effect was independent of the similarity between the report arid the estimated risk. An account of a fatal stabbing did not increase the frequency estimate of a closely related risk, homicide, more than the estimates of unrelated risks such as natural hazards. An account of a happy event that created positive affect produced a comparable global decrease in judged frequency of risks.",
"title": ""
},
{
"docid": "4eead577c1b3acee6c93a62aee8a6bb5",
"text": "The present study examined teacher attitudes toward dyslexia and the effects of these attitudes on teacher expectations and the academic achievement of students with dyslexia compared to students without learning disabilities. The attitudes of 30 regular education teachers toward dyslexia were determined using both an implicit measure and an explicit, self-report measure. Achievement scores for 307 students were also obtained. Implicit teacher attitudes toward dyslexia related to teacher ratings of student achievement on a writing task and also to student achievement on standardized tests of spelling but not math for those students with dyslexia. Self-reported attitudes of the teachers toward dyslexia did not relate to any of the outcome measures. Neither the implicit nor the explicit measures of teacher attitudes related to teacher expectations. The results show implicit attitude measures to be a more valuable predictor of the achievement of students with dyslexia than explicit, self-report attitude measures.",
"title": ""
},
{
"docid": "9f6429ac22b736bd988a4d6347d8475f",
"text": "The purpose of this paper is to defend the systematic introduction of formal ontological principles in the current practice of knowledge engineering, to explore the various relationships between ontology and knowledge representation, and to present the recent trends in this promising research area. According to the \"modelling view\" of knowledge acquisition proposed by Clancey, the modeling activity must establish a correspondence between a knowledge base and two separate subsystems: the agent's behavior (i.e. the problem-solving expertize) and its own environment (the problem domain). Current knowledge modelling methodologies tend to focus on the former subsystem only, viewing domain knowledge as strongly dependent on the particular task at hand: in fact, AI researchers seem to have been much more interested in the nature of reasoning rather than in the nature of the real world. Recently, however, the potential value of task-independent knowlege bases (or \"ontologies\") suitable to large scale integration has been underlined in many ways. In this paper, we compare the dichotomy between reasoning and representation to the philosophical distinction between epistemology and ontology. We introduce the notion of the ontological level, intermediate between the epistemological and the conceptual level discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives. We then discuss some formal ontological distinctions which may play an important role for such purpose.",
"title": ""
},
{
"docid": "6a1a9c6cb2da06ee246af79fdeedbed9",
"text": "The world has revolutionized and phased into a new era, an era which upholds the true essence of technology and digitalization. As the market has evolved at a staggering scale, it is must to exploit and inherit the advantages and opportunities, it provides. With the advent of web 2.0, considering the scalability and unbounded reach that it provides, it is detrimental for an organization to not to adopt the new techniques in the competitive stakes that this emerging virtual world has set along with its advantages. The transformed and highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users’ reviews and comments from micro-blogging sites regarding their services and products. This type of analysis makes those organizations capable to assess, what the consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of products and services. This study focuses on critical analysis of the literature from year 2012 to 2017 on sentiment analysis by using SVM (support vector machine). SVM is one of the widely used supervised machine learning techniques for text classification. This systematic review will serve the scholars and researchers to analyze the latest work of sentiment analysis with SVM as well as provide them a baseline for future trends and comparisons. Keywords—Sentiment analysis; polarity detection; machine learning; support vector machine (SVM); support vector machine; SLR; systematic literature review",
"title": ""
},
{
"docid": "089e1d2d96ae4ba94ac558b6cdccd510",
"text": "HTTP Streaming is a recent topic in multimedia communications with on-going standardization activities, especially with the MPEG DASH standard which covers on demand and live services. One of the main issues in live services deployment is the reduction of the overall latency. Low or very low latency streaming is still a challenge. In this paper, we push the use of DASH to its limits with regards to latency, down to fragments being only one frame, and evaluate the overhead introduced by that approach and the combination of: low latency video coding techniques, in particular Gradual Decoding Refresh; low latency HTTP streaming, in particular using chunked-transfer encoding; and associated ISOBMF packaging. We experiment DASH streaming using these techniques in local networks to measure the actual end-to-end latency, as low as 240 milliseconds, for an encoding and packaging overhead in the order of 13% for HD sequences and thus validate the feasibility of very low latency DASH live streaming in local networks.",
"title": ""
},
{
"docid": "e7c97ff0a949f70b79fb7d6dea057126",
"text": "Most conventional document categorization methods require a large number of documents with labeled categories for training. These methods are hard to apply in scenarios, such as scientific publications, where training data is expensive to obtain and categories could change over years and across domains. In this work, we propose UNEC, an unsupervised representation learning model that directly categorizes documents without the need for labeled training data. Specifically, we develop a novel cascade embedding approach. We first embed concepts, i.e., significant phrases mined from scientific publications, into continuous vectors, which capture concept semantics. Based on the concept similarity graph built from the concept embedding, we further embed concepts into a hidden category space, where the category information of concepts becomes explicit. Finally, we categorize documents by jointly considering the category attribution of their concepts. Our experimental results show that UNEC significantly outperforms several strong baselines on a number of real scientific corpora, under both automatic and manual evaluation.",
"title": ""
},
{
"docid": "aa55e655c7fa8c86d189d03c01d5db87",
"text": "Best practice reference models like COBIT, ITIL, and CMMI offer methodical support for the various tasks of IT management and IT governance. Observations reveal that the ways of using these models as well as the motivations and further aspects of their application differ significantly. Rather the models are used in individual ways due to individual interpretations. From an academic point of view we can state, that how these models are actually used as well as the motivations using them is not well understood. We develop a framework in order to structure different dimensions and modes of reference model application in practice. The development is based on expert interviews and a literature review. Hence we use design oriented and qualitative research methods to develop an artifact, a ‘framework of reference model application’. This framework development is the first step in a larger research program which combines different methods of research. The first goal is to deepen insight and improve understanding. In future research, the framework will be used to survey and analyze reference model application. The authors assume that “typical” application patterns exist beyond individual dimensions of application. The framework developed provides an opportunity of a systematically collection of data thereon. Furthermore, the so far limited knowledge of reference model application complicates their implementation as well as their use. Thus, detailed knowledge of different application patterns is required for effective support of enterprises using reference models. We assume that the deeper understanding of different patterns will support method development for implementation and use.",
"title": ""
},
{
"docid": "8b971925c3a9a70b6c3eaffedf5a3985",
"text": "We consider the NP-complete problem of finding an enclosing rectangle of minimum area that will contain a given a set of rectangles. We present two different constraintsatisfaction formulations of this problem. The first searches a space of absolute placements of rectangles in the enclosing rectangle, while the other searches a space of relative placements between pairs of rectangles. Both approaches dramatically outperform previous approaches to optimal rectangle packing. For problems where the rectangle dimensions have low precision, such as small integers, absolute placement is generally more efficient, whereas for rectangles with high-precision dimensions, relative placement will be more effective. In two sets of experiments, we find both the smallest rectangles and squares that can contain the set of squares of size 1 × 1, 2 × 2, . . . ,N × N , for N up to 27. In addition, we solve an open problem dating to 1966, concerning packing the set of consecutive squares up to 24 × 24 in a square of size 70 × 70. Finally, we find the smallest enclosing rectangles that can contain a set of unoriented rectangles of size 1 × 2, 2 × 3, 3 × 4, . . . ,N × (N + 1), for N up to 25.",
"title": ""
},
{
"docid": "7ca6ea8592c0bd3a31108221975f9470",
"text": "BACKGROUND\nThe dermoscopic patterns of pigmented skin tumors are influenced by the body site.\n\n\nOBJECTIVE\nTo evaluate the clinical and dermoscopic features associated with pigmented vulvar lesions.\n\n\nMETHODS\nRetrospective analysis of clinical and dermoscopic images of vulvar lesions. The χ² test was used to test the association between clinical data and histopathological diagnosis.\n\n\nRESULTS\nA total of 42 (32.8%) melanocytic and 86 (67.2%) nonmelanocytic vulvar lesions were analyzed. Nevi significantly prevailed in younger women compared with melanomas and melanosis and exhibited most commonly a globular/cobblestone (51.3%) and a mixed (21.6%) pattern. Dermoscopically all melanomas showed a multicomponent pattern. Melanotic macules showed clinical overlapping features with melanoma, but their dermoscopic patterns differed significantly from those observed in melanomas.\n\n\nCONCLUSION\nThe diagnosis and management of pigmented vulvar lesions should be based on a good clinicodermoscopic correlation. Dermoscopy may be helpful in the differentiation of solitary melanotic macules from early melanoma.",
"title": ""
},
{
"docid": "1a161ce6c138d5351378637c6d94d722",
"text": "The domain-general learning mechanisms elicited in incidental learning situations are of potential interest in many research fields, including language acquisition, object knowledge formation and motor learning. They have been the focus of studies on implicit learning for nearly 40 years. Stemming from a different research tradition, studies on statistical learning carried out in the past 10 years after the seminal studies by Saffran and collaborators, appear to be closely related, and the similarity between the two approaches is strengthened further by their recent evolution. However, implicit learning and statistical learning research favor different interpretations, focusing on the formation of chunks and statistical computations, respectively. We examine these differing approaches and suggest that this divergence opens up a major theoretical challenge for future studies.",
"title": ""
},
{
"docid": "756ea86702a4314fa211afb23c4c63ac",
"text": "The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.",
"title": ""
},
{
"docid": "5033cc81abffc2b5a10635e87b025991",
"text": "We describe the computing tasks involved in autonomous driving, examine existing autonomous driving computing platform implementations. To enable autonomous driving, the computing stack needs to simultaneously provide high performance, low power consumption, and low thermal dissipation, at low cost. We discuss possible approaches to design computing platforms that will meet these needs.",
"title": ""
},
{
"docid": "627aee14031293785224efdb7bac69f0",
"text": "Data on characteristics of metal-oxide surge arresters indicates that for fast front surges, those with rise times less than 8μs, the peak of the voltage wave occurs before the peak of the current wave and the residual voltage across the arrester increases as the time to crest of the arrester discharge current decreases. Several models have been proposed to simulate this frequency-dependent characteristic. These models differ in the calculation and adjustment of their parameters. In the present paper, a simulation of metal oxide surge arrester (MOSA) dynamic behavior during fast electromagnetic transients on power systems is done. Some models proposed in the literature are used. The simulations are performed with the Alternative Transients Program (ATP) version of Electromagnetic Transient Program (EMTP) to evaluate some metal oxide surge arrester models and verify their accuracy.",
"title": ""
},
{
"docid": "09538bc92c8bf9818bf84e44024f087c",
"text": "An up-to-date review paper on automotive sensors is presented. Attention is focused on sensors used in production automotive systems. The primary sensor technologies in use today are reviewed and are classified according to their three major areas of automotive systems application–powertrain, chassis, and body. This subject is extensive. As described in this paper, for use in automotive systems, there are six types of rotational motion sensors, four types of pressure sensors, five types of position sensors, and three types of temperature sensors. Additionally, two types of mass air flow sensors, five types of exhaust gas oxygen sensors, one type of engine knock sensor, four types of linear acceleration sensors, four types of angular-rate sensors, four types of occupant comfort/convenience sensors, two types of near-distance obstacle detection sensors, four types of far-distance obstacle detection sensors, and ten types of emerging, state-of-the-art sensor technologies are identified.",
"title": ""
},
{
"docid": "025076c60f680a6e7311f07b3027b13c",
"text": "The changing nature of warfare has seen a paradigm shift from the conventional to asymmetric, contactless warfare such as information and cyber warfare. Excessive dependence on information and communication technologies, cloud infrastructures, big data analytics, data-mining and automation in decision making poses grave threats to business and economy in adversarial environments. Adversarial machine learning is a fast growing area of research which studies the design of Machine Learning algorithms that are robust in adversarial environments. This paper presents a comprehensive survey of this emerging area and the various techniques of adversary modelling. We explore the threat models for Machine Learning systems and describe the various techniques to attack and defend them. We present privacy issues in these models and describe a cyber-warfare test-bed to test the effectiveness of the various attack-defence strategies and conclude with some open problems in this area of research.",
"title": ""
},
{
"docid": "83ec8e9791086bcb58427d43c6c777aa",
"text": "In this work we review the most important existing developments and future trends in the class of Parallel Genetic Algorithms (PGAs). PGAs are mainly subdivided into coarse and fine grain PGAs, the coarse grain models being the most popular ones. An exceptional characteristic of PGAs is that they are not just the parallel version of a sequential algorithm intended to provide speed gains. Instead, they represent a new kind of meta-heuristics of higher efficiency and efficacy thanks to their structured population and parallel execution. The good robustness of these algorithms on problems of high complexity has led to an increasing number of applications in the fields of artificial intelligence, numeric and combinatorial optimization, business, engineering, etc. We make a formalization of these algorithms, and present a timely and topic survey of their most important traditional and recent technical issues. Besides that, useful summaries on their main applications plus Internet pointers to important web sites are included in order to help new researchers to access this growing area.",
"title": ""
},
{
"docid": "5d1fbf1b9f0529652af8d28383ce9a34",
"text": "Automatic License Plate Recognition (ALPR) is one of the most prominent tools in intelligent transportation system applications. In ALPR algorithm implementation, License Plate Detection (LPD) is a critical stage. Despite much state-of-the-art research, factors such as low/high illumination, camera type, or differing License Plate (LP) styles keep the LPD step a challenging problem. In this paper, we propose a new style-free method based on the cross power spectrum. Our method has three steps: designing an adaptive binarized filter, filtering using the cross power spectrum, and verification. Experimental results show that the recognition accuracy of the proposed approach is 98% on 2241 Iranian car images including two LP styles. In addition, the plate detection process takes 44 milliseconds, which is suitable for real-time processing.",
"title": ""
},
{
"docid": "4f186e992cd7d5eadb2c34c0f26f4416",
"text": "Mobile devices, namely phones and tablets, have long gone \"smart\". Their growing use is both a cause and an effect of their technological advancement. Among the others, their increasing ability to store and exchange sensitive information, has caused interest in exploiting their vulnerabilities, and the opposite need to protect users and their data through secure protocols for access and identification on mobile platforms. Face and iris recognition are especially attractive, since they are sufficiently reliable, and just require the webcam normally equipping the involved devices. On the contrary, the alternative use of fingerprints requires a dedicated sensor. Moreover, some kinds of biometrics lend themselves to uses that go beyond security. Ambient intelligence services bound to the recognition of a user, as well as social applications, such as automatic photo tagging on social networks, can especially exploit face recognition. This paper describes FIRME (Face and Iris Recognition for Mobile Engagement) as a biometric application based on a multimodal recognition of face and iris, which is designed to be embedded in mobile devices. Both design and implementation of FIRME rely on a modular architecture, whose workflow includes separate and replaceable packages. The starting one handles image acquisition. From this point, different branches perform detection, segmentation, feature extraction, and matching for face and iris separately. As for face, an antispoofing step is also performed after segmentation. Finally, results from the two branches are fused. In order to address also security-critical applications, FIRME can perform continuous reidentification and best sample selection. To further address the possible limited resources of mobile devices, all algorithms are optimized to be low-demanding and computation-light. The term \"mobile\" referred to capture equipment for different kinds of signals, e.g. images, has been long used in many cases where field activities required special portability and flexibility. As an example we can mention mobile biometric identification devices used by the U.S. army for different kinds of security tasks. Due to the critical task involving them, such devices have to offer remarkable quality, in terms of resolution and quality of the acquired data. Notwithstanding this formerly consolidated reference for the term mobile, nowadays it most often refers to modern phones, tablets and similar smart devices, for which new and engaging applications are designed. For this reason, from now on, the term mobile will refer only to …",
"title": ""
},
{
"docid": "04647771810ac62b27ee8da12833a02d",
"text": "Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.",
"title": ""
},
{
"docid": "7abeef7b56ce98d6f96727ac60444bdb",
"text": "Up until recently, hypervisor-based virtualization platforms dominated the virtualization industry. However, container-based virtualization, an alternative to hypervisor-based virtualization, simplifies and speeds up the deployment of virtual entities. Relevant research has already shown that container-based virtualization performs equally well as or better than hypervisor-based virtualization in terms of performance in almost all cases. This research project investigates whether the power efficiency significantly differs on Xen, which is based on hypervisor virtualization, and Docker, which is based on container-based virtualization. The power efficiency is obtained by running synthetic applications and measuring the power usage on different hardware components. Rather than measuring the overall power of the system, or looking at empirical studies, hardware components such as CPU, memory and HDD will be measured internally by placing power sensors between the motherboard and circuits of each measured hardware component. This newly refined approach shows that both virtualization platforms behave roughly similarly in the IDLE state, when loading the memory, and when performing sequential writes for the HDD. Contrarily, the results of CPU and sequential HDD reads show differences between the two virtualization platforms, where the performance of Xen is significantly weaker in terms of power efficiency.",
"title": ""
}
] |
scidocsrr
|
06ea5f632bb004ca1efb4af2d4d5e884
|
Smart Augmentation Learning an Optimal Data Augmentation Strategy
|
[
{
"docid": "8d5dd3f590dee87ea609278df3572f6e",
"text": "In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine – synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This excess of data exposes new possibilities for word recognition models, and here we consider three models, each one “reading” words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs.",
"title": ""
}
] |
[
{
"docid": "b0840d44b7ec95922eeed4ef71b338f9",
"text": "Decoding DNA symbols using next-generation sequencers was a major breakthrough in genomic research. Despite the many advantages of next-generation sequencers, e.g., the high-throughput sequencing rate and relatively low cost of sequencing, the assembly of the reads produced by these sequencers still remains a major challenge. In this review, we address the basic framework of next-generation genome sequence assemblers, which comprises four basic stages: preprocessing filtering, a graph construction process, a graph simplification process, and postprocessing filtering. Here we discuss them as a framework of four stages for data analysis and processing and survey variety of techniques, algorithms, and software tools used during each stage. We also discuss the challenges that face current assemblers in the next-generation environment to determine the current state-of-the-art. We recommend a layered architecture approach for constructing a general assembler that can handle the sequences generated by different sequencing platforms.",
"title": ""
},
{
"docid": "a9de29e1d8062b4950e5ab3af6bea8df",
"text": "Asserts have long been a strongly recommended (if non-functional) adjunct to programs. They certainly don't add any user-evident feature value; and it can take quite some skill and effort to devise and add useful asserts. However, they are believed to add considerable value to the developer. Certainly, they can help with automated verification; but even in the absence of that, claimed advantages include improved understandability, maintainability, easier fault localization and diagnosis, all eventually leading to better software quality. We focus on this latter claim, and use a large dataset of asserts in C and C++ programs to explore the connection between asserts and defect occurrence. Our data suggests a connection: functions with asserts do have significantly fewer defects. This indicates that asserts do play an important role in software quality; we therefore explored further the factors that play a role in assertion placement: specifically, process factors (such as developer experience and ownership) and product factors, particularly interprocedural factors, exploring how the placement of assertions in functions are influenced by local and global network properties of the callgraph. Finally, we also conduct a differential analysis of assertion use across different application domains.",
"title": ""
},
{
"docid": "8bc221213edc863f8cba6f9f5d9a9be0",
"text": "Introduction The literature on business process re-engineering, benchmarking, continuous improvement and many other approaches of modern management is very abundant. One thing which is noticeable, however, is the growing usage of the word “process” in everyday business language. This suggests that most organizations adopt a process-based approach to managing their operations and that business process management (BPM) is a well-established concept. Is this really what takes place? On examination of the literature which refers to BPM, it soon emerged that the use of this concept is not really pervasive and what in fact has been acknowledged hitherto as prevalent business practice is no more than structural changes, the use of systems such as EN ISO 9000 and the management of individual projects.",
"title": ""
},
{
"docid": "d318f73ccfd1069acbf7e95596fb1028",
"text": "In this paper a novel application of multimodal emotion recognition algorithms in software engineering is described. Several application scenarios are proposed concerning program usability testing and software process improvement. Also a set of emotional states relevant in that application area is identified. The multimodal emotion recognition method that integrates video and depth channels, physiological signals and input devices usage patterns is proposed and some preliminary results on learning set creation are described.",
"title": ""
},
{
"docid": "f6677bda56105cfaa932cdfdace764eb",
"text": "We construct a segmentation scheme that combines top-down with bottom-up processing. In the proposed scheme, segmentation and recognition are intertwined rather than proceeding in a serial manner. The top-down part applies stored knowledge about object shapes acquired through learning, whereas the bottom-up part creates a hierarchy of segmented regions based on uniformity criteria. Beginning with unsegmented training examples of class and non-class images, the algorithm constructs a bank of class-specific fragments and determines their figure-ground segmentation. This bank is then used to segment novel images in a top-down manner: the fragments are first used to recognize images containing class objects, and then to create a complete cover that best approximates these objects. The resulting segmentation is then integrated with bottom-up multi-scale grouping to better delineate the object boundaries. Our experiments, applied to a large set of four classes (horses, pedestrians, cars, faces), demonstrate segmentation results that surpass those achieved by previous top-down or bottom-up schemes. The main novel aspects of this work are the fragment learning phase, which efficiently learns the figure-ground labeling of segmentation fragments, even in training sets with high object and background variability; combining the top-down segmentation with bottom-up criteria to draw on their relative merits; and the use of segmentation to improve recognition.",
"title": ""
},
{
"docid": "292fe6afb5cb4c6b2694033d57b9012a",
"text": "the goal of this paper is to survey access control models, protocols and frameworks in IoT. We provide a literature overview and discuss in a qualitative way the most relevant IoT related-projects over recent years.",
"title": ""
},
{
"docid": "944989a04f2053153c75bcc7533e4e93",
"text": "Occupational burnout can have serious implications for productivity, nurses' health, service usage, and health care costs. This study examined the effect of burnout on nurses' mental and physical health outcomes and job retention. Randomly selected Canadian nephrology nurses completed surveys consisting of the Maslach Burnout Inventory and the Pressure Management Indicator. The nurses also completed questions related to job retention. After controlling for age and years of nephrology nursing experience, the multivariate results demonstrated that almost 40% of mental health symptoms experienced by nephrology nurses could be explained by burnout and 27.5% of physical symptoms could be explained by burnout. Twenty-three per cent of the sample had plans to leave their current position and retention was significantly associated with burnout, mental, and physical symptoms. Organizational strategies aimed at reducing perceptions of burnout are important, as a means to keep nurses healthy and working to their fullest potential.",
"title": ""
},
{
"docid": "410d4b0eb8c60517506b0d451cf288ba",
"text": "Prepositional phrases (PPs) express crucial information that knowledge base construction methods need to extract. However, PPs are a major source of syntactic ambiguity and still pose problems in parsing. We present a method for resolving ambiguities arising from PPs, making extensive use of semantic knowledge from various resources. As training data, we use both labeled and unlabeled data, utilizing an expectation maximization algorithm for parameter estimation. Experiments show that our method yields improvements over existing methods including a state of the art dependency parser.",
"title": ""
},
{
"docid": "a4099a526548c6d00a91ea21b9f2291d",
"text": "The robust principal component analysis (robust PCA) problem has been considered in many machine learning applications, where the goal is to decompose the data matrix to a low rank part plus a sparse residual. While current approaches are developed by only considering the low rank plus sparse structure, in many applications, side information of row and/or column entities may also be given, and it is still unclear to what extent could such information help robust PCA. Thus, in this paper, we study the problem of robust PCA with side information, where both prior structure and features of entities are exploited for recovery. We propose a convex problem to incorporate side information in robust PCA and show that the low rank matrix can be exactly recovered via the proposed method under certain conditions. In particular, our guarantee suggests that a substantial amount of low rank matrices, which cannot be recovered by standard robust PCA, become recoverable by our proposed method. The result theoretically justifies the effectiveness of features in robust PCA. In addition, we conduct synthetic experiments as well as a real application on noisy image classification to show that our method also improves the performance in practice by exploiting side information.",
"title": ""
},
{
"docid": "d7624f0fe57b0022a81587b0f2edf755",
"text": "In a recent press release Joseph A. Califano, Jr., Chairman and President of the National Center on Addiction and Substance Abuse at Columbia University called for a major shift in American attitudes about substance abuse and addiction and a top to bottom overhaul in the nation's healthcare, criminal justice, social service, and education systems to curtail the rise in illegal drug use and other substance abuse. Califano, in 2005, also noted that while America has been congratulating itself on curbing increases in alcohol and illicit drug use and in the decline in teen smoking, abuse and addiction of controlled prescription drugs-opioids, central nervous system depressants and stimulants-have been stealthily, but sharply rising. All the statistics continue to show that prescription drug abuse is escalating with increasing emergency department visits and unintentional deaths due to prescription controlled substances. While the problem of drug prescriptions for controlled substances continues to soar, so do the arguments about undertreatment of pain. The present state of affairs shows that there were 6.4 million or 2.6% Americans using prescription-type psychotherapeutic drugs nonmedically in the past month. Of these, 4.7 million used pain relievers. Current nonmedical use of prescription-type drugs among young adults aged 18-25 increased from 5.4% in 2002 to 6.3% in 2005. In the past year, nonmedical use of psychotherapeutic drugs increased to 6.2% in the population of 12 years or older with 15.172 million persons, second only to marijuana use and three times the use of cocaine. Parallel to opioid supply and nonmedical prescription drug use, the epidemic of medical drug use is also escalating with Americans using 80% of the world's supply of all opioids and 99% of hydrocodone. Opioids are used extensively despite a lack of evidence of their effectiveness in improving pain or functional status with potential side effects of hyperalgesia, negative hormonal and immune effects, addiction and abuse. The multiple reasons for continued escalation of prescription drug abuse and overuse are lack of education among all segments including physicians, pharmacists, and the public; ineffective and incoherent prescription monitoring programs with lack of funding for a national prescription monitoring program NASPER; and a reactive approach on behalf of numerous agencies. This review focuses on the problem of prescription drug abuse with a discussion of facts and fallacies, along with proposed solutions.",
"title": ""
},
{
"docid": "cb7a9b816fc1b83670cb9fb377974e5d",
"text": "BACKGROUND\nCare attendants constitute the main workforce in nursing homes, but their heavy workload, low autonomy, and indefinite responsibility result in high levels of stress and may affect quality of care. However, few studies have focused of this problem.\n\n\nOBJECTIVES\nThe aim of this study was to examine work-related stress and associated factors that affect care attendants in nursing homes and to offer suggestions for how management can alleviate these problems in care facilities.\n\n\nMETHODS\nWe recruited participants from nine nursing homes with 50 or more beds located in middle Taiwan; 110 care attendants completed the questionnaire. The work stress scale for the care attendants was validated and achieved good reliability (Cronbach's alpha=0.93). We also conducted exploratory factor analysis.\n\n\nRESULTS\nSix factors were extracted from the work stress scale: insufficient ability, stressful reactions, heavy workload, trouble in care work, poor management, and working time problems. The explained variance achieved 64.96%. Factors related to higher work stress included working in a hospital-based nursing home, having a fixed schedule, night work, feeling burden, inconvenient facility, less enthusiasm, and self-rated higher stress.\n\n\nCONCLUSION\nWork stress for care attendants in nursing homes is related to human resource management and quality of care. We suggest potential management strategies to alleviate work stress for these workers.",
"title": ""
},
{
"docid": "22ddd01d6658567ef5417829ecfe1104",
"text": "Recently electrocorticography (ECoG) has emerged as a potential tool for Brain Computer Interfacing applications. In this paper, a continuous wavelet transform (CWT) based method is proposed for classifying ECoG motor imagery signals corresponding to left pinky and tongue movement. The total experiment is carried out with the publicly available benchmark BCI competition III, data set I. The L2 norms of the CWT coefficients obtained from ECoG signals are shown to be separable for the two classes of motor imagery signals. Then the L2 norm based features are subjected to principal component analysis, yielding a feature set with lower dimension. Among various types of classifiers used, support vector machine based classifiers have been shown to provide a good accuracy of 92% which is shown to be better than several existing techniques. In addition, unlike most of the existing methods, our proposed method involves no pre-processing and thus can have better potential for practical implementation while requiring much lower computational time in extracting the features.",
"title": ""
},
{
"docid": "685faa54046bcd70e21a7003cb1182e2",
"text": "We analyze to what extent the random SAT and Max-SAT problems differ in their properties. Our findings suggest that for random k-CNF with ratio in a certain range, Max-SAT can be solved by any SAT algorithm with subexponential slowdown, while for formulae with ratios greater than some constant, algorithms under the random walk framework require substantially different heuristics. In light of these results, we propose a novel probabilistic approach for random Max-SAT called ProMS. Experimental results illustrate that ProMS outperforms many state-of-the-art local search solvers on random Max-SAT benchmarks.",
"title": ""
},
{
"docid": "e493bbcf5f2b561757ca795ab6bb1099",
"text": "As a spatio-temporal data-management problem, taxi ridesharing has received a lot of attention recently in the database literature. The broader scientific community, and the commercial world have also addressed the issue through services such as UberPool and Lyftline. The issues addressed have been efficient matching of passengers and taxis, fares, and savings from ridesharing. However, ridesharing fairness has not been addressed so far. Ridesharing fairness is a new problem that we formally define in this paper. We also propose a method of combining the benefits of fair and optimal ridesharing, and of efficiently executing fair and optimal ridesharing queries.",
"title": ""
},
{
"docid": "322d23354a9bf45146e4cb7c733bf2ec",
"text": "In this chapter we consider the problem of automatic facial expression analysis. Our take on this is that the field has reached a point where it needs to move away from considering experiments and applications under in-the-lab conditions, and move towards so-called in-the-wild scenarios. We assume throughout this chapter that the aim is to develop technology that can be deployed in practical applications under unconstrained conditions. While some first efforts in this direction have been reported very recently, it is still unclear what the right path to achieving accurate, informative, robust, and real-time facial expression analysis will be. To illuminate the journey ahead, we first provide in Sec. 1 an overview of the existing theories and specific problem formulations considered within the computer vision community. Then we describe in Sec. 2 the standard algorithmic pipeline which is common to most facial expression analysis algorithms. We include suggestions as to which of the current algorithms and approaches are most suited to the scenario considered. In section 3 we describe our view of the remaining challenges, and the current opportunities within the field. This chapter is thus not intended as a review of different approaches, but rather a selection of what we believe are the most suitable state-of-the-art algorithms, and a selection of exemplars chosen to characterise a specific approach. We review in section 4 some of the exciting opportunities for the application of automatic facial expression analysis to everyday practical problems and current commercial applications being exploited. Section 5 ends the chapter by summarising the major conclusions drawn. Brais Martinez School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: brais.martinez@nottingham.ac.uk Michel F. Valstar School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: michel.valstar@nottingham.ac.uk",
"title": ""
},
{
"docid": "23e370c699f5bc7463eb8e401af47c50",
"text": "Periodicity mining is used for predicting trends in time series data. Discovering the rate at which the time series is periodic has always been an obstacle for fully automated periodicity mining. Existing periodicity mining algorithms assume that the periodicity, rate (or simply the period) is user-specified. This assumption is a considerable limitation, especially in time series data where the period is not known a priori. In this paper, we address the problem of detecting the periodicity rate of a time series database. Two types of periodicities are defined, and a scalable, computationally efficient algorithm is proposed for each type. The algorithms perform in O(n log n) time for a time series of length n. Moreover, the proposed algorithms are extended in order to discover the periodic patterns of unknown periods at the same time without affecting the time complexity. Experimental results show that the proposed algorithms are highly accurate with respect to the discovered periodicity rates and periodic patterns. Real-data experiments demonstrate the practicality of the discovered periodic patterns.",
"title": ""
},
{
"docid": "8bd510eecc82eee91ecd0b4650da28ed",
"text": "BACKGROUND AND OBJECTIVES\nLow-intensity laser therapy (LILT) has been studied in many fields of dentistry, but to our knowledge, this is the first time that its effects on orthodontic movement velocity in humans are investigated.\n\n\nSTUDY DESIGN/PATIENTS AND METHODS\nEleven patients were recruited for this 2-month study. One half of the upper arcade was considered control group (CG) and received mechanical activation of the canine teeth every 30 days. The opposite half received the same mechanical activation and was also irradiated with a diode laser emitting light at 780 nm, during 10 seconds at 20 mW, 5 J/cm2, on 4 days of each month. Data of the biometrical progress of both groups were statistically compared.\n\n\nRESULTS\nAll patients showed significant higher acceleration of the retraction of canines on the side treated with LILT when compared to the control.\n\n\nCONCLUSIONS\nOur findings suggest that LILT does accelerate human teeth movement and could therefore considerably shorten the whole treatment duration.",
"title": ""
},
{
"docid": "0a0038a5c68f0d93287dcece9581e570",
"text": "We use Multi-layer Perceptron and propose a hybrid model of fundamental and technical analysis by utilizing stock prices (from 2012–06 to 2017–12) and financial ratios of Technology companies listed on Nasdaq. Our model uses data discretization and feature selection preprocesses. The best results are obtained through topology optimizations using a three-hidden layer MLP. We examine the predictability of our hybrid model through a training/test split and cross-validation. It is found that the hybrid model successfully predicts the future stock movements. Our model results in the greatest average directional accuracy (65.87%) compared to the results obtained from the fundamental and technical analysis in isolation. The numerical results provide enough evidence to conclude that the market is not perfectly efficient.",
"title": ""
},
{
"docid": "736f8a02bbe5ab9a5b9dd5026430e05c",
"text": "We present a novel approach for interactive navigation and planning of multiple agents in crowded scenes with moving obstacles. Our formulation uses a precomputed roadmap that provides macroscopic, global connectivity for wayfinding and combines it with fast and localized navigation for each agent. At runtime, each agent senses the environment independently and computes a collision-free path based on an extended \"Velocity Obstacles\" concept. Furthermore, our algorithm ensures that each agent exhibits no oscillatory behaviors. We have tested the performance of our algorithm in several challenging scenarios with a high density of virtual agents. In practice, the algorithm performance scales almost linearly with the number of agents and can run at interactive rates on multi-core processors.",
"title": ""
},
{
"docid": "459dc066960760010b1157e4929d09f8",
"text": "A dynamical extension that makes possible the integration of a kinematic controller and a torque controller for nonholonomic mobile robots is presented. A combined kinematic/torque control law is developed using backstepping, and asymptotic stability is guaranteed by Lyapunov theory. Moreover, this control algorithm can be applied to the three basic nonholonomic navigation problems: tracking a reference trajectory, path following, and stabilization about a desired posture. The result is a general structure for controlling a mobile robot that can accommodate different control techniques, ranging from a conventional computed-torque controller, when all dynamics are known, to robust-adaptive controllers if this is not the case. A robust-adaptive controller based on neural networks (NNs) is proposed in this work. The NN controller can deal with unmodeled bounded disturbances and/or unstructured unmodeled dynamics in the vehicle. On-line NN weight tuning algorithms that do not require off-line learning yet guarantee small tracking errors and bounded control signals are utilized. 1997 John Wiley & Sons, Inc.",
"title": ""
}
] |
scidocsrr
|
ba11ed0846156748df082af835937cd6
|
Do GANs actually learn the distribution? An empirical study
|
[
{
"docid": "5365f6f5174c3d211ea562c8a7fa0aab",
"text": "Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the ba- sics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing discrimina- tor class with strong distinguishing power against the particular generator class (instead of against all possible generators.)",
"title": ""
},
{
"docid": "740a83306dddd3123a910acbbd01ff80",
"text": "We present a framework to understand GAN training as alternating density ratio estimation, and approximate divergence minimization. This provides an interpretation for the mismatched GAN generator and discriminator objectives often used in practice, and explains the problem of poor sample diversity. Further, we derive a family of generator objectives that target arbitrary f -divergences without minimizing a lower bound, and use them to train generative image models that target either improved sample quality or greater sample diversity.",
"title": ""
},
{
"docid": "6573629e918822c0928e8cf49f20752c",
"text": "The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks, and generative moment matching networks. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. The evaluation code is provided at https:// github.com/tonywu95/eval_gen. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.",
"title": ""
}
] |
[
{
"docid": "e9e11d96e26708c380362847094113db",
"text": "Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectral-overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to server channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed.",
"title": ""
},
{
"docid": "e830e918184c9d127058778306d7b7fe",
"text": "Tragically, an estimated 42,000 Americans died by suicide in 2015, each one deeply affecting friends and family. Very little data and information is available about people who attempt to take their life, and thus scientific exploration has been hampered. We examine data from Twitter users who have attempted to take their life and provide an exploratory analysis of patterns in language and emotions around their attempt. We also show differences between those who have attempted to take their life and matched controls. We find quantifiable signals of suicide attempts in the language of social media data and estimate performance of a simple machine learning classifier with these signals as a non-invasive analysis in a screening process.",
"title": ""
},
{
"docid": "eac2100a0fa189aecc148b70e113a0b0",
"text": "Zolt ́n Dörnyei Language Teaching / Volume 31 / Issue 03 / July 1998, pp 117 135 DOI: 10.1017/S026144480001315X, Published online: 12 June 2009 Link to this article: http://journals.cambridge.org/abstract_S026144480001315X How to cite this article: Zolt ́n Dörnyei (1998). Motivation in second and foreign language learning. Language Teaching, 31, pp 117135 doi:10.1017/S026144480001315X Request Permissions : Click here",
"title": ""
},
{
"docid": "11e2ec2aab62ba8380e82a18d3fcb3d8",
"text": "In this paper we describe our effort to create a dataset for the evaluation of cross-language textual similarity detection. We present preexisting corpora and their limits and we explain the various gathered resources to overcome these limits and build our enriched dataset. The proposed dataset is multilingual, includes cross-language alignment for different granularities (from chunk to document), is based on both parallel and comparable corpora and contains human and machine translated texts. Moreover, it includes texts written by multiple types of authors (from average to professionals). With the obtained dataset, we conduct a systematic and rigorous evaluation of several state-of-the-art cross-language textual similarity detection methods. The evaluation results are reviewed and discussed. Finally, dataset and scripts are made publicly available on GitHub: http://github.com/FerreroJeremy/Cross-Language-Dataset.",
"title": ""
},
{
"docid": "9bbc279974aaa899d12fee26948ce029",
"text": "Data-flow testing (DFT) is a family of testing strategies designed to verify the interactions between each program variable’s definition and its uses. Such a test objective of interest is referred to as a def-use pair. DFT selects test data with respect to various test adequacy criteria (i.e., data-flow coverage criteria) to exercise each pair. The original conception of DFT was introduced by Herman in 1976. Since then, a number of studies have been conducted, both theoretically and empirically, to analyze DFT’s complexity and effectiveness. In the past four decades, DFT has been continuously concerned, and various approaches from different aspects are proposed to pursue automatic and efficient data-flow testing. This survey presents a detailed overview of data-flow testing, including challenges and approaches in enforcing and automating it: (1) it introduces the data-flow analysis techniques that are used to identify def-use pairs; (2) it classifies and discusses techniques for data-flow-based test data generation, such as search-based testing, random testing, collateral-coverage-based testing, symbolic-execution-based testing, and model-checking-based testing; (3) it discusses techniques for tracking data-flow coverage; (4) it presents several DFT applications, including software fault localization, web security testing, and specification consistency checking; and (5) it summarizes recent advances and discusses future research directions toward more practical data-flow testing.",
"title": ""
},
{
"docid": "777d4e55f3f0bbb0544130931006b237",
"text": "Spatial pyramid matching is a standard architecture for categorical image retrieval. However, its performance is largely limited by the prespecified rectangular spatial regions when pooling local descriptors. In this paper, we propose to learn object-shaped and directional receptive fields for image categorization. In particular, different objects in an image are seamlessly constructed by superpixels, while the direction captures human gaze shifting path. By generating a number of superpixels in each image, we construct graphlets to describe different objects. They function as the object-shaped receptive fields for image comparison. Due to the huge number of graphlets in an image, a saliency-guided graphlet selection algorithm is proposed. A manifold embedding algorithm encodes graphlets with the semantics of training image tags. Then, we derive a manifold propagation to calculate the postembedding graphlets by leveraging visual saliency maps. The sequentially propagated graphlets constitute a path that mimics human gaze shifting. Finally, we use the learned graphlet path as receptive fields for local image descriptor pooling. The local descriptors from similar receptive fields of pairwise images more significantly contribute to the final image kernel. Thorough experiments demonstrate the advantage of our approach.",
"title": ""
},
{
"docid": "95d6189ba97f15c7cc33028f13f8789f",
"text": "This paper presents a new Bayesian nonnegative matrix factorization (NMF) for monaural source separation. Using this approach, the reconstruction error based on NMF is represented by a Poisson distribution, and the NMF parameters, consisting of the basis and weight matrices, are characterized by the exponential priors. A variational Bayesian inference procedure is developed to learn variational parameters and model parameters. The randomness in separation process is faithfully represented so that the system robustness to model variations in heterogeneous environments could be achieved. Importantly, the exponential prior parameters are used to impose sparseness in basis representation. The variational lower bound of log marginal likelihood is adopted as the objective to control model complexity. The dependencies of variational objective on model parameters are fully characterized in the derived closed-form solution. A clustering algorithm is performed to find the groups of bases for unsupervised source separation. The experiments on speech/music separation and singing voice separation show that the proposed Bayesian NMF (BNMF) with adaptive basis representation outperforms the NMF with fixed number of bases and the other BNMFs in terms of signal-to-distortion ratio and the global normalized source to distortion ratio.",
"title": ""
},
{
"docid": "aeb0b7d924713a49d649d86d115d83c4",
"text": "This paper describes the realization of a wireless oxygen saturation and heart rate system for patient monitoring in a limited area. The proposed system will allow the automatic remote monitoring in hospitals, at home, at work, in real time, of persons with chronic illness, of elderly people, and of those having high medical risk. The system can be used for long-time continuous patient monitoring, as medical assistance of a chronic condition, as part of a diagnostic procedure, or recovery from an acute event. The blood oxygen saturation level (SpO2) and heart rate (HR) are continuously measured using commercially available pulse oximeters and then transferred to a central monitoring station via a wireless sensor network (WSN). The central monitoring station runs a patient monitor application that receives the SpO2 and HR from WSN, processes these values and activates the alarms when the results exceed the preset limits. A user-friendly Graphical User Interface was developed for the patient monitor application to display the received measurements from all monitored patients. A prototype of the system has been developed, implemented and tested.",
"title": ""
},
{
"docid": "4608c8ca2cf58ca9388c25bb590a71df",
"text": "Life expectancy in most countries has been increasing continually over the several few decades thanks to significant improvements in medicine, public health, as well as personal and environmental hygiene. However, increased life expectancy combined with falling birth rates are expected to engender a large aging demographic in the near future that would impose significant burdens on the socio-economic structure of these countries. Therefore, it is essential to develop cost-effective, easy-to-use systems for the sake of elderly healthcare and well-being. Remote health monitoring, based on non-invasive and wearable sensors, actuators and modern communication and information technologies offers an efficient and cost-effective solution that allows the elderly to continue to live in their comfortable home environment instead of expensive healthcare facilities. These systems will also allow healthcare personnel to monitor important physiological signs of their patients in real time, assess health conditions and provide feedback from distant facilities. In this paper, we have presented and compared several low-cost and non-invasive health and activity monitoring systems that were reported in recent years. A survey on textile-based sensors that can potentially be used in wearable systems is also presented. Finally, compatibility of several communication technologies as well as future perspectives and research challenges in remote monitoring systems will be discussed.",
"title": ""
},
{
"docid": "b65633464301e43f16bd99341872d766",
"text": "Allen (2001) proposed the “Getting Things Done” (GTD) method for personal productivity enhancement, and reduction of the stress caused by information overload. This paper argues that recent insights in psychology and cognitive science support and extend GTD’s recommendations. We first summarize GTD with the help of a flowchart. We then review the theories of situated, embodied and distributed cognition that purport to explain how the brain processes information and plans actions in the real world. The conclusion is that the brain heavily relies on the environment, to function as an external memory, a trigger for actions, and a source of affordances, disturbances and feedback. We then show how these principles are practically implemented in GTD, with its focus on organizing tasks into “actionable” external memories, and on opportunistic, situation-dependent execution. Finally, we propose an extension of GTD to support collaborative work, inspired by the concept of stigmergy.",
"title": ""
},
{
"docid": "22418c06e09887d5994aee27ea05691d",
"text": "About a decade ago, psychology of the arts started to gain momentum owing to a number of drives: technological progress improved the conditions under which art could be studied in the laboratory, neuroscience discovered the arts as an area of interest, and new theories offered a more comprehensive look at aesthetic experiences. Ten years ago, Leder, Belke, Oeberst, and Augustin (2004) proposed a descriptive information-processing model of the components that integrate an aesthetic episode. This theory offered explanations for modern art's large number of individualized styles, innovativeness, and for the diverse aesthetic experiences it can stimulate. In addition, it described how information is processed over the time course of an aesthetic episode, within and over perceptual, cognitive and emotional components. Here, we review the current state of the model, and its relation to the major topics in empirical aesthetics today, including the nature of aesthetic emotions, the role of context, and the neural and evolutionary foundations of art and aesthetics.",
"title": ""
},
{
"docid": "d09dddd8a678370375c30dd14b3f2482",
"text": "Deep learning on graphs and in particular, graph convolutional neural networks, have recently attracted significant attention in the machine learning community. Many of such techniques explore the analogy between the graph Laplacian eigenvectors and the classical Fourier basis, allowing to formulate the convolution as a multiplication in the spectral domain. One of the key drawback of spectral CNNs is their explicit assumption of an undirected graph, leading to a symmetric Laplacian matrix with orthogonal eigendecomposition. In this work we propose MotifNet, a graph CNN capable of dealing with directed graphs by exploiting local graph motifs. We present experimental evidence showing the advantage of our approach on real data.",
"title": ""
},
{
"docid": "f33ca4cfba0aab107eb8bd6d3d041b74",
"text": "Deep neural networks (DNNs) require very large amounts of computation both for training and for inference when deployed in the field. A common approach to implementing DNNs is to recast the most computationally expensive operations as general matrix multiplication (GEMM). However, as we demonstrate in this paper, there are a great many different ways to express DNN convolution operations using GEMM. Although different approaches all perform the same number of operations, the size of temorary data structures differs significantly. Convolution of an input matrix with dimensions C × H × W , requires O(KCHW ) additional space using the classical im2col approach. More recently memory-efficient approaches requiring just O(KCHW ) auxiliary space have been proposed. We present two novel GEMM-based algorithms that require just O(MHW ) and O(KW ) additional space respectively, where M is the number of channels in the result of the convolution. These algorithms dramatically reduce the space overhead of DNN convolution, making it much more suitable for memory-limited embedded systems. Experimental evaluation shows that our lowmemory algorithms are just as fast as the best patch-building approaches despite requiring just a fraction of the amount of additional memory. Our low-memory algorithms have excellent data locality which gives them a further edge over patch-building algorithms when multiple cores are used. As a result, our low memory algorithms often outperform the best patch-building algorithms using multiple threads.",
"title": ""
},
{
"docid": "14360f8801fcff22b7a0059b322ebf9a",
"text": "Supplying realistically textured 3D city models at ground level promises to be useful for pre-visualizing upcoming traffic situations in car navigation systems. Because this pre-visualization can be rendered from the expected future viewpoints of the driver, the required maneuver will be more easily understandable. 3D city models can be reconstructed from the imagery recorded by surveying vehicles. The vastness of image material gathered by these vehicles, however, puts extreme demands on vision algorithms to ensure their practical usability. Algorithms need to be as fast as possible and should result in compact, memory efficient 3D city models for future ease of distribution and visualization. For the considered application, these are not contradictory demands. Simplified geometry assumptions can speed up vision algorithms while automatically guaranteeing compact geometry models. In this paper, we present a novel city modeling framework which builds upon this philosophy to create 3D content at high speed. Objects in the environment, such as cars and pedestrians, may however disturb the reconstruction, as they violate the simplified geometry assumptions, leading to visually unpleasant artifacts and degrading the visual realism of the resulting 3D city model. Unfortunately, such objects are prevalent in urban scenes. We therefore extend the reconstruction framework by integrating it with an object recognition module that automatically detects cars in the input video streams and localizes them in 3D. The two components of our system are tightly integrated and benefit from each other’s continuous input. 3D reconstruction delivers geometric scene context, which greatly helps improve detection precision. The detected car locations, on the other hand, are used to instantiate virtual placeholder models which augment the visual realism of the reconstructed city model.",
"title": ""
},
{
"docid": "955feaf32277aa431473554514e81b60",
"text": "This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.",
"title": ""
},
{
"docid": "16338883787b5a1ff4df2bb5e9d4f21a",
"text": "The next generations of large-scale data-centers and supercomputers demand optical interconnects to migrate to 400G and beyond. Microring modulators in silicon-photonics VLSI chips are promising devices to meet this demand due to their energy efficiency and compatibility with dense wavelength division multiplexed chip-to-chip optical I/O. Higher order pulse amplitude modulation (PAM) schemes can be exploited to mitigate their fundamental energy–bandwidth tradeoff at the system level for high data rates. In this paper, we propose an optical digital-to-analog converter based on a segmented microring resonator, capable of operating at 20 GS/s with improved linearity over conventional optical multi-level generators that can be used in a variety of applications such as optical arbitrary waveform generators and PAM transmitters. Using this technique, we demonstrate a PAM-4 transmitter that directly converts the digital data into optical levels in a commercially available 45-nm SOI CMOS process. We achieved 40-Gb/s PAM-4 transmission at 42-fJ/b modulator and driver energies, and 685-fJ/b total transmitter energy efficiency with an area bandwidth density of 0.67 Tb/s/mm2. The transmitter incorporates a thermal tuning feedback loop to address the thermal and process variations of microrings’ resonance wavelength. This scheme is suitable for system-on-chip applications with a large number of I/O links, such as switches and general-purpose and specialized processors in large-scale computing and storage systems.",
"title": ""
},
{
"docid": "dec066c088a120560a6814287a2be83a",
"text": "Type 2 diabetes mellitus (T2DM) is a chronic disease that usually results in multiple complications. Early identification of individuals at risk for complications after being diagnosed with T2DM is of significant clinical value. In this paper, we present a new data-driven predictive approach to predict when a patient will develop complications after the initial T2DM diagnosis. We propose a novel survival analysis method to model the time-to-event of T2DM complications designed to simultaneously achieve two important metrics: 1) accurate prediction of event times, and 2) good ranking of the relative risks of two patients. Moreover, to better capture the correlations of time-to-events of the multiple complications, we further develop a multi-task version of the survival model. To assess the performance of these approaches, we perform extensive experiments on patient level data extracted from a large electronic health record claims database. The results show that our new proposed survival analysis approach consistently outperforms traditional survival models and demonstrate the effectiveness of the multi-task framework over modeling each complication independently.",
"title": ""
},
{
"docid": "ef92244350e267d3b5b9251d496e0ee2",
"text": "A review of recent advances in power wafer level electronic packaging is presented based on the development of power device integration. The paper covers in more detail how advances in both semiconductor content and power advanced wafer level package design and materials have co-enabled significant advances in power device capability during recent years. Extrapolating the same trends in representative areas for the remainder of the decade serves to highlight where further improvement in materials and techniques can drive continued enhancements in usability, efficiency, reliability and overall cost of power semiconductor solutions. Along with next generation wafer level power packaging development, the role of modeling is a key to assure successful package design. An overview of the power package modeling is presented. Challenges of wafer level power semiconductor packaging and modeling in both next generation design and assembly processes are presented and discussed.",
"title": ""
},
{
"docid": "780095276d7ac3cae1b95b7a1ceee8b3",
"text": "This work presents a systematic study toward the design and first demonstration of high-performance n-type monolayer tungsten diselenide (WSe2) field effect transistors (FET) by selecting the contact metal based on understanding the physics of contact between metal and monolayer WSe2. Device measurements supported by ab initio density functional theory (DFT) calculations indicate that the d-orbitals of the contact metal play a key role in forming low resistance ohmic contacts with monolayer WSe2. On the basis of this understanding, indium (In) leads to small ohmic contact resistance with WSe2 and consequently, back-gated In-WSe2 FETs attained a record ON-current of 210 μA/μm, which is the highest value achieved in any monolayer transition-metal dichalcogenide (TMD) based FET to date. An electron mobility of 142 cm(2)/V·s (with an ON/OFF current ratio exceeding 10(6)) is also achieved with In-WSe2 FETs at room temperature. This is the highest electron mobility reported for any back gated monolayer TMD material to date. The performance of n-type monolayer WSe2 FET was further improved by Al2O3 deposition on top of WSe2 to suppress the Coulomb scattering. Under the high-κ dielectric environment, electron mobility of Ag-WSe2 FET reached ~202 cm(2)/V·s with an ON/OFF ratio of over 10(6) and a high ON-current of 205 μA/μm. In tandem with a recent report of p-type monolayer WSe2 FET (Fang, H. et al., Nano Lett. 2012, 12(7), 3788-3792), this demonstration of a high-performance n-type monolayer WSe2 FET corroborates the superb potential of WSe2 for complementary digital logic applications.",
"title": ""
},
{
"docid": "a57e470ad16c025f6b0aae99de25f498",
"text": "Purpose To establish the efficacy and safety of botulinum toxin in the treatment of Crocodile Tear Syndrome and record any possible complications.Methods Four patients with unilateral aberrant VII cranial nerve regeneration following an episode of facial paralysis consented to be included in this study after a comprehensive explanation of the procedure and possible complications was given. On average, an injection of 20 units of botulinum toxin type A (Dysport®) was given to the affected lacrimal gland. The effect was assessed with a Schirmer’s test during taste stimulation. Careful recording of the duration of the effect and the presence of any local or systemic complications was made.Results All patients reported a partial or complete disappearance of the reflex hyperlacrimation following treatment. Schirmer’s tests during taste stimulation documented a significant decrease in tear secretion. The onset of effect of the botulinum toxin was typically 24–48 h after the initial injection and lasted 4–5 months. One patient had a mild increase in his preexisting upper lid ptosis, but no other local or systemic side effects were experienced.Conclusions The injection of botulinum toxin type A into the affected lacrimal glands of patients with gusto-lacrimal reflex is a simple, effective and safe treatment.",
"title": ""
}
] |
scidocsrr
|
e495aa6f56788312437436107f900cc6
|
Image Based Characterization of Formal and Informal Neighborhoods in an Urban Landscape
|
[
{
"docid": "31122e142e02b7e3b99c52c8f257a92e",
"text": "Impervious surface has been recognized as a key indicator in assessing urban environments. However, accurate impervious surface extraction is still a challenge. Effectiveness of impervious surface in urban land-use classification has not been well addressed. This paper explored extraction of impervious surface information from Landsat Enhanced Thematic Mapper data based on the integration of fraction images from linear spectral mixture analysis and land surface temperature. A new approach for urban land-use classification, based on the combined use of impervious surface and population density, was developed. Five urban land-use classes (i.e., low-, medium-, high-, and very-high-intensity residential areas, and commercial/industrial/transportation uses) were developed in the city of Indianapolis, Indiana, USA. Results showed that the integration of fraction images and surface temperature provided a substantially improved impervious surface image. Accuracy assessment indicated that the root-mean-square error and system error yielded 9.22% and 5.68%, respectively, for the impervious surface image. The overall classification accuracy of 83.78% for five urban land-use classes was obtained.",
"title": ""
},
{
"docid": "b51d531c2ff106124f96a4287e466b90",
"text": "Detecting buildings from very high resolution (VHR) aerial and satellite images is extremely useful in map making, urban planning, and land use analysis. Although it is possible to manually locate buildings from these VHR images, this operation may not be robust and fast. Therefore, automated systems to detect buildings from VHR aerial and satellite images are needed. Unfortunately, such systems must cope with major problems. First, buildings have diverse characteristics, and their appearance (illumination, viewing angle, etc.) is uncontrolled in these images. Second, buildings in urban areas are generally dense and complex. It is hard to detect separate buildings from them. To overcome these difficulties, we propose a novel building detection method using local feature vectors and a probabilistic framework. We first introduce four different local feature vector extraction methods. Extracted local feature vectors serve as observations of the probability density function (pdf) to be estimated. Using a variable-kernel density estimation method, we estimate the corresponding pdf. In other words, we represent building locations (to be detected) in the image as joint random variables and estimate their pdf. Using the modes of the estimated density, as well as other probabilistic properties, we detect building locations in the image. We also introduce data and decision fusion methods based on our probabilistic framework to detect building locations. We pick certain crops of VHR panchromatic aerial and Ikonos satellite images to test our method. We assume that these crops are detected using our previous urban region detection method. Our test images are acquired by two different sensors, and they have different spatial resolutions. Also, buildings in these images have diverse characteristics. Therefore, we can test our methods on a diverse data set. Extensive tests indicate that our method can be used to automatically detect buildings in a robust and fast manner in Ikonos satellite and our aerial images.",
"title": ""
}
] |
[
{
"docid": "13bfce7105cab1e4ea01fe94d04bcb97",
"text": "Recent years have seen a steady rise in the incidence of cutaneous malignant melanoma worldwide. Although it is now appreciated that the key to understanding the process by which melanocytes are transformed into malignant melanoma lies in the interplay between genetic factors and the ultraviolet (UV) spectrum of sunlight, the nature of this relation has remained obscure. Recently, prospects for elucidating the molecular mechanisms underlying such gene–environment interactions have brightened considerably through the development of UV-responsive experimental animal models of melanoma. Genetically engineered mice and human skin xenografts constitute novel platforms upon which to build studies designed to elucidate the pathogenesis of UV-induced melanomagenesis. The future refinement of these in vivo models should provide a wealth of information on the cellular and genetic targets of UV, the pathways responsible for the repair of UV-induced DNA damage, and the molecular interactions between melanocytes and other skin cells in response to UV. It is anticipated that exploitation of these model systems will contribute significantly toward the development of effective approaches to the prevention and treatment of melanoma.",
"title": ""
},
{
"docid": "16e90e4dbf5597ce6721a6177344db15",
"text": "BACKGROUND\nScoping reviews are used to identify knowledge gaps, set research agendas, and identify implications for decision-making. The conduct and reporting of scoping reviews is inconsistent in the literature. We conducted a scoping review to identify: papers that utilized and/or described scoping review methods; guidelines for reporting scoping reviews; and studies that assessed the quality of reporting of scoping reviews.\n\n\nMETHODS\nWe searched nine electronic databases for published and unpublished literature scoping review papers, scoping review methodology, and reporting guidance for scoping reviews. Two independent reviewers screened citations for inclusion. Data abstraction was performed by one reviewer and verified by a second reviewer. Quantitative (e.g. frequencies of methods) and qualitative (i.e. content analysis of the methods) syntheses were conducted.\n\n\nRESULTS\nAfter searching 1525 citations and 874 full-text papers, 516 articles were included, of which 494 were scoping reviews. The 494 scoping reviews were disseminated between 1999 and 2014, with 45% published after 2012. Most of the scoping reviews were conducted in North America (53%) or Europe (38%), and reported a public source of funding (64%). The number of studies included in the scoping reviews ranged from 1 to 2600 (mean of 118). Using the Joanna Briggs Institute methodology guidance for scoping reviews, only 13% of the scoping reviews reported the use of a protocol, 36% used two reviewers for selecting citations for inclusion, 29% used two reviewers for full-text screening, 30% used two reviewers for data charting, and 43% used a pre-defined charting form. In most cases, the results of the scoping review were used to identify evidence gaps (85%), provide recommendations for future research (84%), or identify strengths and limitations (69%). We did not identify any guidelines for reporting scoping reviews or studies that assessed the quality of scoping review reporting.\n\n\nCONCLUSION\nThe number of scoping reviews conducted per year has steadily increased since 2012. Scoping reviews are used to inform research agendas and identify implications for policy or practice. As such, improvements in reporting and conduct are imperative. Further research on scoping review methodology is warranted, and in particular, there is need for a guideline to standardize reporting.",
"title": ""
},
{
"docid": "6b07c3fb97ab3a1001cf3753adb6754f",
"text": "Starting with the fact that school education has failed to become education for critical thinking and that one of the reasons for that could lie in how education for critical thinking is conceptualised, this paper presents: (1) an analysis of the predominant approach to education for critical thinking through the implementation of special programs and methods, and (2) an attempt to establish different approaches to education for critical thinking. The overview and analysis of understanding education for developing critical thinking as the implementation of special programs reveal that it is perceived as a decontextualised activity, reduced to practicing individual intellectual skills. Foundations for a different approach, which could be characterised as the ‘education for critical competencies’, are found in ideas of critical pedagogy and open curriculum theory. This approach differs from the predominant approach in terms of how the nature and purpose of critical thinking and education for critical thinking are understood. In the approach of education for critical competencies, it is not sufficient to introduce special programs and methods for the development of critical thinking to the existing educational system. This approach emphasises the need to question and reconstruct the status, role, and power of pupils and teachers in the teaching process, but also in the process of curriculum development.",
"title": ""
},
{
"docid": "bf98d3c8d9bea339fb057bc1c177e9e0",
"text": "Inactivation of parasites in food by microwave treatment may vary due to differences in the characteristics of microwave ovens and food properties. Microwave treatment in standard domestic ovens results in hot and cold spots, and the microwaves do not penetrate all areas of the samples depending on the thickness, which makes it difficult to compare microwave with conventional heat treatments. The viability of Anisakis simplex (isolated larvae and infected fish muscle) heated in a microwave oven with precise temperature control was compared with that of larvae heated in a water bath to investigate any additional effect of the microwaves. At a given temperature, less time was required to kill the larvae by microwaves than by heated water. Microwave treatment killed A. simplex larvae faster than did conventional cooking when the microwaves fully penetrated the samples and resulted in fewer changes in the fish muscle. However, the heat-stable allergen Ani s 4 was detected by immunohistochemistry in the fish muscle after both heat treatments, even at 70°C, suggesting that Ani s 4 allergens were released from the larvae into the surrounding tissue and that the tissues retained their allergenicity even after the larvae were killed by both heat treatments. Thus, microwave cooking will not render fish safe for individuals already sensitized to A. simplex heat-resistant allergens.",
"title": ""
},
{
"docid": "1563d5d8c9287c85f7de0844d4064d5a",
"text": "Herein, the design and manufacturing of an X-band pyramid horn antenna using a 3-D printer are studied together with its experimental results. X-band is used for military purposes and in marine and satellite technology, including geographic exploration. Horn antennas are especially preferable in these applications since they can easily be built in different types depending on their intended use and provide low voltage standing wave ratios. This work focuses on a pyramid horn antenna design and its manufacturing method with 3-D printer technology. The measurement results of the 3-D printed antenna are also compared with the simulation results.",
"title": ""
},
{
"docid": "8d6eeece3ef74afc5f33b984869d5a22",
"text": "In order to mitigate air pollution problems caused mainly by the excessive emission of carbon dioxide, in 2012, the South Korean government decided to introduce a renewable portfolio standards (RPS) program that requires electricity providers to gradually increase their production of renewable energy. In order to meet the government’s target through this RPS program, electricity providers in Korea have looked to various types of new and renewable energy resources, such as biomass, wind, and solar. Recently, floating photovoltaic (PV) systems have attracted increased interest in Korea as a desirable renewable energy alternative. This paper provides a discussion of recent research into floating PV systems and the installation of floating PV power plants in Korea from 2009 to 2014. To date, thirteen floating PV power plants have been installed in Korea, and several plans are underway by many different organizations, including government-funded companies, to install more floating PV power plants with various generation capacities. These building trends are expected to continue due to the Korean government’s RPS program.",
"title": ""
},
{
"docid": "99e89314a069a059e1f7214148b150e4",
"text": "Wegener’s granulomatosis (WG) is an autoimmune disease which particularly affects the upper respiratory pathways, lungs and kidney. Oral mucosal involvement presents in around 5%-10% of cases and may be the first disease symptom. Predominant manifestations are granulomatous gingivitis, erythematous papules, mucosal necrosis and non-specific ulcers with or without impact on adjacent structures. Clinically speaking, the most characteristic lesion presents as a gingival hyperplasia of the gum, with hyperaemia and petechiae on its surface which bleed when touched. Due to its appearance, it has been called ‘‘Strawberry gingiva’’. The following is a clinical case in which granulomatous strawberry gingivitis was the first sign of WG.",
"title": ""
},
{
"docid": "b5ac99810439becac6686e6cad6c0b2c",
"text": "The problem of detecting whether a test sample is from in-distribution (i.e., the training distribution of a classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, state-of-the-art deep neural networks are known to be highly overconfident in their predictions, i.e., they do not distinguish in- and out-of-distributions. Recently, to handle this issue, several threshold-based detectors have been proposed given pre-trained neural classifiers. However, the performance of prior works highly depends on how to train the classifiers since they only focus on improving inference procedures. In this paper, we develop a novel training method for classifiers so that such inference algorithms can work better. In particular, we suggest two additional terms added to the original loss (e.g., cross entropy). The first one forces the classifier to be less confident on samples from out-of-distribution, and the second one is for (implicitly) generating the most effective training samples for the first one. In essence, our method jointly trains both classification and generative neural networks for out-of-distribution. We demonstrate its effectiveness using deep convolutional neural networks on various popular image datasets.",
"title": ""
},
{
"docid": "fa7416bd48a3f4b5edbbcefadc74f72d",
"text": "This paper introduces a meaning representation for spoken language understanding. The Alexa meaning representation language (AMRL), unlike previous approaches, which factor spoken utterances into domains, provides a common representation for how people communicate in spoken language. AMRL is a rooted graph, links to a large-scale ontology, supports cross-domain queries, finegrained types, complex utterances and composition. A spoken language dataset has been collected for Alexa, which contains ∼ 20k examples across eight domains. A version of this meaning representation was released to developers at a trade show in 2016.",
"title": ""
},
{
"docid": "a00d4e095efe3b9af7d3488c698f8b35",
"text": "The eHealth trend has spread globally. Internet of Things (IoT) devices for medical service and pervasive Personal Health Information (PHI) systems play important roles in the eHealth environment. A cloud-based PHI system appears promising but raises privacy and information security concerns. We propose a cloud-based fine-grained health information access control framework for lightweight IoT devices with data dynamics auditing and attribute revocation functions. Only symmetric cryptography is required for IoT devices, such as wireless body sensors. A variant of ciphertext-policy attribute-based encryption, dual encryption, and Merkle hash trees are used to support fine-grained access control, efficient dynamic data auditing, batch auditing, and attribute revocation. Moreover, the proposed scheme also defines and handles the cloud reciprocity problem wherein cloud service providers can help each other avoid fines resulting from data loss. Security analysis and performance comparisons show that the proposed scheme is an excellent candidate for a cloud-based PHI system.",
"title": ""
},
{
"docid": "d196fad248811b1d3f7f8d4d11d3b83b",
"text": "Recent developments in telecommunications have allowed drawing new paradigms, including the Internet of Everything, to provide services by the interconnection of different physical devices enabling the exchange of data to enrich and automate people’s daily activities; and Fog computing, which is an extension of the well-known Cloud computing, bringing tasks to the edge of the network exploiting characteristics such as lower latency, mobility support, and location awareness. Combining these paradigms opens a new set of possibilities for innovative services and applications; however, it also brings a new complex scenario that must be efficiently managed to properly fulfill the needs of the users. In this scenario, the Fog Orchestrator component is the key to coordinate the services in the middle of Cloud computing and Internet of Everything. In this paper, key challenges in the development of the Fog Orchestrator to support the Internet of Everything are identified, including how they affect the tasks that a Fog service Orchestrator should perform. Furthermore, different service Orchestrator architectures for the Fog are explored and analyzed in order to identify how the previously listed challenges are being tackled. Finally, a discussion about the open challenges, technological directions, and future of the research on this subject is presented.",
"title": ""
},
{
"docid": "34d7f848427052a1fc5f565a24f628ec",
"text": "This is the solutions manual (web-edition) for the book Pattern Recognition and Machine Learning (PRML; published by Springer in 2006). It contains solutions to the www exercises. This release was created September 8, 2009. Future releases with corrections to errors will be published on the PRML web-site (see below). The authors would like to express their gratitude to the various people who have provided feedback on earlier releases of this document. In particular, the \"Bishop Reading Group\", held in the Visual Geometry Group at the University of Oxford, provided valuable comments and suggestions. The authors welcome all comments, questions and suggestions about the solutions as well as reports on (potential) errors in text or formulae in this document; please send any such feedback to",
"title": ""
},
{
"docid": "7c0b40fc536d22612e38a68b194f7784",
"text": "Clothing may both cause death and contribute to ongoing lethal mechanisms by a variety of quite disparate mechanisms. The manner of death may be accidental, suicidal, or homicidal. Accidental deaths include burning from clothing catching on fire, strangulation from clothing tangling in vehicle wheels or exposed machinery, and drowning. Entanglement of clothing in machinery may also result in significant injuries, which are not uncommon in farming communities. Excessive clothing, or its absence, may significantly alter body temperature, and hanging from clothing is a feature in the young or in mentally or physically handicapped adults, or in adults who are intoxicated with alcohol or drugs. In previous years, potentially lethal amounts of arsenic were present in clothing and accessories from dyes. Clothing may also be used to form nooses or to pad ropes in suicides and may be used in cases of strangulation, suffocation, or choking in homicides. The contribution of clothing to mortality has changed over the years with changes in fashions and in manufacturing techniques. Geographical differences in clothing-related deaths persist because of variable social and cultural practices and legislative frameworks.",
"title": ""
},
{
"docid": "6eca055c09966b85aca19012d9967ee0",
"text": "The Penn Treebank, in its eight years of operation (1989-1996), produced approximately 7 million words of part-of-speech tagged text, 3 million words of skeletally parsed text, over 2 million words of text parsed for predicateargument structure, and 1.6 million words of transcribed spoken text annotated for speech disfluencies. This paper describes the design of the three annotation schemes used by the Treebank: POS tagging, syntactic bracketing, and disfluency annotation and the methodology employed in production. All available Penn Treebank materials are distributed by the Linguistic Data Consortium http://www.ldc.upenn.edu.",
"title": ""
},
{
"docid": "42961b66e41a155edb74cc4ab5493c9c",
"text": "OBJECTIVE\nTo determine the preventive effect of manual lymph drainage on the development of lymphoedema related to breast cancer.\n\n\nDESIGN\nRandomised single blinded controlled trial.\n\n\nSETTING\nUniversity Hospitals Leuven, Leuven, Belgium.\n\n\nPARTICIPANTS\n160 consecutive patients with breast cancer and unilateral axillary lymph node dissection. The randomisation was stratified for body mass index (BMI) and axillary irradiation and treatment allocation was concealed. Randomisation was done independently from recruitment and treatment. Baseline characteristics were comparable between the groups.\n\n\nINTERVENTION\nFor six months the intervention group (n = 79) performed a treatment programme consisting of guidelines about the prevention of lymphoedema, exercise therapy, and manual lymph drainage. The control group (n = 81) performed the same programme without manual lymph drainage.\n\n\nMAIN OUTCOME MEASURES\nCumulative incidence of arm lymphoedema and time to develop arm lymphoedema, defined as an increase in arm volume of 200 mL or more over the value before surgery.\n\n\nRESULTS\nFour patients in the intervention group and two in the control group were lost to follow-up. At 12 months after surgery, the cumulative incidence rate for arm lymphoedema was comparable between the intervention group (24%) and control group (19%) (odds ratio 1.3, 95% confidence interval 0.6 to 2.9; P = 0.45). The time to develop arm lymphoedema was comparable between the two groups during the first year after surgery (hazard ratio 1.3, 0.6 to 2.5; P = 0.49). The sample size calculation was based on a presumed odds ratio of 0.3, which is not included in the 95% confidence interval. This odds ratio was calculated as (presumed cumulative incidence of lymphoedema in intervention group/presumed cumulative incidence of no lymphoedema in intervention group)×(presumed cumulative incidence of no lymphoedema in control group/presumed cumulative incidence of lymphoedema in control group) or (10/90)×(70/30).\n\n\nCONCLUSION\nManual lymph drainage in addition to guidelines and exercise therapy after axillary lymph node dissection for breast cancer is unlikely to have a medium to large effect in reducing the incidence of arm lymphoedema in the short term. Trial registration Netherlands Trial Register No NTR 1055.",
"title": ""
},
{
"docid": "cc45fefcf65e5ab30d5bb68d390beb4c",
"text": "In this paper, the basic running performance of the cylindrical tracked vehicle with sideways mobility is presented. The crawler mechanism is of circular cross-section and has active rolling axes at the center of the circles. Conventional crawler mechanisms can support massive loads, but cannot produce sideways motion. Additionally, previous crawler edges sink undesirably on soft ground, particularly when the vehicle body is subject to a sideways tilt. The proposed design solves these drawbacks by adopting a circular cross-section crawler. A prototype was developed. Basic motion experiments with it confirm the novel properties of this mechanism: sideways motion and robustness against edge-sink.",
"title": ""
},
{
"docid": "cf02d97cdcc1a4be51ed0af2af771b7d",
"text": "Bowen's disease is a squamous cell carcinoma in situ and has the potential to progress to a squamous cell carcinoma. The authors treated two female patients (a 39-year-old and a 41-year-old) with Bowen's disease in the vulva area using topical photodynamic therapy (PDT), involving the use of 5-aminolaevulinic acid and a light-emitting diode device. The light was administered at an intensity of 80 mW/cm(2) for a dose of 120 J/cm(2) biweekly for 6 cycles. The 39-year-old patient showed excellent clinical improvement, but the other patient achieved only a partial response. Even though one patient underwent a total excision 1 year later due to recurrence, both patients were satisfied with the cosmetic outcomes of this therapy and the partial improvement over time. The common side effect of PDT was a stinging sensation. PDT provides a relatively effective and useful alternative treatment for Bowen's disease in the vulva area.",
"title": ""
},
{
"docid": "03bd81d3c50b81c6cfbae847aa5611f6",
"text": "We present a fast, automatic method for accurately capturing full-body motion data using a single depth camera. At the core of our system lies a realtime registration process that accurately reconstructs 3D human poses from single monocular depth images, even in the case of significant occlusions. The idea is to formulate the registration problem in a Maximum A Posteriori (MAP) framework and iteratively register a 3D articulated human body model with monocular depth cues via linear system solvers. We integrate depth data, silhouette information, full-body geometry, temporal pose priors, and occlusion reasoning into a unified MAP estimation framework. Our 3D tracking process, however, requires manual initialization and recovery from failures. We address this challenge by combining 3D tracking with 3D pose detection. This combination not only automates the whole process but also significantly improves the robustness and accuracy of the system. Our whole algorithm is highly parallel and is therefore easily implemented on a GPU. We demonstrate the power of our approach by capturing a wide range of human movements in real time and achieve state-of-the-art accuracy in our comparison against alternative systems such as Kinect [2012].",
"title": ""
},
{
"docid": "4d45fa7a0ff9f4c0c15bf32dd05ac8a7",
"text": "This paper presents a sub-nanosecond pulse generator intended for a transmitter of through-the-wall surveillance radar. The basis of the generator is a step recovery diode, which is used to sharpen the slow rise time edge of an input driving waveform. A unique pulse shaping technique is then applied to form an ultra-wideband Gaussian pulse. A simple transistor switching circuit was used to drive this Gaussian pulser, which transforms a TTL trigger signal to a driving pulse with the timing and amplitude parameters required by the step recovery diode. The maximum pulse repetition frequency of the generator is 20 MHz. High amplitude pulses are advantageous for obtaining a good radar range, especially when penetrating thick lossy walls. In order to increase the output power of the transmitter, the outputs of two identical generators were connected in parallel. The measurement results are presented, which show waveforms of the generated Gaussian pulses approximately 180 ps in width and over 32 V in amplitude.",
"title": ""
},
{
"docid": "16fa705e1e7bb49cd841070c261bcf26",
"text": "With the exponential growth of cyber-physical systems (CPSs), new security challenges have emerged. Various vulnerabilities, threats, attacks, and controls have been introduced for the new generation of CPS. However, the literature lacks a systematic review of CPS security. In particular, the heterogeneity of CPS components and the diversity of CPS systems have made it difficult to study the problem with one generalized model. In this paper, we study and systematize existing research on CPS security under a unified framework. The framework consists of three orthogonal coordinates: 1) from the security perspective, we follow the well-known taxonomy of threats, vulnerabilities, attacks and controls; 2) from the CPS components perspective, we focus on cyber, physical, and cyber-physical components; and 3) from the CPS systems perspective, we explore general CPS features as well as representative systems (e.g., smart grids, medical CPS, and smart cars). The model can be both abstract to show general interactions of components in a CPS application, and specific to capture any details when needed. By doing so, we aim to build a model that is abstract enough to be applicable to various heterogeneous CPS applications; and to gain a modular view of the tightly coupled CPS components. Such abstract decoupling makes it possible to gain a systematic understanding of CPS security, and to highlight the potential sources of attacks and ways of protection. With this intensive literature review, we attempt to summarize the state-of-the-art on CPS security, provide researchers with a comprehensive list of references, and also encourage the audience to further explore this emerging field.",
"title": ""
}
] |
scidocsrr
|
9fd304dff1c99e997e465966b945772f
|
Multi-view Self-Paced Learning for Clustering
|
[
{
"docid": "32f0cc62e05f18e60f39d0c0595129e2",
"text": "Learning from multi-view data is important in many applications. In this paper, we propose a novel convex subspace representation learning method for unsupervised multi-view clustering. We first formulate the subspace learning with multiple views as a joint optimization problem with a common subspace representation matrix and a group sparsity inducing norm. By exploiting the properties of dual norms, we then show a convex min-max dual formulation with a sparsity inducing trace norm can be obtained. We develop a proximal bundle optimization algorithm to globally solve the min-max optimization problem. Our empirical study shows the proposed subspace representation learning method can effectively facilitate multi-view clustering and produce superior clustering results compared with alternative multi-view clustering methods.",
"title": ""
}
] |
[
{
"docid": "b8b16474ba00399b44b83a28893d5f71",
"text": "PURPOSE\nTo compare the aqueous humor levels of proinflammatory and angiogenic factors of diabetic patients with and without retinopathy.\n\n\nMETHODS\nAqueous humor was collected at the start of cataract surgery from diabetic subjects and non-diabetic controls. The presence and severity of diabetic retinopathy were graded with fundus examination. Levels of 22 different inflammatory and angiogenic cytokines and chemokines were compared.\n\n\nRESULTS\nAqueous humor samples from 47 diabetic patients (20 without retinopathy, 27 with retinopathy) and 24 non-diabetic controls were included. Interleukin (IL)-2, IL-10, IL-12, interferon-alpha (IFN-α), and tumor necrosis factor (TNF)-α were measurable in significantly fewer diabetic samples, and where measurable, were at lower levels than in non-diabetic controls. IL-6 was measurable in significantly more diabetic samples, and the median levels were significantly higher in the eyes with retinopathy than the eyes without retinopathy and the non-diabetic eyes. The vascular endothelial growth factor (VEGF) level was significantly higher in the diabetic eyes with and without retinopathy compared to the non-diabetic controls. The IL-6 level positively correlated with the monocyte chemotactic protein-1 (CCL2) and interleukin-8 (CXCL8) levels and negatively with the TNF-α level. The VEGF level negatively correlated with the IL-12 and TNF-α levels.\n\n\nCONCLUSIONS\nThe aqueous humor cytokine profile of diabetic patients without retinopathy was similar to that of patients with diabetic retinopathy. These cytokines may be useful biomarkers for early detection and prognosis of diabetic retinopathy. Compared to diabetic patients without retinopathy, only the IL-6 and VEGF levels were significantly higher in diabetic patients with retinopathy.",
"title": ""
},
{
"docid": "c194bbdafb4211129e2306aaad09280f",
"text": "Commercial software project managers design project organizational structure carefully, mindful of available skills, division of labour, geographical boundaries, etc. These organizational \"cathedrals\" are to be contrasted with the \"bazaar-like\" nature of Open Source Software (OSS) Projects, which have no pre-designed organizational structure. Any structure that exists is dynamic, self-organizing, latent, and usually not explicitly stated. Still, in large, complex, successful, OSS projects, we do expect that subcommunities will form spontaneously within the developer teams. Studying these subcommunities, and their behavior can shed light on how successful OSS projects self-organize. This phenomenon could well hold important lessons for how commercial software teams might be organized. Building on known well-established techniques for detecting community structure in complex networks, we extract and study latent subcommunities from the email social network of several projects: Apache HTTPD, Python, PostgresSQL, Perl, and Apache ANT. We then validate them with software development activity history. Our results show that subcommunities do indeed spontaneously arise within these projects as the projects evolve. These subcommunities manifest most strongly in technical discussions, and are significantly connected with collaboration behaviour.",
"title": ""
},
{
"docid": "a2d97c2b71e6424d3f458b7730be0c90",
"text": "Fault detection in solar photovoltaic (PV) arrays is an essential task for increasing reliability and safety in PV systems. Because of PV's nonlinear characteristics, a variety of faults may be difficult to detect by conventional protection devices, leading to safety issues and fire hazards in PV fields. To fill this protection gap, machine learning techniques have been proposed for fault detection based on measurements, such as PV array voltage, current, irradiance, and temperature. However, existing solutions usually use supervised learning models, which are trained by numerous labeled data (known as fault types) and therefore, have drawbacks: 1) the labeled PV data are difficult or expensive to obtain, 2) the trained model is not easy to update, and 3) the model is difficult to visualize. To solve these issues, this paper proposes a graph-based semi-supervised learning model only using a few labeled training data that are normalized for better visualization. The proposed model not only detects the fault, but also further identifies the possible fault type in order to expedite system recovery. Once the model is built, it can learn PV systems autonomously over time as weather changes. Both simulation and experimental results show the effective fault detection and classification of the proposed method.",
"title": ""
},
{
"docid": "cca61271fe31513cb90c2ac7ecb0b708",
"text": "This paper deals with the synthesis of a fuzzy state feedback controller for an induction motor with optimal performance. First, the Takagi-Sugeno (T-S) fuzzy model is employed to approximate a nonlinear system in the synchronous d-q frame rotating with the electromagnetic field orientation. Next, a fuzzy controller is designed to stabilise the induction motor and guarantee a minimum disturbance attenuation level for the closed-loop system. The gains of the fuzzy control are obtained by solving a set of Linear Matrix Inequalities (LMIs). Finally, simulation results are given to demonstrate the controller’s effectiveness. Keywords—Disturbance rejection, fuzzy modelling, open-loop control, fuzzy feedback controller, fuzzy observer, Linear Matrix Inequality (LMI)",
"title": ""
},
{
"docid": "643358b55155cab539188423c2b92713",
"text": "Recently, DevOps has emerged as an alternative for software organizations operating in a dynamic market to handle daily software demands. As claimed, it intends to make the software development and operations teams work collaboratively. However, it is hard to observe a shared understanding of DevOps, which potentially hinders the discussions in the literature and can confound observations when conducting empirical studies. Therefore, we performed a Multivocal Literature Review aiming at characterizing DevOps from multiple perspectives, including data sources from the technical and gray literature. Grounded Theory procedures were used to rigorously analyze the collected data. This allowed us to achieve a grounded definition of DevOps, as well as to identify its recurrent principles, practices, required skills, potential benefits, challenges, and what motivates organizations to adopt it. Finally, we understand that the DevOps movement has identified relevant issues in the state of the practice. However, we advocate for scientific investigations concerning the potential benefits and drawbacks of adopting the suggested principles and practices.",
"title": ""
},
{
"docid": "ed282d88b5f329490f390372c502f238",
"text": "Extracting opinion expressions from text is an essential task of sentiment analysis, which is usually treated as one of the word-level sequence labeling problems. In such problems, compositional models with multiplicative gating operations provide efficient ways to encode the contexts, as well as to choose critical information. Thus, in this paper, we adopt Long Short-Term Memory (LSTM) recurrent neural networks to address the task of opinion expression extraction and explore the internal mechanisms of the model. The proposed approach is evaluated on the Multi-Perspective Question Answering (MPQA) opinion corpus. The experimental results demonstrate improvement over previous approaches, including the state-of-the-art method based on simple recurrent neural networks. We also provide a novel micro perspective to analyze the run-time processes and gain new insights into the advantages of LSTM selecting the source of information with its flexible connections and multiplicative gating operations.",
"title": ""
},
{
"docid": "11e19b59fa2df88f3468b4e71aab8cf4",
"text": "Blockchain is a distributed timestamp server technology introduced for the realization of Bitcoin, a digital cash system. It has been attracting much attention, especially in the areas of financial and legal applications. But such applications would fail if they are designed without knowledge of the fundamental differences of blockchain from existing technology. We show that blockchain is a probabilistic state machine in which participants can never commit on decisions; we also show that this probabilistic nature is necessarily deduced from the condition where the number of participants remains unknown. This work provides useful abstractions to think about blockchain, and raises discussion for promoting the better use of the technology.",
"title": ""
},
{
"docid": "d5b5ffbee82463af0ab0dfe90dddbc1b",
"text": "Supporting decision makers requires a good understanding of the various elements that affect the outcomes of a decision. Decision Support Systems have provided decision makers with such insights throughout its history of usage with varying degrees of success. The availability of data sources was a main limitation to what decision support systems can do. Therefore, with the advent of improved analytical methods for Big data sources new opportunities have emerged that can possibly enhance how decision makers analyze their problem and arrive at decisions using information systems. This paper analyzed current related works on both Big data and decision support systems to identify clear elements and factors relevant to the subject and identifying possible ways to enhance their joint usage. Finally, the paper proposes a framework that integrates the key components needed to ensure the quality and relevance of data being analyzed by decision support systems while providing the benefits of insights generated over time from past decisions and positive",
"title": ""
},
{
"docid": "be3466a43f12f66b222ffdc60f71c6a0",
"text": "Clothing with conductive textiles for health care applications has attracted growing research interest over the last decade. An advantage of the technique is its suitability for distributed and home health care. The present study investigates the electrical properties of conductive yarns and textile electrodes in contact with human skin, thus representing a real ECG-registration situation. The yarn measurements showed a purely resistive characteristic proportional to length. The electrodes made of pure stainless steel (electrode A) and 20% stainless steel/80% polyester (electrode B) showed acceptable stability of electrode potentials; the stability of A was better than that of B. The electrode made of silver-plated copper (electrode C) was less stable. The electrode impedance was lower for electrodes A and B than for electrode C. From an electrical-properties point of view, we recommend electrodes of type A for intelligent textile medical applications.",
"title": ""
},
{
"docid": "1f105cc2459e1b64ffcb4d836d6e53f7",
"text": "In this paper we propose an approach to predict punctuation marks for unsegmented speech transcripts. The approach is purely lexical, with pre-trained word vectors as the only input. A Deep Neural Network (DNN) or Convolutional Neural Network (CNN) model is trained to classify whether a punctuation mark should be inserted after the third word of a 5-word sequence and which kind of punctuation mark the inserted one should be. TED talks within the IWSLT dataset are used in both the training and evaluation phases. The proposed approach shows its effectiveness by achieving better results than the state-of-the-art lexical solution which works with the same type of data, especially when predicting punctuation position only.",
"title": ""
},
{
"docid": "6f99c3fe7d99aa7f00a3e3eb8856db97",
"text": "The 3-D modeling technique presented in this paper, predicts, with high accuracy, electromagnetic fields and corresponding dynamic effects in conducting regions for rotating machines with slotless windings, e.g., self-supporting windings. The presented modeling approach can be applied to a wide variety of slotless winding configurations, including skewing and/or different winding shapes. It is capable to account for induced eddy currents in the conductive rotor parts, e.g., permanent-magnet (PM) eddy-current losses, albeit not iron, and winding ac losses. The specific focus of this paper is to provide the reader with the complete implementation and assumptions details of such a 3-D semianalytical approach, which allows model validations with relatively short calculation times. This model can be used to improve future design optimizations for machines with 3-D slotless windings. It has been applied, in this paper, to calculate fixed parameter Faulhaber, rhombic, and diamond slotless PM machines to illustrate accuracy and applicability.",
"title": ""
},
{
"docid": "7a6691ce9d93b42179cd2ce954aeb8c5",
"text": "In this paper, a new dance training system based on motion capture and virtual reality (VR) technologies is proposed. Our system is inspired by the traditional way to learn new movements: imitating the teacher's movements and listening to the teacher's feedback. A prototype of the proposed system is implemented, in which a student can imitate the motion demonstrated by a virtual teacher projected on a wall screen. Meanwhile, the student's motions are captured and analyzed by the system, based on which feedback is given to the student. The results of user studies showed that our system can successfully guide students to improve their skills. The subjects agreed that the system is interesting and can motivate them to learn.",
"title": ""
},
{
"docid": "72e0824602462a21781e9a881041e726",
"text": "In an effort to develop a genomics-based approach to the prediction of drug response, we have developed an algorithm for classification of cell line chemosensitivity based on gene expression profiles alone. Using oligonucleotide microarrays, the expression levels of 6,817 genes were measured in a panel of 60 human cancer cell lines (the NCI-60) for which the chemosensitivity profiles of thousands of chemical compounds have been determined. We sought to determine whether the gene expression signatures of untreated cells were sufficient for the prediction of chemosensitivity. Gene expression-based classifiers of sensitivity or resistance for 232 compounds were generated and then evaluated on independent sets of data. The classifiers were designed to be independent of the cells' tissue of origin. The accuracy of chemosensitivity prediction was considerably better than would be expected by chance. Eighty-eight of 232 expression-based classifiers performed accurately (with P < 0.05) on an independent test set, whereas only 12 of the 232 would be expected to do so by chance. These results suggest that at least for a subset of compounds genomic approaches to chemosensitivity prediction are feasible.",
"title": ""
},
{
"docid": "505aff71acf5469dc718b8168de3e311",
"text": "We propose two suffix array inspired full-text indexes. One, called SAhash, augments the suffix array with a hash table to speed up pattern searches due to significantly narrowed search interval before the binary search phase. The other, called FBCSA, is a compact data structure, similar to Mäkinen’s compact suffix array, but working on fixed sized blocks. Experiments on the Pizza & Chili 200MB datasets show that SA-hash is about 2–3 times faster in pattern searches (counts) than the standard suffix array, for the price of requiring 0.2n− 1.1n bytes of extra space, where n is the text length, and setting a minimum pattern length. FBCSA is relatively fast in single cell accesses (a few times faster than related indexes at about the same or better compression), but not competitive if many consecutive cells are to be extracted. Still, for the task of extracting, e.g., 10 successive cells its time-space relation remains attractive.",
"title": ""
},
{
"docid": "83ac82ef100fdf648a5214a50d163fe3",
"text": "We consider the problem of multi-robot task allocation when robots have to deal with uncertain utility estimates. Typically an allocation is performed to maximize expected utility; we consider a means for measuring the robustness of a given optimal allocation when robots have some measure of the uncertainty (e.g., a probability distribution, or moments of such distributions). We introduce a new O(n) algorithm, the Interval Hungarian algorithm, that extends the classic Kuhn-Munkres Hungarian algorithm to compute the maximum interval of deviation (for each entry in the assignment matrix) which will retain the same optimal assignment. This provides an efficient measurement of the tolerance of the allocation to the uncertainties, for both a specific interval and a set of interrelated intervals. We conduct experiments both in simulation and with physical robots to validate the approach and to gain insight into the effect of location uncertainty on allocations for multi-robot multi-target navigation tasks.",
"title": ""
},
{
"docid": "63c3e74f2d26dde9a0cdbd7161348197",
"text": "We assessed brain activation of nine normal right-handed volunteers in a positron emission tomography study designed to differentiate the functional anatomy of the two major components of auditory comprehension of language, namely phonological versus lexico-semantic processing. The activation paradigm included three tasks. In the reference task, subjects were asked to detect rising pitch within a series of pure tones. In the phonological task, they had to monitor the sequential phonemic organization of non-words. In the lexico-semantic task, they monitored concrete nouns according to semantic criteria. We found highly significant and different patterns of activation. Phonological processing was associated with activation in the left superior temporal gyrus (mainly Wernicke's area) and, to a lesser extent, in Broca's area and in the right superior temporal regions. Lexico-semantic processing was associated with activity in the left middle and inferior temporal gyri, the left inferior parietal region and the left superior prefrontal region, in addition to the superior temporal regions. A comparison of the pattern of activation obtained with the lexico-semantic task to that obtained with the phonological task was made in order to account for the contribution of lower stage components to semantic processing. No difference in activation was found in Broca's area and superior temporal areas which suggests that these areas are activated by the phonological component of both tasks, but activation was noted in the temporal, parietal and frontal multi-modal association areas. These constitute parts of a large network that represent the specific anatomic substrate of the lexico-semantic processing of language.",
"title": ""
},
{
"docid": "9b2f17d76fd0e44059d29083a931f2f1",
"text": "This paper presents a security system based on speaker identification. Mel Frequency Cepstral Coefficients (MFCCs) are used for feature extraction, and a vector quantization technique is used to minimize the amount of data to be handled.",
"title": ""
},
{
"docid": "1b581e17dad529b3452d3fbdcb1b3dd1",
"text": "Authorship attribution is the task of identifying the author of a given text. The main concern of this task is to define an appropriate characterization of documents that captures the writing style of authors. This paper proposes a new method for authorship attribution supported on the idea that a proper identification of authors must consider both stylistic and topic features of texts. This method characterizes documents by a set of word sequences that combine functional and content words. The experimental results on poem classification demonstrated that this method outperforms most current state-of-the-art approaches, and that it is appropriate to handle the attribution of short documents.",
"title": ""
},
{
"docid": "bc0530b0dc56b4e4b4186a11742c9b5b",
"text": "A dual-polarized aperture-coupled magneto-electric (ME) dipole antenna is proposed. Two separate substrate-integrated waveguides (SIWs) implemented in two printed circuit board (PCB) laminates are used to feed the antenna. The simulated -10-dB impedance bandwidth of the antenna is 21% together with an isolation of over 45 dB between the two input ports. Good radiation characteristics, including almost identical unidirectional radiation patterns in two orthogonal planes, frontto-back ratio larger than 20 dB, cross-polarization levels less than -23 dB, and a stable gain around 8 dBi over the operating band, are achieved. By employing the proposed radiating element, a 2 × 2 wideband antenna array working at the 60GHz band is designed, fabricated, and tested, which can generate two-dimensional (2-D) multiple beams with dual polarization. A measured -10 dB impedance bandwidth wider than 22% and a gain up to 12.5 dBi are obtained. Owing to the superiority of the ME dipole, the radiation pattern of the array is also stable over the operating frequencies and nearly identical in two orthogonal planes for both of the polarizations. With advantages of desirable performance, convenience of fabrication and integration, and low cost, the proposed antenna and array are attractive for millimeter-wave wireless communication systems.",
"title": ""
},
{
"docid": "4b57b59f475a643b281a1ee5e49c87bd",
"text": "In this paper we present a Model Predictive Control (MPC) approach for combined braking and steering systems in autonomous vehicles. We start from the result presented in (Borrelli et al. (2005)) and (Falcone et al. (2007a)), where a Model Predictive Controller (MPC) for autonomous steering systems has been presented. As in (Borrelli et al. (2005)) and (Falcone et al. (2007a)) we formulate an MPC control problem in order to stabilize a vehicle along a desired path. In the present paper, the control objective is to best follow a given path by controlling the front steering angle and the brakes at the four wheels independently, while fulfilling various physical and design constraints.",
"title": ""
}
] |
scidocsrr
|
48deb853e042f16ae886e24b7ce5692d
|
Designing location-based mobile games with a purpose: collecting geospatial data with CityExplorer
|
[
{
"docid": "393d3f3061940f98e5f3e4ed919f7f6d",
"text": "Through online games, people can collectively solve large-scale computational problems. Each year, people around the world spend billions of hours playing computer games. What if all this time and energy could be channeled into useful work? What if people playing computer games could, without consciously doing so, simultaneously solve large-scale problems? Despite colossal advances over the past 50 years, computers still don't possess the basic conceptual intelligence or perceptual capabilities that most humans take for granted. If we treat human brains as processors in a distributed system, each can perform a small part of a massive computation. Such a \"human computation\" paradigm has enormous potential to address problems that computers can't yet tackle on their own and eventually teach computers many of these human talents. Unlike computer processors, humans require some incentive to become part of a collective computation. Online games are a seductive method for encouraging people to participate in the process. Such games constitute a general mechanism for using brain power to solve open problems. In fact, designing such a game is much like designing an algorithm—it must be proven correct, its efficiency can be analyzed, a more efficient version can supersede a less efficient one, and so on. Instead of using a silicon processor, these \"algorithms\" run on a processor consisting of ordinary humans interacting with computers over the Internet. \"Games with a purpose\" have a vast range of applications in areas as diverse as security, computer vision, Internet accessibility, adult content filtering, and Internet search. Two such games under development at Carnegie Mellon University, the ESP Game and Peekaboom, demonstrate how humans, as they play, can solve problems that computers can't yet solve. Several important online applications such as search engines and accessibility programs for the visually impaired require accurate image descriptions. However, there are no guidelines about providing appropriate textual descriptions for the millions of images on the Web, and computer vision can't yet accurately determine their content. Current techniques used to categorize images for these applications are inadequate, largely because they assume that image content on a Web page is related to adjacent text. Unfortunately, the text near an image is often scarce or misleading and can be hard to process. Manual labeling is traditionally the only method for obtaining precise image descriptions, but this tedious and labor-intensive process is extremely costly. The ESP Game …",
"title": ""
}
] |
[
{
"docid": "ebbc824d48e27bce8a49aecc83ff11fa",
"text": "This work studies the semantic segmentation of 3D LiDAR data in dynamic scenes for autonomous driving applications. A system of semantic segmentation using 3D LiDAR data, including range image segmentation, sample generation, inter-frame data association, track-level annotation and semisupervised learning, is developed. To reduce the considerable requirement of fine annotations, a CNN-based classifier is trained by considering both supervised samples with manually labeled object classes and pairwise constraints, where a data sample is composed of a segment as the foreground and neighborhood points as the background. A special loss function is designed to account for both annotations and constraints, where the constraint data are encouraged to be assigned to the same semantic class. A dataset containing 1838 frames of LiDAR data, 39934 pairwise constraints and 57927 human annotations is developed. The performance of the method is examined extensively. Qualitative and quantitative experiments show that the combination of a few annotations and large amount of constraint data significantly enhances the effectiveness and scene adaptability, resulting in greater than 10% improvement.",
"title": ""
},
{
"docid": "f2ce4c6d0dfa59cfe600171a122cdc94",
"text": "We describe the methodology that we followed to automatically extract topics corresponding to known events provided by the SNOW 2014 challenge in the context of the SocialSensor project. A data crawling tool and selected filtering terms were provided to all the teams. The crawled data was to be divided in 96 (15-minute) timeslots spanning a 24 hour period and participants were asked to produce a fixed number of topics for the selected timeslots. Our preliminary results are obtained using a methodology that pulls strengths from several machine learning techniques, including Latent Dirichlet Allocation (LDA) for topic modeling and Non-negative Matrix Factorization (NMF) for automated hashtag annotation and for mapping the topics into a latent space where they become less fragmented and can be better related with one another. In addition, we obtain improved topic quality when sentiment detection is performed to partition the tweets based on polarity, prior to topic modeling. (Copyright © by the paper’s authors. Copying permitted only for private and academic purposes. In: S. Papadopoulos, D. Corney, L. Aiello (eds.): Proceedings of the SNOW 2014 Data Challenge, Seoul, Korea, 08-04-2014, published at http://ceur-ws.org)",
"title": ""
},
{
"docid": "025e76755193277b2ea55d06d4f22d03",
"text": "Bioprinting technology shows potential in tissue engineering for the fabrication of scaffolds, cells, tissues and organs reproducibly and with high accuracy. Bioprinting technologies are mainly divided into three categories, inkjet-based bioprinting, pressure-assisted bioprinting and laser-assisted bioprinting, based on their underlying printing principles. These various printing technologies have their advantages and limitations. Bioprinting utilizes biomaterials, cells or cell factors as a “bioink” to fabricate prospective tissue structures. Biomaterial parameters such as biocompatibility, cell viability and the cellular microenvironment strongly influence the printed product. Various printing technologies have been investigated, and great progress has been made in printing various types of tissue, including vasculature, heart, bone, cartilage, skin and liver. This review introduces basic principles and key aspects of some frequently used printing technologies. We focus on recent advances in three-dimensional printing applications, current challenges and future directions.",
"title": ""
},
{
"docid": "bb617f8cccfe47dc3b5fa10326393bc9",
"text": "In the past decade, the availability of powerful molecular techniques has accelerated the pace of discovery of several new primary immunodeficiencies (PIDs) and revealed the biologic basis of other established PIDs. These genetic advances, in turn, have facilitated more precise phenotyping of associated skin and systemic manifestations and provide a unique opportunity to better understand the complex human immunologic response. These continuing medical education articles will provide an update of recent advances in PIDs that may be encountered by dermatologists through their association with eczematous dermatitis, infectious, and non-infectious cutaneous manifestations. Part I will discuss new primary immunodeficiencies that have an eczematous dermatitis. Part II will focus on primary immunodeficiencies that greatly increase susceptibility to fungal infection and the noninfectious presentations of PIDs.",
"title": ""
},
{
"docid": "537d6fdfb26e552fb3254addfbb6ac49",
"text": "We propose a unified framework for building unsupervised representations of entities and their compositions, by viewing each entity as a histogram (or distribution) over its contexts. This enables us to take advantage of optimal transport and construct representations that effectively harness the geometry of the underlying space containing the contexts. Our method captures uncertainty via modelling the entities as distributions and simultaneously provides interpretability with the optimal transport map, hence giving a novel perspective for building rich and powerful feature representations. As a guiding example, we formulate unsupervised representations for text, and demonstrate it on tasks such as sentence similarity and word entailment detection. Empirical results show strong advantages gained through the proposed framework. This approach can potentially be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data. The key tools at the core of this framework are Wasserstein distances and Wasserstein barycenters, hence raising the question from our title.",
"title": ""
},
{
"docid": "1043fd2e3eb677a768e922f5daf5a5d0",
"text": "A transformer magnetizing current offset for a phase-shift full-bridge (PSFB) converter is dealt in this paper. A model of this current offset is derived and it is presented as a first order system having a pole at a low frequency when the effects from the parasitic components and the switching transition are considered. A digital offset compensator eliminating this current offset is proposed and designed considering the interference in an output voltage regulation. The performances of the proposed compensator are verified by experiments with a 1.2kW PSFB converter. The saturation of the transformer is prevented by this compensator.",
"title": ""
},
{
"docid": "0da4b25ce3d4449147f7258d0189165f",
"text": "We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1% without a dictionary or an external language model and 10.3% with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0% on the same set.",
"title": ""
},
{
"docid": "16426be05f066e805e48a49a82e80e2e",
"text": "Ontologies have been developed and used by several researchers in different knowledge domains aiming to ease the structuring and management of knowledge, and to create a unique standard to represent concepts of such a knowledge domain. Considering the computer security domain, several tools can be used to manage and store security information. These tools generate a great amount of security alerts, which are stored in different formats. This lack of standard and the amount of data make the tasks of the security administrators even harder, because they have to understand, using their tacit knowledge, different security alerts to make correlation and solve security problems. Aiming to assist the administrators in executing these tasks efficiently, this paper presents the main features of the computer security incident ontology developed to model, using a unique standard, the concepts of the security incident domain, and how the ontology has been evaluated.",
"title": ""
},
{
"docid": "578973539dbc323f812ecaf1bb57400f",
"text": "In light of the Office of the Secretary Defense’s Roadmap for unmanned aircraft systems (UASs), there is a critical need for research examining human interaction with heterogeneous unmanned vehicles. The OSD Roadmap clearly delineates the need to investigate the “appropriate conditions and requirements under which a single pilot would be allowed to control multiple airborne UA [unmanned aircraft] simultaneously”. Towards this end, in this paper, we provide a meta-analysis of research studies across unmanned aerial and ground vehicle domains that investigated single operator control of multiple vehicles. As a result, a hierarchical control model for single operator control of multiple unmanned vehicles (UV) is proposed that demonstrates those requirements that will need to be met for operator cognitive support of multiple UV control, with an emphasis on the introduction of higher levels of autonomy. The challenge in achieving effective management of multiple UV systems in the future is not only to determine if automation can be used to improve human and system performance, but how and to what degree across hierarchical control loops, as well as determining the types of decision support that will be needed by operators given the high workload environment. We address when and how increasing levels of automation should be incorporated in multiple UV systems and discuss the impact on not only human performance, but more importantly, system performance.",
"title": ""
},
{
"docid": "38fccb4ef1b53ccc8464beaf74db2b4b",
"text": "The novel concept of total generalized variation of a function u is introduced and some of its essential properties are proved. Differently from the bounded variation semi-norm, the new concept involves higher order derivatives of u. Numerical examples illustrate the high quality of this functional as a regularization term for mathematical imaging problems. In particular this functional selectively regularizes on different regularity levels and does not lead to a staircasing effect.",
"title": ""
},
{
"docid": "fde101a0604eaa703979c56aa3ab8e93",
"text": "Community Question Answering (cQA) forums have become a popular medium for soliciting direct answers to specific questions of users from experts or other experienced users on a given topic. However, for a given question, users sometimes have to sift through a large number of low-quality or irrelevant answers to find out the answer which satisfies their information need. To alleviate this, the problem of Answer Quality Prediction (AQP) aims to predict the quality of an answer posted in response to a forum question. Current AQP systems either learn models using a) various hand-crafted features (HCF) or b) use deep learning (DL) techniques which automatically learn the required feature representations. In this paper, we propose a novel approach for AQP known as -“Deep Feature Fusion Network (DFFN)”which leverages the advantages of both hand-crafted features and deep learning based systems. Given a question-answer pair along with its metadata, DFFN independently a) learns deep features using a Convolutional Neural Network (CNN) and b) computes hand-crafted features using various external resources and then combines them using a deep neural network trained to predict the final answer quality. DFFN achieves stateof-the-art performance on the standard SemEval-2015 and SemEval-2016 benchmark datasets and outperforms baseline approaches which individually employ either HCF or DL based techniques alone.",
"title": ""
},
{
"docid": "e864bccfa711a5e773390524cd826808",
"text": "Semantic similarity measures estimate the similarity between concepts, and play an important role in many text processing tasks. Approaches to semantic similarity in the biomedical domain can be roughly divided into knowledge based and distributional based methods. Knowledge based approaches utilize knowledge sources such as dictionaries, taxonomies, and semantic networks, and include path finding measures and intrinsic information content (IC) measures. Distributional measures utilize, in addition to a knowledge source, the distribution of concepts within a corpus to compute similarity; these include corpus IC and context vector methods. Prior evaluations of these measures in the biomedical domain showed that distributional measures outperform knowledge based path finding methods; but more recent studies suggested that intrinsic IC based measures exceed the accuracy of distributional approaches. Limitations of previous evaluations of similarity measures in the biomedical domain include their focus on the SNOMED CT ontology, and their reliance on small benchmarks not powered to detect significant differences between measure accuracy. There have been few evaluations of the relative performance of these measures on other biomedical knowledge sources such as the UMLS, and on larger, recently developed semantic similarity benchmarks. We evaluated knowledge based and corpus IC based semantic similarity measures derived from SNOMED CT, MeSH, and the UMLS on recently developed semantic similarity benchmarks. Semantic similarity measures based on the UMLS, which contains SNOMED CT and MeSH, significantly outperformed those based solely on SNOMED CT or MeSH across evaluations. Intrinsic IC based measures significantly outperformed path-based and distributional measures. We released all code required to reproduce our results and all tools developed as part of this study as open source, available under http://code.google.com/p/ytex . We provide a publicly-accessible web service to compute semantic similarity, available under http://informatics.med.yale.edu/ytex.web/ . Knowledge based semantic similarity measures are more practical to compute than distributional measures, as they do not require an external corpus. Furthermore, knowledge based measures significantly and meaningfully outperformed distributional measures on large semantic similarity benchmarks, suggesting that they are a practical alternative to distributional measures. Future evaluations of semantic similarity measures should utilize benchmarks powered to detect significant differences in measure accuracy.",
"title": ""
},
{
"docid": "396c9da61a3f7c21544278e0396eb689",
"text": "There are several challenges in down-sizing robots for transportation deployment, diversification of locomotion capabilities tuned for various terrains, and rapid and on-demand manufacturing. In this paper we propose an origami-inspired method of addressing these key issues by designing and manufacturing a foldable, deployable, and self-righting version of the origami robot Tribot. Our latest Tribot prototype can jump as high as 215 mm, five times its height, and roll consecutively on any of its edges with an average step size of 55 mm. The 4 g robot self-deploys nine times of its size when released. A compliant roll cage ensures that the robot self-rights onto two legs after jumping or being deployed and also protects the robot from impacts. A description of our prototype and its design, locomotion modes, and fabrication is followed by demonstrations of its key features.",
"title": ""
},
{
"docid": "0d2e5667545ebc9380416f9f625dd836",
"text": "New developments in assistive technology are likely to make an important contribution to the care of elderly people in institutions and at home. Video-monitoring, remote health monitoring, electronic sensors and equipment such as fall detectors, door monitors, bed alerts, pressure mats and smoke and heat alarms can improve older people's safety, security and ability to cope at home. Care at home is often preferable to patients and is usually less expensive for care providers than institutional alternatives.",
"title": ""
},
{
"docid": "eb8d681fcfd5b18c15dd09738ab4717c",
"text": "Building a dialogue agent to fulfill complex tasks, such as travel planning, is challenging because the agent has to learn to collectively complete multiple subtasks. For example, the agent needs to reserve a hotel and book a flight so that there leaves enough time for commute between arrival and hotel check-in. This paper addresses this challenge by formulating the task in the mathematical framework of options over Markov Decision Processes (MDPs), and proposing a hierarchical deep reinforcement learning approach to learning a dialogue manager that operates at different temporal scales. The dialogue manager consists of (1) a top-level dialogue policy that selects among subtasks or options, (2) a low-level dialogue policy that selects primitive actions to complete the subtask given by the top-level policy, and (3) a global state tracker that helps ensure all cross-subtask constraints be satisfied. Experiments on a travel planning task with simulated and real users show that our approach leads to significant improvements over two baselines, one based on handcrafted rules and the other based on flat deep reinforcement learning.",
"title": ""
},
{
"docid": "c8ebf32413410a5d91defbb19a73b6f3",
"text": "BACKGROUND\nAudit and feedback is widely used as a strategy to improve professional practice either on its own or as a component of multifaceted quality improvement interventions. This is based on the belief that healthcare professionals are prompted to modify their practice when given performance feedback showing that their clinical practice is inconsistent with a desirable target. Despite its prevalence as a quality improvement strategy, there remains uncertainty regarding both the effectiveness of audit and feedback in improving healthcare practice and the characteristics of audit and feedback that lead to greater impact.\n\n\nOBJECTIVES\nTo assess the effects of audit and feedback on the practice of healthcare professionals and patient outcomes and to examine factors that may explain variation in the effectiveness of audit and feedback.\n\n\nSEARCH METHODS\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) 2010, Issue 4, part of The Cochrane Library. www.thecochranelibrary.com, including the Cochrane Effective Practice and Organisation of Care (EPOC) Group Specialised Register (searched 10 December 2010); MEDLINE, Ovid (1950 to November Week 3 2010) (searched 09 December 2010); EMBASE, Ovid (1980 to 2010 Week 48) (searched 09 December 2010); CINAHL, Ebsco (1981 to present) (searched 10 December 2010); Science Citation Index and Social Sciences Citation Index, ISI Web of Science (1975 to present) (searched 12-15 September 2011).\n\n\nSELECTION CRITERIA\nRandomised trials of audit and feedback (defined as a summary of clinical performance over a specified period of time) that reported objectively measured health professional practice or patient outcomes. In the case of multifaceted interventions, only trials in which audit and feedback was considered the core, essential aspect of at least one intervention arm were included.\n\n\nDATA COLLECTION AND ANALYSIS\nAll data were abstracted by two independent review authors. For the primary outcome(s) in each study, we calculated the median absolute risk difference (RD) (adjusted for baseline performance) of compliance with desired practice compliance for dichotomous outcomes and the median percent change relative to the control group for continuous outcomes. Across studies the median effect size was weighted by number of health professionals involved in each study. We investigated the following factors as possible explanations for the variation in the effectiveness of interventions across comparisons: format of feedback, source of feedback, frequency of feedback, instructions for improvement, direction of change required, baseline performance, profession of recipient, and risk of bias within the trial itself. We also conducted exploratory analyses to assess the role of context and the targeted clinical behaviour. Quantitative (meta-regression), visual, and qualitative analyses were undertaken to examine variation in effect size related to these factors.\n\n\nMAIN RESULTS\nWe included and analysed 140 studies for this review. In the main analyses, a total of 108 comparisons from 70 studies compared any intervention in which audit and feedback was a core, essential component to usual care and evaluated effects on professional practice. After excluding studies at high risk of bias, there were 82 comparisons from 49 studies featuring dichotomous outcomes, and the weighted median adjusted RD was a 4.3% (interquartile range (IQR) 0.5% to 16%) absolute increase in healthcare professionals' compliance with desired practice. Across 26 comparisons from 21 studies with continuous outcomes, the weighted median adjusted percent change relative to control was 1.3% (IQR = 1.3% to 28.9%). For patient outcomes, the weighted median RD was -0.4% (IQR -1.3% to 1.6%) for 12 comparisons from six studies reporting dichotomous outcomes and the weighted median percentage change was 17% (IQR 1.5% to 17%) for eight comparisons from five studies reporting continuous outcomes. Multivariable meta-regression indicated that feedback may be more effective when baseline performance is low, the source is a supervisor or colleague, it is provided more than once, it is delivered in both verbal and written formats, and when it includes both explicit targets and an action plan. In addition, the effect size varied based on the clinical behaviour targeted by the intervention.\n\n\nAUTHORS' CONCLUSIONS\nAudit and feedback generally leads to small but potentially important improvements in professional practice. The effectiveness of audit and feedback seems to depend on baseline performance and how the feedback is provided. Future studies of audit and feedback should directly compare different ways of providing feedback.",
"title": ""
},
{
"docid": "215ccfeaf75d443e8eb6ead8172c9b92",
"text": "Maximum Margin Matrix Factorization (MMMF) was recently suggested (Srebro et al., 2005) as a convex, infinite dimensional alternative to low-rank approximations and standard factor models. MMMF can be formulated as a semi-definite programming (SDP) and learned using standard SDP solvers. However, current SDP solvers can only handle MMMF problems on matrices of dimensionality up to a few hundred. Here, we investigate a direct gradient-based optimization method for MMMF and demonstrate it on large collaborative prediction problems. We compare against results obtained by Marlin (2004) and find that MMMF substantially outperforms all nine methods he tested.",
"title": ""
},
{
"docid": "0c12178e7c7d5c66343bb5a152b42fca",
"text": "This study was a randomized controlled trial to investigate the effect of treating women with stress or mixed urinary incontinence (SUI or MUI) by diaphragmatic, deep abdominal and pelvic floor muscle (PFM) retraining. Seventy women were randomly allocated to the training (n = 35) or control group (n = 35). Women in the training group received 8 individual clinical visits and followed a specific exercise program. Women in the control group performed self-monitored PFM exercises at home. The primary outcome measure was self-reported improvement. Secondary outcome measures were 20-min pad test, 3-day voiding diary, maximal vaginal squeeze pressure, holding time and quality of life. After a 4-month intervention period, more participants in the training group reported that they were cured or improved (p < 0.01). The cure/improved rate was above 90%. Both amount of leakage and number of leaks were significantly lower in the training group (p < 0.05) but not in the control group. More aspects of quality of life improved significantly in the training group than in the control group. Maximal vaginal squeeze pressure, however, decreased slightly in both groups. Coordinated retraining diaphragmatic, deep abdominal and PFM function could improve symptoms and quality of life. It may be an alternative management for women with SUI or MUI.",
"title": ""
},
{
"docid": "9dbea5d01d446bd829085e445f11c5a7",
"text": "We present the results of a large-scale, end-to-end human evaluation of various sentiment summarization models. The evaluation shows that users have a strong preference for summarizers that model sentiment over non-sentiment baselines, but have no broad overall preference between any of the sentiment-based models. However, an analysis of the human judgments suggests that there are identifiable situations where one summarizer is generally preferred over the others. We exploit this fact to build a new summarizer by training a ranking SVM model over the set of human preference judgments that were collected during the evaluation, which results in a 30% relative reduction in error over the previous best summarizer.",
"title": ""
},
{
"docid": "46e81dc6b3b32f61471b91f71672a80f",
"text": "The sparsity of images in a fixed analytic transform domain or dictionary such as DCT or Wavelets has been exploited in many applications in image processing including image compression. Recently, synthesis sparsifying dictionaries that are directly adapted to the data have become popular in image processing. However, the idea of learning sparsifying transforms has received only little attention. We propose a novel problem formulation for learning doubly sparse transforms for signals or image patches. These transforms are a product of a fixed, fast analytic transform such as the DCT, and an adaptive matrix constrained to be sparse. Such transforms can be learnt, stored, and implemented efficiently. We show the superior promise of our approach as compared to analytical sparsifying transforms such as DCT for image representation.",
"title": ""
}
] |
scidocsrr
|
08aeb2a2dc07039ba142f4c1b55ed35b
|
Toward Personalized Relational Learning
|
[
{
"docid": "d6be66d70d9df15cdc14a0a20edc71b3",
"text": "Multi-label learning has been extensively studied in the area of bioinformatics, information retrieval, multimedia annotation, etc. In multi-label learning, each instance is associated with multiple interdependent class labels, the label information can be noisy and incomplete. In addition, multi-labeled data often has noisy, irrelevant and redundant features of high dimensionality. As an effective data preprocessing step, feature selection has shown its effectiveness to prepare high-dimensional data for numerous data mining and machine learning tasks. Most of existing multi-label feature selection algorithms either boil down to solving multiple singlelabeled feature selection problems or directly make use of imperfect labels. Therefore, they may not be able to find discriminative features that are shared by multiple labels. In this paper, we propose a novel multi-label informed feature selection framework MIFS, which exploits label correlations to select discriminative features across multiple labels. Specifically, to reduce the negative effects of imperfect label information in finding label correlations, we decompose the multi-label information into a low-dimensional space and then employ the reduced space to steer the feature selection process. Empirical studies on real-world datasets demonstrate the effectiveness and efficiency of the proposed framework.",
"title": ""
},
{
"docid": "e83b5781771cd5638c37dd5d90e7bc1e",
"text": "The explosive growth of social media sites brings about massive amounts of high-dimensional data. Feature selection is effective in preparing high-dimensional data for data analytics. The characteristics of social media present novel challenges for feature selection. First, social media data is not fully structured and its features are usually not predefined, but are generated dynamically. For example, in Twitter, slang words (features) are created everyday and quickly become popular within a short period of time. It is hard to directly apply traditional batch-mode feature selection methods to find such features. Second, given the nature of social media, label information is costly to collect. It exacerbates the problem of feature selection without knowing feature relevance. On the other hand, opportunities are also unequivocally present with additional data sources; for example, link information is ubiquitous in social media and could be helpful in selecting relevant features. In this paper, we study a novel problem to conduct unsupervised streaming feature selection for social media data. We investigate how to exploit link information in streaming feature selection, resulting in a novel unsupervised streaming feature selection framework USFS. Experimental results on two real-world social media datasets show the effectiveness and efficiency of the proposed framework comparing with the state-of-the-art unsupervised feature selection algorithms.",
"title": ""
}
] |
[
{
"docid": "3faf7b3909cadabd78face44d4dc28bd",
"text": "Very high resolution (VHR) remote sensing imagery has been used for land cover classification, and it tends to a transition from land-use classification to pixel-level semantic segmentation. Inspired by the recent success of deep learning and the filter method in computer vision, this work provides a segmentation model, which designs an image segmentation neural network based on the deep residual networks and uses a guided filter to extract buildings in remote sensing imagery. Our method includes the following steps: first, the VHR remote sensing imagery is preprocessed and some hand-crafted features are calculated. Second, a designed deep network architecture is trained with the urban district remote sensing image to extract buildings at the pixel level. Third, a guided filter is employed to optimize the classification map produced by deep learning; at the same time, some salt-and-pepper noise is removed. Experimental results based on the Vaihingen and Potsdam datasets demonstrate that our method, which benefits from neural networks and guided filtering, achieves a higher overall accuracy when compared with other machine learning and deep learning methods. The method proposed shows outstanding performance in terms of the building extraction from diversified objects in the urban district.",
"title": ""
},
{
"docid": "28d16f96ee1b7789666352f48876fbc4",
"text": "The non-data components of a visualization, such as axes and legends, can often be just as important as the data itself. They provide contextual information essential to interpreting the data. In this paper, we describe an automated system for choosing positions and labels for axis tick marks. Our system extends Wilkinson's optimization-based labeling approach to create a more robust, full-featured axis labeler. We define an expanded space of axis labelings by automatically generating additional nice numbers as needed and by permitting the extreme labels to occur inside the data range. These changes provide flexibility in problematic cases, without degrading quality elsewhere. We also propose an additional optimization criterion, legibility, which allows us to simultaneously optimize over label formatting, font size, and orientation. To solve this revised optimization problem, we describe the optimization function and an efficient search algorithm. Finally, we compare our method to previous work using both quantitative and qualitative metrics. This paper is a good example of how ideas from automated graphic design can be applied to information visualization.",
"title": ""
},
{
"docid": "272d169020eda0983de52b88c9186501",
"text": "Personas are user models that represent the user characteristics. In this paper we describe a Persona creation process which combines the quantitative method such as cluster analysis with qualitative method such as observation and interview to produce convincing and representative Personas. We illustrate the Personas creation process through a case study. We use cluster analysis to group the users by their similarities in goals and decision-making preference.",
"title": ""
},
{
"docid": "2e16758c0f55cd44b88c18b8948ec1cb",
"text": "We introduce a new approach to intrinsic image decomposition, the task of decomposing a single image into albedo and shading components. Our strategy, which we term direct intrinsics, is to learn a convolutional neural network (CNN) that directly predicts output albedo and shading channels from an input RGB image patch. Direct intrinsics is a departure from classical techniques for intrinsic image decomposition, which typically rely on physically-motivated priors and graph-based inference algorithms. The large-scale synthetic ground-truth of the MPI Sintel dataset plays the key role in training direct intrinsics. We demonstrate results on both the synthetic images of Sintel and the real images of the classic MIT intrinsic image dataset. On Sintel, direct intrinsics, using only RGB input, outperforms all prior work, including methods that rely on RGB+Depth input. Direct intrinsics also generalizes across modalities, our Sintel-trained CNN produces quite reasonable decompositions on the real images of the MIT dataset. Our results indicate that the marriage of CNNs with synthetic training data may be a powerful new technique for tackling classic problems in computer vision.",
"title": ""
},
{
"docid": "3731d3071b7447e888567c078e39bf80",
"text": "Mixed-type categorical and numerical data are a challenge in many applications. This general area of mixed-type data is among the frontier areas, where computational intelligence approaches are often brittle compared with the capabilities of living creatures. In this paper, unsupervised feature learning (UFL) is applied to the mixed-type data to achieve a sparse representation, which makes it easier for clustering algorithms to separate the data. Unlike other UFL methods that work with homogeneous data, such as image and video data, the presented UFL works with the mixed-type data using fuzzy adaptive resonance theory (ART). UFL with fuzzy ART (UFLA) obtains a better clustering result by removing the differences in treating categorical and numeric features. The advantages of doing this are demonstrated with several real-world data sets with ground truth, including heart disease, teaching assistant evaluation, and credit approval. The approach is also demonstrated on noisy, mixed-type petroleum industry data. UFLA is compared with several alternative methods. To the best of our knowledge, this is the first time UFL has been extended to accomplish the fusion of mixed data types.",
"title": ""
},
{
"docid": "a87b48ee446cbda34e8d878cffbd19bb",
"text": "Introduction. In spite of significant changes in the management policies of intersexuality, clinical evidence show that not all pubertal or adult individuals live according to the assigned sex during infancy. Aim. The purpose of this study was to analyze the clinical management of an individual diagnosed as a female pseudohermaphrodite with congenital adrenal hyperplasia (CAH) simple virilizing form four decades ago but who currently lives as a monogamous heterosexual male. Methods. We studied the clinical files spanning from 1965 to 1991 of an intersex individual. In addition, we conducted a magnetic resonance imaging (MRI) study of the abdominopelvic cavity and a series of interviews using the oral history method. Main Outcome Measures. Our analysis is based on the clinical evidence that led to the CAH diagnosis in the 1960s in light of recent clinical testing to confirm such diagnosis. Results. Analysis of reported values for 17-ketosteroids, 17-hydroxycorticosteroids, from 24-hour urine samples during an 8-year period showed poor adrenal suppression in spite of adherence to treatment. A recent MRI study confirmed the presence of hyperplastic adrenal glands as well as the presence of a prepubertal uterus. Semistructured interviews with the individual confirmed a life history consistent with a male gender identity. Conclusions. Although the American Academy of Pediatrics recommends that XX intersex individuals with CAH should be assigned to the female sex, this practice harms some individuals as they may self-identify as males. In the absence of comorbid psychiatric factors, the discrepancy between infant sex assignment and gender identity later in life underlines the need for a reexamination of current standards of care for individuals diagnosed with CAH. Jorge JC, Echeverri C, Medina Y, and Acevedo P. Male gender identity in an xx individual with congenital adrenal hyperplasia. J Sex Med 2008;5:122–131.",
"title": ""
},
{
"docid": "db1537ee5c95f97a7e1146bc4fd68bf0",
"text": "BACKGROUND\nElotuzumab, an immunostimulatory monoclonal antibody targeting signaling lymphocytic activation molecule F7 (SLAMF7), showed activity in combination with lenalidomide and dexamethasone in a phase 1b-2 study in patients with relapsed or refractory multiple myeloma.\n\n\nMETHODS\nIn this phase 3 study, we randomly assigned patients to receive either elotuzumab plus lenalidomide and dexamethasone (elotuzumab group) or lenalidomide and dexamethasone alone (control group). Coprimary end points were progression-free survival and the overall response rate. Final results for the coprimary end points are reported on the basis of a planned interim analysis of progression-free survival.\n\n\nRESULTS\nOverall, 321 patients were assigned to the elotuzumab group and 325 to the control group. After a median follow-up of 24.5 months, the rate of progression-free survival at 1 year in the elotuzumab group was 68%, as compared with 57% in the control group; at 2 years, the rates were 41% and 27%, respectively. Median progression-free survival in the elotuzumab group was 19.4 months, versus 14.9 months in the control group (hazard ratio for progression or death in the elotuzumab group, 0.70; 95% confidence interval, 0.57 to 0.85; P<0.001). The overall response rate in the elotuzumab group was 79%, versus 66% in the control group (P<0.001). Common grade 3 or 4 adverse events in the two groups were lymphocytopenia, neutropenia, fatigue, and pneumonia. Infusion reactions occurred in 33 patients (10%) in the elotuzumab group and were grade 1 or 2 in 29 patients.\n\n\nCONCLUSIONS\nPatients with relapsed or refractory multiple myeloma who received a combination of elotuzumab, lenalidomide, and dexamethasone had a significant relative reduction of 30% in the risk of disease progression or death. (Funded by Bristol-Myers Squibb and AbbVie Biotherapeutics; ELOQUENT-2 ClinicalTrials.gov number, NCT01239797.).",
"title": ""
},
{
"docid": "7c7d8f00e54bea76f44344bdf85fdd28",
"text": "Named Entity Recognition (NER) is a subtask of information extraction in Natural Language Processing (NLP) field and thus being wildly studied. Currently Recurrent Neural Network (RNN) has become a popular way to do NER task, but it needs a lot of train data. The lack of labeled train data is one of the hard problems and traditional co-training strategy is a way to alleviate it. In this paper, we consider this situation and focus on doing NER with co-training using RNN and two probability statistic models i.e. Hidden Markov Model (HMM) and Conditional Random Field (CRF). We proposed a modified RNN model by redefining its activation function. Compared to traditional sigmoid function, our new function avoids saturation to some degree and makes its output scope very close to [0, 1], thus improving recognition accuracy. Our experiments are conducted ATIS benchmark. First, supervised learning using those models are compared when using different train data size. The experimental results show that it is not necessary to use whole data, even small part of train data can also get good performance. Then, we compare the results of our modified RNN with original RNN. 0.5% improvement is obtained. Last, we compare the co-training results. HMM and CRF get higher improvement than RNN after co-training. Moreover, using our modified RNN in co-training, their performances are improved further.",
"title": ""
},
{
"docid": "bbfe1231795d0885f7d9a993e4c871d3",
"text": "The current research tested the hypothesis that making many choices impairs subsequent self-control. Drawing from a limited-resource model of self-regulation and executive function, the authors hypothesized that decision making depletes the same resource used for self-control and active responding. In 4 laboratory studies, some participants made choices among consumer goods or college course options, whereas others thought about the same options without making choices. Making choices led to reduced self-control (i.e., less physical stamina, reduced persistence in the face of failure, more procrastination, and less quality and quantity of arithmetic calculations). A field study then found that reduced self-control was predicted by shoppers' self-reported degree of previous active decision making. Further studies suggested that choosing is more depleting than merely deliberating and forming preferences about options and more depleting than implementing choices made by someone else and that anticipating the choice task as enjoyable can reduce the depleting effect for the first choices but not for many choices.",
"title": ""
},
{
"docid": "3432d7e904f96973522a46934a6ceb82",
"text": "The increase in funding for e-learning is a good move that requires monitoring, as e-learning, like any other information systems, is a long-term investment with uncertainty for returns. This necessitates the need for e-learning evaluation to determine post adoption success. The evaluation of e-learning is a complex process, that requires comprehensive tools for measuring post adoption success. Subsequently, leading the study to investigate approaches utilised for performing post adoption e-learning success. Thereafter, proposing an adapted IS success integrated model for measuring post adoption e-learning success in developing country context, specifically, South Africa. A systematic literature review method was adopted to achieve an inductive study. The study objectives influenced the key words employed for searching relevant literature. They also drove the decision to follow thematic analysis method. Through a systematic review the study discovered that the Delone & Mclean IS success model is the most adopted and applied for measuring post adoption e-learning success. The model is also received as an effective tool to comprehensively gauge post adoption e-learning success. Additionally, the model is flexible and modifiable to suite different contexts. The study suggests further investigation and use of the adapted IS success model within HEIs to comprehensively assess post adoption e-learning success in South Africa. Future studies should be focused on conducting in-depth literature review on approaches utilised for assessing e-learning success post adoption. They should also do testing to determine the suitable process flow and constructs of an evaluation model for developing country contexts.",
"title": ""
},
{
"docid": "d4af143e26b122f32697a4ac9973d748",
"text": "The Keivitsansarvi deposit, in northern Finland, is a low-grade dissemination of Ni–Cu sulfides containing 1.3–26.6 g/t PGE. It occurs in the northeastern part of the 2.05 Ga Keivitsa intrusion and is hosted by olivine wehrlite and olivine websterite, metamorphosed at greenschist-facies conditions. The sulfide-mineralized area shows variable bulk S, Ni, Co, Cu, PGE, Au, As, Sb, Se, Te and Bi contents. S and Au tend to decrease irregularly from bottom to top of the deposit, whereas Ni, Ni/Co, PGE, As, Sb, Se, Te and Bi tend to increase. Thus, the upper section of the deposit has low S (<1.5 wt.%) and Au (160 ppb on average), but elevated levels of the PGE (2120 ppb Pt, 1855 ppb Pd on average). Sulfides consist of intergranular, highly disseminated aggregates mainly made up of pentlandite, pyrite, and chalcopyrite (all showing fine intergrowths), as well as nickeline, maucherite and gersdorffite in some samples. Most platinum-group minerals occur as single, minute grains included in silicates (57%) or attached to the grain boundaries of sulfides (36%). Only a few PGM grains (6%) are included in sulfides. Pt minerals (mainly moncheite and sperrylite) are the most abundant PGM, whereas Pd minerals (mainly merenskyite, Pd-rich melonite, kotulskite and sobolevskite) are relatively scarce, and most contain significant amounts of Pt. Whole-rock PGE analyses show a general Pd enrichment with respect to Pt. This discrepancy results from the fact that a major part of Pd is hidden in solid solution in the structure of gersdorffite, nickeline, maucherite and pentlandite. The mineral assemblages and textures of the upper section of the Keivitsansarvi deposit result from the combined effect of serpentinization, hydrothermal alteration and metamorphism of preexisting, low-grade disseminated Ni–Cu ore formed by the intercumulus crystallization of a small fraction of immiscible sulfide melt. Serpentinization caused Ni enrichment of sulfides and preserved the original PGE concentrations of the magmatic mineralization. Later, coeval with greenschist-facies metamorphism, PGE and some As (together with other semimetals) were leached out from other mineralized zones by hydrothermal fluids, probably transported in the form of chloride complexes, and precipitated in discrete Ni–Cu–PGE-rich horizons, as observed in the upper part of the deposit. Metamorphism also caused partial dissolution and redistribution of the sulfide (and arsenide) aggregates, contributing to a further Ni enrichment in the sulfide ores.",
"title": ""
},
{
"docid": "9ae0daf70e2389f2924f5568d74a9df5",
"text": "The paper describes the CAp 2017 challenge. The challenge concerns the problem of Named Entity Recognition (NER) for tweets written in French. We first present the data preparation steps we followed for constructing the dataset released in the framework of the challenge. We begin by demonstrating why NER for tweets is a challenging problem especially when the number of entities increases. We detail the annotation process and the necessary decisions we made. We provide statistics on the inter-annotator agreement, and we conclude the data description part with examples and statistics for the data. We, then, describe the participation in the challenge, where 8 teams participated, with a focus on the methods employed by the challenge participants and the scores achieved in terms of F1 measure. Importantly, the constructed dataset comprising ∼6,000 tweets annotated for 13 types of entities, which to the best of our knowledge is the first such dataset in French, is publicly available at http://cap2017.imag.fr/competition.html .",
"title": ""
},
{
"docid": "03b08a01be48aaa76684411b73e5396c",
"text": "The goal of TREC 2015 Clinical Decision Support Track was to retrieve biomedical articles relevant for answering three kinds of generic clinical questions, namely diagnosis, test, and treatment. In order to achieve this purpose, we investigated three approaches to improve the retrieval of relevant articles: modifying queries, improving indexes, and ranking with ensembles. Our final submissions were a combination of several different configurations of these approaches. Our system mainly focused on the summary fields of medical reports. We built two different kinds of indexes – an inverted index on the free text and a second kind of indexes on the Unified Medical Language System (UMLS) concepts within the entire articles that were recognized by MetaMap. We studied the variations of including UMLS concepts at paragraph and sentence level and experimented with different thresholds of MetaMap matching scores to filter UMLS concepts. The query modification process in our system involved automatic query construction, pseudo relevance feedback, and manual inputs from domain experts. Furthermore, we trained a re-ranking sub-system based on the results of TREC 2014 Clinical Decision Support track using Indri’s Learning to Rank package, RankLib. Our experiments showed that the ensemble approach could improve the overall results by boosting the ranking of articles that are near the top of several single ranked lists.",
"title": ""
},
{
"docid": "c26667ae2ee8dbbf4743a70e9826667e",
"text": "Two studies compared college students’ interpersonal interaction online, face-to-face, and on the telephone. A communication diary assessed the relative amount of social interactions college students conducted online compared to face-to-face conversation and telephone calls. Results indicated that while the internet was integrated into college students’ social lives, face-to-face communication remained the dominant mode of interaction. Participants reported using the internet as often as the telephone. A survey compared reported use of the internet within local and long distance social circles to the use of other media within those circles, and examined participants’ most recent significant social interactions conducted across media in terms of purposes, contexts, and quality. Internet interaction was perceived as high in quality, but slightly lower than other media. Results were compared to previous conceptualizations of the roles of internet in one’s social life. new media & society Copyright © 2004 SAGE Publications London, Thousand Oaks, CA and New Delhi Vol6(3):299–318 DOI: 10.1177/1461444804041438",
"title": ""
},
{
"docid": "82779e315cf982b56ed14396603ae251",
"text": "The selection of drain current, inversion coefficient, and channel length for each MOS device in an analog circuit results in significant tradeoffs in performance. The selection of inversion coefficient, which is a numerical measure of MOS inversion, enables design freely in weak, moderate, and strong inversion and facilitates optimum design. Here, channel width required for layout is easily found and implicitly considered in performance expressions. This paper gives hand expressions motivated by the EKV MOS model and measured data for MOS device performance, inclusive of velocity saturation and other small-geometry effects. A simple spreadsheet tool is then used to predict MOS device performance and map this into complete circuit performance. Tradeoffs and optimization of performance are illustrated by the design of three, 0.18-mum CMOS operational transconductance amplifiers optimized for DC, balanced, and AC performance. Measured performance shows significant tradeoffs in voltage gain, output resistance, transconductance bandwidth, input-referred flicker noise and offset voltage, and layout area.",
"title": ""
},
{
"docid": "8a478da1c2091525762db35f1ac7af58",
"text": "In this paper, we present the design and performance of a portable, arbitrary waveform, multichannel constant current electrotactile stimulator that costs less than $30 in components. The stimulator consists of a stimulation controller and power supply that are less than half the size of a credit card and can produce ±15 mA at ±150 V. The design is easily extensible to multiple independent channels that can receive an arbitrary waveform input from a digital-to-analog converter, drawing only 0.9 W/channel (lasting 4–5 hours upon continuous stimulation using a 9 V battery). Finally, we compare the performance of our stimulator to similar stimulators both commercially available and developed in research.",
"title": ""
},
{
"docid": "bef317c450503a7f2c2147168b3dd51e",
"text": "With the development of the Internet of Things (IoT) and the usage of low-powered devices (sensors and effectors), a large number of people are using IoT systems in their homes and businesses to have more control over their technology. However, a key challenge of IoT systems is data protection in case the IoT device is lost, stolen, or used by one of the owner's friends or family members. The problem studied here is how to protect the access to data of an IoT system. To solve the problem, an attribute-based access control (ABAC) mechanism is applied to give the system the ability to apply policies to detect any unauthorized entry. Finally, a prototype was built to test the proposed solution. The evaluation plan was applied on the proposed solution to test the performance of the system.",
"title": ""
},
{
"docid": "0a2ad953e83268b1dde1ba1598190414",
"text": "This paper looks at the challenges and opportunities of implementing blockchain technology across banking, providing food for thought about the potentialities of this disruptive technology. The blockchain technology can optimize the global financial infrastructure, achieving sustainable development, using more efficient systems than at present. In fact, many banks are currently focusing on blockchain technology to promote economic growth and accelerate the development of green technologies. In order to understand the potential of blockchain technology to support the financial system, we studied the actual performance of the Bitcoin system, also highlighting its major limitations, such as the significant energy consumption due to the high computing power required, and the high cost of hardware. We estimated the electrical power and the hash rate of the Bitcoin network, over time, and, in order to evaluate the efficiency of the Bitcoin system in its actual operation, we defined three quantities: “economic efficiency”, “operational efficiency”, and “efficient service”. The obtained results show that by overcoming the disadvantages of the Bitcoin system, and therefore of blockchain technology, we could be able to handle financial processes in a more efficient way than under the current system.",
"title": ""
}
] |
scidocsrr
|
13ae5fe80746041ff521746e03a8c047
|
Effect of Facebook on the life of Medical University students
|
[
{
"docid": "d94d0db91e65bde2b1918ca95cc275bb",
"text": "This study was undertaken to investigate the positive and negative effects of excessive Internet use on undergraduate students. The Internet Effect Scale (IES), especially constructed by the authors to determine these effects, consisted of seven dimensions namely: behavioral problems, interpersonal problems, educational problems, psychological problems, physical problems, Internet abuse, and positive effects. The sample consisted of 200 undergraduate students studying at the GC University Lahore, Pakistan. A set of Pearson Product Moment correlations showed positive associations between time spent on the Internet and various dimensions of the IES indicating that excessive Internet use can lead to a host of problems of educational, physical, psychological and interpersonal nature. However, a greater number of students reported positive than negative effects of Internet use. Without negating the advantages of Internet, the current findings suggest that Internet use should be within reasonable limits focusing more on activities enhancing one's productivity.",
"title": ""
},
{
"docid": "1056326e07199296b63d1ea677e2f295",
"text": "BACKGROUND\nDepression is common and frequently undiagnosed among college students. Social networking sites are popular among college students and can include displayed depression references. The purpose of this study was to evaluate college students' Facebook disclosures that met DSM criteria for a depression symptom or a major depressive episode (MDE).\n\n\nMETHODS\nWe selected public Facebook profiles from sophomore and junior undergraduates and evaluated personally written text: \"status updates.\" We applied DSM criteria to 1-year status updates from each profile to determine prevalence of displayed depression symptoms and MDE criteria. Negative binomial regression analysis was used to model the association between depression disclosures and demographics or Facebook use characteristics.\n\n\nRESULTS\nTwo hundred profiles were evaluated, and profile owners were 43.5% female with a mean age of 20 years. Overall, 25% of profiles displayed depressive symptoms and 2.5% met criteria for MDE. Profile owners were more likely to reference depression, if they averaged at least one online response from their friends to a status update disclosing depressive symptoms (exp(B) = 2.1, P <.001), or if they used Facebook more frequently (P <.001).\n\n\nCONCLUSION\nCollege students commonly display symptoms consistent with depression on Facebook. Our findings suggest that those who receive online reinforcement from their friends are more likely to discuss their depressive symptoms publicly on Facebook. Given the frequency of depression symptom displays on public profiles, social networking sites could be an innovative avenue for combating stigma surrounding mental health conditions or for identifying students at risk for depression.",
"title": ""
}
] |
[
{
"docid": "112a1483acf7fae119036ea231fcbe85",
"text": "Part of the long lasting cultural heritage of China is the classical ancient Chinese poems which follow strict formats and complicated linguistic rules. Automatic Chinese poetry composition by programs is considered as a challenging problem in computational linguistics and requires high Artificial Intelligence assistance, and has not been well addressed. In this paper, we formulate the poetry composition task as an optimization problem based on a generative summarization framework under several constraints. Given the user specified writing intents, the system retrieves candidate terms out of a large poem corpus, and then orders these terms to fit into poetry formats, satisfying tonal and rhythm requirements. The optimization process under constraints is conducted via iterative term substitutions till convergence, and outputs the subset with the highest utility as the generated poem. For experiments, we perform generation on large datasets of 61,960 classic poems from Tang and Song Dynasty of China. A comprehensive evaluation, using both human judgments and ROUGE scores, has demonstrated the effectiveness of our proposed approach.",
"title": ""
},
{
"docid": "0db229bd2dfd325c0f23bc9437141e69",
"text": "The emergence of Infrastructure as a Service framework brings new opportunities, which also accompanies with new challenges in auto scaling, resource allocation, and security. A fundamental challenge underpinning these problems is the continuous tracking and monitoring of resource usage in the system. In this paper, we present ATOM, an efficient and effective framework to automatically track, monitor, and orchestrate resource usage in an Infrastructure as a Service (IaaS) system that is widely used in cloud infrastructure. We use novel tracking method to continuously track important system usage metrics with low overhead, and develop a Principal Component Analysis (PCA) based approach to continuously monitor and automatically find anomalies based on the approximated tracking results. We show how to dynamically set the tracking threshold based on the detection results, and further, how to adjust tracking algorithm to ensure its optimality under dynamic workloads. Lastly, when potential anomalies are identified, we use introspection tools to perform memory forensics on VMs guided by analyzed results from tracking and monitoring to identify malicious behavior inside a VM. We demonstrate the extensibility of ATOM through virtual machine (VM) clustering. The performance of our framework is evaluated in an open source IaaS system.",
"title": ""
},
{
"docid": "b15dcda2b395d02a2df18f6d8bfa3b19",
"text": "We present a method for human pose tracking that learns explicitly about the dynamic effects of human motion on joint appearance. In contrast to previous techniques which employ generic tools such as dense optical flow or spatiotemporal smoothness constraints to pass pose inference cues between frames, our system instead learns to predict joint displacements from the previous frame to the current frame based on the possibly changing appearance of relevant pixels surrounding the corresponding joints in the previous frame. This explicit learning of pose deformations is formulated by incorporating concepts from human pose estimation into an optical flow-like framework. With this approach, state-of-the-art performance is achieved on standard benchmarks for various pose tracking tasks including 3D body pose tracking in RGB video, 3D hand pose tracking in depth sequences, and 3D hand gesture tracking in RGB video.",
"title": ""
},
{
"docid": "7029d1f66732c45816ce9b7b5554f884",
"text": "The most critical problem in the world is to meet the energy demand, because of steadily increasing energy consumption. Refrigeration systems` electricity consumption has big portion in overall consumption. Therefore, considerable attention has been given to refrigeration capacity modulation system in order to decrease electricity consumption of these systems. Capacity modulation is used to meet exact amount of load at partial load and lowered electricity consumption by avoiding over capacity using. Variable speed refrigeration systems are the most common capacity modulation method for commercially and household purposes. Although the vapor compression refrigeration designed to satisfy the maximum load, they work at partial load conditions most of their life cycle and they are generally regulated as on/off controlled. The experimental chiller system contains four main components: compressor, condenser, expansion device, and evaporator in Fig.1 where this study deals with effects of different control methods on variable speed compressor (VSC) and electronic expansion valve (EEV). This chiller system has a scroll type VSC and a stepper motor controlled EEV.",
"title": ""
},
{
"docid": "44294f18f19210a2a1f424df249659a6",
"text": "Practitioners and academics are eager on measuring service quality accurately in order to have better understanding of its indispensable antecedent and consequences, and eventually ascertain methods for improving and measuring service quality in search for competitive advantage. The aim of this study is to rank the dimensions of service quality that affect the customerspsila expectation in online purchasing in Iran from the customerspsila perspective. A questionnaire used in this study was published in Cloob.com which is an Iranian virtual society Web site. The measurements used were based on the widely accepted SERVQUAL model which is the most common method for measuring service quality. This study also examined the service quality gap by comparing customerspsila expectations and their actual perceptions. The results of the study indicated that all of the service quality factors are important. Tangibility was rated as the most important dimension followed by assurance, reliability, responsiveness, and empathy.",
"title": ""
},
{
"docid": "81312e4811dfce560ced2e2840953e59",
"text": "A method for automatically assessing the quality of retinal images is presented. It is based on the idea that images of good quality possess some common features that should help define a model of what a good ophthalmic image is. The proposed features are the histogram of the edge magnitude distribution in the image as well as the local histograms of pixel gray-scale values. Histogram matching functions are proposed and experiments show that these features help discriminate between good and bad images.",
"title": ""
},
{
"docid": "0080aa23209d70192bb13b9451082803",
"text": "This paper studies the problem of secret-message transmission over a wiretap channel with correlated sources in the presence of an eavesdropper who has no source observation. A coding scheme is proposed based on a careful combination of 1) Wyner-Ziv's source coding to generate secret key from correlated sources based on a certain cost on the channel, 2) one-time pad to secure messages without additional cost, and 3) Wyner's secrecy coding to achieve secrecy based on the advantage of legitimate receiver's channel over the eavesdropper's. The work sheds light on optimal strategies for practical code design for secure communication/storage systems.",
"title": ""
},
{
"docid": "ab4cada23ae2142e52c98a271c128c58",
"text": "We introduce an interactive technique for manipulating simple 3D shapes based on extracting them from a single photograph. Such extraction requires understanding of the components of the shape, their projections, and relations. These simple cognitive tasks for humans are particularly difficult for automatic algorithms. Thus, our approach combines the cognitive abilities of humans with the computational accuracy of the machine to solve this problem. Our technique provides the user the means to quickly create editable 3D parts---human assistance implicitly segments a complex object into its components, and positions them in space. In our interface, three strokes are used to generate a 3D component that snaps to the shape's outline in the photograph, where each stroke defines one dimension of the component. The computer reshapes the component to fit the image of the object in the photograph as well as to satisfy various inferred geometric constraints imposed by its global 3D structure. We show that with this intelligent interactive modeling tool, the daunting task of object extraction is made simple. Once the 3D object has been extracted, it can be quickly edited and placed back into photos or 3D scenes, permitting object-driven photo editing tasks which are impossible to perform in image-space. We show several examples and present a user study illustrating the usefulness of our technique.",
"title": ""
},
{
"docid": "0b5fa95d269c48a62437997882c1dead",
"text": "The increased proliferation of data production technologies (e.g., cameras) and consumption avenues (e.g., social media) has led to images and videos being utilized by users to convey innate preferences and tastes. This has opened up the possibility of using multimedia as a source for user-modeling. This work attempts to model personality traits (based on the Five Factor Theory) of users using a collection of images they tag as ‘favorite’ (or like) on Flickr. First, a set of semantic features are proposed to be used for representing different concepts in images which influence users to like them. The addition of the proposed features led to improvement over state-of-the-art by 12 percent. Second, a novel machine learning approach is developed to model users’ personality based on the image features (resulting in upto 15 percent improvement). Third, efficacy of the semantic features and the modeling approach is shown in recommending images based on personality modeling. Using the modeling approach, recommendations are made regarding the factors that might influence users with different personality traits to like an image.",
"title": ""
},
{
"docid": "898ff77dbfaf00efa3b08779a781aa0b",
"text": "The monumental cost of health care, especially for chronic disease treatment, is quickly becoming unmanageable. This crisis has motivated the drive towards preventative medicine, where the primary concern is recognizing disease risk and taking action at the earliest signs. However, universal testing is neither time nor cost efficient. We propose CARE, a Collaborative Assessment and Recommendation Engine, which relies only on a patient's medical history using ICD-9-CM codes in order to predict future diseases risks. CARE uses collaborative filtering to predict each patient's greatest disease risks based on their own medical history and that of similar patients. We also describe an Iterative version, ICARE, which incorporates ensemble concepts for improved performance. These novel systems require no specialized information and provide predictions for medical conditions of all kinds in a single run. We present experimental results on a Medicare dataset, demonstrating that CARE and ICARE perform well at capturing future disease risks.",
"title": ""
},
{
"docid": "6d75fc5b57df4f4b497e550c9bd4d14b",
"text": "A highly-digital clock multiplication architecture that achieves excellent jitter and mitigates supply noise is presented. The proposed architecture utilizes a calibration-free digital multiplying delay-locked loop (MDLL) to decouple the tradeoff between time-to-digital converter (TDC) resolution and oscillator phase noise in digital phase-locked loops (PLLs). Both reduction in jitter accumulation down to sub-picosecond levels and improved supply noise rejection over conventional PLL architectures is demonstrated with low power consumption. A digital PLL that employs a 1-bit TDC and a low power regulator that seeks to improve supply noise immunity without increasing loop delay is presented and used to compare with the proposed MDLL. The prototype MDLL and DPLL chips are fabricated in a 0.13 μm CMOS technology and operate from a nominal 1.1 V supply. The proposed MDLL achieves an integrated jitter of 400 fs rms at 1.5 GHz output frequency from a 375 MHz reference clock, while consuming 890 μ W. The worst-case supply noise sensitivity of the MDLL is 20 fspp/mVpp which translates to a jitter degradation of 3.8 ps in the presence of 200 mV supply noise. The proposed clock multipliers occupy active die areas of 0.25 mm2 and 0.2 mm2 for the MDLL and DPLL, respectively.",
"title": ""
},
{
"docid": "5e6c16c5d65d855eaf60aa2295bab5f5",
"text": "The objective of positive education is not only to improve students' well-being but also their academic performance. As an important concept in positive education, growth mindset refers to core assumptions about the malleability of a person's intellectual abilities. The present study investigates the relation of growth mindsets to psychological well-being and school engagement. The study also explores the mediating function of resilience in this relation. We recruited a total of 1260 (658 males and 602 females) Chinese students from five diversified primary and middle schools. Results from the structural equation model show that the development of high levels of growth mindsets in students predicts higher psychological well-being and school engagement through the enhancement of resilience. The current study contributes to our understanding of the potential mechanisms by which positive education (e.g., altering the mindset of students) can impact psychological well-being and school engagement.",
"title": ""
},
{
"docid": "7754aa9e4978b28c00a739d4918e3b3a",
"text": "This paper considers two dimensional valence-arousal model. Pictorial stimuli of International Affective Picture Systems were chosen for emotion elicitation. Physiological signals like, Galvanic Skin Response, Heart Rate, Respiration Rate and Skin Temperature were measured for accessing emotional responses. The experimental procedure uses non-invasive sensors for signal collection. A group of healthy volunteers was shown four types of emotional stimuli categorized as High Valence High Arousal, High Valence Low Arousal, Low Valence High Arousal and Low Valence Low Arousal for around thirty minutes for emotion elicitation. Linear and Quadratic Discriminant Analysis are used and compared to the emotional class classification. Classification of stimuli into one of the four classes has been attempted on the basis of measurements on responses of experimental subjects. If classification is restricted within the responses of a specific individual, the classification results show high accuracy. However, if the problem is extended to entire population, the accuracy drops significantly.",
"title": ""
},
{
"docid": "5d002ab84e1a6034d2751f0807d914ac",
"text": "We live in a world with a population of more than 7.1 Billion, have we ever imagine how many Leaders do we have? Yes, most of us are followers; we live in a world where we follow what have been commanded. The intension of this paper is to equip everyone with some knowledge to know how we can identify who leaders are, are you one of them, and how can we help our-selves and other develop leadership qualities. The Model highlights various traits which are very necessary for leadership. This paper have been investigate and put together after probing almost 30 other research papers. The Principal result we arrived on was that the major/ essential traits which are identified in a Leader are Honesty, Integrity, Drive (Achievement, Motivation, Ambition, Energy, Tenacity and Initiative), Self Confidence, Vision and Cognitive Ability. The Key finding also says that the people with such qualities are not necessary to be in politics, but they are from various walks of life such as major organization, different culture, background, education and ethnicities. Also we found out that just possessing of such traits alone does not guarantee one leadership success as evidence shows that effective leaders are different in nature from most of the other people in certain key respects. So, let us go through the paper to enhance out our mental abilities to search for the Leaders out there.",
"title": ""
},
{
"docid": "dd4820b9c90ea6e6bb4e40566396c0d1",
"text": "Vision is a common source of inspiration for poetry. The objects and the sentimental imprints that one perceives from an image may lead to various feelings depending on the reader. In this paper, we present a system of poetry generation from images to mimic the process. Given an image, we first extract a few keywords representing objects and sentiments perceived from the image. These keywords are then expanded to related ones based on their associations in human written poems. Finally, verses are generated gradually from the keywords using recurrent neural networks trained on existing poems. Our approach is evaluated by human assessors and compared to other generation baselines. The results show that our method can generate poems that are more artistic than the baseline methods. This is one of the few attempts to generate poetry from images. By deploying our proposed approach, XiaoIce has already generated more than 12 million poems for users since its release in July 2017. A book of its poems has been published by Cheers Publishing, which claimed that the book is the first-ever poetry collection written by an AI in human history.",
"title": ""
},
{
"docid": "b856143940b19888422c0c8bf5a3b441",
"text": "Most statistical machine translation systems use phrase-to-phrase translations to capture local context information, leading to better lexical choice and more reliable local reordering. The quality of the phrase alignment is crucial to the quality of the resulting translations. Here, we propose a new phrase alignment method, not based on the Viterbi path of word alignment models. Phrase alignment is viewed as a sentence splitting task. For a given spitting of the source sentence (source phrase, left segment, right segment) find a splitting for the target sentence, which optimizes the overall sentence alignment probability. Experiments on different translation tasks show that this phrase alignment method leads to highly competitive translation results.",
"title": ""
},
{
"docid": "5657391a3b48b34290f49db4358c72a2",
"text": "Ahmad SHAYAN Chief Research Scientist ARRB Transport Research Vermont South Vic Aust Ahmad Shayan is a Chief Scientist at ARRB Transport Research. He has over 22 years of experience in the assessment of concrete deterioration and its prevention, and also utilisation of waste materials in concrete and recycling. He has published around 110 papers and written 185 technical reports on these issues. He is a member of several national and international committees.",
"title": ""
},
{
"docid": "47c7c12e6a04abd668fc80758b7aa5a6",
"text": "File system checkers (like e2fsck) are critical, complex, and hard to develop, and developers today rely on hand-written tests to exercise this intricate code. Test suites for file system checkers take a lot of effort to develop and require careful reasoning to cover a sufficiently comprehensive set of inputs and recovery mechanisms. We present a tool and methodology for testing file system checkers that reduces the need for a specification of the recovery process and the development of a test suite. Our methodology splits the correctness of the checker into two objectives: consistency and completeness of recovery. For each objective, we leverage either the file system checker code itself or a comparison among the outputs of multiple checkers to extract an implicit specification of correct behavior. Our methodology is embodied in a testing tool called SWIFT, which uses a mix of symbolic and concrete execution; it introduces two new techniques: a specific concretization strategy and a corruption model that leverages test suites of file system checkers. We used SWIFT to test the file system checkers of ext2, ext3, ext4, ReiserFS, and Minix; we found bugs in all checkers, including cases leading to data loss. Additionally, we automatically generated test suites achieving code coverage on par with manually constructed test suites shipped with the checkers.",
"title": ""
},
{
"docid": "67c74094c42c06d88401ae81b1429956",
"text": "Research, first published over a decade ago, has shown that every 10% increase in the number of registered nurses (RNs) educated with the Bachelor of Science in Nursing (BSN) in hospital staff is associated with a 4 % decrease in the risk of death for patients.' Nurse staffs with higher proportions of BSN and Master of Science in Nursing (MSN) prepared nurses demonstrate increased productivity and better patient outcomes.^-^''''^' ' Therefore, in 2008 the American Nurses Association (ANA) House of Delegates resolved to support initiatives that require new diploma and associate degree (AD) prepared RNs to complete the BSN within ten years after initial licensure, exempting those individuals who are already licensed or enrolled as students in diploma or AD programs when legislation is enacted.' The Ohio Nurses Association (ONA) adopted this resolution in 2009 and the Ohio State Nursing Students'Association (OSNA) has endorsed the BSN in Ten initiative.",
"title": ""
},
{
"docid": "8ca30cd6fd335024690837c137f0d1af",
"text": "Non-negative matrix factorization (NMF) is a recently deve loped technique for finding parts-based, linear representations of non-negative data. Although it h as successfully been applied in several applications, it does not always result in parts-based repr esentations. In this paper, we show how explicitly incorporating the notion of ‘sparseness’ impro ves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF a nd for our extension. Our hope is that this will further the application of these methods to olving novel data-analysis problems.",
"title": ""
}
] |
scidocsrr
|
cdfdd0eb4355b6d661da44657e53ea65
|
Perceived Coping as a Mediator Between Attachment and Psychological Distress : A Structural Equation Modeling Approach
|
[
{
"docid": "f84f279b6ef3b112a0411f5cba82e1b0",
"text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed",
"title": ""
}
] |
[
{
"docid": "e7de23a164446a208df5fde7a2a1a2f9",
"text": "Building facade detection is an important problem in comput er vision, with applications in mobile robotics and semanti c scene understanding. In particular, mobile platform localizati on and guidance in urban environments can be enabled with acc urate models of the various building facades in a scene. Toward that end, w e present a system for detection, segmentation, and paramet er estimation of building facades in stereo imagery. The propo sed method incorporates multilevel appearance and dispari ty features in a binary discriminative model, and generates a set of cand id te planes by sampling and clustering points from the imag e with Random Sample Consensus (RANSAC), using local normal estim ates derived from Principal Component Analysis (PCA) to inf rm the planar models. These two models are incorporated into a t w -layer Markov Random Field (MRF): an appearanceand disp ar tybased discriminative classifier at the mid-level, and a geom etric model to segment the building pixels into facades at th e highlevel. By using object-specific stereo features, our discri minative classifier is able to achieve substantially higher accuracy than standard boosting or modeling with only appearance-based f eatures. Furthermore, the results of our MRF classification indicate a strong improvement in accuracy for the binary building dete ction problem and the labeled planar surface models provide a good approximation to the ground truth planes.",
"title": ""
},
{
"docid": "ec9fa7d2b0833d1b2f9fb9c7e0d3f350",
"text": "Our goal in this paper is to explore two generic approaches to disrupting dark networks: kinetic and nonkinetic. The kinetic approach involves aggressive and offensive measures to eliminate or capture network members and their supporters, while the non-kinetic approach involves the use of subtle, non-coercive means for combating dark networks. Two strategies derive from the kinetic approach: Targeting and Capacity-building. Four strategies derive from the non-kinetic approach: Institution-Building, Psychological Operations, Information Operations and Rehabilitation. We use network data from Noordin Top’s South East Asian terror network to illustrate how both kinetic and non-kinetic strategies could be pursued depending on a commander’s intent. Using this strategic framework as a backdrop, we strongly advise the use of SNA metrics in developing alterative counter-terrorism strategies that are contextdependent rather than letting SNA metrics define and drive a particular strategy.",
"title": ""
},
{
"docid": "a5aa074c27add29fd038a83f02582fd1",
"text": "We develop an efficient general-purpose blind/no-reference image quality assessment (IQA) algorithm using a natural scene statistics (NSS) model of discrete cosine transform (DCT) coefficients. The algorithm is computationally appealing, given the availability of platforms optimized for DCT computation. The approach relies on a simple Bayesian inference model to predict image quality scores given certain extracted features. The features are based on an NSS model of the image DCT coefficients. The estimated parameters of the model are utilized to form features that are indicative of perceptual quality. These features are used in a simple Bayesian inference approach to predict quality scores. The resulting algorithm, which we name BLIINDS-II, requires minimal training and adopts a simple probabilistic model for score prediction. Given the extracted features from a test image, the quality score that maximizes the probability of the empirically determined inference model is chosen as the predicted quality score of that image. When tested on the LIVE IQA database, BLIINDS-II is shown to correlate highly with human judgments of quality, at a level that is competitive with the popular SSIM index.",
"title": ""
},
{
"docid": "56dda298f1033dc3bd381d525678b904",
"text": "This study was undertaken to characterize functions of the outer membrane protein OmpW, which potentially contributes to the development of colistin- and imipenem-resistance in Acinetobacter baumannii. Reconstitution of OmpW in artificial lipid bilayers showed that it forms small channels (23 pS in 1 m KCl) and markedly interacts with iron and colistin, but not with imipenem. In vivo, (55) Fe uptake assays comparing the behaviours of ΔompW mutant and wild-type strains confirmed a role for OmpW in A. baumannii iron homeostasis. However, the loss of OmpW expression did not have an impact on A. baumannii susceptibilities to colistin or imipenem.",
"title": ""
},
{
"docid": "6578251b7902beb0baa71a3f2248d659",
"text": "The lack of established standards to describe and annotate biological assays and screening outcomes in the domain of drug and chemical probe discovery is a severe limitation to utilize public and proprietary drug screening data to their maximum potential. We have created the BioAssay Ontology (BAO) project (http://bioassayontology.org) to develop common reference metadata terms and definitions required for describing relevant information of low-and high-throughput drug and probe screening assays and results. The main objectives of BAO are to enable effective integration, aggregation, retrieval, and analyses of drug screening data. Since we first released BAO on the BioPortal in 2010 we have considerably expanded and enhanced BAO and we have applied the ontology in several internal and external collaborative projects, for example the BioAssay Research Database (BARD). We describe the evolution of BAO with a design that enables modeling complex assays including profile and panel assays such as those in the Library of Integrated Network-based Cellular Signatures (LINCS). One of the critical questions in evolving BAO is the following: how can we provide a way to efficiently reuse and share among various research projects specific parts of our ontologies without violating the integrity of the ontology and without creating redundancies. This paper provides a comprehensive answer to this question with a description of a methodology for ontology modularization using a layered architecture. Our modularization approach defines several distinct BAO components and separates internal from external modules and domain-level from structural components. This approach facilitates the generation/extraction of derived ontologies (or perspectives) that can suit particular use cases or software applications. 
We describe the evolution of BAO related to its formal structures, engineering approaches, and content to enable modeling of complex assays and integration with other ontologies and datasets.",
"title": ""
},
{
"docid": "571f07c7c8ba724d3e266788e5dac622",
"text": "The memory system is a fundamental performance and energy bottleneck in almost all computing systems. Recent system design, application, and technology trends that require more capacity, bandwidth, efficiency, and predictability out of the memory system make it an even more important system bottleneck. At the same time, DRAM technology is experiencing difficult technology scaling challenges that make the maintenance and enhancement of its capacity, energy-efficiency, and reliability significantly more costly with conventional techniques. In this paper, after describing the demands and challenges faced by the memory system, we examine some promising research and design directions to overcome challenges posed by memory scaling. Specifically, we survey three key solution directions: 1) enabling new DRAM architectures, functions, interfaces, and better integration of the DRAM and the rest of the system, 2) designing a memory system that employs emerging memory technologies and takes advantage of multiple different technologies, 3) providing predictable performance and QoS to applications sharing the memory system. We also briefly describe our ongoing related work in combating scaling challenges of NAND flash memory.",
"title": ""
},
{
"docid": "68b38404198f2360c9fa9dccf3d49f8e",
"text": "A space-filling curve is a linear traversal of a discrete finite multidimensional space. In order for this traversal to be useful in many applications, the curve should preserve \"locality\". We quantify \"locality\" and bound the locality of multidimensional space-filling curves. Classic Hilbert space-filling curves come close to achieving optimal locality.",
"title": ""
},
{
"docid": "c5c46fb727ff9447ebe75e3625ad375b",
"text": "Plenty of face detection and recognition methods have been proposed and got delightful results in decades. Common face recognition pipeline consists of: 1) face detection, 2) face alignment, 3) feature extraction, 4) similarity calculation, which are separated and independent from each other. The separated face analyzing stages lead the model redundant calculation and are hard for end-to-end training. In this paper, we proposed a novel end-to-end trainable convolutional network framework for face detection and recognition, in which a geometric transformation matrix was directly learned to align the faces, instead of predicting the facial landmarks. In training stage, our single CNN model is supervised only by face bounding boxes and personal identities, which are publicly available from WIDER FACE [36] dataset and CASIA-WebFace [37] dataset. Tested on Face Detection Dataset and Benchmark (FDDB) [11] dataset and Labeled Face in the Wild (LFW) [9] dataset, we have achieved 89.24% recall for face detection task and 98.63% verification accuracy for face recognition task simultaneously, which are comparable to state-of-the-art results.",
"title": ""
},
{
"docid": "43cd94df4a686b89ab6ca5e2782f5a54",
"text": "Relational databases scattered over the web are generally opaque to regular web crawling tools. To address this concern, many RDB-to-RDF approaches have been proposed over the last years. In this paper, we propose a detailed review of seventeen RDB-to-RDF initiatives, considering end-to-end projects that delivered operational tools. The different tools are classified along three major axes: mapping description language, mapping implementation and data retrieval method. We analyse the motivations, commonalities and differences between existing approaches. The expressiveness of existing mapping languages is not always sufficient to produce semantically rich data and make it usable, interoperable and linkable. We therefore briefly present various strategies investigated in the literature to produce additional knowledge. Finally, we show that R2RML, the W3C recommendation for describing RDB to RDF mappings, may not apply to all needs in the wide scope of RDB to RDF translation applications, leaving space for future extensions.",
"title": ""
},
{
"docid": "d69571c1614c3a078d36467d91a09bc6",
"text": "In many species of oviparous reptiles, the first steps of gonadal sex differentiation depend on the incubation temperature of the eggs. Feminization of gonads by exogenous oestrogens at a male-producing temperature and masculinization of gonads by antioestrogens and aromatase inhibitors at a female-producing temperature have irrefutably demonstrated the involvement of oestrogens in ovarian differentiation. Nevertheless, several studies performed on the entire gonad/adrenal/mesonephros complex failed to find differences between male- and female-producing temperatures in oestrogen content, aromatase activity and aromatase gene expression during the thermosensitive period for sex determination. Thus, the key role of aromatase and oestrogens in the first steps of ovarian differentiation has been questioned, and extragonadal organs or tissues, such as adrenal, mesonephros, brain or yolk, were considered as possible targets of temperature and sources of the oestrogens acting on gonadal sex differentiation. In disagreement with this view, experiments and assays carried out on the gonads alone, i.e. separated from the adrenal/mesonephros, provide evidence that the gonads themselves respond to temperature shifts by modifying their sexual differentiation and are the site of aromatase activity and oestrogen synthesis during the thermosensitive period. Oestrogens act locally on both the cortical and the medullary part of the gonad to direct ovarian differentiation. We have concluded that there is no objective reason to search for the implication of other organs in the phenomenon of temperature-dependent sex determination in reptiles. 
From the comparison with data obtained in other vertebrates, we propose two main directions for future research: to examine how transcription of the aromatase gene is regulated and to identify molecular and cellular targets of oestrogens in gonads during sex differentiation, in species with strict genotypic sex determination and species with temperature-dependent sex determination.",
"title": ""
},
{
"docid": "a5a586966fc5622fd871ce1a05298863",
"text": "Churning is the movement of customers from a company to another. For any company, being able to predict with some time which of their customers will churn is essential to take actions in order to retain them, and for this reason most sectors invest substantial effort in techniques for (semi)automatically predicting churning, and data mining and machine learning are among the techniques successfully used to this effect. In this paper we describe a prototype for churn prediction using stream mining methods, which offer the additional promise of detecting new patterns of churn in real-time streams of high-speed data, and adapting quickly to a changing reality. The prototype is implemented on top of the MOA (Massive Online Analysis) framework for stream mining. The application implicit in the prototype is the telecommunication operator (mobile phone) sector. A shorter version of this paper, omitting Section 5, was presented at CCIA’13 (http://mon.uvic.cat/ccia2013/en/).",
"title": ""
},
{
"docid": "c09adc1924c9c1b32c33b23d9df489b9",
"text": "In recent years, “document store” NoSQL systems have exploded in popularity. A large part of this popularity has been driven by the adoption of the JSON data model in these NoSQL systems. JSON is a simple but expressive data model that is used in many Web 2.0 applications, and maps naturally to the native data types of many modern programming languages (e.g. Javascript). The advantages of these NoSQL document store systems (like MongoDB and CouchDB) are tempered by a lack of traditional RDBMS features, notably a sophisticated declarative query language, rich native query processing constructs (e.g. joins), and transaction management providing ACID safety guarantees. In this paper, we investigate whether the advantages of the JSON data model can be added to RDBMSs, gaining some of the traditional benefits of relational systems in the bargain. We present Argo, an automated mapping layer for storing and querying JSON data in a relational system, and NoBench, a benchmark suite that evaluates the performance of several classes of queries over JSON data in NoSQL and SQL databases. Our results point to directions of how one can marry the best of both worlds, namely combining the flexibility of JSON to support the popular document store model with the rich query processing and transactional properties that are offered by traditional relational DBMSs.",
"title": ""
},
{
"docid": "a7e35f3dec01d0ae7d15b02ec0ea7bee",
"text": "Both generative adversarial networks (GAN) in unsupervised learning and actorcritic methods in reinforcement learning (RL) have gained a reputation for being difficult to optimize. Practitioners in both fields have amassed a large number of strategies to mitigate these instabilities and improve training. Here we show that GANs can be viewed as actor-critic methods in an environment where the actor cannot affect the reward. We review the strategies for stabilizing training for each class of models, both those that generalize between the two and those that are particular to that model. We also review a number of extensions to GANs and RL algorithms with even more complicated information flow. We hope that by highlighting this formal connection we will encourage both GAN and RL communities to develop general, scalable, and stable algorithms for multilevel optimization with deep networks, and to draw inspiration across communities.",
"title": ""
},
{
"docid": "e8c9067f13c9a57be46823425deb783b",
"text": "In order to utilize the tremendous computing power of graphics hardware and to automatically adapt to the fast and frequent changes in its architecture and performance characteristics, this paper implements an automatic tuning system to generate high-performance matrix-multiplication implementation on graphics hardware. The automatic tuning system uses a parameterized code generator to generate multiple versions of matrix multiplication, whose performances are empirically evaluated by actual execution on the target platform. An ad-hoc search engine is employed to search over the implementation space for the version that yields the best performance. In contrast to similar systems on CPUs, which utilize cache blocking, register tiling, instruction scheduling tuning strategies, this paper identifies and exploits several tuning strategies that are unique for graphics hardware. These tuning strategies include optimizing for multiple-render-targets, SIMD instructions with data packing, overcoming limitations on instruction count and dynamic branch instruction. The generated implementations have comparable performance with expert manually tuned version in spite of the significant overhead incurred due to the use of the high-level BrookGPU language.",
"title": ""
},
{
"docid": "0d11c7f94973be05d906f94238d706e4",
"text": "Head-Mounted Displays (HMDs) combined with 3-or-more Degree-of-Freedom (DoF) input enable rapid manipulation of stereoscopic 3D content. However, such input is typically performed with hands in midair and therefore lacks precision and stability. Also, recent consumer-grade HMDs suffer from limited angular resolution and/or limited field-of-view as compared to a desktop monitor. We present the DualCAD system that implements two solutions to these problems. First, the user may freely switch at runtime between an augmented reality HMD mode, and a traditional desktop mode with precise 2D mouse input and an external desktop monitor. Second, while in the augmented reality HMD mode, the user holds a smartphone in their non-dominant hand that is tracked with 6 DoF, allowing it to be used as a complementary high-resolution display as well as an alternative input device for stylus or multitouch input. Two novel bimanual interaction techniques that leverage the properties of the smartphone are presented. We also report initial user feedback.",
"title": ""
},
{
"docid": "69dce8bea305f4a0d6fabe7846d6ff22",
"text": "This study aims to examine the satisfied and unsatisfied of hotel customers by utilizing a word cloud approach to evaluate online reviews. As a pilot test, online commends of 1,752 hotel guests were collected from TripAdvisor.com for 5 selected hotels in Chiang Mai, Thailand. The research results revealed some common features that are identified in both satisfied and dissatisfied of customer reviews; including staff service skills, hotel environment and facilities and a quality of room and bathroom. On the other hand, the findings shown that dissatisfied customers pointed out more frequently on the booking systems of the hotel. Therefore, this article's results suggests some clearer managerial implications pertaining to understanding of customer satisfaction level through the utilization of world cloud technique via review online platforms.",
"title": ""
},
{
"docid": "05eeadabcb4b7599e8bbcee96f0147eb",
"text": "Convolutional Neural Network(CNN) becomes one of the most preferred deep learning method because of achieving superior success at solution of important problems of machine learning like pattern recognition, object recognition and classification. With CNN, high performance has been obtained in traffic sign recognition which is important for autonomous vehicles. In this work, two-stage hierarchical CNN structure is proposed. Signs are seperated into 9 main groups at the first stage by using structure similarity index. And then classes of each main group are subclassed with CNNs at the second stage. Performance of the network is measured on 43-classes GTSRB dataset and compared with other methods.",
"title": ""
},
{
"docid": "7dcdf69f47a0a56d437cc8b7ea5352a6",
"text": "A wide range of domain-specific languages (DSLs) has been implemented successfully by embedding them in general purpose languages. This paper reviews embedding, and summarizes how two alternative techniques—staged interpreters and templates—can be used to overcome the limitations of embedding. Both techniques involve a form of generative programming. The paper reviews and compares three programming languages that have special support for generative programming. Two of these languages (MetaOCaml and Template Haskell) are research languages, while the third (C++) is already in wide industrial use. The paper identifies several dimensions that can serve as a basis for comparing generative languages.",
"title": ""
},
{
"docid": "878b973287cea4faecb0557988c130c6",
"text": "We explore whether Linear Genetic Programming (LGP) can evolve a C/C++ computer simulation model that accurately models the performance of a waste incinerator. Human expert written simulation models are used worldwide in a variety of industrial and business applications. They are expensive to develop, may or may not be valid for the specific process that is being modeled, and may be erroneous. LGP is a machine learning technique that uses information about a process’s inputs and outputs to simultaneously write the simulation model, calibrate and optimize the model’s constants, and validate the solution. The result is a calibrated, validated, error-free C/C++ computer model specific to the desired process. To evaluate whether this is feasible for complex industrial processes, we tested the method on data obtained from the operation of a hazardous waste incinerator. This process is difficult to model. Previously, in a well-conducted study, the popular machine learning technique, analytic neural networks, was unable to derive useful solutions to this problem. The present study uses various mutation rates (95%, 50%, and 10%), 10 random initial seeds per mutation rate, and a large number of generations (1,280 to 4,461). The LGP system provided accurate solutions to this problem with a validation data measure of fitness, R, equal to 0.961. This work demonstrates the value of LGP for process simulation. The study confirms previously published results and found that the distribution of outputs from multiple genetic programming (GP) runs tends to include an extended “tail” of outstanding solutions. Such a tail was not found in previous studies of neural networks. This result emphasizes the need for employing a strategy of multiple runs using various initial seeds and mutation rates to find good solutions to complex problems using LGP. 
This result also demonstrates the value of a fast LGP algorithm implemented at the machine code level for both static scientific data mining and real-time process control. The work consumed 600 hours of CPU time; it is estimated that other GP algorithms would have required between 4 and 136 years of CPU time to achieve similar results. INTRODUCTION With the increasing complexity of modern manufacturing [Popovic98] and processes [Popovic90], industries require fast techniques for adaptive real-time control [Francone00a]. Today, many industries allocate in excess of 10% of their plant investment capital outlays for instrumentation and control [Murrill00]. This percentage has doubled over the past 30 years and shows no signs of diminishing. The industrial processes often are non-linear and the mathematical representation of the process is unknown [Sinha00]. Hence, simulation models are not available for many of the processes that exist today. Without simulation models, optimal process control is very difficult to achieve. This results in unnecessary waste of resources. GP is a promising machine learning technology that has been the subject of intensive academic research since 1988. Since 1998, commercial applications have been made available on the market [Francone00b]. GP can be used to automatically develop a simulation model of complex industrial processes. In this work, we specifically examine whether a very fast form of GP, LGP can evolve a C/C++ computer simulation model that simulates the behavior of the concentrations of carbon dioxide in a waste incinerator. Waste incineration was chosen as a test case because it is a very complex process that involves variable input material properties (i.e., solids, liquids, and gaseous), high temperature, large temperature variations, variable-input energy sources, compressible gas flow, density effects etc. 
The simulated variable, the concentration of carbon dioxide in the secondary combustion chamber, varies widely from 0 to 5,000 parts per million (ppm). This process has been demonstrated resistant to solution by machine learning via the analytical neural network technique (ANN), as evidenced by a well-conducted study [Fausett00]. Despite the complexity of the incineration processes, developing a computer simulation code that maps the process variables to the carbon dioxide emission concentration should at least be feasible using GP. The strength of GP is its ability to abstract an underlying principle from a finite set of fitness cases [Banzhaf98, Koza99]. This principle can be considered the essence of the regularities that determine the appearance of concrete fitness cases. GP evolves both the structure and the constants of the solution simultaneously, the goal being to extract these Published at the Society for Computer Simulation's Advanced Technology Simulation Conference, Seattle, WA April, 2001 regularities and put them under the form of an algorithm or computer program, which then represents a simulation model. THE GENETIC PROGRAMMING ALGORITHM",
"title": ""
},
{
"docid": "589396a7c9dae0567f0bcd4d83461a6f",
"text": "The risk of inadequate hand hygiene in food handling settings is exacerbated when water is limited or unavailable, thereby making washing with soap and water difficult. The SaniTwice method involves application of excess alcohol-based hand sanitizer (ABHS), hand \"washing\" for 15 s, and thorough cleaning with paper towels while hands are still wet, followed by a standard application of ABHS. This study investigated the effectiveness of the SaniTwice methodology as an alternative to hand washing for cleaning and removal of microorganisms. On hands moderately soiled with beef broth containing Escherichia coli (ATCC 11229), washing with a nonantimicrobial hand washing product achieved a 2.86 (±0.64)-log reduction in microbial contamination compared with the baseline, whereas the SaniTwice method with 62 % ethanol (EtOH) gel, 62 % EtOH foam, and 70 % EtOH advanced formula gel achieved reductions of 2.64 ± 0.89, 3.64 ± 0.57, and 4.61 ± 0.33 log units, respectively. When hands were heavily soiled from handling raw hamburger containing E. coli, washing with nonantimicrobial hand washing product and antimicrobial hand washing product achieved reductions of 2.65 ± 0.33 and 2.69 ± 0.32 log units, respectively, whereas SaniTwice with 62 % EtOH foam, 70 % EtOH gel, and 70 % EtOH advanced formula gel achieved reductions of 2.87 ± 0.42, 2.99 ± 0.51, and 3.92 ± 0.65 log units, respectively. These results clearly demonstrate that the in vivo antibacterial efficacy of the SaniTwice regimen with various ABHS is equivalent to or exceeds that of the standard hand washing approach as specified in the U.S. Food and Drug Administration Food Code. Implementation of the SaniTwice regimen in food handling settings with limited water availability should significantly reduce the risk of foodborne infections resulting from inadequate hand hygiene.",
"title": ""
}
] |
scidocsrr
|
7a0f62907aa81d85d6c10fea67548d64
|
Shared Embedding Based Neural Networks for Knowledge Graph Completion
|
[
{
"docid": "8093219e7e2b4a7067f8d96118a5ea93",
"text": "We model knowledge graphs for their completion by encoding each entity and relation into a numerical space. All previous work including Trans(E, H, R, and D) ignore the heterogeneity (some relations link many entity pairs and others do not) and the imbalance (the number of head entities and that of tail entities in a relation could be different) of knowledge graphs. In this paper, we propose a novel approach TranSparse to deal with the two issues. In TranSparse, transfer matrices are replaced by adaptive sparse matrices, whose sparse degrees are determined by the number of entities (or entity pairs) linked by relations. In experiments, we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages. We evaluate our approach on triplet classification and link prediction tasks. Experimental results show that TranSparse outperforms Trans(E, H, R, and D) significantly, and achieves state-of-the-art performance.",
"title": ""
}
] |
[
{
"docid": "bd7664e9ff585a48adca12c0a8d9bf95",
"text": "Fueled by the widespread adoption of sensor-enabled smartphones, mobile crowdsourcing is an area of rapid innovation. Many crowd-powered sensor systems are now part of our daily life -- for example, providing highway congestion information. However, participation in these systems can easily expose users to a significant drain on already limited mobile battery resources. For instance, the energy burden of sampling certain sensors (such as WiFi or GPS) can quickly accumulate to levels users are unwilling to bear. Crowd system designers must minimize the negative energy side-effects of participation if they are to acquire and maintain large-scale user populations.\n To address this challenge, we propose Piggyback CrowdSensing (PCS), a system for collecting mobile sensor data from smartphones that lowers the energy overhead of user participation. Our approach is to collect sensor data by exploiting Smartphone App Opportunities -- that is, those times when smartphone users place phone calls or use applications. In these situations, the energy needed to sense is lowered because the phone need no longer be woken from an idle sleep state just to collect data. Similar savings are also possible when the phone either performs local sensor computation or uploads the data to the cloud. To efficiently use these sporadic opportunities, PCS builds a lightweight, user-specific prediction model of smartphone app usage. PCS uses this model to drive a decision engine that lets the smartphone locally decide which app opportunities to exploit based on expected energy/quality trade-offs.\n We evaluate PCS by analyzing a large-scale dataset (containing 1,320 smartphone users) and building an end-to-end crowdsourcing application that constructs an indoor WiFi localization database. 
Our findings show that PCS can effectively collect large-scale mobile sensor datasets (e.g., accelerometer, GPS, audio, image) from users while using less energy (up to 90% depending on the scenario) compared to a representative collection of existing approaches.",
"title": ""
},
{
"docid": "175fa180bc18a59dd6855d469aed91ec",
"text": "A new solution of the inverse kinematics task for a 3-DOF parallel manipulator with an R-P-S joint structure is obtained for a given position of the end-effector in the form of simple position equations. Based on this, the number of inverse kinematics task solutions was investigated and found, in general, to equal four. We identify the size of the manipulator's feasible area, and simple relationships are found between the position and orientation of the platform. We prove a new theorem stating that, while the end-effector traces a circular horizontal path with its centre at the vertical z-axis, the norm of the joint coordinates vector remains constant.",
"title": ""
},
{
"docid": "c2c5f0f8b4647c651211b50411382561",
"text": "Obesity is a multifactorial disease that results from a combination of both physiological, genetic, and environmental inputs. Obesity is associated with adverse health consequences, including T2DM, cardiovascular disease, musculoskeletal disorders, obstructive sleep apnea, and many types of cancer. The probability of developing adverse health outcomes can be decreased with maintained weight loss of 5% to 10% of current body weight. Body mass index and waist circumference are 2 key measures of body fat. A wide variety of tools are available to assess obesity-related risk factors and guide management.",
"title": ""
},
{
"docid": "e003dd850e8ca294a45e2bec122945c3",
"text": "In this paper, we address the problem of determining optimal hyper-parameters for support vector machines (SVMs). The standard way for solving the model selection problem is to use grid search. Grid search constitutes an exhaustive search over a pre-defined discretized set of possible parameter values and evaluating the cross-validation error until the best is found. We developed a bi-level optimization approach to solve the model selection problem for linear and kernel SVMs, including the extension to learn several kernel parameters. Using this method, we can overcome the discretization of the parameter space using continuous optimization, and the complexity of the method only increases linearly with the number of parameters (instead of exponentially using grid search). In experiments, we determine optimal hyper-parameters based on different smooth estimates of the cross-validation error and find that only very few iterations of bi-level optimization yield good classification rates.",
"title": ""
},
{
"docid": "15e440bc952db5b0ad71617e509770b9",
"text": "The task of recommending relevant scientific literature for a draft academic paper has recently received significant interest. In our effort to ease the discovery of scientific literature and augment scientific writing, we aim to improve the relevance of results based on a shallow semantic analysis of the source document and the potential documents to recommend. We investigate the utility of automatic argumentative and rhetorical annotation of documents for this purpose. Specifically, we integrate automatic Core Scientific Concepts (CoreSC) classification into a prototype context-based citation recommendation system and investigate its usefulness to the task. We frame citation recommendation as an information retrieval task and we use the categories of the annotation schemes to apply different weights to the similarity formula. Our results show interesting and consistent correlations between the type of citation and the type of sentence containing the relevant information.",
"title": ""
},
{
"docid": "0b7f00dcdfdd1fe002b2363097914bba",
"text": "A new field of research, visual analytics, has been introduced. This has been defined as \"the science of analytical reasoning facilitated by interactive visual interfaces\" (Thomas and Cook, 2005). Visual analytic environments, therefore, support analytical reasoning using visual representations and interactions, with data representations and transformation capabilities, to support production, presentation, and dissemination. As researchers begin to develop visual analytic environments, it is advantageous to develop metrics and methodologies to help researchers measure the progress of their work and understand the impact their work has on the users who work in such environments. This paper presents five areas or aspects of visual analytic environments that should be considered as metrics and methodologies for evaluation are developed. Evaluation aspects need to include usability, but it is necessary to go beyond basic usability. The areas of situation awareness, collaboration, interaction, creativity, and utility are proposed as the five evaluation areas for initial consideration. The steps that need to be undertaken to develop systematic evaluation methodologies and metrics for visual analytic environments are outlined",
"title": ""
},
{
"docid": "1b4963cac3a0c3b0ae469f616b4295a8",
"text": "The volume of traveling websites is rapidly increasing. This makes relevant information extraction more challenging. Several fuzzy ontology-based systems have been proposed to decrease the manual work of a full-text query search engine and opinion mining. However, most search engines are keyword-based, and available full-text search engine systems are still imperfect at extracting precise information using different types of user queries. In opinion mining, travelers do not declare their hotel opinions entirely but express individual feature opinions in reviews. Hotel reviews have numerous uncertainties, and most featured opinions are based on complex linguistic wording (small, big, very good and very bad). Available ontology-based systems cannot extract blurred information from reviews to provide better solutions. To solve these problems, this paper proposes a new extraction and opinion mining system based on a type-2 fuzzy ontology called T2FOBOMIE. The system reformulates the user’s full-text query to extract the user requirement and convert it into the format of a proper classical full-text search engine query. The proposed system retrieves targeted hotel reviews and extracts feature opinions from reviews using a fuzzy domain ontology. The fuzzy domain ontology, user information and hotel information are integrated to form a type-2 fuzzy merged ontology for the retrieving of feature polarity and individual hotel polarity. The Protégé OWL-2 (Ontology Web Language) tool is used to develop the type-2 fuzzy ontology. A series of experiments were designed and demonstrated that T2FOBOMIE performance is highly productive for analyzing reviews and accurate opinion mining.",
"title": ""
},
{
"docid": "e900869aa26f7825878b394cbeb4bc92",
"text": "One of the central challenges of integrating game-based learning in school settings is helping learners make the connections between the knowledge learned in the game and the knowledge learned at school, while maintaining a high level of engagement with game narrative and gameplay. The current study evaluated the effect of supplementing a business simulation game with an external conceptual scaffold, which introduces formal knowledge representations, on learners’ ability to solve financial-mathematical word problems following the game, and on learners’ perceptions regarding learning, flow, and enjoyment in the game. Participants (Mage = 10.10 years) were randomly assigned to three experimental conditions: a “study and play” condition that presented the scaffold first and then the game, a “play and study” condition, and a “play only” condition. Although no significant gains in problem-solving were found following the intervention, learners who studied with the external scaffold before the game performed significantly better in the post-game problem-solving assessment. Adding the external scaffold before the game reduced learners’ perceived learning. However, the scaffold did not have a negative impact on reported flow and enjoyment. Flow was found to significantly predict perceived learning and enjoyment. Yet, perceived learning and enjoyment did not predict problem-solving and flow directly predicted problem solving only in the “play and study” condition. We suggest that presenting the scaffold may have “problematized” learners’ understandings of the game by connecting them to disciplinary knowledge. Implications for the design of scaffolds for game-based learning are discussed.",
"title": ""
},
{
"docid": "30f6e87625f9d293824e932b072aa95a",
"text": "This paper presents a method for combining domain knowledge and machine learning (CDKML) for classifier generation and online adaptation. The method exploits advantages in domain knowledge and machine learning as complementary information sources. While machine learning may discover patterns in interest domains that are too subtle for humans to detect, domain knowledge may contain information on a domain not present in the available domain dataset. CDKML has three steps. First, prior domain knowledge is enriched with relevant patterns obtained by machine learning to create an initial classifier. Second, genetic algorithms refine the classifier. Third, the classifier is adapted online based on user feedback using the Markov decision process. CDKML was applied in fall detection. Tests showed that the classifiers developed by CDKML have better performance than ML classifiers generated on a one-sided training dataset. The accuracy of the initial classifier was 10 percentage points higher than the best machine learning classifier and the refinement added 3 percentage points. The online adaptation improved the accuracy of the refined classifier by additional 15 percentage points.",
"title": ""
},
{
"docid": "25d63ac8bdd3bc3c6348566a63aef76c",
"text": "The mammalian intestine is home to a complex community of trillions of bacteria that are engaged in a dynamic interaction with the host immune system. Determining the principles that govern host–microbiota relationships is the focus of intense research. Here, we describe how the intestinal microbiota is able to influence the balance between pro-inflammatory and regulatory responses and shape the host's immune system. We suggest that improving our understanding of the intestinal microbiota has therapeutic implications, not only for intestinal immunopathologies but also for systemic immune diseases.",
"title": ""
},
{
"docid": "27a8a8313b8b5d9b69537a2f6b1cd18a",
"text": "Harmonic functions are solutions to Laplace's Equation. As noted in a previous paper, they can be used to advantage for potential-field path planning, since they do not exhibit spurious local minima. In this paper, harmonic functions are shown to have a number of other properties (including completeness) which are essential to robotics applications. These properties strongly recommend harmonic functions as a mechanism for robot control.",
"title": ""
},
{
"docid": "8e8905e6ae4c4d6cd07afa157b253da9",
"text": "Blockchain technology enables the execution of collaborative business processes involving untrusted parties without requiring a central authority. Specifically, a process model comprising tasks performed by multiple parties can be coordinated via smart contracts operating on the blockchain. The consensus mechanism governing the blockchain thereby guarantees that the process model is followed by each party. However, the cost required for blockchain use is highly dependent on the volume of data recorded and the frequency of data updates by smart contracts. This paper proposes an optimized method for executing business processes on top of commodity blockchain technology. The paper presents a method for compiling a process model into a smart contract that encodes the preconditions for executing each task in the process using a space-optimized data structure. The method is empirically compared to a previously proposed baseline by replaying execution logs, including one from a real-life business process, and measuring resource consumption.",
"title": ""
},
{
"docid": "d2b6d875326b8147ffea279f1da26fc9",
"text": "This article discusses the psychology of cosmetic surgery. A review of the research on the psychological characteristics of individuals who seek cosmetic surgery yielded contradictory findings. Interview-based investigations revealed high levels of psychopathology in cosmetic surgery patients, whereas studies that used standardized measurements reported far less disturbance. It is difficult to fully resolve the discrepancy between these two sets of findings. We believe that investigating the construct of body image in cosmetic surgery patients will yield more useful findings. Thus, we propose a model of the relationship between body image dissatisfaction and cosmetic surgery and outline a research agenda based upon the model. Such research will generate information that is useful to the medical and mental health communities and, ultimately, the patients themselves.",
"title": ""
},
{
"docid": "316ea13d9bf9a64e71871e22e6073ef6",
"text": "Ride sharing allows to share costs of traveling by car, e.g., for fuel or highway tolls. Furthermore, it reduces congestion and emissions by making better use of vehicle capacities. Ride sharing is hence beneficial for drivers, riders, as well as society. While the concept has existed for decades, ubiquity of digital and mobile technology and user habituation to peer-to-peer services and electronic markets have resulted in particular growth in recent years. This paper explores the novel idea of multi-hop ride sharing and illustrates how Information Systems can leverage its potential. Based on empirical ride sharing data, we provide a quantitative analysis of the structure and the economics of electronic ride sharing markets. We explore the potential and competitiveness of multi-hop ride sharing and analyze its implications for platform operators. We find that multi-hop ride sharing proves competitive against other modes of transportation and has the potential to greatly increase ride availability and city connectedness, especially under high reliability requirements. To fully realize this potential, platform operators should implement multi-hop search, assume active control of pricing and booking processes, improve coordination of transfers, enhance data services, and try to expand their market share.",
"title": ""
},
{
"docid": "ca0d5a3f9571f288d244aee0b2c2f801",
"text": "This paper proposes, focusing on random forests, the increasingly used statistical method for classification and regression problems introduced by Leo Breiman in 2001, to investigate two classical issues of variable selection. The first one is to find important variables for interpretation, and the second one is more restrictive and tries to design a good prediction model. The main contribution is twofold: to provide some insights about the behavior of the variable importance index based on random forests, and to propose a strategy involving a ranking of explanatory variables using the random forests score of importance and a stepwise ascending variable introduction strategy.",
"title": ""
},
{
"docid": "46ea64a204ae93855676146d84063c1a",
"text": "PURPOSE\nThe present study examined the utility of 2 measures proposed as markers of specific language impairment (SLI) in identifying specific impairments in language or working memory in school-age children.\n\n\nMETHOD\nA group of 400 school-age children completed a 5-min screening consisting of nonword repetition and sentence recall. A subset of low (n = 52) and average (n = 38) scorers completed standardized tests of language, short-term and working memory, and nonverbal intelligence.\n\n\nRESULTS\nApproximately equal numbers of children were identified with specific impairments in either language or working memory. A group about twice as large had deficits in both language and working memory. Sensitivity of the screening measure for both SLI and specific working memory impairments was 84% or greater, although specificity was closer to 50%. Sentence recall performance below the 10th percentile was associated with sensitivity and specificity values above 80% for SLI.\n\n\nCONCLUSIONS\nDevelopmental deficits may be specific to language or working memory, or include impairments in both areas. Sentence recall is a useful clinical marker of SLI and combined language and working memory impairments.",
"title": ""
},
{
"docid": "a583c568e3c2184e5bda272422562a12",
"text": "Video games are primarily designed for the players. However, video game spectating is also a popular activity, boosted by the rise of online video sites and major gaming tournaments. In this paper, we focus on the spectator, who is emerging as an important stakeholder in video games. Our study focuses on Starcraft, a popular real-time strategy game with millions of spectators and high level tournament play. We have collected over a hundred stories of the Starcraft spectator from online sources, aiming for as diverse a group as possible. We make three contributions using this data: i) we find nine personas in the data that tell us who the spectators are and why they spectate; ii) we strive to understand how different stakeholders, like commentators, players, crowds, and game designers, affect the spectator experience; and iii) we infer from the spectators' expressions what makes the game entertaining to watch, forming a theory of distinct types of information asymmetry that create suspense for the spectator. One design implication derived from these findings is that, rather than presenting as much information to the spectator as possible, it is more important for the stakeholders to be able to decide how and when they uncover that information.",
"title": ""
},
{
"docid": "14c32ad3f68e38d4d1efb22ac32710e7",
"text": "It is known from clinical studies that some patients attempt to cope with the symptoms of post-traumatic stress disorder (PTSD) by using recreational drugs. This review presents a case report of a 19-year-old male patient with a spectrum of severe PTSD symptoms, such as intense flashbacks, panic attacks, and self-mutilation, who discovered that some of his major symptoms were dramatically reduced by smoking cannabis resin. The major part of this review is concerned with the clinical and preclinical neurobiological evidence in order to offer a potential explanation of these effects on symptom reduction in PTSD. This review shows that recent studies provided supporting evidence that PTSD patients may be able to cope with their symptoms by using cannabis products. Cannabis may dampen the strength or emotional impact of traumatic memories through synergistic mechanisms that might make it easier for people with PTSD to rest or sleep and to feel less anxious and less involved with flashback memories. The presence of endocannabinoid signalling systems within stress-sensitive nuclei of the hypothalamus, as well as upstream limbic structures (amygdala), point to the significance of this system for the regulation of neuroendocrine and behavioural responses to stress. Evidence is increasingly accumulating that cannabinoids might play a role in fear extinction and antidepressive effects. It is concluded that further studies are warranted in order to evaluate the therapeutic potential of cannabinoids in PTSD.",
"title": ""
},
{
"docid": "0604c1ed7ea5a57387d013a5f94f8c00",
"text": "Many current Internet services rely on inferences from models trained on user data. Commonly, both the training and inference tasks are carried out using cloud resources fed by personal data collected at scale from users. Holding and using such large collections of personal data in the cloud creates privacy risks to the data subjects, but is currently required for users to benefit from such services. We explore how to provide for model training and inference in a system where computation is pushed to the data in preference to moving data to the cloud, obviating many current privacy risks. Specifically, we take an initial model learnt from a small set of users and retrain it locally using data from a single user. We evaluate on two tasks: one supervised learning task, using a neural network to recognise users' current activity from accelerometer traces; and one unsupervised learning task, identifying topics in a large set of documents. In both cases the accuracy is improved. We also analyse the robustness of our approach against adversarial attacks, as well as its feasibility by presenting a performance evaluation on a representative resource-constrained device (a Raspberry Pi).",
"title": ""
}
] |
scidocsrr
|
34669541a189f460d57a81c4c55ac8b6
|
Fine-grained Sentiment Analysis of Chinese Reviews Using LSTM Network
|
[
{
"docid": "6081f8b819133d40522a4698d4212dfc",
"text": "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.",
"title": ""
},
{
"docid": "f52cde20377d4b8b7554f9973c220d0a",
"text": "A typical method to obtain valuable information is to extract the sentiment or opinion from a message. Machine learning technologies are widely used in sentiment classification because of their ability to “learn” from the training dataset to predict or support decision making with relatively high accuracy. However, when the dataset is large, some algorithms might not scale up well. In this paper, we aim to evaluate the scalability of Naïve Bayes classifier (NBC) in large datasets. Instead of using a standard library (e.g., Mahout), we implemented NBC to achieve fine-grain control of the analysis procedure. A Big Data analyzing system is also design for this study. The result is encouraging in that the accuracy of NBC is improved and approaches 82% when the dataset size increases. We have demonstrated that NBC is able to scale up to analyze the sentiment of millions movie reviews with increasing throughput.",
"title": ""
}
] |
[
{
"docid": "063295bfa624d5aa09420e17f5d21c4c",
"text": "In this paper, we introduce new methods and discuss results of text-based LSTM (Long Short-Term Memory) networks for automatic music composition. The proposed network is designed to learn relationships within text documents that represent chord progressions and drum tracks in two case studies. In the experiments, word-RNNs (Recurrent Neural Networks) show good results for both cases, while character-based RNNs (char-RNNs) only succeed to learn chord progressions. The proposed system can be used for fully automatic composition or as semiautomatic systems that help humans to compose music by controlling a diversity parameter of the model.",
"title": ""
},
{
"docid": "4a8bf7a4e1596f83f97c08270386fed1",
"text": "Acute unclassified colitis could be the first attack of inflammatory bowel disease, particularly chronic ulcerative colitis or acute non specific colitis regarded as being of infectious origin without recurrence. The aim of this work was to determine the outcome of 104 incidental cases of acute unclassified colitis diagnosed during the year 1988 at a census point made 2.5 to 3 years later and to search for demographic and clinical discriminating data for final diagnosis. Thirteen patients (12.5%) were lost to follow up. Another final diagnosis was made in three other patients: two had salmonellosis and one diverticulosis. Of the remaining 88 patients, 46 (52.3%) relapsed and were subsequently classified as inflammatory bowel disease: 54% ulcerative colitis, 33% Crohn's disease and 13% chronic unclassified colitis. Forty-two (47.7%) did not relapse and were considered to have acute non specific colitis. The mean age at onset was significantly lower in patients with inflammatory bowel disease (32.3 years) than in patients with acute non specific colitis (42.6 years) (P < 0.001). No clinical data (diarrhea, abdominal pain, bloody stool, mucus discharge fever, weight loss) was predictive of the final diagnosis. In this series, 52.3% of patients initially classified as having an acute unclassified colitis had a final diagnosis of inflammatory bowel disease after a 2.5-3 years follow-up. These data warrant a thorough follow up of acute unclassified colitis, especially when it occurs in patients < 40 years.",
"title": ""
},
{
"docid": "120452d49d476366abcb52b86d8110b5",
"text": "Many companies like credit card, insurance, bank, retail industry require direct marketing. Data mining can help those institutes to set marketing goal. Data mining techniques have good prospects in their target audiences and improve the likelihood of response. In this work we have investigated two data mining techniques: the Naïve Bayes and the C4.5 decision tree algorithms. The goal of this work is to predict whether a client will subscribe a term deposit. We also made comparative study of performance of those two algorithms. Publicly available UCI data is used to train and test the performance of the algorithms. Besides, we extract actionable knowledge from decision tree that focuses to take interesting and important decision in business area.",
"title": ""
},
{
"docid": "819195697309e48749e340a86dfc866d",
"text": "For the first time, a single source of cellulosic biomass was pretreated by leading technologies using identical analytical methods to provide comparative performance data. In particular, ammonia explosion, aqueous ammonia recycle, controlled pH, dilute acid, flowthrough, and lime approaches were applied to prepare corn stover for subsequent biological conversion to sugars through a Biomass Refining Consortium for Applied Fundamentals and Innovation (CAFI) among Auburn University, Dartmouth College, Michigan State University, the National Renewable Energy Laboratory, Purdue University, and Texas A&M University. An Agricultural and Industrial Advisory Board provided guidance to the project. Pretreatment conditions were selected based on the extensive experience of the team with each of the technologies, and the resulting fluid and solid streams were characterized using standard methods. The data were used to close material balances, and energy balances were estimated for all processes. The digestibilities of the solids by a controlled supply of cellulase enzyme and the fermentability of the liquids were also assessed and used to guide selection of optimum pretreatment conditions. Economic assessments were applied based on the performance data to estimate each pretreatment cost on a consistent basis. Through this approach, comparative data were developed on sugar recovery from hemicellulose and cellulose by the combined pretreatment and enzymatic hydrolysis operations when applied to corn stover. This paper introduces the project and summarizes the shared methods for papers reporting results of this research in this special edition of Bioresource Technology.",
"title": ""
},
{
"docid": "bd8b7b892060d8099217ef8553c79b71",
"text": "Purpose: The purpose of this study is to examine the barriers that SMEs are experiencing when confronted with the need to implement e-commerce to sustain their competitiveness. E-commerce is the medium that leads to economic growth of a country. Small and Medium Enterprises (SMEs) play an important role in contributing to the Gross Domestic Product and reducing the unemployment. However, there are some specific factors that inhibit the implementation of e-commerce among SMEs. Design/methodology/approach: A questionnaire approach was employed in this study and 160 questionnaires have been distributed but only 91usable questionnaires have been collected from SMEs. Literature found that main barriers to e-commerce adoption among SMEs are organizational barriers, financial barriers, technical barriers, legal and regulatory barriers, and behavioral barriers. Findings: Of this study showed that all these barriers carried an average influence on ecommerce adoption. The most important factor barriers of e-commerce adoption are legal and regulatory barriers followed by technical barriers, whereas lack of internet security is the highest barrier factor that inhibits the implementation of e-commerce in SMEs followed by the requirement to undertake additional training and skill development. Practical implications: This paper is useful for the management of SMEs in understanding and gaining insights into the real and potential barriers to e-commerce adoption. This can help the organization to design strategy in taking up barriers tactfully to its advantage.",
"title": ""
},
{
"docid": "ed08636770f7bbfe4461edd8bd9a0d1b",
"text": "Traffic classification has been studied for two decades and applied to a wide range of applications from QoS provisioning and billing in ISPs to security-related applications in firewalls and intrusion detection systems. Port-based, data packet inspection, and classical machine learning methods have been used extensively in the past, but their accuracy have been declined due to the dramatic changes in the Internet traffic, particularly the increase in encrypted traffic. With the proliferation of deep learning methods, researchers have recently investigated these methods for traffic classification task and reported high accuracy. In this article, we introduce a general framework for deep-learning-based traffic classification. We present commonly used deep learning methods and their application in traffic classification tasks. Then, we discuss open problems and their challenges, as well as opportunities for traffic classification.",
"title": ""
},
{
"docid": "afdc57b5d573e2c99c73deeef3c2fd5f",
"text": "The purpose of this article is to consider oral reading fluency as an indicator of overall reading competence. We begin by examining theoretical arguments for supposing that oral reading fluency may reflect overall reading competence. We then summarize several studies substantiating this phenomenon. Next, we provide an historical analysis of the extent to which oral reading fluency has been incorporated into measurement approaches during the past century. We conclude with recommendations about the assessment of oral reading fluency for research and practice.",
"title": ""
},
{
"docid": "102ad264e4a9a4a43a943f0895b61e96",
"text": "Power quality disturbance (PQD) monitoring has become an important issue due to the growing number of disturbing loads connected to the power line and to the susceptibility of certain loads to their presence. In any real power system, there are multiple sources of several disturbances which can have different magnitudes and appear at different times. In order to avoid equipment damage and estimate the damage severity, they have to be detected, classified, and quantified. In this work, a smart sensor for detection, classification, and quantification of PQD is proposed. First, the Hilbert transform (HT) is used as detection technique; then, the classification of the envelope of a PQD obtained through HT is carried out by a feed forward neural network (FFNN). Finally, the root mean square voltage (Vrms), peak voltage (Vpeak), crest factor (CF), and total harmonic distortion (THD) indices calculated through HT and Parseval's theorem as well as an instantaneous exponential time constant quantify the PQD according to the disturbance presented. The aforementioned methodology is processed online using digital hardware signal processing based on field programmable gate array (FPGA). Besides, the proposed smart sensor performance is validated and tested through synthetic signals and under real operating conditions, respectively.",
"title": ""
},
{
"docid": "e6dcae244f91dc2d7e843d9860ac1cfd",
"text": "After Disney's Michael Eisner, Miramax's Harvey Weinstein, and Hewlett-Packard's Carly Fiorina fell from their heights of power, the business media quickly proclaimed thatthe reign of abrasive, intimidating leaders was over. However, it's premature to proclaim their extinction. Many great intimidators have done fine for a long time and continue to thrive. Their modus operandi runs counter to a lot of preconceptions about what it takes to be a good leader. They're rough, loud, and in your face. Their tactics include invading others' personal space, staging tantrums, keeping people guessing, and possessing an indisputable command of facts. But make no mistake--great intimidators are not your typical bullies. They're driven by vision, not by sheer ego or malice. Beneath their tough exteriors and sharp edges are some genuine, deep insights into human motivation and organizational behavior. Indeed, these leaders possess political intelligence, which can make the difference between paralysis and successful--if sometimes wrenching--organizational change. Like socially intelligent leaders, politically intelligent leaders are adept at sizing up others, but they notice different things. Those with social intelligence assess people's strengths and figure out how to leverage them; those with political intelligence exploit people's weaknesses and insecurities. Despite all the obvious drawbacks of working under them, great intimidators often attract the best and brightest. And their appeal goes beyond their ability to inspire high performance. Many accomplished professionals who gravitate toward these leaders want to cultivate a little \"inner intimidator\" of their own. In the author's research, quite a few individuals reported having positive relationships with intimidating leaders. In fact, some described these relationships as profoundly educational and even transformational. 
So before we throw out all the great intimidators, the author argues, we should stop to consider what we would lose.",
"title": ""
},
{
"docid": "9f2d6c872761d8922cac8a3f30b4b7ba",
"text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs). BCIs are devices that process a user's brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted nondisabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming. The task of the BCI is to identify and predict behaviorally induced changes or \"cognitive states\" in a user's brain signals. Brain signals are recorded either noninvasively from electrodes placed on the scalp [electroencephalogram (EEG)] or invasively from electrodes placed on the surface of or inside the brain. BCIs based on these recording techniques have allowed healthy and disabled individuals to control a variety of devices. In this article, we will describe different challenges and proposed solutions for noninvasive brain-computer interfacing.",
"title": ""
},
{
"docid": "4357e361fd35bcbc5d6a7c195a87bad1",
"text": "In an age of increasing technology, the possibility that typing on a keyboard will replace handwriting raises questions about the future usefulness of handwriting skills. Here we present evidence that brain activation during letter perception is influenced in different, important ways by previous handwriting of letters versus previous typing or tracing of those same letters. Preliterate, five-year old children printed, typed, or traced letters and shapes, then were shown images of these stimuli while undergoing functional MRI scanning. A previously documented \"reading circuit\" was recruited during letter perception only after handwriting-not after typing or tracing experience. These findings demonstrate that handwriting is important for the early recruitment in letter processing of brain regions known to underlie successful reading. Handwriting therefore may facilitate reading acquisition in young children.",
"title": ""
},
{
"docid": "cd98932832d8821a98032ae6bbef2576",
"text": "An open-loop stereophonic acoustic echo suppression (SAES) method without preprocessing is presented for teleconferencing systems, where the Wiener filter in the short-time Fourier transform (STFT) domain is employed. Instead of identifying the echo path impulse responses with adaptive filters, the proposed algorithm estimates the echo spectra from the stereo signals using two weighting functions. The spectral modification technique originally proposed for noise reduction is adopted to remove the echo from the microphone signal. Moreover, a priori signal-to-echo ratio (SER) based Wiener filter is used as the gain function to achieve a trade-off between musical noise reduction and computational load for real-time operations. Computer simulation shows the effectiveness and the robustness of the proposed method in several different scenarios.",
"title": ""
},
{
"docid": "f77107a84778699e088b94c1a75bfd78",
"text": "Nathaniel Kleitman was the first to observe that sleep deprivation in humans did not eliminate the ability to perform neurobehavioral functions, but it did make it difficult to maintain stable performance for more than a few minutes. To investigate variability in performance as a function of sleep deprivation, n = 13 subjects were tested every 2 hours on a 10-minute, sustained-attention, psychomotor vigilance task (PVT) throughout 88 hours of total sleep deprivation (TSD condition), and compared to a control group of n = 15 subjects who were permitted a 2-hour nap every 12 hours (NAP condition) throughout the 88-hour period. PVT reaction time means and standard deviations increased markedly among subjects and within each individual subject in the TSD condition relative to the NAP condition. TSD subjects also had increasingly greater performance variability as a function of time on task after 18 hours of wakefulness. During sleep deprivation, variability in PVT performance reflected a combination of normal timely responses, errors of omission (i.e., lapses), and errors of commission (i.e., responding when no stimulus was present). Errors of omission and errors of commission were highly intercorrelated across deprivation in the TSD condition (r = 0.85, p = 0.0001), suggesting that performance instability is more likely to include compensatory effort than a lack of motivation. The marked increases in PVT performance variability as sleep loss continued supports the \"state instability\" hypothesis, which posits that performance during sleep deprivation is increasingly variable due to the influence of sleep initiating mechanisms on the endogenous capacity to maintain attention and alertness, thereby creating an unstable state that fluctuates within seconds and that cannot be characterized as either fully awake or asleep.",
"title": ""
},
{
"docid": "fc69f1c092bae3328ce9c5975929e92c",
"text": "In allusion to the “on-line beforehand decision-making, real time matching”, this paper proposes the stability control flow based on PMU for interconnected power system, which is a real-time stability control. In this scheme, preventive control, emergency control and corrective control are designed to a closed-loop rolling control process, it will protect the stability of power system. Then it ameliorates the corrective control process, and presents a new control method which is based on PMU and EEAC method. This scheme can ensure the real-time quality and advance the veracity for the corrective control.",
"title": ""
},
{
"docid": "3c149399184f65f994c8f925a2417467",
"text": "Online social networks (OSNs) such as Facebook and Google+ have transformed the way our society communicates. However, this success has come at the cost of user privacy; in today's OSNs, users are not in control of their own data, and depend on OSN operators to enforce access control policies. A multitude of privacy breaches has spurred research into privacy-preserving alternatives for social networking, exploring a number of techniques for storing, disseminating, and controlling access to data in a decentralized fashion. In this paper, we argue that a combination of techniques is necessary to efficiently support the complex functionality requirements of OSNs.\n We propose Cachet, an architecture that provides strong security and privacy guarantees while preserving the main functionality of online social networks. In particular, Cachet protects the confidentiality, integrity and availability of user content, as well as the privacy of user relationships. Cachet uses a distributed pool of nodes to store user data and ensure availability. Storage nodes in Cachet are untrusted; we leverage cryptographic techniques such as attribute based encryption to protect the confidentiality of data. For efficient dissemination and retrieval of data, Cachet uses a hybrid structured-unstructured overlay paradigm in which a conventional distributed hash table is augmented with social links between users. Social contacts in our system act as caches to store recent updates in the social network, and help reduce the cryptographic as well as the communication overhead in the network.\n We built a prototype implementation of Cachet in the FreePastry simulator. To demonstrate the functionality of existing OSNs we implemented the \"newsfeed\" application. Our evaluation demonstrates that (a) decentralized architectures for privacy preserving social networking are feasible, and (b) use of social contacts for object caching results in significant performance improvements.",
"title": ""
},
{
"docid": "323eec69e6cd558ade788070cff58452",
"text": "OBJECTIVE\nTo report clinical signs, diagnostic and surgical or necropsy findings, and outcome in 2 calves with spinal epidural abscess (SEA).\n\n\nSTUDY DESIGN\nClinical report.\n\n\nANIMALS\nCalves (n=2).\n\n\nMETHODS\nCalves had neurologic examination, analysis and antimicrobial culture of cerebrospinal fluid (CSF), vertebral column radiographs, myelography, and in 1 calf, magnetic resonance imaging (MRI). A definitive diagnosis of SEA was confirmed by necropsy in 1 calf and during surgery and histologic examination of vertebral canal tissue in 1 calf.\n\n\nRESULTS\nClinical signs were difficulty in rising, ataxia, fever, apparent spinal pain, hypoesthesia, and paresis/plegia which appeared 15 days before admission. Calf 1 had pelvic limb weakness and difficulty standing and calf 2 had severe ataxia involving both thoracic and pelvic limbs. Extradural spinal cord compression was identified by myelography. SEA suspected in calf 1 with discospondylitis was confirmed at necropsy whereas calf 2 had MRI identification of the lesion and was successfully decompressed by laminectomy and SEA excision. Both calves had peripheral neutrophilia and calf 2 had neutrophilic pleocytosis in CSF. Bacteria were not isolated from CSF, from the surgical site or during necropsy. Calf 2 improved neurologically and had a good long-term outcome.\n\n\nCONCLUSION\nGood outcome in a calf with SEA was obtained after adequate surgical decompression and antibiotic administration.\n\n\nCLINICAL RELEVANCE\nSEA should be included in the list of possible causes of fever, apparent spinal pain, and signs of myelopathy in calves.",
"title": ""
},
{
"docid": "279302300cbdca5f8d7470532928f9bd",
"text": "The problem of feature selection is a difficult combinatorial task in Machine Learning and of high practical relevance, e.g. in bioinformatics. Genetic Algorithms (GAs) of fer a natural way to solve this problem. In this paper we present a special Genetic Algorithm, which especially take s into account the existing bounds on the generalization erro r for Support Vector Machines (SVMs). This new approach is compared to the traditional method of performing crossvalidation and to other existing algorithms for feature selection.",
"title": ""
},
{
"docid": "ad78f226f21bd020e625659ad3ddbf74",
"text": "We study the approach to jamming in hard-sphere packings and, in particular, the pair correlation function g(2) (r) around contact, both theoretically and computationally. Our computational data unambiguously separate the narrowing delta -function contribution to g(2) due to emerging interparticle contacts from the background contribution due to near contacts. The data also show with unprecedented accuracy that disordered hard-sphere packings are strictly isostatic: i.e., the number of exact contacts in the jamming limit is exactly equal to the number of degrees of freedom, once rattlers are removed. For such isostatic packings, we derive a theoretical connection between the probability distribution of interparticle forces P(f) (f) , which we measure computationally, and the contact contribution to g(2) . We verify this relation for computationally generated isostatic packings that are representative of the maximally random jammed state. We clearly observe a maximum in P(f) and a nonzero probability of zero force, shedding light on long-standing questions in the granular-media literature. We computationally observe an unusual power-law divergence in the near-contact contribution to g(2) , persistent even in the jamming limit, with exponent -0.4 clearly distinguishable from previously proposed inverse-square-root divergence. Additionally, we present high-quality numerical data on the two discontinuities in the split-second peak of g(2) and use a shared-neighbor analysis of the graph representing the contact network to study the local particle clusters responsible for the peculiar features. Finally, we present the computational data on the contact contribution to g(2) for vacancy-diluted fcc crystal packings and also investigate partially crystallized packings along the transition from maximally disordered to fully ordered packings. We find that the contact network remains isostatic even when ordering is present. 
Unlike previous studies, we find that ordering has a significant impact on the shape of P(f) for small forces.",
"title": ""
},
{
"docid": "4f287c788c7e95bf350a998650ff6221",
"text": "Wireless sensor network has become an emerging technology due its wide range of applications in object tracking and monitoring, military commands, smart homes, forest fire control, surveillance, etc. Wireless sensor network consists of thousands of miniature devices which are called sensors but as it uses wireless media for communication, so security is the major issue. There are number of attacks on wireless of which selective forwarding attack is one of the harmful attacks. This paper describes selective forwarding attack and detection techniques against selective forwarding attacks which have been proposed by different researchers. In selective forwarding attacks, malicious nodes act like normal nodes and selectively drop packets. The selective forwarding attack is a serious threat in WSN. Identifying such attacks is very difficult and sometimes impossible. This paper also presents qualitative analysis of detection techniques in tabular form. Keywordswireless sensor network, attacks, selective forwarding attacks, malicious nodes.",
"title": ""
},
{
"docid": "86910fd866dd4945d044bd6057fe2010",
"text": "Context: The literature is rich in examples of both successful and failed global software development projects. However, practitioners do not have the time to wade through the many recommendations to work out which ones apply to them. To this end, we developed a prototype Decision Support System (DSS) for Global Teaming (GT), with the goal of making research results available to practitioners. Aims: We want the system we build to be based on the real needs of practitioners: the end users of our system. Therefore the aim of this study is to assess the usefulness and usability of our proof-of-concept in order to create a tool that is actually used by practitioners. Method: Twelve experts in GSD evaluated our system. Each individual participant tested the system and completed a short usability questionnaire. Results: Feedback on the prototype DSS was positive. All experts supported the concept, although many suggested areas that could be improved. Both expert practitioners and researchers participated, providing different perspectives on what we need to do to improve the system. Conclusion: Involving both practitioners (users) and researchers in the evaluation elicited a range of useful feedback, providing useful insights that might not have emerged had we focused on one or the other group. However, even when we implement recommended changes, we still need to persuade practitioner to adopt the new tool.",
"title": ""
}
] |
scidocsrr
|
3c458d55a85e23e1ce2b1e0c9fa11479
|
Exploring Learners' Sequential Behavioral Patterns, Flow Experience, and Learning Performance in an Anti-Phishing Educational Game
|
[
{
"docid": "40fbee18e4b0eca3f2b9ad69119fec5d",
"text": "Phishing attacks, in which criminals lure Internet users to websites that impersonate legitimate sites, are occurring with increasing frequency and are causing considerable harm to victims. In this paper we describe the design and evaluation of an embedded training email system that teaches people about phishing during their normal use of email. We conducted lab experiments contrasting the effectiveness of standard security notices about phishing with two embedded training designs we developed. We found that embedded training works better than the current practice of sending security notices. We also derived sound design principles for embedded training systems.",
"title": ""
}
] |
[
{
"docid": "d14b66c5beb3f928fe4117c5fd29168a",
"text": "What do you do to start reading expert systems design and development? Searching the book that you love to read first or find an interesting book that will make you want to read? Everybody has difference with their reason of reading a book. Actuary, reading habit must be from earlier. Many people may be love to read, but not a book. It's not fault. Someone will be bored to open the thick book with small words to read. In more, this is the real condition. So do happen probably with this expert systems design and development.",
"title": ""
},
{
"docid": "74972989924aef7d8923d3297d221e23",
"text": "Emerging evidence suggests that a traumatic brain injury (TBI) in childhood may disrupt the ability to abstract the central meaning or gist-based memory from connected language (discourse). The current study adopts a novel approach to elucidate the role of immediate and working memory processes in producing a cohesive and coherent gist-based text in the form of a summary in children with mild and severe TBI as compared to typically developing children, ages 8-14 years at test. Both TBI groups showed decreased performance on a summary production task as well as retrieval of specific content from a long narrative. Working memory on n-back tasks was also impaired in children with severe TBI, whereas immediate memory performance for recall of a simple word list in both TBI groups was comparable to controls. Interestingly, working memory, but not simple immediate memory for a word list, was significantly correlated with summarization ability and ability to recall discourse content.",
"title": ""
},
{
"docid": "e27575b8d7a7455f1a8f941adb306a04",
"text": "Seung-Joon Yi GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yiseung@seas.upenn.edu Stephen G. McGill GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: smcgill3@seas.upenn.edu Larry Vadakedathu GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: vlarry@seas.upenn.edu Qin He GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: heqin@seas.upenn.edu Inyong Ha Robotis, Seoul, Korea e-mail: dudung@robotis.com Jeakweon Han Robotis, Seoul, Korea e-mail: jkhan@robotis.com Hyunjong Song Robotis, Seoul, Korea e-mail: hjsong@robotis.com Michael Rouleau RoMeLa, Virginia Tech, Blacksburg, Virginia 24061 e-mail: mrouleau@vt.edu Byoung-Tak Zhang BI Lab, Seoul National University, Seoul, Korea e-mail: btzhang@bi.snu.ac.kr Dennis Hong RoMeLa, University of California, Los Angeles, Los Angeles, California 90095 e-mail: dennishong@ucla.edu Mark Yim GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yim@seas.upenn.edu Daniel D. Lee GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: ddlee@seas.upenn.edu",
"title": ""
},
{
"docid": "56a3a761606e699c3f21fb0fe1ecbf0a",
"text": "Internet banking (IB) has become one of the widely used banking services among Malaysian retail banking customers in recent years. Despite its attractiveness, customer loyalty towards Internet banking website has become an issue due to stiff competition among the banks in Malaysia. As the development and validation of a customer loyalty model in Internet banking website context in Malaysia had not been addressed by past studies, this study attempts to develop a model based on the usage of Information System (IS), with the purpose to investigate factors influencing customer loyalty towards Internet banking websites. A questionnaire survey was conducted with the sample consisting of Internet banking users in Malaysia. Factors that influence customer loyalty towards Internet banking website in Malaysia have been investigated and tested. The study also attempts to identify the most essential factors among those investigated: service quality, perceived value, trust, habit and reputation of the bank. Based on the findings, trust, habit and reputation are found to have a significant influence on customer loyalty towards individual Internet banking websites in Malaysia. As compared to trust or habit factors, reputation is the strongest influence. The results also indicated that service quality and perceived value are not significantly related to customer loyalty. Service quality is found to be an important factor in influencing the adoption of the technology, but did not have a significant influence in retention of customers. The findings have provided an insight to the internet banking providers on the areas to be focused on in retaining their customers.",
"title": ""
},
{
"docid": "0186ead8a32677289f73920af5a65d19",
"text": "The tall building is the most dominating symbol of the cities and a human-made marvel that defies gravity by reaching to the clouds. It embodies unrelenting human aspirations to build even higher. It conjures a number of valid questions in our minds. The foremost and fundamental question that is often asked: Why tall buildings? This review paper seeks to answer the question by laying out arguments against and for tall buildings. Then, it provides a brief account of the historic and recent developments of tall buildings including their status during the current economic recession. The paper argues that as cities continue to expand horizontally, to safeguard against their reaching an eventual breaking point, the tall building as a building type is a possible solution by way of conquering vertical space through agglomeration and densification. Case studies of some recently built tall buildings are discussed to illustrate the nature of tall building development in their respective cities. The paper attempts to dispel any discernment about tall buildings as mere pieces of art and architecture by emphasizing their truly speculative, technological, sustainable, and evolving nature. It concludes by projecting a vision of tall buildings and their integration into the cities of the 21st century.",
"title": ""
},
{
"docid": "cb33570878c6c66601fb0c73b148a6f3",
"text": "Für die automatisierte Bewertung von Lösungen zu Programmieraufgaben wurde mittlerweile eine Vielzah an Grader-Programmen zu unterschiedlichen Programmiersprachen entwickelt. U m Lernenden wie Lehrenden Zugang zur möglichst vielen Gradern über das gewohn te LMS zu ermöglichen wird das Konzept einer generischen Web-Serviceschni ttstelle (Grappa) vorgestellt, welches im Kontext einer Lehrveranstaltung evaluier t wurde.",
"title": ""
},
{
"docid": "5e60c55f419c7d62f4eeb9165e7f107c",
"text": "Background : Agile software development has become a popular way of developing software. Scrum is the most frequently used agile framework, but it is often reported to be adapted in practice. Objective: Thus, we aim to understand how Scrum is adapted in different contexts and what are the reasons for these changes. Method : Using a structured interview guideline, we interviewed ten German companies about their concrete usage of Scrum and analysed the results qualitatively. Results: All companies vary Scrum in some way. The least variations are in the Sprint length, events, team size and requirements engineering. Many users varied the roles, effort estimations and quality assurance. Conclusions: Many variations constitute a substantial deviation from Scrum as initially proposed. For some of these variations, there are good reasons. Sometimes, however, the variations are a result of a previous non-agile, hierarchical organisation.",
"title": ""
},
{
"docid": "14a0bfeff272ad41221f1db0405102ed",
"text": "In pattern recognition and computer vision, one is often faced with scenarios where the training data used to learn a model have different distribution from the data on which the model is applied. Regardless of the cause, any distributional change that occurs after learning a classifier can degrade its performance at test time. Domain adaptation tries to mitigate this degradation. In this article, we provide a survey of domain adaptation methods for visual recognition. We discuss the merits and drawbacks of existing domain adaptation approaches and identify promising avenues for research in this rapidly evolving field.",
"title": ""
},
{
"docid": "7a7c358eaa5752d6984a56429f58c556",
"text": "If the training dataset is not very large, image recognition is usually implemented with the transfer learning methods. In these methods the features are extracted using a deep convolutional neural network, which was preliminarily trained with an external very-large dataset. In this paper we consider the nonparametric classification of extracted feature vectors with the probabilistic neural network (PNN). The number of neurons at the pattern layer of the PNN is equal to the database size, which causes the low recognition performance and high memory space complexity of this network. We propose to overcome these drawbacks by replacing the exponential activation function in the Gaussian Parzen kernel to the complex exponential functions in the Fej\\'er kernel. We demonstrate that in this case it is possible to implement the network with the number of neurons in the pattern layer proportional to the cubic root of the database size. Thus, the proposed modification of the PNN makes it possible to significantly decrease runtime and memory complexities without loosing its main advantages, namely, extremely fast training procedure and the convergence to the optimal Bayesian decision. An experimental study in visual object category classification and unconstrained face recognition with contemporary deep neural networks have shown, that our approach obtains very efficient and rather accurate decisions for the small training sample in comparison with the well-known classifiers.",
"title": ""
},
{
"docid": "845ee0b77e30a01d87e836c6a84b7d66",
"text": "This paper proposes an efficient and effective scheme to applying the sliding window approach popular in computer vision to 3D data. Specifically, the sparse nature of the problem is exploited via a voting scheme to enable a search through all putative object locations at any orientation. We prove that this voting scheme is mathematically equivalent to a convolution on a sparse feature grid and thus enables the processing, in full 3D, of any point cloud irrespective of the number of vantage points required to construct it. As such it is versatile enough to operate on data from popular 3D laser scanners such as a Velodyne as well as on 3D data obtained from increasingly popular push-broom configurations. Our approach is “embarrassingly parallelisable” and capable of processing a point cloud containing over 100K points at eight orientations in less than 0.5s. For the object classes car, pedestrian and bicyclist the resulting detector achieves best-in-class detection and timing performance relative to prior art on the KITTI dataset as well as compared to another existing 3D object detection approach.",
"title": ""
},
{
"docid": "16ffeb3b018764400b9739a4dae6d2f1",
"text": "This paper proposes an automatic blood vessel extraction method on retinal images using matched filtering in an integrated system design platform that involves curvelet transform and kernel based fuzzy c-means. Since curvelet transform represents the lines, the edges and the curvatures very well and in compact form (by less number of coefficients) compared to other multi-resolution techniques, this paper uses curvelet transform for enhancement of the retinal vasculature. Matched filtering is then used to intensify the blood vessels' response which is further employed by kernel based fuzzy c-means algorithm that extracts the vessel silhouette from the background through non-linear mapping. For pathological images, in addition to matched filtering, Laplacian of Gaussian filter is also employed to distinguish the step and the ramp like signal from that of vessel structure. To test the efficacy of the proposed method, the algorithm has also been applied to images in presence of additive white Gaussian noise where the curvelet transform has been used for image denoising. Performance is evaluated on publicly available DRIVE, STARE and DIARETDB1 databases and is compared with the large number of existing blood vessel extraction methodologies. Simulation results demonstrate that the proposed method is very much efficient in detecting the long and the thick as well as the short and the thin vessels with an average accuracy of 96.16% for the DRIVE and 97.35% for the STARE database wherein the existing methods fail to extract the tiny and the thin vessels.",
"title": ""
},
{
"docid": "8951e08b838294b61796717ad691378e",
"text": "In order to open-up enterprise applications to e-businessand make them profitable for a communication with otherenterprise applications, a business model is needed showingthe business essentials of the e-commerce business caseto be developed. Currently there are two major businessmodeling techniques - e3-value and REA (Resource-Event-Agent). Whereas e3-value was designed for modeling valueexchanges within an e-business network of multiple businesspartners, the REA ontology assumes that, in the presence ofmoney and available prices, all multi-party collaborationsmay be decomposed into a set of corresponding binarycollaborations. This paper is a preliminary attempt to viewe3-value and REA used side-by-side to see where they cancomplement each other in coordinated use in the context ofmultiple-partner collaboration. A real life scenario from theprint media domain has been taken to proof our approach.",
"title": ""
},
{
"docid": "47afea1e95f86bb44a1cf11e020828fc",
"text": "Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in Koopman et al. (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogaziçi University Printhouse. http://www.issi2015.org/files/downloads/all-papers/1042.pdf , 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow to calculate similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Base on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.",
"title": ""
},
{
"docid": "18d8fe3f77ab8878ae2eb72b04fa8a48",
"text": "A new magneto-electric dipole antenna with a unidirectional radiation pattern is proposed. A novel differential feeding structure is designed to provide an ultra-wideband impedance matching. A stable gain of 8.25±1.05 dBi is realized by introducing two slots in the magneto-electric dipole and using a rectangular box-shaped reflector, instead of a planar reflector. The antenna can achieve an impedance bandwidth of 114% for SWR ≤ 2 from 2.95 to 10.73 GHz. Radiation patterns with low cross polarization, low back radiation, fixing broadside direction mainbeam and symmetrical E- and H -plane patterns are obtained over the operating frequency range. Moreover, the correlation factor between the transmitting antenna input signal and the receiving antenna output signal is calculated for evaluating the time-domain characteristic. The proposed antenna, which is small in size, can be constructed easily by using PCB fabrication technique.",
"title": ""
},
{
"docid": "64fbffe75209359b540617fac4930c44",
"text": "Recent developments in information technology have enabled collection and processing of vast amounts of personal data, such as criminal records, shopping habits, credit and medical history, and driving records. This information is undoubtedly very useful in many areas, including medical research, law enforcement and national security. However, there is an increasing public concern about the individuals' privacy. Privacy is commonly seen as the right of individuals to control information about themselves. The appearance of technology for Knowledge Discovery and Data Mining (KDDM) has revitalized concern about the following general privacy issues: • secondary use of the personal information, • handling misinformation, and • granulated access to personal information. They demonstrate that existing privacy laws and policies are well behind the developments in technology, and no longer offer adequate protection. We also discuss new privacy threats posed KDDM, which includes massive data collection, data warehouses, statistical analysis and deductive learning techniques. KDDM uses vast amounts of data to generate hypotheses and discover general patterns. KDDM poses the following new challenges to privacy.",
"title": ""
},
{
"docid": "5f563fd7eefd6d15951b4f47441daf36",
"text": "Sparse representation has recently attracted enormous interests in the field of image restoration. The conventional sparsity-based methods enforce sparse coding on small image patches with certain constraints. However, they neglected the characteristics of image structures both within the same scale and across the different scales for the image sparse representation. This drawback limits the modeling capability of sparsity-based super-resolution methods, especially for the recovery of the observed low-resolution images. In this paper, we propose a joint super-resolution framework of structure-modulated sparse representations to improve the performance of sparsity-based image super-resolution. The proposed algorithm formulates the constrained optimization problem for high-resolution image recovery. The multistep magnification scheme with the ridge regression is first used to exploit the multiscale redundancy for the initial estimation of the high-resolution image. Then, the gradient histogram preservation is incorporated as a regularization term in sparse modeling of the image super-resolution problem. Finally, the numerical solution is provided to solve the super-resolution problem of model parameter estimation and sparse representation. Extensive experiments on image super-resolution are carried out to validate the generality, effectiveness, and robustness of the proposed algorithm. Experimental results demonstrate that our proposed algorithm, which can recover more fine structures and details from an input low-resolution image, outperforms the state-of-the-art methods both subjectively and objectively in most cases.",
"title": ""
},
{
"docid": "f7bdf07ef7a45c3e261e4631743c1882",
"text": "Deep reinforcement learning (RL) methods have significant potential for dialogue policy optimisation. However, they suffer from a poor performance in the early stages of learning. This is especially problematic for on-line learning with real users. Two approaches are introduced to tackle this problem. Firstly, to speed up the learning process, two sampleefficient neural networks algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actorcritic with experience replay (eNACER) are presented. For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up the convergence. Both models employ off-policy learning with experience replay to improve sampleefficiency. Secondly, to mitigate the cold start issue, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning. Combining these two approaches, we demonstrate a practical approach to learning deep RLbased dialogue policies and demonstrate their effectiveness in a task-oriented information seeking domain.",
"title": ""
},
{
"docid": "5463fb7799ce9ac51ae2cd1b233cfdd5",
"text": "Modern workstation and network technology has made software-only solutions feasible for real-time playback of compressed continuous video and audio across the Internet. The Internet environment is characterised by widespread resource sharing, dynamic workload, great diversity in host processing speed and network bandwidth, no end-to-end resource reservation facility, and lack of a common clock. To meet the strict timing requirements of distributed multimedia presentation in the face of such characteristics requires new approaches to client/server synchronization, Quality-of-Service (QoS) control and system adaptiveness.",
"title": ""
},
{
"docid": "63013138a85755c9ca5f63385fff0afc",
"text": "OBJECTIVE\nTo provide an overview of the role of anxiety disorders in medical illness.\n\n\nMETHOD\nThe Anxiety Disorders Association of America held a multidisciplinary conference from which conference leaders and speakers reviewed presentations and discussions, considered literature on prevalence, comorbidity, etiology and treatment, and made recommendations for research. Irritable bowel syndrome (IBS), asthma, cardiovascular disease (CVD), cancer and chronic pain were reviewed.\n\n\nRESULTS\nA substantial literature supports clinically important associations between psychiatric illness and chronic medical conditions. Most research focuses on depression, finding that depression can adversely affect self-care and increase the risk of incident medical illness, complications and mortality. Anxiety disorders are less well studied, but robust epidemiological and clinical evidence shows that anxiety disorders play an equally important role. Biological theories of the interactions between anxiety and IBS, CVD and chronic pain are presented. Available data suggest that anxiety disorders in medically ill patients should not be ignored and could be considered conjointly with depression when developing strategies for screening and intervention, particularly in primary care.\n\n\nCONCLUSIONS\nEmerging data offer a strong argument for the role of anxiety in medical illness and suggest that anxiety disorders rival depression in terms of risk, comorbidity and outcome. Research programs designed to advance our understanding of the impact of anxiety disorders on medical illness are needed to develop evidence-based approaches to improving patient care.",
"title": ""
},
{
"docid": "916c7a159dd22d0a0c0d3f00159ad790",
"text": "The concept of scalability was introduced to the IEEE 802.16 WirelessMAN Orthogonal Frequency Division Multiplexing Access (OFDMA) mode by the 802.16 Task Group e (TGe). A scalable physical layer enables standard-based solutions to deliver optimum performance in channel bandwidths ranging from 1.25 MHz to 20 MHz with fixed subcarrier spacing for both fixed and portable/mobile usage models, while keeping the product cost low. The architecture is based on a scalable subchannelization structure with variable Fast Fourier Transform (FFT) sizes according to the channel bandwidth. In addition to variable FFT sizes, the specification supports other features such as Advanced Modulation and Coding (AMC) subchannels, Hybrid Automatic Repeat Request (H-ARQ), high-efficiency uplink subchannel structures, Multiple-Input-MultipleOutput (MIMO) diversity, and coverage enhancing safety channels, as well as other OFDMA default features such as different subcarrier allocations and diversity schemes. The purpose of this paper is to provide a brief tutorial on the IEEE 802.16 WirelessMAN OFDMA with an emphasis on scalable OFDMA. INTRODUCTION The IEEE 802.16 WirelessMAN standard [1] provides specifications for an air interface for fixed, portable, and mobile broadband wireless access systems. The standard includes requirements for high data rate Line of Sight (LOS) operation in the 10-66 GHz range for fixed wireless networks as well as requirements for Non Line of Sight (NLOS) fixed, portable, and mobile systems operating in sub 11 GHz licensed and licensed-exempt bands. Because of its superior performance in multipath fading wireless channels, Orthogonal Frequency Division Multiplexing (OFDM) signaling is recommended in OFDM and WirelessMAN OFDMA Physical (PHY) layer modes of the 802.16 standard for operation in sub 11 GHz NLOS applications. 
OFDM technology has been recommended in other wireless standards such as Digital Video Broadcasting (DVB) [2] and Wireless Local Area Networking (WLAN) [3]-[4], and it has been successfully implemented in the compliant solutions. Amendments for PHY and Medium Access Control (MAC) layers for mobile operation are being developed (working drafts [5] are being debated at the time of publication of this paper) by TGe of the 802.16 Working Group. The task group’s responsibility is to develop enhancement specifications to the standard to support Subscriber Stations (SS) moving at vehicular speeds and thereby specify a system for combined fixed and mobile broadband wireless access. Functions to support optional PHY layer structures, mobile-specific MAC enhancements, higher-layer handoff between Base Stations (BS) or sectors, and security features are among those specified. Operation in mobile mode is limited to licensed bands suitable for mobility between 2 and 6 GHz. Unlike many other OFDM-based systems such as WLAN, the 802.16 standard supports variable bandwidth sizes between 1.25 and 20 MHz for NLOS operations. This feature, along with the requirement for support of combined fixed and mobile usage models, makes the need for a scalable design of OFDM signaling inevitable. More specifically, neither one of the two OFDM-based modes of the 802.16 standard, WirelessMAN OFDM and OFDMA (without scalability option), can deliver the kind of performance required for operation in vehicular mobility multipath fading environments for all bandwidths in the specified range, without scalability enhancements that guarantee fixed subcarrier spacing for OFDM signals. The concept of scalable OFDMA is introduced to the IEEE 802.16 WirelessMAN OFDMA mode by the 802.16 TGe and has been the subject of many contributions to the standards committee [6]-[9].
Other features such as AMC subchannels, Hybrid Automatic Repeat Request (H-ARQ), high-efficiency Uplink (UL) subchannel structures, Multiple-Input-Multiple-Output (MIMO) diversity, enhanced Advanced Antenna Systems (AAS), and coverage enhancing safety channels were introduced [10]-[14] simultaneously to enhance coverage and capacity of mobile systems while providing the tools to trade off mobility with capacity. The rest of the paper is organized as follows. In the next section we cover multicarrier system requirements, drivers of scalability, and design tradeoffs. We follow that with a discussion in the following six sections of the OFDMA frame structure, subcarrier allocation modes, Downlink (DL) and UL MAP messaging, diversity options, ranging in OFDMA, and channel coding options. Note that although the IEEE P802.16-REVd was ratified shortly before the submission of this paper, the IEEE P802.16e was still in draft stage at the time of submission, and the contents of this paper therefore are based on proposed contributions to the working group. MULTICARRIER DESIGN REQUIREMENTS AND TRADEOFFS A typical early step in the design of an Orthogonal Frequency Division Multiplexing (OFDM)-based system is a study of subcarrier design and the size of the Fast Fourier Transform (FFT) where optimal operational point balancing protection against multipath, Doppler shift, and design cost/complexity is determined. For this, we use Wide-Sense Stationary Uncorrelated Scattering (WSSUS), a widely used method to model time varying fading wireless channels both in time and frequency domains using stochastic processes. Two main elements of the WSSUS model are briefly discussed here: Doppler spread and coherence time of channel; and multipath delay spread and coherence bandwidth. A maximum speed of 125 km/hr is used here in the analysis for support of mobility.
With the exception of high-speed trains, this provides a good coverage of vehicular speed in the US, Europe, and Asia. The maximum Doppler shift [15] corresponding to the operation at 3.5 GHz (selected as a middle point in the 2-6 GHz frequency range) is given by Equation (1): f_m = ν/λ = (35 m/s)/(0.086 m) = 408 Hz. The worst-case Doppler shift value for 125 km/hr (35 m/s) would be ~700 Hz for operation at the 6 GHz upper limit specified by the standard. Using a 10 kHz subcarrier spacing, the Inter Channel Interference (ICI) power corresponding to the Doppler shift calculated in Equation (1) can be shown [16] to be limited to ~-27 dB. The coherence time of the channel, a measure of time variation in the channel, corresponding to the Doppler shift specified above, is calculated in Equation (2) [15].",
"title": ""
}
] |
scidocsrr
|
44c82e17b87b43486843227896019418
|
Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation
|
[
{
"docid": "de1f35d0e19cafc28a632984f0411f94",
"text": "Large-pose face alignment is a very challenging problem in computer vision, which is used as a prerequisite for many important vision tasks, e.g, face recognition and 3D face reconstruction. Recently, there have been a few attempts to solve this problem, but still more research is needed to achieve highly accurate results. In this paper, we propose a face alignment method for large-pose face images, by combining the powerful cascaded CNN regressor method and 3DMM. We formulate the face alignment as a 3DMM fitting problem, where the camera projection matrix and 3D shape parameters are estimated by a cascade of CNN-based regressors. The dense 3D shape allows us to design pose-invariant appearance features for effective CNN learning. Extensive experiments are conducted on the challenging databases (AFLW and AFW), with comparison to the state of the art.",
"title": ""
},
{
"docid": "ab2e9a230c9aeec350dff6e3d239c7d8",
"text": "Expression and pose variations are major challenges for reliable face recognition (FR) in 2D. In this paper, we aim to endow state of the art face recognition SDKs with robustness to facial expression variations and pose changes by using an extended 3D Morphable Model (3DMM) which isolates identity variations from those due to facial expressions. Specifically, given a probe with expression, a novel view of the face is generated where the pose is rectified and the expression neutralized. We present two methods of expression neutralization. The first one uses prior knowledge to infer the neutral expression image from an input image. The second method, specifically designed for verification, is based on the transfer of the gallery face expression to the probe. Experiments using rectified and neutralized view with a standard commercial FR SDK on two 2D face databases, namely Multi-PIE and AR, show significant performance improvement of the commercial SDK to deal with expression and pose variations and demonstrates the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "ae3a54128bb29272e5cb3552236b6f12",
"text": "Traditionally, human facial expressions have been studied using either 2D static images or 2D video sequences. The 2D-based analysis is incapable of handing large pose variations. Although 3D modeling techniques have been extensively used for 3D face recognition and 3D face animation, barely any research on 3D facial expression recognition using 3D range data has been reported. A primary factor for preventing such research is the lack of a publicly available 3D facial expression database. In this paper, we present a newly developed 3D facial expression database, which includes both prototypical 3D facial expression shapes and 2D facial textures of 2,500 models from 100 subjects. This is the first attempt at making a 3D facial expression database available for the research community, with the ultimate goal of fostering the research on affective computing and increasing the general understanding of facial behavior and the fine 3D structure inherent in human facial expressions. The new database can be a valuable resource for algorithm assessment, comparison and evaluation",
"title": ""
},
{
"docid": "bea319596dd62f7b26b5ec22ff58aadb",
"text": "We present a novel technique for texture mapping on arbitrar y su faces with minimal distortions, by preserving the local and globa l structure of the texture. The recent introduction of the fast marching metho d on triangulated surfaces [9], made it possible to compute geodesic distance s i O(~ n lg ~ n) where~ n is the number of triangles that represent the surface. We use this method to design a surface flattening approach based on multi -dimensional scaling (MDS). MDS is a family of methods that map a set of poin ts to a finite dimensional flat (Euclidean) domain, where the only gi ven data is the corresponding distances between every pair of points. The M DS mapping yields minimal changes of the distances between the corresp onding points. We then solve an ‘inverse’ problem and map a flat texture patch onto the curved surface while preserving the structure of the textur .",
"title": ""
},
{
"docid": "32f5fb533a0a043bdbed82845eb0665b",
"text": "We present a novel approach for the automatic creation of a personalized high-quality 3D face rig of an actor from just monocular video data (e.g., vintage movies). Our rig is based on three distinct layers that allow us to model the actor’s facial shape as well as capture his person-specific expression characteristics at high fidelity, ranging from coarse-scale geometry to fine-scale static and transient detail on the scale of folds and wrinkles. At the heart of our approach is a parametric shape prior that encodes the plausible subspace of facial identity and expression variations. Based on this prior, a coarse-scale reconstruction is obtained by means of a novel variational fitting approach. We represent person-specific idiosyncrasies, which cannot be represented in the restricted shape and expression space, by learning a set of medium-scale corrective shapes. Fine-scale skin detail, such as wrinkles, are captured from video via shading-based refinement, and a generative detail formation model is learned. Both the medium- and fine-scale detail layers are coupled with the parametric prior by means of a novel sparse linear regression formulation. Once reconstructed, all layers of the face rig can be conveniently controlled by a low number of blendshape expression parameters, as widely used by animation artists. We show captured face rigs and their motions for several actors filmed in different monocular video formats, including legacy footage from YouTube, and demonstrate how they can be used for 3D animation and 2D video editing. Finally, we evaluate our approach qualitatively and quantitatively and compare to related state-of-the-art methods.",
"title": ""
}
] |
[
{
"docid": "81fd8d4c38a65c5d0df0c849e8c080fc",
"text": "The paper presents two types of one cycle current control method for Triple Active Bridge(TAB) phase-shifted DC-DC converter integrating Renewable Energy Source(RES), Energy Storage System(ESS) and a output dc bus. The main objective of the current control methods is to control the transformer current in each cycle so that dc transients are eliminated during phase angle change from one cycle to the next cycle. In the proposed current control methods, the transformer currents are sampled within a switching cycle and the phase shift angles for the next switching cycle are generated based on sampled current values and current references. The discussed one cycle control methods also provide an inherent power decoupling feature for the three port phase shifted triple active bridge converter. Two different methods, (a) sampling and updating twice in a switching cycle and (b) sampling and updating once in a switching cycle, are explained in this paper. The current control methods are experimentally verified using digital implementation technique on a laboratory made hardware prototype.",
"title": ""
},
{
"docid": "12b178a26ba5f81a02568810df24b50f",
"text": "BACKGROUND\nMost previous studies of allied health professionals' evidence based practice (EBP) attitudes, knowledge and behaviours have been conducted with profession specific questionnaires of variable psychometric strength. This study compared the self-report EBP profiles of allied health professionals/trainees in an Australian university.\n\n\nMETHODS\nThe Evidence-Based Practice Profile (EBP2) questionnaire assessed five domains (Relevance, Terminology, Practice, Confidence, Sympathy) in 918 subjects from five professional disciplines. One and 2-way factorial analysis of variance (ANOVA) and t-tests analysed differences based on prior exposure to EBP, stage of training, professional discipline, age and gender.\n\n\nRESULTS\nThere were significant differences between stages of training (p < 0.001) for all domains and between EBP exposure groups for all but one domain (Sympathy). Professional discipline groups differed for Relevance, Terminology, Practice (p < 0.001) and Confidence (p = 0.006). Males scored higher for Confidence (p = 0.002) and females for Sympathy (p = 0.04), older subjects (> 24 years) scored higher for all domains (p < 0.05). Age and exposure affected all domains (p < 0.02). Differences in stages of training largely explained age-related differences in Confidence and Practice (p ≤ 0.001) and exposure-related differences in Confidence, Practice and Sympathy (p ≤ 0.023).\n\n\nCONCLUSIONS\nAcross five allied health professions, self-report EBP characteristics varied with EBP exposure, across stages of training, with profession and with age.",
"title": ""
},
{
"docid": "836001910512e8bd7f71f4ac7448a6dd",
"text": "We have developed a high-speed 1310-nm Al-MQW buried-hetero laser having 29-GHz bandwidth (BW). The laser was used to compare 28-Gbaud four-level pulse-amplitude-modulation (PAM4) and 56-Gb/s nonreturn to zero (NRZ) transmission performance. In both cases, it was possible to meet the 10-km link budget, however, 56-Gb/s NRZ operation achieved a 2-dB better sensitivity, attributable to the wide BW of the directly modulated laser and the larger eye amplitude for the NRZ format. On the other hand, the advantages for 28-Gbaud PAM4 were the reduced BW requirement for both the transmitter and the receiver PIN diode, which enabled us to use a lower bias to the laser and a PIN with a higher responsivity, or conversely enable the possibility of high temperature operation with lower power consumption. Both formats showed a negative dispersion penalty compared to back-to-back sensitivity using a negative fiber dispersion of -60 ps/nm, which was expected from the observed chirp characteristics of the laser. The reliability study up to 11 600 h at 85 °C under accelerated conditions showed no decrease in the output power at a constant bias of 60 mA.",
"title": ""
},
{
"docid": "6c992cd88e3531abc63b835a2a0fd67f",
"text": "Bitcoin introduces a revolutionary decentralized consensus mechanism. However, Bitcoin-derived consensus mechanisms applied to public blockchain are inadequate for the deployment scenarios of budding consortium blockchain. We propose a new consensus algorithm, Proof of Vote (POV). The consensus is coordinated by the distributed nodes controlled by consortium partners which will come to a decentralized arbitration by voting. The key idea is to establish different security identity for network participants, so that the submission and verification of the blocks are decided by the agencies' voting in the league without the depending on a third-party intermediary or uncontrollable public awareness. Compared with the fully decentralized consensus-Proof of Work (POW), POV has controllable security, convergence reliability, only one block confirmation to achieve the transaction finality, and low-delay transaction verification time.",
"title": ""
},
{
"docid": "d097d3c40a78d4bdda9facfdb9f45305",
"text": "There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers’ intuition of what constitutes a ‘good’ explanation. There exists vast and valuable bodies of research in philosophy, psychology, and cognitive science of how people define, generate, select, evaluate, and present explanations, which argues that people employ certain cognitive biases and social expectations towards the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.",
"title": ""
},
{
"docid": "9aaae1995134469ffddea73baa7b911d",
"text": "We present probabilistic neural programs, a framework for program induction that permits flexible specification of both a computational model and inference algorithm while simultaneously enabling the use of deep neural networks. Probabilistic neural programs combine a computation graph for specifying a neural network with an operator for weighted nondeterministic choice. Thus, a program describes both a collection of decisions as well as the neural network architecture used to make each one. We evaluate our approach on a challenging diagram question answering task where probabilistic neural programs correctly execute nearly twice as many programs as a baseline model.",
"title": ""
},
{
"docid": "52a4af83304ad0a5fe3a77dfdfdabb6a",
"text": "Discovering semantic coherent topics from the large amount of user-generated content (UGC) in social media would facilitate many downstream applications of intelligent computing. Topic models, as one of the most powerful algorithms, have been widely used to discover the latent semantic patterns in text collections. However, one key weakness of topic models is that they need documents with certain length to provide reliable statistics for generating coherent topics. In Twitter, the users’ tweets are mostly short and noisy. Observations of word co-occurrences are incomprehensible for topic models. To deal with this problem, previous work tried to incorporate prior knowledge to obtain better results. However, this strategy is not practical for the fast evolving UGC in Twitter. In this paper, we first cluster the users according to the retweet network, and the users’ interests are mined as the prior knowledge. Such data are then applied to improve the performance of topic learning. The potential cause for the effectiveness of this approach is that users in the same community usually share similar interests, which will result in less noisy sub-data sets. Our algorithm pre-learns two types of interest knowledge from the data set: the interest-word-sets and a tweet-interest preference matrix. Furthermore, a dedicated background model is introduced to judge whether a word is drawn from the background noise. Experiments on two real life twitter data sets show that our model achieves significant improvements over state-of-the-art baselines.",
"title": ""
},
{
"docid": "e2132912c7e715f464f3d7f2599c2644",
"text": "Data mining technology is applied to fraud detection to establish the fraud detection model, describe the process of creating the fraud detection model, then establish data model with ID3 decision tree, and establish example of fraud detection model by using this model. As e-commerce sales continue to grow, the associated online fraud remains an attractive source of revenue for fraudsters. These fraudulent activities impose a considerable financial loss to merchants, making online fraud detection a necessity. The problem of fraud detection is concerned with not only capturing the fraudulent activities, but also capturing them as quickly as possible. This timeliness is crucial to decrease financial losses.",
"title": ""
},
{
"docid": "2eba092d19cc8fb35994e045f826e950",
"text": "Deep neural networks have proven to be particularly eective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardwareoriented approximation have become a hot topic. Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy eciency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-ecient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their eectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. is article represents the rst survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the eld.",
"title": ""
},
{
"docid": "7b4d904c2a0d237614e9367df69550b3",
"text": "Microgrids are a new concept for future energy distribution systems that enable renewable energy integration and improved energy management capability. Microgrids consist of multiple distributed generators (DGs) that are usually integrated via power electronic inverters. In order to enhance power quality and power distribution reliability, microgrids need to operate in both grid-connected and island modes. Consequently, microgrids can suffer performance degradation as the operating conditions vary due to abrupt mode changes and variations in bus voltages and system frequency. This paper presents controller design and optimization methods to stably coordinate multiple inverter-interfaced DGs and to robustly control individual interface inverters against voltage and frequency disturbances. Droop-control concepts are used as system-level multiple DG coordination controllers, and control theory is applied to device-level inverter controllers. Optimal control parameters are obtained by particle-swarm-optimization algorithms, and the control performance is verified via simulation studies.",
"title": ""
},
{
"docid": "5cd02bee9380641e6b6a6d3dd0cdc257",
"text": "Device drivers are an essential part in modern Unix-like systems to handle operations on physical devices, from hard disks and printers to digital cameras and Bluetooth speakers. The surge of new hardware, particularly on mobile devices, introduces an explosive growth of device drivers in system kernels. Many such drivers are provided by third-party developers, which are susceptible to security vulnerabilities and lack proper vetting. Unfortunately, the complex input data structures for device drivers render traditional analysis tools, such as fuzz testing, less effective, and so far, research on kernel driver security is comparatively sparse. In this paper, we present DIFUZE, an interface-aware fuzzing tool to automatically generate valid inputs and trigger the execution of the kernel drivers. We leverage static analysis to compose correctly-structured input in the userspace to explore kernel drivers. DIFUZE is fully automatic, ranging from identifying driver handlers, to mapping to device file names, to constructing complex argument instances. We evaluate our approach on seven modern Android smartphones. The results show that DIFUZE can effectively identify kernel driver bugs, and reports 32 previously unknown vulnerabilities, including flaws that lead to arbitrary code execution.",
"title": ""
},
{
"docid": "bf152c9b8937f84b3a7796133a5f0749",
"text": "This paper proposes a robust sensor fusion algorithm to accurately track the spatial location and motion of a human under various dynamic activities, such as walking, running, and jumping. The position accuracy of the indoor wireless positioning systems frequently suffers from non-line-of-sight and multipath effects, resulting in heavy-tailed outliers and signal outages. We address this problem by integrating the estimates from an ultra-wideband (UWB) system and inertial measurement units, but also taking advantage of the estimated velocity and height obtained from an aiding lower body biomechanical model. The proposed method is a cascaded Kalman filter-based algorithm where the orientation filter is cascaded with the robust position/velocity filter. The outliers are detected for individual measurements using the normalized innovation squared, where the measurement noise covariance is softly scaled to reduce its weight. The positioning accuracy is further improved with the Rauch–Tung–Striebel smoother. The proposed algorithm was validated against an optical motion tracking system for both slow (walking) and dynamic (running and jumping) activities performed in laboratory experiments. The results show that the proposed algorithm can maintain high accuracy for tracking the location of a subject in the presence of the outliers and UWB signal outages with a combined 3-D positioning error of less than 13 cm.",
"title": ""
},
{
"docid": "cd16a2df18ca2667da9b05b3417ecbc4",
"text": "Social network sites (SNS) have attracted considerable attention among teens and young adults who tend to connect and share common interest. Despite this popularity, the issue of students’ adoption of social network sites is still being unexplored fully in Malaysia. Driven by this factor, this study was designed to analyze the impact of social network sites on students’ academic performance in Malaysia. Using a conceptual approach, the study gathered that more students prefer the use of Facebook and Twitter in academic related discussions in complementingconventional classroom teaching and learning process. Thus, it is imperative that lecturers and academic institutions should implement the use of these applications in promoting academic excellence. As for profit oriented organizations such as bookshops, computer and smartphoneone vendors, they can promote their products through these applications and engage students to make purchases via them having understood that many students prefer and use Facebook, Twitter and Google+. The discussion from this study however does not represent the general sampling of Malaysian university students.",
"title": ""
},
{
"docid": "49d714c778b820fca5946b9a587d1e17",
"text": "The current Web of Data is producing increasingly large RDF datasets. Massive publication efforts of RDF data driven by initiatives like the Linked Open Data movement, and the need to exchange large datasets has unveiled the drawbacks of traditional RDF representations, inspired and designed by a documentcentric and human-readable Web. Among the main problems are high levels of verbosity/redundancy and weak machine-processable capabilities in the description of these datasets. This scenario calls for efficient formats for publication and exchange. This article presents a binary RDF representation addressing these issues. Based on a set of metrics that characterizes the skewed structure of real-world RDF data, we develop a proposal of an RDF representation that modularly partitions and efficiently represents three components of RDF datasets: Header information, a Dictionary, and the actual Triples structure (thus called HDT). Our experimental evaluation shows that datasets in HDT format can be compacted by more than fifteen times as compared to current naive representations, improving both parsing and processing while keeping a consistent publication scheme. Specific compression techniques over HDT further improve these compression rates and prove to outperform existing compression solutions for efficient RDF exchange. © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9ee83c40df6b97eaf502628af1434376",
"text": "Many object detection systems are constrained by the time required to convolve a target image with a bank of filters that code for different aspects of an object's appearance, such as the presence of component parts. We exploit locality-sensitive hashing to replace the dot-product kernel operator in the convolution with a fixed number of hash-table probes that effectively sample all of the filter responses in time independent of the size of the filter bank. To show the effectiveness of the technique, we apply it to evaluate 100,000 deformable-part models requiring over a million (part) filters on multiple scales of a target image in less than 20 seconds using a single multi-core processor with 20GB of RAM. This represents a speed-up of approximately 20,000 times - four orders of magnitude - when compared with performing the convolutions explicitly on the same hardware. While mean average precision over the full set of 100,000 object classes is around 0.16 due in large part to the challenges in gathering training data and collecting ground truth for so many classes, we achieve a mAP of at least 0.20 on a third of the classes and 0.30 or better on about 20% of the classes.",
"title": ""
},
{
"docid": "b53c46bc41237333f68cf96208d0128c",
"text": "Practical pattern classi cation and knowledge discovery problems require selection of a subset of attributes or features (from a much larger set) to represent the patterns to be classi ed. This paper presents an approach to the multi-criteria optimization problem of feature subset selection using a genetic algorithm. Our experiments demonstrate the feasibility of this approach for feature subset selection in the automated design of neural networks for pattern classi cation and knowledge discovery.",
"title": ""
},
{
"docid": "9c507a2b1f57750d1b4ffeed6979a06f",
"text": "Once considered provocative, the notion that the wisdom of the crowd is superior to any individual has become itself a piece of crowd wisdom, leading to speculation that online voting may soon put credentialed experts out of business. Recent applications include political and economic forecasting, evaluating nuclear safety, public policy, the quality of chemical probes, and possible responses to a restless volcano. Algorithms for extracting wisdom from the crowd are typically based on a democratic voting procedure. They are simple to apply and preserve the independence of personal judgment. However, democratic methods have serious limitations. They are biased for shallow, lowest common denominator information, at the expense of novel or specialized knowledge that is not widely shared. Adjustments based on measuring confidence do not solve this problem reliably. Here we propose the following alternative to a democratic vote: select the answer that is more popular than people predict. We show that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard ‘most popular’ or ‘most confident’ principles fail under exactly those same assumptions. Like traditional voting, the principle accepts unique problems, such as panel decisions about scientific or artistic merit, and legal or historical disputes. The potential application domain is thus broader than that covered by machine learning and psychometric methods, which require data across multiple questions.",
"title": ""
},
{
"docid": "f8dc1eb09fb8f13b02f8e17734190b9f",
"text": "The aim of this paper is to show the possibility to harvest RF energy to supply wireless sensor networks in an outdoor environment. In those conditions, the number of existing RF bands is unpredictable. The RF circuit has to harvest all the potential RF energy present and cannot be designed for a single RF tone. In this paper, the designed RF harvester adds powers coming from an unlimited number of sub-frequency bands. The harvester's output voltage ratios increase with the number of RF bands. As an application example, a 4-RF band rectenna is designed. The system harvests energy from GSM900 (Global System for Mobile Communications), GSM1800, UMTS (Universal Mobile Telecommunications System) and WiFi bands simultaneously. RF-to-dc conversion efficiency is measured at 62% for a cumulative -10-dBm input power homogeneously widespread over the four RF bands and reaches 84% at 5.8 dBm. The relative error between the measured dc output power with all four RF bands on and the ideal sum of each of the four RF bands power contribution is less than 3%. It is shown that the RF-to-dc conversion efficiency is more than doubled compared to that measured with a single RF source, thanks to the proposed rectifier architecture.",
"title": ""
},
{
"docid": "7196b6f6b14827d60f968534d52b4852",
"text": "Therapeutic applications of the psychedelics or hallucinogens found cross-culturally involve treatment of a variety of physical, psychological, and social maladies. Modern medicine has similarly found that a range of conditions may be successfully treated with these agents. The ability to treat a wide variety of conditions derives from variation in active ingredients, doses and modes of application, and factors of set and setting manipulated in ritual. Similarities in effects reported cross-culturally reflect biological mechanisms, while success in the treatment of a variety of specific psychological conditions points to the importance of ritual in eliciting their effects. Similar bases involve action on the serotonin and dopamine neurotransmitter systems that can be characterized as psychointegration: an elevation of ancient brain processes. Therapeutic Application of Sacred Medicines in the Premodern and Modern World Societies worldwide have discovered therapeutic applications of psychoactive plants, often referred to as sacred medicines, particularly those called psychedelics or hallucinogens. Hundreds of species of such plants and fungi were used for medicinal and religious purposes (see Schultes et al. 1992; Rätsch 2005), as well as for a variety of psychological and social conditions, culture-bound syndromes, and Thanks to Ilsa Jerome for providing some updated references for this paper. M. J. Winkelman (&) Retired from the School of Human Evolution and Social Change, Arizona State University Tempe Arizona, Caixa Postal 62, Pirenópolis, GO 72980-000, Brazil e-mail: michaeljwinkelman@gmail.com B. C. Labate and C. Cavnar (eds.), The Therapeutic Use of Ayahuasca, DOI: 10.1007/978-3-642-40426-9_1, Springer-Verlag Berlin Heidelberg 2014 1 a range of physical diseases (see Schultes and Winkelman 1996). This review illustrates the range of uses and the diverse potential of these substances for addressing human maladies. 
The ethnographic data on indigenous uses of these substances, combined with a brief overview of some of the modern medical studies, illustrate that a wide range of effects are obtained with these plants. These cultural therapies involve both pharmacological and ritual manipulations. Highly developed healing traditions selectively utilized different species of the same genus, different preparation methods and doses, varying admixtures, and a variety of ritual and psychotherapeutic processes to obtain specific desired effects. The wide range of uses of these plants suggests that they can contribute new active ingredients for modern medicine, particularly in psychiatry. As was illustrated by our illustrious contributors to Psychedelic Medicine (Winkelman and Roberts 2007a, b), there are a number of areas in which psychedelics have been established in treating what have been considered intractable health problems. While double-blind clinical trials have been sparse (but see Griffiths et al. 2006), this is not due to the lack of evidence for efficacy, but rather the administrative prohibitions that have drastically restricted clinical research. Nonetheless, using the criteria of phases of clinical evaluation, Winkelman and Roberts (2007c) concluded that there is at least Phase II evidence for the effectiveness of most of these psychedelics, supporting the continuation of more advanced trials. Furthermore, their success with the often intractable maladies, ranging from depression and cluster headaches to posttraumatic stress disorder (PTSD), obsessive-compulsive disorders, wasting syndromes, and addictions justifies their immediate use with these desperate patient populations. In addition, the wide variety of therapeutic uses found for these substances in cultures around the world suggest the potential for far greater applications. 
Therapeutic Uses of Psilocybin-containing ‘‘Magic Mushrooms’’ The Aztecs called these fungi teonanacatl, meaning ‘‘food of the gods’’; there is evidence of the use of psilocybin-containing mushrooms from many different genera in ritual healing practices in cultures around the world and deep in prehistory (see Rätsch 2005). One of the best documented therapeutic uses of psilocybin involves Maria Sabina, the Mazatec ‘‘Wise One’’ (Estrada 1981). Several different Psilocybe species are used by the Mazatec, as well as mushrooms of the Conocybe genera. In addition, other psychoactive plants are also employed, including Salvia divinorum Epl. and tobacco (Nicotiana rustica L., Solanaceae). 1 Phase II studies or trials use small groups of selected patients to determine effectiveness and ideal doses for a specific illness after Phase I trials have established safety (lack of toxicity) and safe dose ranges. 2 M. J. Winkelman",
"title": ""
},
{
"docid": "0eae6fe59e90ff07e8aa831a3a4029f6",
"text": "This paper presents the design and fabrication of a zone plate Fresnel lens. 3D Printing is used for rapid prototyping this low-cost and light-weight lens to operate at 10 GHz. This lens is comprised of four different 3D printed dielectric zones to form phase compensation in a Fresnel lens. The dielectric zones are fabricated with different infill percentage to create tailored dielectric constants. The dielectric lens offers 18 dBi directivity at 10 GHz when illuminated by a waveguide source.",
"title": ""
}
] |
scidocsrr
|
9d8acaf543eef0de4382761bdfe3a397
|
Applications, Architectures, and Protocol Design Issues for Mobile Social Networks: A Survey
|
[
{
"docid": "4ff50e433ba7a5da179c7d8e5e05cb22",
"text": "Social network information is now being used in ways for which it may have not been originally intended. In particular, increased use of smartphones capable ofrunning applications which access social network information enable applications to be aware of a user's location and preferences. However, current models forexchange of this information require users to compromise their privacy and security. We present several of these privacy and security issues, along withour design and implementation of solutions for these issues. Our work allows location-based services to query local mobile devices for users' social network information, without disclosing user identity or compromising users' privacy and security. We contend that it is important that such solutions be acceptedas mobile social networks continue to grow exponentially.",
"title": ""
},
{
"docid": "a8265b42dca4a70a017960fa064d728e",
"text": "Community is an important attribute of Pocket Switched Networks (PSN), because mobile devices are carried by people who tend to belong to communities. We analysed community structure from mobility traces and used for forwarding algorithms [12], which shows significant impact of community. Here, we propose and evaluate three novel distributed community detection approaches with great potential to detect both static and temporal communities. We find that with suitable configuration of the threshold values, the distributed community detection can approximate their corresponding centralised methods up to 90% accuracy.",
"title": ""
}
] |
[
{
"docid": "6c6eb7e817e210808018506953af1031",
"text": "BACKGROUND\nNurses constitute the largest human resource element and have a great impact on quality of care and patient outcomes in health care organizations. The objective of this study was to examine the relationship between rewards and nurse motivation on public hospitals administrated by Addis Ababa health bureau.\n\n\nMETHODS\nA cross-sectional survey was conducted from June to December 2010 in 5 public hospitals in Addis Ababa. Among 794 nurses, 259 were selected as sample. Data was collected using self-administered questionnaire. After the data was collected, it was analysed using SPSS version 16.0 statistical software. The results were analysed in terms of descriptive statistics followed by inferential statistics on the variables.\n\n\nRESULTS\nA total of 230 questionnaires were returned from 259 questionnaires distributed to respondents. Results of the study revealed that nurses are not motivated and there is a statistical significant relationship between rewards and the nurse work motivation and a payment is the most important and more influential variable. Furthermore, there is significant difference in nurse work motivation based on age, educational qualification and work experience while there is no significant difference in nurse work motivation based on gender.\n\n\nCONCLUSION\nThe study shows that nurses are less motivated by rewards they received while rewards have significant and positive contribution for nurse motivation. Therefore, both hospital administrators' and Addis Ababa health bureau should revise the existing nurse motivation strategy.",
"title": ""
},
{
"docid": "a1ce51b0d9c54ef4b2bd3d797cb7425c",
"text": "Classification and segmentation of 3D point clouds are important tasks in computer vision. Because of the irregular nature of point clouds, most of the existing methods convert point clouds into regular 3D voxel grids before they are used as input for ConvNets. Unfortunately, voxel representations are highly insensitive to the geometrical nature of 3D data. More recent methods encode point clouds to higher dimensional features to cover the global 3D space. However, these models are not able to sufficiently capture the local structures of point clouds. Therefore, in this paper, we propose a method that exploits both local and global contextual cues imposed by the k-d tree. The method is designed to learn representation vectors progressively along the tree structure. Experiments on challenging benchmarks show that the proposed model provides discriminative point set features. For the task of 3D scene semantic segmentation, our method significantly outperforms the state-of-the-art on the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS).",
"title": ""
},
{
"docid": "d03d9216b3c2e9dd7165fa9402b5cd57",
"text": "Area of image inpainting over relatively large missing regions recently advanced substantially through adaptation of dedicated deep neural networks. However, current network solutions still introduce undesired artifacts and noise to the repaired regions. We present an image inpainting method that is based on the celebrated generative adversarial network (GAN) framework. The proposed PGGAN method includes a discriminator network that combines a global GAN (G-GAN) architecture with a patchGAN approach. PGGAN first shares network layers between G-GAN and patchGAN, then splits paths to produce two adversarial losses that feed the generator network in order to capture both local continuity of image texture and pervasive global features in images. The proposed framework is evaluated extensively, and the results including comparison to recent state-of-the-art demonstrate that it achieves considerable improvements on both visual and quantitative evaluations.",
"title": ""
},
{
"docid": "60f9a34771b844228e1d8da363e89359",
"text": "3-mercaptopyruvate sulfurtransferase (3-MST) was a novel hydrogen sulfide (H2S)-synthesizing enzyme that may be involved in cyanide degradation and in thiosulfate biosynthesis. Over recent years, considerable attention has been focused on the biochemistry and molecular biology of H2S-synthesizing enzyme. In contrast, there have been few concerted attempts to investigate the changes in the expression of the H2S-synthesizing enzymes with disease states. To investigate the changes of 3-MST after traumatic brain injury (TBI) and its possible role, mice TBI model was established by controlled cortical impact system, and the expression and cellular localization of 3-MST after TBI was investigated in the present study. Western blot analysis revealed that 3-MST was present in normal mice brain cortex. It gradually increased, reached a peak on the first day after TBI, and then reached a valley on the third day. Importantly, 3-MST was colocalized with neuron. In addition, Western blot detection showed that the first day post injury was also the autophagic peak indicated by the elevated expression of LC3. Importantly, immunohistochemistry analysis revealed that injury-induced expression of 3-MST was partly colabeled by LC3. However, there was no colocalization of 3-MST with propidium iodide (cell death marker) and LC3 positive cells were partly colocalized with propidium iodide. These data suggested that 3-MST was mainly located in living neurons and may be implicated in the autophagy of neuron and involved in the pathophysiology of brain after TBI.",
"title": ""
},
{
"docid": "281fe7b4b26ead35e7ce0d2ea354f002",
"text": "BACKGROUND\nThe safety and the effects of different trajectories on thumb motion of suture-button suspensionplasty post-trapeziectomy are not known.\n\n\nMETHODS\nIn a cadaveric model, thumb range of motion, trapeziectomy space height, and distance between the device and nerve to the first dorsal interosseous muscle (first DI) were measured for proximal and distal trajectory groups. Proximal trajectory was defined as a suture button angle directed from the thumb metacarpal to the second metacarpal at a trajectory less than 60° from the horizontal; distal trajectory was defined as a suture button angle directed from the thumb metacarpal to the second metacarpal at a trajectory of greater than 60° from the horizontal (Fig. 1).\n\n\nRESULTS\nThere were no significant differences in range of motion and trapeziectomy space height between both groups. The device was significantly further away from the nerve to the first DI in the proximal trajectory group compared to the distal trajectory group, but was still safely away from the nerve in both groups (greater than 1 cm).\n\n\nCONCLUSIONS\nThese results suggest that the device placement in either a proximal or distal location on the second metacarpal will yield similar results regarding safety and thumb range of motion.",
"title": ""
},
{
"docid": "cb4bf3bc76586e455dc863bc1ca2800e",
"text": "Client-side apps (e.g., mobile or in-browser) need cloud data to be available in a local cache, for both reads and updates. For optimal user experience and developer support, the cache should be consistent and fault-tolerant. In order to scale to high numbers of unreliable and resource-poor clients, and large database, the system needs to use resources sparingly. The SwiftCloud distributed object database is the first to provide fast reads and writes via a causally-consistent client-side local cache backed by the cloud. It is thrifty in resources and scales well, thanks to consistent versioning provided by the cloud, using small and bounded metadata. It remains available during faults, switching to a different data centre when the current one is not responsive, while maintaining its consistency guarantees. This paper presents the SwiftCloud algorithms, design, and experimental evaluation. It shows that client-side apps enjoy the high performance and availability, under the same guarantees as a remote cloud data store, at a small cost.",
"title": ""
},
{
"docid": "9083d1159628f0b9a363aca5dea47591",
"text": "Cocitation and co-word methods have long been used to detect and track emerging topics in scientific literature, but both have weaknesses. Recently, while many researchers have adopted generative probabilistic models for topic detection and tracking, few have compared generative probabilistic models with traditional cocitation and co-word methods in terms of their overall performance. In this article, we compare the performance of hierarchical Dirichlet process (HDP), a promising generative probabilistic model, with that of the 2 traditional topic detecting and tracking methods— cocitation analysis and co-word analysis. We visualize and explore the relationships between topics identified by the 3 methods in hierarchical edge bundling graphs and time flow graphs. Our result shows that HDP is more sensitive and reliable than the other 2 methods in both detecting and tracking emerging topics. Furthermore, we demonstrate the important topics and topic evolution trends in the literature of terrorism research with the HDP method.",
"title": ""
},
{
"docid": "a72efe95f299903639756f7501a6900b",
"text": "With the advent of the Internet of Things (IoT), communication between connected machines has become necessity. We simulate the communication of IoT by short-lived instant messaging for group communication. Group communication security requires such measures as group forward and backward secrecy and perfect forward secrecy. We satisfy these security measures by using a group controller and Attributebased Encryption (ABE) to encrypt data on update procedures. The communication overhead is outsourced to a mediating MQ Telemetry Transport broker. Thus, we decrease the costs for group joins and leaves to T(1). The number of attributes used in the system are reduced to O(log(N)), where N represents the maximum number of members. We provide an intuitive approach to fit the maximum number N = 2k members to our requirements and to increase the maximum size of members, if needed by N = 2k+1.",
"title": ""
},
{
"docid": "44a4fb2e14de16ae13ab072dc72018fb",
"text": "Objective: The purpose of this contribution is to estimate the path loss of capacitive human body communication (HBC) systems under practical conditions. Methods: Most prior work utilizes large grounded instruments to perform path loss measurements, resulting in overly optimistic path loss estimates for wearable HBC devices. In this paper, small battery-powered transmitter and receiver devices are implemented to measure path loss under realistic assumptions. A hybrid electrostatic finite element method simulation model is presented that validates measurements and enables rapid and accurate characterization of future capacitive HBC systems. Results: Measurements from form-factor-accurate prototypes reveal path loss results between 31.7 and 42.2 dB from 20 to 150 MHz. Simulation results matched measurements within 2.5 dB. Comeasurements using large grounded benchtop vector network analyzer (VNA) and large battery-powered spectrum analyzer (SA) underestimate path loss by up to 33.6 and 8.2 dB, respectively. Measurements utilizing a VNA with baluns, or large battery-powered SAs with baluns still underestimate path loss by up to 24.3 and 6.7 dB, respectively. Conclusion: Measurements of path loss in capacitive HBC systems strongly depend on instrumentation configurations. It is thus imperative to simulate or measure path loss in capacitive HBC systems utilizing realistic geometries and grounding configurations. Significance: HBC has a great potential for many emerging wearable devices and applications; accurate path loss estimation will improve system-level design leading to viable products.",
"title": ""
},
{
"docid": "2e41b44f2dc3b429f0ff11861ba93a14",
"text": "With the economic successes of several Asian economies and their increasingly important roles in the global financial market, the prediction of Asian stock markets has becoming a hot research area. As Asian stock markets are highly dynamic and exhibit wide variation, it may more realistic and practical that assumed the stock indexes of Asian stock markets are nonlinear mixture data. In this research, a time series prediction model by combining nonlinear independent component analysis (NLICA) and neural network is proposed to forecast Asian stock markets. NLICA is a novel feature extraction technique to find independent sources from observed nonlinear mixture data where no relevant data mixing mechanisms are available. In the proposed method, we first use NLICA to transform the input space composed of original time series data into the feature space consisting of independent components representing underlying information of the original data. Then, the ICs are served as the input variables of the neural network to build prediction model. Among the Asian stock markets, Japanese and China’s stock markets are the biggest two in Asia and they respectively represent the two types of stock markets. Therefore, in order to evaluate the performance of the proposed approach, the Nikkei 225 closing index and Shanghai B-share closing index are used as illustrative examples. Experimental results show that the proposed forecasting model not only improves the prediction accuracy of the neural network approach but also outperforms the three comparison methods. The proposed stock index prediction model can be therefore a good alternative for Asian stock market indexes. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f70dc802c631c4bda7de2de78217411a",
"text": "Researchers, technology reviewers, and governmental agencies have expressed concern that automation may necessitate the introduction of added displays to indicate vehicle intent in vehicle-to-pedestrian interactions. An automated online methodology for obtaining communication intent perceptions for 30 external vehicle-to-pedestrian display concepts was implemented and tested using Amazon Mechanic Turk. Data from 200 qualified participants was quickly obtained and processed. In addition to producing a useful early-stage evaluation of these specific design concepts, the test demonstrated that the methodology is scalable so that a large number of design elements or minor variations can be assessed through a series of runs even on much larger samples in a matter of hours. Using this approach, designers should be able to refine concepts both more quickly and in more depth than available development resources typically allow. Some concerns and questions about common assumptions related to the implementation of vehicle-to-pedestrian displays are posed.",
"title": ""
},
{
"docid": "be06fc67973751b98dd07599e29e4b01",
"text": "The contactless version of the air-filled substrate integrated waveguide (AF-SIW) is introduced for the first time. The conventional AF-SIW configuration requires a pure and flawless connection of the covering layers to the intermediate substrate. To operate efficiently at high frequencies, this requires a costly fabrication process. In the proposed configuration, the boundary condition on both sides around the AF guiding medium is modified to obtain artificial magnetic conductor (AMC) boundary conditions. The AMC surfaces on both sides of the waveguide substrate are realized by a single-periodic structure with the new type of unit cells. The PEC–AMC parallel plates prevent the leakage of the AF guiding region. The proposed contactless AF-SIW shows low-loss performance in comparison with the conventional AF-SIW at millimeter-wave frequencies when the layers of both waveguides are connected poorly.",
"title": ""
},
{
"docid": "c57c69fd1858b50998ec9706e34f6c46",
"text": "Hashing has recently attracted considerable attention for large scale similarity search. However, learning compact codes with good performance is still a challenge. In many cases, the real-world data lies on a low-dimensional manifold embedded in high-dimensional ambient space. To capture meaningful neighbors, a compact hashing representation should be able to uncover the intrinsic geometric structure of the manifold, e.g., the neighborhood relationships between subregions. Most existing hashing methods only consider this issue during mapping data points into certain projected dimensions. When getting the binary codes, they either directly quantize the projected values with a threshold, or use an orthogonal matrix to refine the initial projection matrix, which both consider projection and quantization separately, and will not well preserve the locality structure in the whole learning process. In this paper, we propose a novel hashing algorithm called Locality Preserving Hashing to effectively solve the above problems. Specifically, we learn a set of locality preserving projections with a joint optimization framework, which minimizes the average projection distance and quantization loss simultaneously. Experimental comparisons with other state-of-the-art methods on two large scale datasets demonstrate the effectiveness and efficiency of our method.",
"title": ""
},
{
"docid": "2265121606a423d581ca696a9b7cee31",
"text": "Heterochromatin protein 1 (HP1) was first described in Drosophila melanogaster as a heterochromatin associated protein with dose-dependent effect on gene silencing. The HP1 family is evolutionarily highly conserved and there are multiple members within the same species. The multi-functionality of HP1 reflects its ability to interact with diverse nuclear proteins, ranging from histones and transcriptional co-repressors to cohesion and DNA replication factors. As its name suggests, HP1 is well-known as a silencing protein found at pericentromeres and telomeres. In contrast to previous views that heterochromatin is transcriptionally inactive; noncoding RNAs transcribed from heterochromatic DNA repeats regulates the assembly and function of heterochromatin ranging from fission yeast to animals. Moreover, more recent progress has shed light on the paradoxical properties of HP1 in the nucleus and has revealed, unexpectedly, its existence in the euchromatin. Therefore, HP1 proteins might participate in both transcription repression in heterochromatin and euchromatin.",
"title": ""
},
{
"docid": "a0501b0b3ba110692f9b162ce5f72c05",
"text": "RDF and related Semantic Web technologies have been the recent focus of much research activity. This work has led to new specifications for RDF and OWL. However, efficient implementations of these standards are needed to realize the vision of a world-wide semantic Web. In particular, implementations that scale to large, enterprise-class data sets are required. Jena2 is the second generation of Jena, a leading semantic web programmers’ toolkit. This paper describes the persistence subsystem of Jena2 which is intended to support large datasets. This paper describes its features, the changes from Jena1, relevant details of the implementation and performance tuning issues. Query optimization for RDF is identified as a promising area for future research.",
"title": ""
},
{
"docid": "1a8954d4cacde8eb4785a4192a3ed070",
"text": "This study examined the production and perception of English vowels by highly experienced native Italian speakers of English. The subjects were selected on the basis of the age at which they arrived in Canada and began to learn English, and how much they continued to use Italian. Vowel production accuracy was assessed through an intelligibility test in which native English-speaking listeners attempted to identify vowels spoken by the native Italian subjects. Vowel perception was assessed using a categorial discrimination test. The later in life the native Italian subjects began to learn English, the less accurately they produced and perceived English vowels. Neither of two groups of early Italian/English bilinguals differed significantly from native speakers of English either for production or perception. This finding is consistent with the hypothesis of the speech learning model [Flege, in Speech Perception and Linguistic Experience: Theoretical and Methodological Issues (York, Timonium, MD, 1995)] that early bilinguals establish new categories for vowels found in the second language (L2). The significant correlation observed to exist between the measures of L2 vowel production and perception is consistent with another hypothesis of the speech learning model, viz., that the accuracy with which L2 vowels are produced is limited by how accurately they are perceived.",
"title": ""
},
{
"docid": "9a033f2ba2dc67f7beb2a86c13f91793",
"text": "Plasticity is an intrinsic property of the human brain and represents evolution's invention to enable the nervous system to escape the restrictions of its own genome and thus adapt to environmental pressures, physiologic changes, and experiences. Dynamic shifts in the strength of preexisting connections across distributed neural networks, changes in task-related cortico-cortical and cortico-subcortical coherence and modifications of the mapping between behavior and neural activity take place in response to changes in afferent input or efferent demand. Such rapid, ongoing changes may be followed by the establishment of new connections through dendritic growth and arborization. However, they harbor the danger that the evolving pattern of neural activation may in itself lead to abnormal behavior. Plasticity is the mechanism for development and learning, as much as a cause of pathology. The challenge we face is to learn enough about the mechanisms of plasticity to modulate them to achieve the best behavioral outcome for a given subject.",
"title": ""
},
{
"docid": "11ddbce61cb175e9779e0fcb5622436f",
"text": "When rewards are sparse and efficient exploration essential, deep Q-learning with -greedy exploration tends to fail. This poses problems for otherwise promising domains such as task-oriented dialog systems, where the primary reward signal, indicating successful completion, typically occurs only at the end of each episode but depends on the entire sequence of utterances. A poor agent encounters such successful dialogs rarely, and a random agent may never stumble upon a successful outcome in reasonable time. We present two techniques that significantly improve the efficiency of exploration for deep Q-learning agents in dialog systems. First, we demonstrate that exploration by Thompson sampling, using Monte Carlo samples from a Bayes-by-Backprop neural network, yields marked improvement over standard DQNs with Boltzmann or -greedy exploration. Second, we show that spiking the replay buffer with a small number of successes, as are easy to harvest for dialog tasks, can make Q-learning feasible when it might otherwise fail catastrophically.",
"title": ""
},
{
"docid": "200da22a331c381bf46b901879273970",
"text": "The explosive growth in volume, velocity, and diversity of data produced by mobile devices and cloud applications has contributed to the abundance of data or ‘big data.’ Available solutions for efficient data storage and management cannot fulfill the needs of such heterogeneous data where the amount of data is continuously increasing. For efficient retrieval and management, existing indexing solutions become inefficient with the rapidly growing index size and seek time and an optimized index scheme is required for big data. Regarding real-world applications, the indexing issue with big data in cloud computing is widespread in healthcare, enterprises, scientific experiments, and social networks. To date, diverse soft computing, machine learning, and other techniques in terms of artificial intelligence have been utilized to satisfy the indexing requirements, yet in the literature, there is no reported state-of-the-art survey investigating the performance and consequences of techniques for solving indexing in big data issues as they enter cloud computing. The objective of this paper is to investigate and examine the existing indexing techniques for big data. Taxonomy of indexing techniques is developed to provide insight to enable researchers understand and select a technique as a basis to design an indexing mechanism with reduced time and space consumption for BD-MCC. In this study, 48 indexing techniques have been studied and compared based on 60 articles related to the topic. The indexing techniques’ performance is analyzed based on their characteristics and big data indexing requirements. The main contribution of this study is taxonomy of categorized indexing techniques based on their method. The categories are non-artificial intelligence, artificial intelligence, and collaborative artificial intelligence indexing methods. In addition, the significance of different procedures and performance is analyzed, besides limitations of each technique. 
In conclusion, several key future research topics with potential to accelerate the progress and deployment of artificial intelligence-based cooperative indexing in BD-MCC are elaborated on.",
"title": ""
}
] |
scidocsrr
|
3f83a103280b1660d51b646aa4580c85
|
Extending UTAUT2 To Explore Consumer Adoption Of Mobile Payments
|
[
{
"docid": "19e070089a8495a437e81da50f3eb21c",
"text": "Mobile payment refers to the use of mobile devices to conduct payment transactions. Users can use mobile devices for remote and proximity payments; moreover, they can purchase digital contents and physical goods and services. It offers an alternative payment method for consumers. However, there are relative low adoption rates in this payment method. This research aims to identify and explore key factors that affect the decision of whether to use mobile payments. Two well-established theories, the Technology Acceptance Model (TAM) and the Innovation Diffusion Theory (IDT), are applied to investigate user acceptance of mobile payments. Survey data from mobile payments users will be used to test the proposed hypothesis and the model.",
"title": ""
},
{
"docid": "1c0efa706f999ee0129d21acbd0ef5ab",
"text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN",
"title": ""
}
] |
[
{
"docid": "5475df204bca627e73b077594af29d47",
"text": "Multilayered artificial neural networks are becoming a pervasive tool in a host of application fields. At the heart of this deep learning revolution are familiar concepts from applied and computational mathematics; notably, in calculus, approximation theory, optimization and linear algebra. This article provides a very brief introduction to the basic ideas that underlie deep learning from an applied mathematics perspective. Our target audience includes postgraduate and final year undergraduate students in mathematics who are keen to learn about the area. The article may also be useful for instructors in mathematics who wish to enliven their classes with references to the application of deep learning techniques. We focus on three fundamental questions: what is a deep neural network? how is a network trained? what is the stochastic gradient method? We illustrate the ideas with a short MATLAB code that sets up and trains a network. We also show the use of state-of-the art software on a large scale image classification problem. We finish with references to the current literature.",
"title": ""
},
{
"docid": "6bc942f7f78c8549d60cc4be5e0b467a",
"text": "In this study, we propose a novel, lightweight approach to real-time detection of vehicles using parts at intersections. Intersections feature oncoming, preceding, and cross traffic, which presents challenges for vision-based vehicle detection. Ubiquitous partial occlusions further complicate the vehicle detection task, and occur when vehicles enter and leave the camera's field of view. To confront these issues, we independently detect vehicle parts using strong classifiers trained with active learning. We match part responses using a learned matching classification. The learning process for part configurations leverages user input regarding full vehicle configurations. Part configurations are evaluated using Support Vector Machine classification. We present a comparison of detection results using geometric image features and appearance-based features. The full vehicle detection by parts has been evaluated on real-world data, runs in real time, and shows promise for future work in urban driver assistance.",
"title": ""
},
{
"docid": "a51a3e1ae86e4d178efd610d15415feb",
"text": "The availability of semantically annotated image and video assets constitutes a critical prerequisite for the realisation of intelligent knowledge management services pertaining to realistic user needs. Given the extend of the challenges involved in the automatic extraction of such descriptions, manually created metadata play a significant role, further strengthened by their deployment in training and evaluation tasks related to the automatic extraction of content descriptions. The different views taken by the two main approaches towards semantic content description, namely the Semantic Web and MPEG-7, as well as the traits particular to multimedia content due to the multiplicity of information levels involved, have resulted in a variety of image and video annotation tools, adopting varying description aspects. Aiming to provide a common framework of reference and furthermore to highlight open issues, especially with respect to the coverage and the interoperability of the produced metadata, in this chapter we present an overview of the state of the art in image and video annotation tools.",
"title": ""
},
{
"docid": "8686ffed021b68574b4c3547d361eac8",
"text": "* To whom all correspondence should be addressed. Abstract Face detection is an important prerequisite step for successful face recognition. The performance of previous face detection methods reported in the literature is far from perfect and deteriorates ungracefully where lighting conditions cannot be controlled. We propose a method that outperforms state-of-the-art face detection methods in environments with stable lighting. In addition, our method can potentially perform well in environments with variable lighting conditions. The approach capitalizes upon our near-IR skin detection method reported elsewhere [13][14]. It ascertains the existence of a face within the skin region by finding the eyes and eyebrows. The eyeeyebrow pairs are determined by extracting appropriate features from multiple near-IR bands. Very successful feature extraction is achieved by simple algorithmic means like integral projections and template matching. This is because processing is constrained in the skin region and aided by the near-IR phenomenology. The effectiveness of our method is substantiated by comparative experimental results with the Identix face detector [5].",
"title": ""
},
{
"docid": "6d50ff00babb00d36a30fdc769091b7e",
"text": "The purpose of Advanced Driver Assistance Systems (ADAS) is that driver error will be reduced or even eliminated, and efficiency in traffic and transport is enhanced. The benefits of ADAS implementations are potentially considerable because of a significant decrease in human suffering, economical cost and pollution. However, there are also potential problems to be expected, since the task of driving a ordinary motor vehicle is changing in nature, in the direction of supervising a (partly) automated moving vehicle.",
"title": ""
},
{
"docid": "83e5f62d7f091260d4ae91c2d8f72d3d",
"text": "Document recognition and retrieval technologies complement one another, providing improved access to increasingly large document collections. While recognition and retrieval of textual information is fairly mature, with wide-spread availability of optical character recognition and text-based search engines, recognition and retrieval of graphics such as images, figures, tables, diagrams, and mathematical expressions are in comparatively early stages of research. This paper surveys the state of the art in recognition and retrieval of mathematical expressions, organized around four key problems in math retrieval (query construction, normalization, indexing, and relevance feedback), and four key problems in math recognition (detecting expressions, detecting and classifying symbols, analyzing symbol layout, and constructing a representation of meaning). Of special interest is the machine learning problem of jointly optimizing the component algorithms in a math recognition system, and developing effective indexing, retrieval and relevance feedback algorithms for math retrieval. Another important open problem is developing user interfaces that seamlessly integrate recognition and retrieval. Activity in these important research areas is increasing, in part because math notation provides an excellent domain for studying problems common to many document and graphics recognition and retrieval applications, and also because mature applications will likely provide substantial benefits for education, research, and mathematical literacy.",
"title": ""
},
{
"docid": "7ca6ea8592c0bd3a31108221975f9470",
"text": "BACKGROUND\nThe dermoscopic patterns of pigmented skin tumors are influenced by the body site.\n\n\nOBJECTIVE\nTo evaluate the clinical and dermoscopic features associated with pigmented vulvar lesions.\n\n\nMETHODS\nRetrospective analysis of clinical and dermoscopic images of vulvar lesions. The χ² test was used to test the association between clinical data and histopathological diagnosis.\n\n\nRESULTS\nA total of 42 (32.8%) melanocytic and 86 (67.2%) nonmelanocytic vulvar lesions were analyzed. Nevi significantly prevailed in younger women compared with melanomas and melanosis and exhibited most commonly a globular/cobblestone (51.3%) and a mixed (21.6%) pattern. Dermoscopically all melanomas showed a multicomponent pattern. Melanotic macules showed clinical overlapping features with melanoma, but their dermoscopic patterns differed significantly from those observed in melanomas.\n\n\nCONCLUSION\nThe diagnosis and management of pigmented vulvar lesions should be based on a good clinicodermoscopic correlation. Dermoscopy may be helpful in the differentiation of solitary melanotic macules from early melanoma.",
"title": ""
},
{
"docid": "5f39990b87532cd3189c7d4adb2cd144",
"text": "The abundance of data in the context of smart cities yields huge potential for data-driven businesses but raises unprecedented challenges on data privacy and security. Some of these challenges can be addressed merely through appropriate technical measures, while other issues can only be solved through strategic organizational decisions. In this paper, we present few cases from a real smart city project. We outline some exemplary data analytics scenarios and describe the measures that we adopt for a secure handling of data. Finally, we show how the chosen solutions impact the awareness of the public and acceptability of the project.",
"title": ""
},
{
"docid": "160e06b33d6db64f38480c62989908fb",
"text": "A theoretical and experimental study has been performed on a low-profile, 2.4-GHz dipole antenna that uses a frequency-selective surface (FSS) with varactor-tuned unit cells. The tunable unit cell is a square patch with a small aperture on either side to accommodate the varactor diodes. The varactors are placed only along one dimension to avoid the use of vias and simplify the dc bias network. An analytical circuit model for this type of electrically asymmetric unit cell is shown. The measured data demonstrate tunability from 2.15 to 2.63 GHz with peak gains at broadside that range from 3.7- to 5-dBi and instantaneous bandwidths of 50 to 280 MHz within the tuning range. It is shown that tuning for optimum performance in the presence of a human-core body phantom can be achieved. The total antenna thickness is approximately λ/45.",
"title": ""
},
{
"docid": "0c306bc52ad6b89b5cb8a01250699226",
"text": "Trial-and-error based reinforcement learning (RL) has seen rapid advancements in recent times, especially with the advent of deep neural networks. However, the majority of autonomous RL algorithms require a large number of interactions with the environment. A large number of interactions may be impractical in many real-world applications, such as robotics, and many practical systems have to obey limitations in the form of state space or control constraints. To reduce the number of system interactions while simultaneously handling constraints, we propose a modelbased RL framework based on probabilistic Model Predictive Control (MPC). In particular, we propose to learn a probabilistic transition model using Gaussian Processes (GPs) to incorporate model uncertainty into longterm predictions, thereby, reducing the impact of model errors. We then use MPC to find a control sequence that minimises the expected long-term cost. We provide theoretical guarantees for first-order optimality in the GP-based transition models with deterministic approximate inference for long-term planning. We demonstrate that our approach does not only achieve state-of-the-art data efficiency, but also is a principled way for RL in constrained environments.",
"title": ""
},
{
"docid": "3a0d2784b1115e82a4aedad074da8c74",
"text": "The aim of this paper is to present how to implement a control volume approach improved by Hermite radial basis functions (CV-RBF) for geochemical problems. A multi-step strategy based on Richardson extrapolation is proposed as an alternative to the conventional dual step sequential non-iterative approach (SNIA) for coupling the transport equations with the chemical model. Additionally, this paper illustrates how to use PHREEQC to add geochemical reaction capabilities to CV-RBF transport methods. Several problems with different degrees of complexity were solved including cases of cation exchange, dissolution, dissociation, equilibrium and kinetics at different rates for mineral species. The results show that the solution and strategies presented here are effective and in good agreement with other methods presented in the literature for the same cases. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "94e8ff3d15926e8f216b81e6c09a55a5",
"text": "Purpose Sustainability and Sustainable Development should be the top priorities of a Smarter Planet. On the basis of this statement, our aim is to highlight opportunities of knowledge co-creation that derive from the integration of the research efforts of two communities of scientists, scholars and professionals, recognized worldwide that share a common vision of a smarter and more sustainable planet: Service Science and Sustainability Science. Design/Methodology/approach By adopting a systems thinking view, and specifically the Viable Systems Approach (VSA), the paper analyses the scientific positioning of Service Science and Sustainability Science, and, through a Service-Dominant Logic co-creation approach, seeks commonalities that can highlight opportunities of fruitful scientific collaboration. Findings The paper evidences significant convergence in the views and scientific positioning of Service Science and Sustainability Science, clarifying why the two communities should collaborate by integrating knowledge resources and sharing advances. By promoting a boundary crossing interaction and creating interface connections within and between the two scientific communities, gaps can be removed and relevant bridging elements explored and exploited in a shared effort targeted to realizing a smarter and more sustainable world. The common inter- and transdisciplinary as well as solution-oriented research approach appears a key methodological element of convergence for developing a shared framework of reference coherently. A “3 Pillars” Knowledge Co-creation Framework for Service & Sustainability Science integrates the findings of our 3-step interpretative pathway, into a consistent whole, a key to creating convergence in multidisciplinary knowledge co-creation contexts. This framework proposes an original vision of sustainability which integrates the Triple Helix and the Triple Bottom Line models into a co-creation framework to support knowledge design and creation processes through which University-Industry-Government collaboration, necessary to address the challenge of a smarter and sustainable world, can be tried, tested and further developed. Research implications The paper opens up new research pathways launching a Science-led call for collaboration that overcomes the traditional divide between knowledge domains and communities fostering a shared effort to address the challenges of sustainability within a smarter planet and to put into practice interdisciplinary collaboration in order to develop a common framework for Service and Sustainability Sciences. Practical implications The paper provides insights for rethinking research, development and management approaches as well as education programs by placing sustainability at the center of the scientific, governmental and business agendas. It also sheds light on the criticalities and barriers of mutual learning systems. Originality/value The paper develops an original analytical approach that integrates the Triple Helix and the Triple Bottom Line models into a coherent co-creation framework for sustainability in which Service Science and Sustainability Science play key roles by integrating their knowledge resources.",
"title": ""
},
{
"docid": "d42c52a6127d72a513b8dc98f0932ea0",
"text": "We present a model that explains how established firms create breakthrough inventions. We identify three organizational pathologies that inhibit breakthrough inventions: the familiarity trap – favoring the familiar; the maturity trap – favoring the mature; and the propinquity trap – favoring search for solutions near to existing solutions. We argue that by experimenting with novel (i.e., technologies in which the firm lacks prior experience), emerging (technologies that are recent or newly developed in the industry), and pioneering (technologies that do not build on any existing technologies) technologies firms can overcome these traps and create breakthrough inventions. Empirical evidence from the chemicals industry supports our model. Copyright 2001 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "7072c7b94fc6376b13649ec748612705",
"text": "Performing link prediction in Knowledge Bases (KBs) with embedding-based models, like with the model TransE (Bordes et al., 2013) which represents relationships as translations in the embedding space, have shown promising results in recent years. Most of these works are focused on modeling single relationships and hence do not take full advantage of the graph structure of KBs. In this paper, we propose an extension of TransE that learns to explicitly model composition of relationships via the addition of their corresponding translation vectors. We show empirically that this allows to improve performance for predicting single relationships as well as compositions of pairs of them.",
"title": ""
},
{
"docid": "14d9343bbe4ad2dd4c2c27cb5d6795cd",
"text": "In the paper a method of translation applied in a new system TGT is discussed. TGT translates texts written in Polish into corresponding utterances in the Polish sign language. Discussion is focused on text-into-text translation phase. Proper translation is done on the level of a predicative representation of the sentence. The representation is built on the basis of syntactic graph that depicts the composition and mutual connections of syntactic groups, which exist in the sentence and are identified at the syntactic analysis stage. An essential element of translation process is complementing the initial predicative graph with nodes, which correspond to lacking sentence members. The method acts for primitive sentences as well as for compound ones, with some limitations, however. A translation example is given which illustrates main transformations done on the linguistic level. It is complemented by samples of images generated by the animating part of the system.",
"title": ""
},
{
"docid": "e9e2887e7aae5315a8661c9d7456aa2e",
"text": "It has been shown that learning distributed word representations is highly useful for Twitter sentiment classification. Most existing models rely on a single distributed representation for each word. This is problematic for sentiment classification because words are often polysemous and each word can contain different sentiment polarities under different topics. We address this issue by learning topic-enriched multi-prototype word embeddings (TMWE). In particular, we develop two neural networks which 1) learn word embeddings that better capture tweet context by incorporating topic information, and 2) learn topic-enriched multiple prototype embeddings for each word. Experiments on Twitter sentiment benchmark datasets in SemEval 2013 show that TMWE outperforms the top system with hand-crafted features, and the current best neural network model.",
"title": ""
},
{
"docid": "759dc834e4e11668ad515a7b2a385c03",
"text": "In this paper, the authors address the significance and complexity of tokenization, the beginning step of NLP. Notions of word and token are discussed and defined from the viewpoints of lexicography and pragmatic implementation, respectively. Automatic segmentation of Chinese words is presented as an illustration of tokenization. Practical approaches to identification of compound tokens in English, such as idioms, phrasal verbs and fixed expressions, are developed.",
"title": ""
},
{
"docid": "6119cb6d8d20589dec218ce4ffe5d46f",
"text": "We describe a computer program which understands a greyscale image of a face well enough to locate individual face features such as eyes and mouth. The program has two distinct components: modules designed to locate particular face features, usually in a restricted area; and the overall control strategy which activates modules on the basis of the current solution state, and assesses and integrates the results of each module. Our main tool is statistical knowledge obtained by detailed measurements of many example faces. Once an initial location has been estimated, predictions about the positions of other features can be investigated. This can lead to a rapid increase in confidence as other features are identified in their predicted positions, or alternatively to the initial identification being quickly rejected as predictions are not confirmed. The program can be tuned either to return an accurate result, or to return fairly probable results very quickly. We describe results when working to high accuracy, in which the aim is to locate 40 pre-specified feature points chosen for their use in indexing a mugshot database. A variant is presented designed simply to find eye locations, working at close to video rates. We thank Roly Lishman for many valuable discussions on faces.",
"title": ""
},
{
"docid": "4922c751dded99ca83e19d51eb5d647e",
"text": "The viewpoint consistency constraint requires that the locations of all object features in an image must be consistent with projection from a single viewpoint. The application of this constraint is central to the problem of achieving robust recognition, since it allows the spatial information in an image to be compared with prior knowledge of an object's shape to the full degree of available image resolution. In addition, the constraint greatly reduces the size of the search space during model-based matching by allowing a few initial matches to provide tight constraints for the locations of other model features. Unfortunately, while simple to state, this constraint has seldom been effectively applied in model-based computer vision systems. This paper reviews the history of attempts to make use of the viewpoint consistency constraint and then describes a number of new techniques for applying it to the process of model-based recognition. A method is presented for probabilistically evaluating new potential matches to extend and refine an initial viewpoint estimate. This evaluation allows the model-based verification process to proceed without the expense of backtracking or search. It will be shown that the effective application of the viewpoint consistency constraint, in conjunction with bottom-up image description based upon principles of perceptual organization, can lead to robust three-dimensional object recognition from single gray-scale images.",
"title": ""
},
{
"docid": "ff7c790af7eaaea4bf3a354d21fd9189",
"text": "Among the large number of contributions concerning the localization techniques for wireless sensor networks (WSNs), there is still no simple, energy and cost efficient solution suitable in outdoor scenarios. In this paper, a technique based on antenna arrays and angle-of-arrival (AoA) measurements is carefully discussed. While the AoA algorithms are rarely considered for WSNs due to the large dimensions of directional antennas, some system configurations are investigated that can be easily incorporated in pocket-size wireless devices. A heuristic weighting function that enables decreasing the location errors is introduced. Also, the detailed performance analysis of the presented system is provided. The localization accuracy is validated through realistic Monte-Carlo simulations that take into account the specificity of propagation conditions in WSNs as well as the radio noise effects. Finally, trade-offs between the accuracy, localization time and the number of anchors in a network are addressed. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
ee0e7a94d590fc0e16765a2d6af6db3c
|
3D Object Recognition in Cluttered Scenes With Robust Shape Description and Correspondence Selection
|
[
{
"docid": "7c974eacb24368a0c5acfeda45d60f64",
"text": "We propose a novel approach for verifying model hypotheses in cluttered and heavily occluded 3D scenes. Instead of verifying one hypothesis at a time, as done by most state-of-the-art 3D object recognition methods, we determine object and pose instances according to a global optimization stage based on a cost function which encompasses geometrical cues. Peculiar to our approach is the inherent ability to detect significantly occluded objects without increasing the amount of false positives, so that the operating point of the object recognition algorithm can nicely move toward a higher recall without sacrificing precision. Our approach outperforms state-of-the-art on a challenging dataset including 35 household models obtained with the Kinect sensor, as well as on the standard 3D object recognition benchmark dataset.",
"title": ""
}
] |
[
{
"docid": "ca8d70248ef68c41f34eee375e511abf",
"text": "While mobile advertisement is the dominant source of revenue for mobile apps, the usage patterns of mobile users, and thus their engagement and exposure times, may be in conflict with the effectiveness of current ads. Users' engagement with apps can range from a few seconds to several minutes, depending on a number of factors such as users' locations, concurrent activities and goals. Despite the wide range of engagement times, the current format of ad auctions dictates that ads are priced, sold and configured prior to actual viewing, that is regardless of the actual ad exposure time.\n We argue that the wealth of easy-to-gather contextual information on mobile devices is sufficient to allow advertisers to make better choices by effectively predicting exposure time. We analyze mobile device usage patterns with a detailed two-week long user study of 37 users in the US and South Korea. After characterizing application session times, we use factor analysis to derive a simple predictive model and show that it is able to offer improved accuracy compared to mean session time over 90% of the time. We make the case for including predicted ad exposure duration in the price of mobile advertisements and posit that such information could significantly impact the effectiveness of mobile ads by giving publishers the ability to tune campaigns for engagement length, and enable a more efficient market for ad impressions while lowering network utilization and device power consumption.",
"title": ""
},
{
"docid": "32d235c450be47d9f5bca03cb3d40f82",
"text": "Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations, which helps to substantially reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.",
"title": ""
},
{
"docid": "43b2912b6ad9824e3263ff9951daf0c2",
"text": "Monolingual alignment models have been shown to boost the performance of question answering systems by “bridging the lexical chasm” between questions and answers. The main limitation of these approaches is that they require semistructured training data in the form of question-answer pairs, which is difficult to obtain in specialized domains or lowresource languages. We propose two inexpensive methods for training alignment models solely using free text, by generating artificial question-answer pairs from discourse structures. Our approach is driven by two representations of discourse: a shallow sequential representation, and a deep one based on Rhetorical Structure Theory. We evaluate the proposed model on two corpora from different genres and domains: one from Yahoo! Answers and one from the biology domain, and two types of non-factoid questions: manner and reason. We show that these alignment models trained directly from discourse structures imposed on free text improve performance considerably over an information retrieval baseline and a neural network language model trained on the same data.",
"title": ""
},
{
"docid": "e6c1ee4f1751d614b49977eedf08e834",
"text": "We tackle the problem of online reward maximisation over a large finite set of actions described by their contexts. We focus on the case when the number of actions is too big to sample all of them even once. However we assume that we have access to the similarities between actions’ contexts and that the expected reward is an arbitrary linear function of the contexts’ images in the related reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB algorithm, and give a cumulative regret bound through a frequentist analysis. For contextual bandits, the related algorithm GP-UCB turns out to be a special case of our algorithm, and our finite-time analysis improves the regret bound of GP-UCB for the agnostic case, both in the terms of the kerneldependent quantity and the RKHS norm of the reward function. Moreover, for the linear kernel, our regret bound matches the lower bound for contextual linear bandits.",
"title": ""
},
{
"docid": "5bf8ef8658ff201a5e75e82a0ddaef60",
"text": "In the present work, we have used Tesseract 2.01 open source Optical Character Recognition (OCR) Engine under Apache License 2.0 for recognition of handwriting samples of lower case Roman script. Handwritten isolated and free-flow text samples were collected from multiple users. Tesseract is trained to recognize user-specific handwriting samples of both the categories of document pages. On a single user model, the system is trained with 1844 isolated handwritten characters and the performance is tested on 1133 characters, taken from the test set. The overall character-level accuracy of the system is observed as 83.5%. The system fails to segment 5.56% characters and erroneously classifies 10.94% characters.",
"title": ""
},
{
"docid": "a6226d78ea975a5028ca2419fed44af0",
"text": "We demonstrate a protocol for proving strongly that a black-box machine learning technique robustly predicts the future in dynamic, indefinite contexts. We propose necessary components of the proof protocol and demonstrate results visualizations to support evaluation of the proof components. Components include contemporaneously verifiable discrete predictions, deterministic computability of longitudinal predictions, imposition of realistic costs and domain constraints, exposure to diverse contexts, statistically significant excess benefits relative to a priori benchmarks and Monte Carlo trials, insignificant decay of excess benefits, pathology detection and an extended real-time trial \"in the wild.\" We apply the protocol to a big data machine learning technique deployed since 2011 that finds persistent, exploitable opportunities in many of 41 segments of US financial markets, the existence of which opportunities substantially contradict the Efficient Market Hypothesis.",
"title": ""
},
{
"docid": "01892c5a49afc92fcd82f6d8ecf8d921",
"text": "Intel SGX provisions shielded executions for securitysensitive computation, but lacks support for trusted system services (TSS), such as clock, network and filesystem. This makes enclaves vulnerable to Iago attacks [12] in the face of a powerful malicious system. To mitigate this problem, we present Aurora, a novel architecture that provides TSSes via a secure channel between enclaves and devices on top of an untrusted system, and implement two types of TSSes, i.e. clock and end-to-end network. We evaluate our solution by porting SQLite and OpenSSL into Aurora, experimental results show that SQLite benefits from a microsecond accuracy trusted clock and OpenSSL gains end-to-end secure network with about 1ms overhead.",
"title": ""
},
{
"docid": "8d92c2ec5c2372c7bb676ee7b8b0b511",
"text": "A 6-year-old boy was admitted to the emergency department (ED) suffering from petechiae and purpura on his face caused by a farming accident. He got his T-shirt caught in a rotating shaft at the back of a tractor. The T-shirt wrapped around his thorax and compressed him. He did not lose consciousness during the incident. His score on the Glasgow Coma Scale was 15 and his initial vital signs were stable upon arrival at the ED. On physical examination, diffuse petechiae and purpura were noted on the face and neck although there was no sign of direct trauma (Figs. 1 and 2). The patient denied suffering head trauma. Examination for abdominal and thoracic organ injury was negative. Traumatic asphyxia is a rare condition presenting with cervicofacial cyanosis and edema, subconjunctival hemorrhage, and petechial hemorrhages of the face, neck, and upper chest that occurs due to a compressive force to the thoracoabdominal region [1]. Although the exact mechanism is controversial, it is probably due to thoracoabdominal compression causing increased intrathoracic pressure just at the moment of the event. The fear response, which is characterized by taking and holding a deep breath and closure of the glottis, also contributes to this process [1, 2]. This back pressure is transmitted ultimately to the head and neck veins and capillaries, with stasis and rupture producing characteristic petechial and subconjunctival hemorrhages [2]. The skin of the face, neck, and upper torso may appear blue-red to blue-black but it blanches over time. The discoloration and petechiae are often more prominent on the eyelids, nose, and lips [3]. In patients with traumatic asphyxia, injuries associated with other systems may also accompany the condition. Jongewaard et al. reported chest wall and intrathoracic injuries in 11 patients, loss of consciousness in 8, prolonged confusion in 5, seizures in 2, and visual disturbances in 2 of 14 patients with traumatic asphyxia [4]. Pulmonary contusion, hemothorax, pneumothorax, prolonged loss of consciousness, Int J Emerg Med (2009) 2:255–256 DOI 10.1007/s12245-009-0115-x",
"title": ""
},
{
"docid": "6ed1132aa216e15fe54e8524c9a4f8ee",
"text": "CONTEXT\nWith ageing populations, the prevalence of dementia, especially Alzheimer's disease, is set to soar. Alzheimer's disease is associated with progressive cerebral atrophy, which can be seen on MRI with high resolution. Longitudinal MRI could track disease progression and detect neurodegenerative diseases earlier to allow prompt and specific treatment. Such use of MRI requires accurate understanding of how brain changes in normal ageing differ from those in dementia.\n\n\nSTARTING POINT\nRecently, Henry Rusinek and colleagues, in a 6-year longitudinal MRI study of initially healthy elderly subjects, showed that an increased rate of atrophy in the medial temporal lobe predicted future cognitive decline with a specificity of 91% and sensitivity of 89% (Radiology 2003; 229: 691-96). WHERE NEXT? As understanding of neurodegenerative diseases increases, specific disease-modifying treatments might become available. Serial MRI could help to determine the efficacy of such treatments, which would be expected to slow the rate of atrophy towards that of normal ageing, and might also detect the onset of neurodegeneration. The amount and pattern of excess atrophy might help to predict the underlying pathological process, allowing specific therapies to be started. As the precision of imaging improves, the ability to distinguish healthy ageing from degenerative dementia should improve.",
"title": ""
},
{
"docid": "9e315cd14de8f7082be8b0a3160b6552",
"text": "Recently, the percentage of people with hypertension is increasing, and this phenomenon is widely concerned. At the same time, wireless home Blood Pressure (BP) monitors become accessible in people’s life. Since machine learning methods have made important contributions in different fields, many researchers have tried to employ them in dealing with medical problems. However, the existing studies for BP prediction are all based on clinical data with short time ranges. Besides, there do not exist works which can jointly make use of historical measurement data (e.g. BP and heart rate) and contextual data (e.g. age, gender, BMI and altitude). Recurrent Neural Networks (RNNs), especially those using Long Short-Term Memory (LSTM) units, can capture long range dependencies, so they are effective in modeling variable-length sequences. In this paper, we propose a novel model named recurrent models with contextual layer, which can model the sequential measurement data and contextual data simultaneously to predict the trend of users’ BP. We conduct our experiments on the BP data set collected from a type of wireless home BP monitors, and experimental results show that the proposed models outperform several competitive compared methods.",
"title": ""
},
{
"docid": "98d6f207b9b032cd90f3b565b9e94fea",
"text": "The usage of machine learning techniques for the prediction of financial time series is investigated. Both discriminative and generative methods are considered and compared to more standard financial prediction techniques. Generative methods such as Switching Autoregressive Hidden Markov and changepoint models are found to be unsuccessful at predicting daily and minutely prices from a wide range of asset classes. Committees of discriminative techniques (Support Vector Machines (SVM), Relevance Vector Machines and Neural Networks) are found to perform well when incorporating sophisticated exogenous financial information in order to predict daily FX carry basket returns. The higher dimensionality that Electronic Communication Networks make available through order book data is transformed into simple features. These volume-based features, along with other price-based ones motivated by common trading rules, are used by Multiple Kernel Learning (MKL) to classify the direction of price movement for a currency over a range of time horizons. Outperformance relative to both individual SVM and benchmarks is found, along with an indication of which features are the most informative for financial prediction tasks. Fisher kernels based on three popular market microstructural models are added to the MKL set. Two subsets of this full set, constructed from the most frequently selected and highest performing individual kernels, are also investigated. Furthermore, kernel learning is employed optimising hyperparameter and Fisher feature parameters with the aim of improving predictive performance. Significant improvements in out-of-sample predictive accuracy relative to both individual SVM and standard MKL are found using these various novel enhancements to the MKL algorithm.",
"title": ""
},
{
"docid": "afb573f1b5c7e442b98b3214dd73406c",
"text": "This paper seeks to analyze the phenomenon of wartime rape and sexual torture of Croatian and Iraqi men and to explore the avenues for its prosecution under international humanitarian and human rights law. Male rape, in time of war, is predominantly an assertion of power and aggression rather than an attempt on the part of the perpetrator to satisfy sexual desire. The effect of such a horrible attack is to damage the victim's psyche, rob him of his pride, and intimidate him. In Bosnia- Herzegovina, Croatia, and Iraq, therefore, male rape and sexual torture has been used as a weapon of war with dire consequences for the victim's mental, physical, and sexual health. Testimonies collected at the Medical Centre for Human Rights in Zagreb and reports received from Iraq make it clear that prisoners in these conflicts have been exposed to sexual humiliation, as well as to systematic and systemic sexual torture. This paper calls upon the international community to combat the culture of impunity in both dictator-ruled and democratic countries by bringing the crime of wartime rape into the international arena, and by removing all barriers to justice facing the victims. Moreover, it emphasizes the fact that wartime rape is the ultimate humiliation that can be inflicted on a human being, and it must be regarded as one of the most grievous crimes against humanity. The international community has to consider wartime rape a crime of war and a threat to peace and security. It is in this respect that civilian community associations can fulfill their duties by encouraging victims of male rape to break their silence and address their socio-medical needs, including reparations and rehabilitation.",
"title": ""
},
{
"docid": "3111ef9867be7cf58be9694cbe2a14d9",
"text": "Grammatical Error Diagnosis for Chinese has always been a challenge for both foreign learners and NLP researchers, for the variety of grammar and the flexibility of expression. In this paper, we present a model based on Bidirectional Long Short-Term Memory (Bi-LSTM) neural networks, which treats the task as a sequence labeling problem, so as to detect Chinese grammatical errors, to identify the error types and to locate the error positions. In the corpora of this year’s shared task, there can be multiple errors in a single offset of a sentence, to address which, we simultaneously train three Bi-LSTM models sharing word embeddings which label Missing, Redundant and Selection errors respectively. We regard word ordering error as a special kind of word selection error which is longer during the training phase, and then separate them by length during the testing phase. In the NLP-TEA 3 shared task for Chinese Grammatical Error Diagnosis (CGED), our system achieved relatively high F1 for all the three levels in the traditional Chinese track and for the detection level in the Simplified Chinese track.",
"title": ""
},
{
"docid": "617715cd1e7340e0cdc7e019756c7e14",
"text": "Learning over multi-view data is a challenging problem with strong practical applications. Most related studies focus on the classification point of view and assume that all the views are available at any time. We consider an extension of this framework in two directions. First, based on the BiGAN model, the Multi-view BiGAN (MV-BiGAN) is able to perform density estimation from multi-view inputs. Second, it can deal with missing views and is able to update its prediction when additional views are provided. We illustrate these properties on a set of experiments over different datasets.",
"title": ""
},
{
"docid": "ea9389944cb58004d05f220587eb5670",
"text": "Road lane detection and tracking methods are the state of the art in present driver assistance systems. However, lane detection methods that exploit the parallel processing capabilities of heterogeneous high performance computing devices such as FPGAs (or GPUs), a technology that potentially will replace ECUs in a coming generation of cars, are a rare subject of interest. In this thesis a road lane detection and tracking algorithm is developed and implemented, especially designed to incorporate one or many, and even heterogeneous, hardware accelerators. Road lane markings are detected and tracked with a Sequential Monte Carlo (SMC) method. Lane detection is done by populating a pre-processed gradient image with randomly sampled, straight lines. Each line is assigned a weight according to its position and the best positioned lines are used to represent the lane markings. Subsequently, lane tracking is performed with the help of a particle filter. The code was tested on three devices: one GPU, the NVIDIA GeForce GTX 660 Ti, and two FPGAs, the ALTERA Stratix V and the ALTERA Cyclone V SoC. The tests revealed a processing frame rate of up to 627 Hz on the GPU, 478 Hz on the Stratix V FPGA and 38 Hz on the Cyclone V SoC. They also showed a significant improvement in accuracy and robustness, a 2.4-4.6 times faster execution on the GPU, a 8.4-29.7 times faster execution on the Stratix V and a reduction of memory consumption by 71.94 % compared to a similar lane detection method. The algorithm was tested on different recorded videos, on independent benchmark datasets and in multiple test drives, confronting it with a wide range of scenarios, such as varying lighting conditions, presence of disturbing shadows or light beams and varying traffic densities. In all these scenarios the algorithm proved to be very robust to detect and track one or multiple lane markings.",
"title": ""
},
{
"docid": "037042318b99bf9c32831a6b25dcd50e",
"text": "Autoencoders are popular among neural-network-based matrix completion models due to their ability to retrieve potential latent factors from the partially observed matrices. Nevertheless, when training data is scarce their performance is significantly degraded due to overfitting. In this paper, we mitigate overfitting with a data-dependent regularization technique that relies on the principles of multi-task learning. Specifically, we propose an autoencoder-based matrix completion model that performs prediction of the unknown matrix values as a main task, and manifold learning as an auxiliary task. The latter acts as an inductive bias, leading to solutions that generalize better. The proposed model outperforms the existing autoencoder-based models designed for matrix completion, achieving high reconstruction accuracy in well-known datasets.",
"title": ""
},
{
"docid": "b250ac830e1662252069cc85128358a7",
"text": "Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It also has been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregating methods developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptor. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably.",
"title": ""
},
{
"docid": "26a60d17d524425cfcfa92838ef8ea06",
"text": "This paper develops and tests a model of consumer trust in an electronic commerce vendor. Building consumer trust is a strategic imperative for web-based vendors because trust strongly influences consumer intentions to transact with unfamiliar vendors via the web. Trust allows consumers to overcome perceptions of risk and uncertainty, and to engage in the following three behaviors that are critical to the realization of a web-based vendor’s strategic objectives: following advice offered by the web vendor, sharing personal information with the vendor, and purchasing from the vendor’s web site. Trust in the vendor is defined as a multi-dimensional construct with two inter-related components—trusting beliefs (perceptions of the competence, benevolence, and integrity of the vendor), and trusting intentions—willingness to depend (that is, a decision to make oneself vulnerable to the vendor). Three factors are proposed for building consumer trust in the vendor: structural assurance (that is, consumer perceptions of the safety of the web environment), perceived web vendor reputation, and perceived web site quality. The model is tested in the context of a hypothetical web site offering legal advice. All three factors significantly influenced consumer trust in the web vendor. That is, these factors, especially web site quality and reputation, are powerful levers that vendors can use to build consumer trust, in order to overcome the negative perceptions people often have about the safety of the web environment. The study also demonstrates that perceived Internet risk negatively affects consumer intentions to transact with a web-based vendor.",
"title": ""
},
{
"docid": "8921cffb633b0ea350b88a57ef0d4437",
"text": "This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-specific regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.",
"title": ""
},
{
"docid": "6392a6c384613f8ed9630c8676f0cad8",
"text": "References D. Bruckner, J. Rosen, and E. R. Sparks. deepviz: Visualizing convolutional neural networks for image classification. 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research,9(2579-2605):85, 2008. Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hods Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer vision–ECCV 2014, pages 818–833. Springer, 2014. Network visualization of ReVACNN",
"title": ""
}
] |
scidocsrr
|
5abcc43722d09043d886168fa3c17eb8
|
Towards Highly Accurate and Stable Face Alignment for High-Resolution Videos
|
[
{
"docid": "79cffed53f36d87b89577e96a2b2e713",
"text": "Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.",
"title": ""
},
{
"docid": "f1deb9134639fb8407d27a350be5b154",
"text": "This work introduces a novel Convolutional Network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a ‘stacked hourglass’ network based on the successive steps of pooling and upsampling that are done to produce a final set of estimates. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"title": ""
}
] |
[
{
"docid": "991420a2abaf1907ab4f5a1c2dcf823d",
"text": "We are interested in counting the number of instances of object classes in natural, everyday images. Previous counting approaches tackle the problem in restricted domains such as counting pedestrians in surveillance videos. Counts can also be estimated from outputs of other vision tasks like object detection. In this work, we build dedicated models for counting designed to tackle the large variance in counts, appearances, and scales of objects found in natural scenes. Our approach is inspired by the phenomenon of subitizing – the ability of humans to make quick assessments of counts given a perceptual signal, for small count values. Given a natural scene, we employ a divide and conquer strategy while incorporating context across the scene to adapt the subitizing idea to counting. Our approach offers consistent improvements over numerous baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets. Subsequently, we study how counting can be used to improve object detection. We then show a proof of concept application of our counting methods to the task of Visual Question Answering, by studying the how many? questions in the VQA and COCO-QA datasets.",
"title": ""
},
{
"docid": "23676a52e1ed03d7b5c751a9986a7206",
"text": "Considering the increasingly complex media landscape and diversity of use, it is important to establish a common ground for identifying and describing the variety of ways in which people use new media technologies. Characterising the nature of media-user behaviour and distinctive user types is challenging and the literature offers little guidance in this regard. Hence, the present research aims to classify diverse user behaviours into meaningful categories of user types, according to the frequency of use, variety of use and content preferences. To reach a common framework, a review of the relevant research was conducted. An overview and meta-analysis of the literature (22 studies) regarding user typology was established and analysed with reference to (1) method, (2) theory, (3) media platform, (4) context and year, and (5) user types. Based on this examination, a unified Media-User Typology (MUT) is suggested. This initial MUT goes beyond the current research literature, by unifying all the existing and various user type models. A common MUT model can help the Human–Computer Interaction community to better understand both the typical users and the diversification of media-usage patterns more qualitatively. Developers of media systems can match the users’ preferences more precisely based on an MUT, in addition to identifying the target groups in the developing process. Finally, an MUT will allow a more nuanced approach when investigating the association between media usage and social implications such as the digital divide.",
"title": ""
},
{
"docid": "a86dac3d0c47757ce8cad41499090b8e",
"text": "We propose a theory of regret regulation that distinguishes regret from related emotions, specifies the conditions under which regret is felt, the aspects of the decision that are regretted, and the behavioral implications. The theory incorporates hitherto scattered findings and ideas from psychology, economics, marketing, and related disciplines. By identifying strategies that consumers may employ to regulate anticipated and experienced regret, the theory identifies gaps in our current knowledge and thereby outlines opportunities for future research.",
"title": ""
},
{
"docid": "aa7029c5e29a72a8507cbcb461ef92b0",
"text": "Regenerative endodontics has been defined as \"biologically based procedure designed to replace damaged structures, including dentin and root structures, as well as cells of the pulp-dentin complex.\" This is an exciting and rapidly evolving field of human endodontics for the treatment of immature permanent teeth with infected root canal systems. These procedures have shown to be able not only to resolve pain and apical periodontitis but continued root development, thus increasing the thickness and strength of the previously thin and fracture-prone roots. In the last decade, over 80 case reports, numerous animal studies, and series of regenerative endodontic cases have been published. However, even with multiple successful case reports, there are still some remaining questions regarding terminology, patient selection, and procedural details. Regenerative endodontics provides the hope of converting a nonvital tooth into vital one once again.",
"title": ""
},
{
"docid": "9e452c36ed7abfa6289568165b59ad30",
"text": "This paper presents an approach to classify the heights of targets with radar systems. The algorithm is based on the analysis of the superposition of several reflections at a high target. The result is a superposition of the different received signals. In contrast to a high target, a low target has only one reflection point and one propagation path. In this paper, a technique is proposed to detect the superposition and consequently classify targets as low or high. Finally the algorithm is evaluated with measurements.",
"title": ""
},
{
"docid": "a60b1045e2344cf2ab8db6038cbdeb4d",
"text": "The study of the interactions between plants and their microbial communities in the rhizosphere is important for developing sustainable management practices and agricultural products such as biofertilizers and biopesticides. Plant roots release a broad variety of chemical compounds to attract and select microorganisms in the rhizosphere. In turn, these plant-associated microorganisms, via different mechanisms, influence plant health and growth. In this review, we summarize recent progress made in unraveling the interactions between plants and rhizosphere microbes through plant root exudates, focusing on how root exudate compounds mediate rhizospheric interactions both at the plant–microbe and plant–microbiome levels. We also discuss the potential of root exudates for harnessing rhizospheric interactions with microbes that could lead to sustainable agricultural practices.",
"title": ""
},
{
"docid": "642b98bf1ea22958411514cb7f01ef68",
"text": "This paper studies the problem of vehicle make & model classification. Some of the main challenges are reaching high classification accuracy and reducing the annotation time of the images. To address these problems, we have created a fine-grained database using online vehicle marketplaces of Turkey. A pipeline is proposed to combine an SSD (Single Shot Multibox Detector) model with a CNN (Convolutional Neural Network) model to train on the database. In the pipeline, we first detect the vehicles by following an algorithm which reduces the time for annotation. Then, we feed them into the CNN model. This achieves approximately 4% better classification accuracy than using a conventional CNN model. Next, we propose to use the detected vehicles as the ground truth bounding boxes (GTBB) of the images and feed them into an SSD model in another pipeline. At this stage, reasonable classification accuracy is reached without using perfectly shaped GTBBs. Lastly, an application is implemented in a use case by using our proposed pipelines. It detects the unauthorized vehicles by comparing their license plate numbers and make & models. It is assumed that license plates are readable.",
"title": ""
},
{
"docid": "a2799e0cee6ca6d7f6b0cc230957b56b",
"text": "We present a photo-realistic training and evaluation simulator (UE4Sim) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates both several state-of-the-art tracking algorithms with a benchmark evaluation tool and a deep neural network (DNN) architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.",
"title": ""
},
{
"docid": "e6da9e3f8af84139076d30a439da7a18",
"text": "Monocular simultaneous localization and mapping (SLAM) is a key enabling technique for many augmented reality (AR) applications. However, conventional methods for monocular SLAM can obtain only sparse or semi-dense maps in highly-textured image areas. Poorly-textured regions which widely exist in indoor and man-made urban environments can be hardly reconstructed, impeding interactions between virtual objects and real scenes in AR apps. In this paper, we present a novel method for real-time monocular dense mapping based on the piecewise planarity assumption for poorly textured regions. Specifically, a semi-dense map for highly-textured regions is first calculated by pixel matching and triangulation [6, 7]. Large textureless regions extracted by Maximally Stable Color Regions (MSCR) [11], which is a homogeneous-color region detector, are approximated using piecewise planar models which are estimated by the corresponding semi-dense 3D points and the proposed multi-plane segmentation algorithm. Plane models associated with the same 3D area across multiple overlapping views are linked and fused to ensure a consistent and accurate 3D reconstruction. Experimental results on two public datasets [15, 23] demonstrate that our method is 2.3X~2.9X faster than the state-of-the-art method DPPTAM [2], and meanwhile achieves better reconstruction accuracy and completeness. We also apply our method to a real AR application and live experiments with a hand-held camera demonstrate the effectiveness and efficiency of our method in practical scenario.",
"title": ""
},
{
"docid": "e10d9cca7bdb4b8f038d3dbc260d0e3f",
"text": "An important goal of visualization technology is to support the exploration and analysis of very large amounts of data. In this paper, we describe a set of pixel-oriented visualization techniques which use each pixel of the display to visualize one data value and therefore allow the visualization of the largest amount of data possible. Most of the techniques have been specifically designed for visualizing and querying large databases. The techniques may be divided into query-independent techniques which directly visualize the data (or a certain portion of it) and query-dependent techniques which visualize the data in the context of a specific query. Examples for the class of query-independent techniques are the screen-filling curve and recursive pattern techniques. The screen-filling curve techniques are based on the well-known Morton and Peano-Hilbert curve algorithms, and the recursive pattern technique is based on a generic recursive scheme which generalizes a wide range of pixel-oriented arrangements for visualizing large data sets. Examples for the class of query-dependent techniques are the snake-spiral and snake-axes techniques, which visualize the distances with respect to a database query and arrange the most relevant data items in the center of the display. Beside describing the basic ideas of our techniques, we provide example visualizations generated by the various techniques, which demonstrate the usefulness of our techniques and show some of their advantages and disadvantages.",
"title": ""
},
{
"docid": "793edca657c68ade4d2391c23f585c41",
"text": "In the linear bandit problem a learning agent chooses an arm at each round and receives a stochastic reward. The expected value of this stochastic reward is an unknown linear function of the arm choice. As is standard in bandit problems, a learning agent seeks to maximize the cumulative reward over an n round horizon. The stochastic bandit problem can be seen as a special case of the linear bandit problem when the set of available arms at each round is the standard basis ei for the Euclidean space R, i.e. the vector ei is a vector with all 0s except for a 1 in the ith coordinate. As a result each arm is independent of the others and the reward associated with each arm depends only on a single parameter as is the case in stochastic bandits. The underlying algorithmic approach to solve this problem uses the optimism in the face of uncertainty (OFU) principle. The OFU principle solves the exploration-exploitation tradeoff in the linear bandit problem by maintaining a confidence set for the vector of coefficients of the linear function that governs rewards. In each round the algorithm chooses an estimate of the coefficients of the linear function from the confidence set and then takes an action so that the predicted reward is maximized. The problem reduces to constructing confidence sets for the vector of coefficients of the linear function based on the action-reward pairs observed in the past time steps. The linear bandit problem was first studied by Auer et al. (2002) [1] under the name of linear reinforcement learning. Since the introduction of the problem, several works have improved the analysis and explored variants of the problem. The most influential works include Dani et al. (2008) [2], Rusmevichientong et al. (2010) [3], and Abbasi et al. (2011) [4]. In each of these works the set of available arms remains constant, but the set is only restricted to being a bounded subset of a finite-dimensional vector space. Variants of the problem formulation have also been widely applied to recommendation systems following the work of Li et al. (2010) [5] within the context of web advertisement. An important property of this problem is that the arms are not independent because future arm choices depend on the confidence sets constructed from past choices. In the literature, several works including [5] have failed to recognize this property leading to faulty analysis. This fine detail requires special care which we explore in depth in Section 2.",
"title": ""
},
{
"docid": "9bff76e87f4bfa3629e38621060050f7",
"text": "Non-textual components such as charts, diagrams and tables provide key information in many scientific documents, but the lack of large labeled datasets has impeded the development of data-driven methods for scientific figure extraction. In this paper, we induce high-quality training labels for the task of figure extraction in a large number of scientific documents, with no human intervention. To accomplish this we leverage the auxiliary data provided in two large web collections of scientific documents (arXiv and PubMed) to locate figures and their associated captions in the rasterized PDF. We share the resulting dataset of over 5.5 million induced labels, 4,000 times larger than the previous largest figure extraction dataset, with an average precision of 96.8%, to enable the development of modern data-driven methods for this task. We use this dataset to train a deep neural network for end-to-end figure detection, yielding a model that can be more easily extended to new domains compared to previous work. The model was successfully deployed in Semantic Scholar (https://www.semanticscholar.org/), a large-scale academic search engine, and used to extract figures in 13 million scientific documents. A demo of our system is available at http://labs.semanticscholar.org/deepfigures/, and our dataset of induced labels can be downloaded at https://s3-us-west-2.amazonaws.com/ai2-s2-research-public/deepfigures/jcdl-deepfigures-labels.tar.gz. Code to run our system locally can be found at https://github.com/allenai/deepfigures-open.",
"title": ""
},
{
"docid": "babdf14e560236f5fcc8a827357514e5",
"text": "The NP-hard (complete) team orienteering problem is a particular vehicle routing problem with the aim of maximizing the profits gained from visiting control points without exceeding a travel cost limit. The team orienteering problem has a number of applications in several fields such as athlete recruiting, technician routing and tourist trip planning. Therefore, solving optimally the team orienteering problem would play a major role in logistic management. In this study, a novel randomized population constructive heuristic is introduced. This heuristic constructs a diversified initial population for population-based metaheuristics. The heuristic proved its efficiency. Indeed, experiments conducted on the well-known benchmarks of the team orienteering problem show that the initial population constructed by the presented heuristic contains the best-known solution for 131 benchmarks and good solutions for a great number of benchmarks.",
"title": ""
},
{
"docid": "9a1e0edc4d5eb8a2cbf7fa0c6640f0bc",
"text": "The classical SVM is an optimization problem minimizing the hinge losses of mis-classified samples with the regularization term. When the sample size is small or data has noise, it is possible that the classifier obtained with training data may not generalize well to population, since the samples may not accurately represent the true population distribution. We propose a distributionally-robust framework for Support Vector Machines (DR-SVMs). We build an ambiguity set for the population distribution based on samples using the Kantorovich metric. DR-SVMs search the classifier that minimizes the sum of regularization term and the hinge loss function for the worst-case population distribution among the ambiguity set. We provide semi-infinite programming formulation of the DR-SVMs and propose a cutting-plane algorithm to solve the problem. Computational results on simulated data and real data from University of California, Irvine Machine Learning Repository show that the DR-SVMs outperform the SVMs in terms of the Area Under Curve (AUC) measures on several test problems.",
"title": ""
},
{
"docid": "154ab0cbc1dfa3c4bae8a846f800699e",
"text": "This paper presents a new strategy for the active disturbance rejection control (ADRC) of a general uncertain system with unknown bounded disturbance based on a nonlinear sliding mode extended state observer (SMESO). Firstly, a nonlinear extended state observer is synthesized using the sliding mode technique for a general uncertain system assuming asymptotic stability. Then the convergence characteristics of the estimation error are analyzed by the Lyapunov strategy. This revealed that the proposed SMESO is asymptotically stable and accurately estimates the states of the system in addition to estimating the total disturbance. Then, an ADRC is implemented by using a nonlinear state error feedback (NLSEF) controller, as suggested by J. Han, together with the proposed SMESO to control and actively reject the total disturbance of a permanent magnet DC (PMDC) motor. These disturbances are caused by the unknown exogenous disturbances and the matched uncertainties of the controlled model. The proposed SMESO is compared with the linear extended state observer (LESO). Through digital simulations using MATLAB / SIMULINK, the chattering phenomenon has been reduced dramatically on the control input channel compared to LESO. Finally, the closed-loop system exhibits a high immunity to torque disturbance and considerable robustness to matched uncertainties in the system. Keywords—extended state observer; sliding mode; rejection control; tracking differentiator; DC motor; nonlinear state feedback",
"title": ""
},
{
"docid": "a9acc36ae78a12fbf19e8590e931e6f8",
"text": "Deep learning models are susceptible to input specific noise, called adversarial perturbations. Moreover, there exist input-agnostic noise, called Universal Adversarial Perturbations (UAP) that can affect inference of the models over most input samples. Given a model, there exist broadly two approaches to craft UAPs: (i) data-driven: that require data, and (ii) data-free: that do not require data samples. Data-driven approaches require actual samples from the underlying data distribution and craft UAPs with high success (fooling) rate. However, data-free approaches craft UAPs without utilizing any data samples and therefore result in lesser success rates. In this paper, for data-free scenarios, we propose a novel approach that emulates the effect of data samples with class impressions in order to craft UAPs using data-driven objectives. Class impression for a given pair of category and model is a generic representation (in the input space) of the samples belonging to that category. Further, we present a neural network based generative model that utilizes the acquired class impressions to learn crafting UAPs. Experimental evaluation demonstrates that the learned generative model, (i) readily crafts UAPs via simple feed-forwarding through neural network layers, and (ii) achieves state-of-the-art success rates for data-free scenario and closer to that for data-driven setting without actually utilizing any data samples.",
"title": ""
},
{
"docid": "665da3a85a548d12864de5fad517e3ee",
"text": "To characterize the neural correlates of being personally involved in social interaction as opposed to being a passive observer of social interaction between others we performed an fMRI study in which participants were gazed at by virtual characters (ME) or observed them looking at someone else (OTHER). In dynamic animations virtual characters then showed socially relevant facial expressions as they would appear in greeting and approach situations (SOC) or arbitrary facial movements (ARB). Differential neural activity associated with ME>OTHER was located in anterior medial prefrontal cortex in contrast to the precuneus for OTHER>ME. Perception of socially relevant facial expressions (SOC>ARB) led to differentially increased neural activity in ventral medial prefrontal cortex. Perception of arbitrary facial movements (ARB>SOC) differentially activated the middle temporal gyrus. The results, thus, show that activation of medial prefrontal cortex underlies both the perception of social communication indicated by facial expressions and the feeling of personal involvement indicated by eye gaze. Our data also demonstrate that distinct regions of medial prefrontal cortex contribute differentially to social cognition: whereas the ventral medial prefrontal cortex is recruited during the analysis of social content as accessible in interactionally relevant mimic gestures, differential activation of a more dorsal part of medial prefrontal cortex subserves the detection of self-relevance and may thus establish an intersubjective context in which communicative signals are evaluated.",
"title": ""
},
{
"docid": "a423435c1dc21c33b93a262fa175f5c5",
"text": "The study investigated several teacher characteristics, with a focus on two measures of teaching experience, and their association with second grade student achievement gains in low performing, high poverty schools in a Mid-Atlantic state. Value-added models using three-level hierarchical linear modeling were used to analyze the data from 1,544 students, 154 teachers, and 53 schools. Results indicated that traditional teacher qualification characteristics such as licensing status and educational attainment were not statistically significant in producing student achievement gains. Total years of teaching experience was also not a significant predictor but a more specific measure, years of teaching experience at a particular grade level, was significantly associated with increased student reading achievement. We caution researchers and policymakers when interpreting results from studies that have used only a general measure of teacher experience as effects are possibly underestimated. Policy implications are discussed.",
"title": ""
},
{
"docid": "91c7a22694ec8ae4d8ca5ad3147fb11e",
"text": "The binary-weight CNN is one of the most efficient solutions for mobile CNNs. However, a large number of operations are required to process each image. To reduce such a huge operation count, we propose an energy-efficient kernel decomposition architecture, based on the observation that a large number of operations are redundant. In this scheme, all kernels are decomposed into sub-kernels to expose the common parts. By skipping the redundant computations, the operation count for each image was consequently reduced by 47.7%. Furthermore, a low cost bit-width quantization technique was implemented by exploiting the relative scales of the feature data. Experimental results showed that the proposed architecture achieves a 22% energy reduction.",
"title": ""
},
{
"docid": "1b2c561b6aea994ef50b713f0b5286a1",
"text": "This paper presents a novel system architecture applicable to high-performance and flexible transport data processing which includes complex protocol operation and a nehvork control algorithm. We developed a new tightly coupled Held Programmable Gate Array (FPGA) and Micro-Processing Unit (MPU) system named. Yet Another Re-Definable System (YARDS). It comprises three programmable devices which equateto high flexibility. These devices are the RISC-type MPU with memories, programmable inter-connection devices, and FPGAs. Using these, this system supports various styles of coupling between the FPGAs and the MPU which are suitable for constructing transport data processing. In this paper, two applications of the systemin the telecommunications field are given. One is an Operation, Administration, and Management (OAM) cell operations on an AsynchronousTransfer Mode (ATM) network. The other is a dynamic configuration protocol enables the updateor changeof the functions of the transport data processing system on-line. This is the first approach applying the FPGA/MPU hybrid system to the telecommunications field.",
"title": ""
}
] |
scidocsrr
|
de8c61f4dc43dab74528687195767c32
|
Measuring social spam and the effect of bots on information diffusion in social media
|
[
{
"docid": "a57aa209d93c38f4b9e8e5f42158320f",
"text": "While most online social media accounts are controlled by humans, these platforms also host automated agents called social bots or sybil accounts. Recent literature reported on cases of social bots imitating humans to manipulate discussions, alter the popularity of users, pollute content and spread misinformation, and even perform terrorist propaganda and recruitment actions. Here we present BotOrNot, a publicly-available service that leverages more than one thousand features to evaluate the extent to which a Twitter account exhibits similarity to the known characteristics of social bots. Since its release in May 2014, BotOrNot has served over one million requests via our website and APIs.",
"title": ""
},
{
"docid": "04542b45587b0b0a6a79a41e1244cd80",
"text": "We study the change in polarization of hashtags on Twitter over time and show that certain jumps in polarity are caused by \"hijackers\" engaged in a particular type of hashtag war.",
"title": ""
}
] |
[
{
"docid": "e25c8e9494b9e093872a33084ae3d144",
"text": "When people interact with communication robots in daily life, their attitudes and emotions toward the robots affect their behavior. From the perspective of robotics design, we need to investigate the influences of these attitudes and emotions on human-robot interaction. This paper reports our empirical study on the relationships between people's attitudes and emotions, and their behavior toward a robot. In particular, we focused on negative attitudes, anxiety, and communication avoidance behavior, which have important implications for robotics design. For this purpose, we used two psychological scales that we had developed: negative attitudes toward robots scale (NARS) and robot anxiety scale (RAS). In the experiment, subjects and a humanoid robot are engaged in simple interactions including scenes of meeting, greeting, self-disclosure, and physical contact. Experimental results indicated that there is a relationship between negative attitudes and emotions, and communication avoidance behavior. A gender effect was also suggested.",
"title": ""
},
{
"docid": "9ec39badc92094783fcaaa28c2eb2f7a",
"text": "In trying to solve multiobjective optimization problems, many traditional methods scalarize the objective vector into a single objective. In those cases, the obtained solution is highly sensitive to the weight vector used in the scalarization process and demands that the user have knowledge about the underlying problem. Moreover, in solving multiobjective problems, designers may be interested in a set of Pareto-optimal points, instead of a single point. Since genetic algorithms (GAs) work with a population of points, it seems natural to use GAs in multiobjective optimization problems to capture a number of solutions simultaneously. Although a vector evaluated GA (VEGA) has been implemented by Schaffer and has been tried to solve a number of multiobjective problems, the algorithm seems to have bias toward some regions. In this paper, we investigate Goldberg's notion of nondominated sorting in GAs along with a niche and speciation method to find multiple Pareto-optimal points simultaneously. The proof-of-principle results obtained on three problems used by Schaffer and others suggest that the proposed method can be extended to higher dimensional and more difficult multiobjective problems. A number of suggestions for extension and application of the algorithm are also discussed.",
"title": ""
},
{
"docid": "4cc8aef365b8be6288a8beef75b1180d",
"text": "Stories are used extensively for human communication; both the comprehension and production of oral and written narratives constitute a fundamental part of our experience. While study of this topic has largely been the domain of cognitive psychology, neuroscience has also made progress in uncovering the processes underlying these abilities. In an attempt to synthesize work from both literatures, this review: (1) summarizes the current neuroimaging and patient research pertaining to narrative comprehension and production, (2) attempts to integrate this information with the processes described by the discourse models of cognitive psychology, and (3) uses this information to examine the possible interrelation between comprehension and production. Story comprehension appears to entail a network of frontal, temporal and cingulate areas that support working-memory and theory-of-mind processes. The specific functions associated with these areas are congruent with the processes proposed by cognitive models of comprehension. Moreover, these same areas appear necessary for story production, and the causal-temporal ordering of selected information may partially account for this common ground. A basic description of comprehension and production based solely on neuropsychological evidence is presented to complement current cognitive models, and a number of avenues for future research are suggested.",
"title": ""
},
{
"docid": "10d08867d54d4938efc1797467143920",
"text": "This paper focuses on structured-output learning using deep neural networks for 3D human pose estimation from monocular images. Our network takes an image and 3D pose as inputs and outputs a score value, which is high when the image-pose pair matches and low otherwise. The network structure consists of a convolutional neural network for image feature extraction, followed by two sub-networks for transforming the image features and pose into a joint embedding. The score function is then the dot-product between the image and pose embeddings. The image-pose embedding and score function are jointly trained using a maximum-margin cost function. Our proposed framework can be interpreted as a special form of structured support vector machines where the joint feature space is discriminatively learned using deep neural networks. We also propose an efficient recurrent neural network for performing inference with the learned image-embedding. We test our framework on the Human3.6m dataset and obtain state-of-the-art results compared to other recent methods. Finally, we present visualizations of the image-pose embedding space, demonstrating the network has learned a high-level embedding of body-orientation and pose-configuration.",
"title": ""
},
{
"docid": "5cdb945589f528d28fe6d0dce360a0e1",
"text": "Bankruptcy prediction has been a subject of interests for almost a century and it still ranks high among hottest topics in economics. The aim of predicting financial distress is to develop a predictive model that combines various econometric measures and allows to foresee a financial condition of a firm. In this domain various methods were proposed that were based on statistical hypothesis testing, statistical modelling (e.g., generalized linear models), and recently artificial intelligence (e.g., neural networks, Support Vector Machines, decision tress). In this paper, we propose a novel approach for bankruptcy prediction that utilizes Extreme Gradient Boosting for learning an ensemble of decision trees. Additionally, in order to reflect higher-order statistics in data and impose a prior knowledge about data representation, we introduce a new concept that we refer as to synthetic features. A synthetic feature is a combination of the econometric measures using arithmetic operations (addition, subtraction, multiplication, division). Each synthetic feature can be seen as a single regression model that is developed in an evolutionary manner. We evaluate our solution using the collected data about Polish companies in five tasks corresponding to the bankruptcy prediction in the 1st, 2nd, 3rd, 4th, and 5th year. We compare our approach with the reference methods. ∗Corresponding author, Tel.: (+48) 71 320 44 53. Email addresses: maciej.zieba@pwr.edu.pl (Maciej Zięba ), sebastian.tomczak@pwr.edu.pl (Sebastian K. Tomczak), jakub.tomczak@pwr.edu.pl (Jakub M. Tomczak) Preprint submitted to Expert Systems with Applications April 4, 2016",
"title": ""
},
{
"docid": "a13d1144c4a719b1d6d5f4f0e645c2e3",
"text": "Array antennas for 77GHz automotive radar application are designed and measured. Linear series-fed patch array (SFPA) antenna is designed for transmitters of middle range radar (MRR) and all the receivers. A planar SFPA based on the linear one and substrate integrated waveguide (SIW) feeding network is proposed for transmitter of long range radar (LRR), which can decline the radiation from feeding network itself. The array antennas are fabricated, both the performances with and without radome of these array antennas are measured. Good agreement between simulation and measurement has been achieved. They can be good candidates for 77GHz automotive application.",
"title": ""
},
{
"docid": "57752057b1665cec9433aa3fe055be1e",
"text": "BACKGROUND\nPlacebo treatment can significantly influence subjective symptoms. However, it is widely believed that response to placebo requires concealment or deception. We tested whether open-label placebo (non-deceptive and non-concealed administration) is superior to a no-treatment control with matched patient-provider interactions in the treatment of irritable bowel syndrome (IBS).\n\n\nMETHODS\nTwo-group, randomized, controlled three week trial (August 2009-April 2010) conducted at a single academic center, involving 80 primarily female (70%) patients, mean age 47 ± 18 with IBS diagnosed by Rome III criteria and with a score ≥ 150 on the IBS Symptom Severity Scale (IBS-SSS). Patients were randomized to either open-label placebo pills presented as \"placebo pills made of an inert substance, like sugar pills, that have been shown in clinical studies to produce significant improvement in IBS symptoms through mind-body self-healing processes\" or no-treatment controls with the same quality of interaction with providers. The primary outcome was IBS Global Improvement Scale (IBS-GIS). Secondary measures were IBS Symptom Severity Scale (IBS-SSS), IBS Adequate Relief (IBS-AR) and IBS Quality of Life (IBS-QoL).\n\n\nFINDINGS\nOpen-label placebo produced significantly higher mean (±SD) global improvement scores (IBS-GIS) at both 11-day midpoint (5.2 ± 1.0 vs. 4.0 ± 1.1, p<.001) and at 21-day endpoint (5.0 ± 1.5 vs. 3.9 ± 1.3, p = .002). Significant results were also observed at both time points for reduced symptom severity (IBS-SSS, p = .008 and p = .03) and adequate relief (IBS-AR, p = .02 and p = .03); and a trend favoring open-label placebo was observed for quality of life (IBS-QoL) at the 21-day endpoint (p = .08).\n\n\nCONCLUSION\nPlacebos administered without deception may be an effective treatment for IBS. 
Further research is warranted in IBS, and perhaps other conditions, to elucidate whether physicians can benefit patients using placebos consistent with informed consent.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT01010191.",
"title": ""
},
{
"docid": "76383091c5eb5acd0976c41dc25cc0b2",
"text": "(2003). Towards a taxonomy of a set of discourse markers in dialog: a theoretical and computational linguistic account. Abstract Discourse markers are verbal and non-verbal devices that mark transition points in communication. They presumably facilitate the construction of a mental representation of the events described by the discourse. A taxonomy of these relational markers is one important beginning in investigations of language use. While several taxonomies of coherence relations have been proposed for monolog, only a few have been proposed for dialog. This paper presents a taxonomy of between-turn coherence relations in dialog and discusses several issues that arise out of constructing such a taxonomy. A large number of discourse markers was sampled from the Santa Barbara Corpus of Spoken American English. Two judges substituted each type of these markers for all other markers. This extensive substitution test determined whether hyponymous, hypernymous and synonymous relations existed between the markers from this corpus of dialogs. Evidence is presented for clustering coherence relations into four categories: direction, polarity, acceptance and empathics. language is the act of communication that normally is coordinated between its participants. The speaker or writer of a message needs to coordinate when to say what, what to say to whom, how and why to say it. In writing this is often difficult because the hearer is not simultaneously present in the communicative act 1. In dialog, speakers have the advantage that hearers are present They know whether they have the hearer \" s attention, whom they are talking to, when they can start and stop speaking, and what they can say. Hearers generally give clues on each of these aspects by providing feedback. Coordination between speakers and hearers consists of multifaceted tasks between the parties involved. 
For instance, speakers need to monitor whether hearers are attending to what is said (is the hearer making eye contact?), who they are talking to (is the hearer an authority?), when they are speaking (is there a pause in the conversation which allows the speaker to start speaking?), what to say (how to express a meaningful information?), and whether the speaker needs to follow up on an earlier piece of information (is there anything that is by convention expected from the speaker based on previous pieces of information). This makes dialog a very dynamic act of coordination. Take for instance the following dialog in the Santa Barbara Corpus for Spoken American English (SBSAE) …",
"title": ""
},
{
"docid": "4a7a4db8497b0d13c8411100dab1b207",
"text": "A novel and simple resolver-to-dc converter is presented. It is shown that by appropriate processing of the sine and cosine resolver signals, the proposed converter may produce an output voltage proportional to the shaft angle. A dedicated compensation method is applied to produce an almost perfectly linear output. This enables determination of the angle with reasonable accuracy without a processor and/or a look-up table. The tests carried out under various operating conditions are satisfactory and in good agreement with theory. This paper gives the theoretical analysis, the computer simulation, the full circuit details, and experimental results of the proposed scheme.",
"title": ""
},
{
"docid": "6b44bd202f964033a2a2433d6322f160",
"text": "We apply convolutional neural networks (CNN) to the problem of image orientation detection in the context of determining the correct orientation (from 0, 90, 180, and 270 degrees) of a consumer photo. The problem is especially important for digitazing analog photographs. We substantially improve on the published state of the art in terms of the performance on one of the standard datasets, and test our system on a more difficult large dataset of consumer photos. We use Guided Backpropagation to obtain insights into how our CNN detects photo orientation, and to explain its mistakes.",
"title": ""
},
{
"docid": "2fc2234e6f8f70e0b12f1f72b1d21175",
"text": "Servers and HPC systems often use a strong memory error correction code, or ECC, to meet their reliability and availability requirements. However, these ECCs often require significant capacity and/or power overheads. We observe that since memory channels are independent from one another, error correction typically needs to be performed for one channel at a time. Based on this observation, we show that instead of always storing in memory the actual ECC correction bits as do existing systems, it is sufficient to store the bitwise parity of the ECC correction bits of different channels for fault-free memory regions, and store the actual ECC correction bits only for faulty memory regions. By trading off the resultant ECC capacity overhead reduction for improved memory energy efficiency, the proposed technique reduces memory energy per instruction by 54.4% and 20.6%, respectively, compared to a commercial chipkill correct ECC and a DIMM-kill correct ECC, while incurring similar or lower capacity overheads.",
"title": ""
},
{
"docid": "48703205408e6ebd8f8fc357560acc41",
"text": "Two experiments found that when asked to perform the physically exerting tasks of clapping and shouting, people exhibit a sizable decrease in individual effort when performing in groups as compared to when they perform alone. This decrease, which we call social loafing, is in addition to losses due to faulty coordination of group efforts. Social loafing is discussed in terms of its experimental generality and theoretical importance. The widespread occurrence, the negative consequences for society, and some conditions that can minimize social loafing are also explored.",
"title": ""
},
{
"docid": "f60f75d03c06842efcb2454536ec8226",
"text": "The Internet of Things (IoT) relies on physical objects interconnected between each others, creating a mesh of devices producing information. In this context, sensors are surrounding our environment (e.g., cars, buildings, smartphones) and continuously collect data about our living environment. Thus, the IoT is a prototypical example of Big Data. The contribution of this paper is to define a software architecture supporting the collection of sensor-based data in the context of the IoT. The architecture goes from the physical dimension of sensors to the storage of data in a cloud-based system. It supports Big Data research effort as its instantiation supports a user while collecting data from the IoT for experimental or production purposes. The results are instantiated and validated on a project named SMARTCAMPUS, which aims to equip the SophiaTech campus with sensors to build innovative applications that supports end-users.",
"title": ""
},
{
"docid": "d053f8b728f94679cd73bc91193f0ba6",
"text": "Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.",
"title": ""
},
{
"docid": "c7c462e6c0575bef245d1d52ce456cfd",
"text": "It is often difficult to visualize large networks effectively. In BioReact, we filter large systems biology network data by querying to select partial network as the input for visualization. Each query is parameterized by a node name, the direction of graph search, and the scope of the search. We present two layouts of the same network to clearly show network topology: a force-directed layout expands neighbouring nodes to maiximize spatial separation between nodes and links, and a downward edge layout to preserve a sense of unidirectional flow. Navigation of the network such as locating a particular node/link and linked highlighting between multiple views optimize user experience.",
"title": ""
},
{
"docid": "ea937e1209c270a7b6ab2214e0989fed",
"text": "With current projections regarding the growth of Internet sales, online retailing raises many questions about how to market on the Net. While convenience impels consumers to purchase items on the web, quality remains a significant factor in deciding where to shop online. The competition is increasing and personalization is considered to be the competitive advantage that will determine the winners in the market of online shopping in the following years. Recommender systems are a means of personalizing a site and a solution to the customer’s information overload problem. As such, many e-commerce sites already use them to facilitate the buying process. In this paper we present a recommender system for online shopping focusing on the specific characteristics and requirements of electronic retailing. We use a hybrid model supporting dynamic recommendations, which eliminates the problems the underlying techniques have when applied solely. At the end, we conclude with some ideas for further development and research in this area.",
"title": ""
},
{
"docid": "5dd790f34fec2f4adc52971c39e55d6b",
"text": "Although within SDN community, the notion of logically centralized network control is well understood and agreed upon, many different approaches exist on how one should deliver such a logically centralized view to multiple distributed controller instances. In this paper, we survey and investigate those approaches. We discover that we can classify the methods into several design choices that are trending among SDN adopters. Each design choice may influence several SDN issues such as scalability, robustness, consistency, and privacy. Thus, we further analyze the pros and cons of each model regarding these matters. We conclude that each design begets some characteristics. One may excel in resolving one issue but perform poor in another. We also present which design combinations one should pick to build distributed controller that is scalable, robust, consistent",
"title": ""
},
{
"docid": "c1e8d64c7caf54b7265acf45c56fae74",
"text": "Using a novel single-molecule PCR approach to quantify the total burden of mitochondrial DNA (mtDNA) molecules with deletions, we show that a high proportion of individual pigmented neurons in the aged human substantia nigra contain very high levels of mtDNA deletions. Molecules with deletions are largely clonal within each neuron; that is, they originate from a single deleted mtDNA molecule that has expanded clonally. The fraction of mtDNA deletions is significantly higher in cytochrome c oxidase (COX)-deficient neurons than in COX-positive neurons, suggesting that mtDNA deletions may be directly responsible for impaired cellular respiration.",
"title": ""
},
{
"docid": "4ae82b3362756b0efed84596076ea6fb",
"text": "Smart grids equipped with bi-directional communication flow are expected to provide more sophisticated consumption monitoring and energy trading. However, the issues related to the security and privacy of consumption and trading data present serious challenges. In this paper we address the problem of providing transaction security in decentralized smart grid energy trading without reliance on trusted third parties. We have implemented a proof-of-concept for decentralized energy trading system using blockchain technology, multi-signatures, and anonymous encrypted messaging streams, enabling peers to anonymously negotiate energy prices and securely perform trading transactions. We conducted case studies to perform security analysis and performance evaluation within the context of the elicited security and privacy requirements.",
"title": ""
},
{
"docid": "0c975acb5ab3f413078171840b17b232",
"text": "We have analysed associated factors in 164 patients with acute compartment syndrome whom we treated over an eight-year period. In 69% there was an associated fracture, about half of which were of the tibial shaft. Most patients were men, usually under 35 years of age. Acute compartment syndrome of the forearm, with associated fracture of the distal end of the radius, was again seen most commonly in young men. Injury to soft tissues, without fracture, was the second most common cause of the syndrome and one-tenth of the patients had a bleeding disorder or were taking anticoagulant drugs. We found that young patients, especially men, were at risk of acute compartment syndrome after injury. When treating such injured patients, the diagnosis should be made early, utilising measurements of tissue pressure.",
"title": ""
}
] |
scidocsrr
|
25448cb838a35b7e75e313c8d8590783
|
TiQi: answering unstructured natural language trace queries
|
[
{
"docid": "d76980f3a0b4e0dab21583b75ee16318",
"text": "We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present experiments on the use of this resource, both for training dependency parsers and for evaluating dependency parsers like the one included as part of the Stanford Parser. We show that training a dependency parser on a mix of newswire and web data improves performance on that type of data without greatly hurting performance on newswire text, and therefore gold standard annotations for non-canonical text can be valuable for parsing in general. Furthermore, the systematic annotation effort has informed both the SD formalism and its implementation in the Stanford Parser’s dependency converter. In response to the challenges encountered by annotators in the EWT corpus, we revised and extended the Stanford Dependencies standard, and improved the Stanford Parser’s dependency converter.",
"title": ""
}
] |
[
{
"docid": "6c33b0ab7860b0691b46637eec31c4eb",
"text": "Fascia iliaca block or femoral nerve block is used frequently in hip fracture patients because of their opioid-sparing effects and reduction in opioid-related adverse effects. A recent anatomical study on hip innervation led to the identification of relevant landmarks to target the hip articular branches of femoral nerve and accessory obturator nerve. Using this information, we developed a novel ultrasound-guided approach for blockade of these articular branches to the hip, the PENG (PEricapsular Nerve Group) block. In this report, we describe the technique and its application in 5 consecutive patients.",
"title": ""
},
{
"docid": "0e8b0f883687e66d38fcaa2add4cc3d2",
"text": "The ideas and findings in this report should not be construed as an official DoD position. It is published in the interest of scientific and technical information exchange. Use of any other trademarks in this report is not intended in any way to infringe on the rights of the trademark holder. Abstract: The Hartstone benchmark is a set of timing requirements for testing a system's ability to handle hard real-time applications. It is specified as a set of processes with well-defined workloads and timing constraints. The name Hartstone derives from HArd Real Time and the fact that the workloads are presently based on the well-known Whetstone benchmark. This report describes the structure and behavior of an implementation in the Ada programming language of one category of Hartstone requirements , the Periodic Harmonic (PH) Test Series. The Ada implementation of the PH series is aimed primarily at real-time embedded processors where the only executing code is the benchmark and the Ada runtime system. Guidelines for performing various Hartstone experiments and interpreting the results are provided. Also included are the source code listings of the benchmark, information on how to obtain the source code in machine-readable form, and some sample results for Version 1.0 of the Systems Designers XD Ada VAX/VMS-MC68020 cross-compiler.",
"title": ""
},
{
"docid": "5f7aa812dc718de9508b083320c67e8a",
"text": "High power multi-level converters are deemed as the mainstay power conversion technology for renewable energy systems including the PV farm, energy storage system and electrical vehicle charge station. This paper is focused on the modeling and design of coupled and integrated magnetics in three-level DC/DC converter with multi-phase interleaved structure. The interleaved phase legs offer the benefit of output current ripple reduction, while inversed coupled inductors can suppress the circulating current between phase legs. To further reduce the magnetic volume, the four inductors in two-phase three-level DC/DC converter are integrated into one common structure, incorporating the negative coupling effects. Because of the nonlinearity of the inductor coupling, the equivalent circuit model is developed for the proposed interleaving structure to facilitate the design optimization of the integrated system. The model identifies the existence of multiple equivalent inductances during one switching cycle. A combination of them determines the inductor current ripple and dynamics of the system. By virtue of inverse coupling and means of controlling the coupling coefficients, one can minimize the current ripple and the unwanted circulating current. The fabricated prototype of the integrated coupled inductors is tested with a two-phase three-level DC/DC converter hardware, showing its good current ripple reduction performance as designed.",
"title": ""
},
{
"docid": "771ddd19549c46ecfb50ee96bdcc3dfa",
"text": "A metamaterial 1:4 series power divider that provides equal power split to all four output ports over a large bandwidth is presented, which can be extended to an arbitrary number of output ports. The divider comprises four nonradiating metamaterial lines in series, incurring a zero insertion phase over a large bandwidth, while simultaneously maintaining a compact length of /spl lambda//sub 0//8. Compared to a series power divider employing conventional one-wavelength long meandered transmission lines to provide in-phase signals at the output ports, the metamaterial divider provides a 165% increase in the input return-loss bandwidth and a 155% and 154% increase in the through-power bandwidth to ports 3 and 4, respectively. In addition, the metamaterial divider is significantly more compact, occupying only 2.6% of the area that the transmission line divider occupies. The metamaterial and transmission line dividers exhibit comparable insertion losses.",
"title": ""
},
{
"docid": "6e70435f2d434581f00962b5677facfa",
"text": "Many institutions of Higher Education and Corporate Training Institutes are resorting to e-Learning as a means of solving authentic learning and performance problems, while other institutions are hopping onto the bandwagon simply because they do not want to be left behind. Success is crucial because an unsuccessful effort to implement e-Learning will be clearly reflected in terms of the return of investment. One of the most crucial prerequisites for successful implementation of e-Learning is the need for careful consideration of the underlying pedagogy, or how learning takes place online. In practice, however, this is often the most neglected aspect in any effort to implement e-Learning. The purpose of this paper is to identify the pedagogical principles underlying the teaching and learning activities that constitute effective e-Learning. An analysis and synthesis of the principles and ideas by the practicing e-Learning company employing the author will also be presented, in the perspective of deploying an effective Learning Management Systems (LMS). D 2002 Published by Elsevier Science Inc.",
"title": ""
},
{
"docid": "fd4bae3bcb2a388e7203fc6c2f9cde6c",
"text": "Sign language recognition is helpful in communication between signing people and non-signing people. Various research projects are in progress on different sign language recognition systems worldwide. The research is limited to a particular country as there are country-wide variations available. The idea of this project is to design a system that can interpret the Indian sign language in the domain of numerals accurately so that the less fortunate people will be able to communicate with the outside world without need of an interpreter in public places like railway stations, banks, etc. The research presented here describes a system for automatic recognition of Indian sign language of numeric signs which are in the form of isolated images, in which only a regular camera was used to acquire the signs. To use the project in a real environment, first we created a numeric sign database containing 5000 signs, 500 images per numeral sign. Direct pixel value and hierarchical centroid techniques are used to extract desired features from sign images. After extracting features from images, neural network and kNN classification techniques were used to classify the signs. These experiments achieved an accuracy of up to 97.10%.",
"title": ""
},
{
"docid": "056f9496de2911ac3d41f7e03a2e6f76",
"text": "This paper presents a survey on the role of negation in sentiment analysis. Negation is a very common linguistic construction that affects polarity and, therefore, needs to be taken into consideration in sentiment analysis. We will present various computational approaches modeling negation in sentiment analysis. We will, in particular, focus on aspects, such as level of representation used for sentiment analysis, negation word detection and scope of negation. We will also discuss limits and challenges of negation modeling on that task.",
"title": ""
},
{
"docid": "7533347e8c5daf17eb09e64db0fa4394",
"text": "Android has become the most popular smartphone operating system. This rapidly increasing adoption of Android has resulted in a significant increase in the number of malwares when compared with previous years. There exist lots of antimalware programs which are designed to effectively protect the users’ sensitive data in mobile systems from such attacks. In this paper, our contribution is twofold. Firstly, we have analyzed the Android malwares and their penetration techniques used for attacking the systems, and the antivirus programs that act against malwares to protect Android systems. We categorize many of the most recent antimalware techniques on the basis of their detection methods. We aim to provide an easy and concise view of the malware detection and protection mechanisms and deduce their benefits and limitations. Secondly, we have forecast Android market trends for the year up to 2018 and provide a unique hybrid security solution that takes into account both the static and dynamic analysis of an Android application. Keywords—Android; Permissions; Signature",
"title": ""
},
{
"docid": "7c8105cf417c4f0da6f8a2356c6fb5ba",
"text": "In this paper, we propose a novel deep learning architecture for multi-label zero-shot learning (ML-ZSL), which is able to predict multiple unseen class labels for each input instance. Inspired by the way humans utilize semantic knowledge between objects of interests, we propose a framework that incorporates knowledge graphs for describing the relationships between multiple labels. Our model learns an information propagation mechanism from the semantic label space, which can be applied to model the interdependencies between seen and unseen class labels. With such investigation of structured knowledge graphs for visual reasoning, we show that our model can be applied for solving multi-label classification and ML-ZSL tasks. Compared to state-of-the-art approaches, comparable or improved performances can be achieved by our method.",
"title": ""
},
{
"docid": "90d4c7fb5addd3123746f64fe6ed96f7",
"text": "As a trust machine, blockchain was recently introduced to the public to provide an immutable, consensus based and transparent system in the Fintech field. However, there are ongoing efforts to apply blockchain to other fields where trust and value are essential. In this paper, we suggest Gcoin blockchain as the base of the data flow of drugs to create transparent drug transaction data. Additionally, the regulation model of the drug supply chain could be altered from the inspection and examination only model to the surveillance net model, and every unit that is involved in the drug supply chain would be able to participate simultaneously to prevent counterfeit drugs and to protect public health, including patients.",
"title": ""
},
{
"docid": "9cd00d9975c1efa741d1b01200a7d660",
"text": "BACKGROUND\nMany ethical problems exist in nursing homes. These include, for example, decision-making in end-of-life care, use of restraints and a lack of resources.\n\n\nAIMS\nThe aim of the present study was to investigate nursing home staffs' opinions and experiences with ethical challenges and to find out which types of ethical challenges and dilemmas occur and are being discussed in nursing homes.\n\n\nMETHODS\nThe study used a two-tiered approach, using a questionnaire on ethical challenges and systematic ethics work, given to all employees of a Norwegian nursing home including nonmedical personnel, and a registration of systematic ethics discussions from an Austrian model of good clinical practice.\n\n\nRESULTS\nNinety-one per cent of the nursing home staff described ethical problems as a burden. Ninety per cent experienced ethical problems in their daily work. The top three ethical challenges reported by the nursing home staff were as follows: lack of resources (79%), end-of-life issues (39%) and coercion (33%). To improve systematic ethics work, most employees suggested ethics education (86%) and time for ethics discussion (82%). Of 33 documented ethics meetings from Austria during a 1-year period, 29 were prospective resident ethics meetings where decisions for a resident had to be made. Agreement about a solution was reached in all 29 cases, and this consensus was put into practice in all cases. Residents did not participate in the meetings, while relatives participated in a majority of case discussions. In many cases, the main topic was end-of-life care and life-prolonging treatment.\n\n\nCONCLUSIONS\nLack of resources, end-of-life issues and coercion were ethical challenges most often reported by nursing home staff. The staff would appreciate systematic ethics work to aid decision-making. Resident ethics meetings can help to reach consensus in decision-making for nursing home patients. In the future, residents' participation should be encouraged whenever possible.",
"title": ""
},
{
"docid": "ce34bb39b5048f80e849ddf7a476d89d",
"text": "We propose a method to find the community structure in complex networks based on an extremal optimization of the value of modularity. The method outperforms the optimal modularity found by the existing algorithms in the literature giving a better understanding of the community structure. We present the results of the algorithm for computer-simulated and real networks and compare them with other approaches. The efficiency and accuracy of the method make it feasible to be used for the accurate identification of community structure in large complex networks.",
"title": ""
},
{
"docid": "30be7442145c20c523bf9adcf698a677",
"text": "In this letter, a novel design technique to realize planar self-diplexing slot antenna using substrate integrated waveguide (SIW) technology is presented. The proposed antenna uses a bowtie-shaped slot backed by SIW cavity, which is excited by two separate feedlines to resonate at two different frequencies in X-band (8-12 GHz). By properly optimizing the antenna dimensions, a high isolation of better than 25 dB between two input ports is achieved, which helps to introduce self-diplexing phenomenon in the proposed design. The behavior of the individual cavity modes at two resonant frequencies is explained using half-mode theory. The proposed antenna resonates at 9 and 11.2 GHz with unidirectional radiation pattern and a high gain of 4.3 and 4.2 dBi, respectively.",
"title": ""
},
{
"docid": "ae6a58cba46ebb4b19a4701acd08a902",
"text": "Despite the fact that JSON is currently one of the most popular formats for exchanging data on the Web, there are very few studies on this topic and there is no agreement upon a theoretical framework for dealing with JSON. Therefore in this paper we propose a formal data model for JSON documents and, based on the common features present in available systems using JSON, we define a lightweight query language allowing us to navigate through JSON documents. We also introduce a logic capturing the schema proposal for JSON and study the complexity of basic computational tasks associated with these two formalisms.",
"title": ""
},
{
"docid": "186f2950bd4ce621eb0696c2fd09a468",
"text": "In this paper, I investigate the use of a disentangled VAE for downstream image classification tasks. I train a disentangled VAE in an unsupervised manner, and use the learned encoder as a feature extractor on top of which a linear classifier is learned. The models are trained and evaluated on the MNIST handwritten digits dataset. Experiments compared the disentangled VAE with both a standard (entangled) VAE and a vanilla supervised model. Results show that the disentangled VAE significantly outperforms the other two models when the proportion of labelled data is artificially reduced, while it loses this advantage when the amount of labelled data increases, and instead matches the performance of the other models. These results suggest that the disentangled VAE may be useful in situations where labelled data is scarce but unlabelled data is abundant.",
"title": ""
},
{
"docid": "aaba4377acbd22cbc52681d4d15bf9af",
"text": "This paper presents a new human body communication (HBC) technique that employs magnetic resonance for data transfer in wireless body-area networks (BANs). Unlike electric field HBC (eHBC) links, which do not necessarily travel well through many biological tissues, the proposed magnetic HBC (mHBC) link easily travels through tissue, offering significantly reduced path loss and, as a result, reduced transceiver power consumption. In this paper the proposed mHBC concept is validated via finite element method simulations and measurements. It is demonstrated that path loss across the body under various postures varies from 10-20 dB, which is significantly lower than alternative BAN techniques.",
"title": ""
},
{
"docid": "b7a9e7afa7167fe9a22105bd88a8102d",
"text": "Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems.",
"title": ""
},
{
"docid": "e18151d3d45015fcd946d6a516999e62",
"text": "Knowledge graphs have become a fundamental asset for search engines. A fair amount of user queries seek information on problem-solving tasks such as building a fence or repairing a bicycle. However, knowledge graphs completely lack this kind of how-to knowledge. This paper presents a method for automatically constructing a formal knowledge base on tasks and task-solving steps, by tapping the contents of online communities such as WikiHow. We employ Open-IE techniques to extract noisy candidates for tasks, steps and the required tools and other items. For cleaning and properly organizing this data, we devise embedding-based clustering techniques. The resulting knowledge base, HowToKB, includes a hierarchical taxonomy of disambiguated tasks, temporal orders of sub-tasks, and attributes for involved items. A comprehensive evaluation of HowToKB shows high accuracy. As an extrinsic use case, we evaluate automatically searching related YouTube videos for HowToKB tasks.",
"title": ""
},
{
"docid": "545b41a21edb2fa08fd6680d3d20afaf",
"text": "SUMMARY This paper demonstrates how Gaussian Markov random fields (conditional autoregressions) can be sampled quickly using numerical techniques for sparse matrices. The algorithm is general, surprisingly efficient, and expands easily to various forms for conditional simulation and evaluation of normalisation constants. I demonstrate its use in Markov chain Monte Carlo algorithms for disease mapping, space varying regression models, spatial non-parametrics, hierarchical space-time modelling and Bayesian imaging.",
"title": ""
},
{
"docid": "8a20feb22ce8797fa77b5d160919789c",
"text": "We propose the concept of hardware-software co-simulation for image processing using Xilinx System Generator. Recent advances in synthesis tools for SIMULINK suggest a feasible high-level approach to algorithm implementation for embedded DSP systems. We present an efficient FPGA-based hardware design for enhancement of color and grey-scale images in image and video processing. The top-model-based visual development process of SIMULINK facilitates host-side simulation and validation, as well as synthesis of target-specific code; furthermore, legacy code written in MATLAB or ANSI C can be reused in custom blocks. However, the code generated for DSP platforms is often not very efficient. By implementing image processing applications on an FPGA, they can be designed easily.",
"title": ""
}
] |
scidocsrr
|
206d6c86f85df1de699f8641b859e665
|
Impact of Online Social Networking on Employees Productivity at Work Place in University of Gondar - A Case Study
|
[
{
"docid": "e0a8035f9e61c78a482f2e237f7422c6",
"text": "Aims: This paper introduces how substantially decision-making and leadership styles relate to each other. Decision-making styles are connected with leadership practices and institutional arrangements. Study Design: A qualitative research approach was adopted in this study. A semi-structured interview was used to elicit data from the participants on both leadership styles and decision-making. Place and Duration of Study: Institute of Education international Islamic University",
"title": ""
}
] |
[
{
"docid": "a8d4d352a8958628ee98ccf950e2ef2d",
"text": "This study presents a methodology that will produce a viable fault surrogate. The focus of the effort is on the precise measurement of software development process and product outcomes. Tools and processes for the static measurement of the source code have been installed and made operational in a large embedded software system. Source code measurements have been gathered unobtrusively for each build in the software evolution process. The measurements are synthesized to obtain the fault surrogate. The complexity of sequential builds is compared and a new measure, code churn, is calculated. This paper will demonstrate the effectiveness of code complexity churn by validating it against the testing problem reports.",
"title": ""
},
{
"docid": "d8ce92b054fc425a5db5bf17a62c6308",
"text": "The possibility that wind turbine noise (WTN) affects human health remains controversial. The current analysis presents results related to WTN annoyance reported by randomly selected participants (606 males, 632 females), aged 18-79, living between 0.25 and 11.22 km from wind turbines. WTN levels reached 46 dB, and for each 5 dB increase in WTN levels, the odds of reporting to be either very or extremely (i.e., highly) annoyed increased by 2.60 [95% confidence interval: (1.92, 3.58), p < 0.0001]. Multiple regression models had R(2)'s up to 58%, with approximately 9% attributed to WTN level. Variables associated with WTN annoyance included, but were not limited to, other wind turbine-related annoyances, personal benefit, noise sensitivity, physical safety concerns, property ownership, and province. Annoyance was related to several reported measures of health and well-being, although these associations were statistically weak (R(2 )< 9%), independent of WTN levels, and not retained in multiple regression models. The role of community tolerance level as a complement and/or an alternative to multiple regression in predicting the prevalence of WTN annoyance is also provided. The analysis suggests that communities are between 11 and 26 dB less tolerant of WTN than of other transportation noise sources.",
"title": ""
},
{
"docid": "36bb2a1f2e8942dead6aa0a4192c7a6c",
"text": "This paper reports the completion of four fundamental fluidic operations considered essential to build digital microfluidic circuits, which can be used for lab-on-a-chip or micro total analysis system ( TAS): 1) creating, 2) transporting, 3) cutting, and 4) merging liquid droplets, all by electrowetting, i.e., controlling the wetting property of the surface through electric potential. The surface used in this report is, more specifically, an electrode covered with dielectrics, hence, called electrowetting-on-dielectric (EWOD). All the fluidic movement is confined between two plates, which we call parallel-plate channel, rather than through closed channels or on open surfaces. While transporting and merging droplets are easily verified, we discover that there exists a design criterion for a given set of materials beyond which the droplet simply cannot be cut by EWOD mechanism. The condition for successful cutting is theoretically analyzed by examining the channel gap, the droplet size and the degree of contact angle change by electrowetting on dielectric (EWOD). A series of experiments is run and verifies the criterion. A smaller channel gap, a larger droplet size and a larger change in the contact angle enhance the necking of the droplet, helping the completion of the cutting process. Creating droplets from a pool of liquid is highly related to cutting, but much more challenging. Although droplets may be created by simply pulling liquid out of a reservoir, the location of cutting is sensitive to initial conditions and turns out unpredictable. This problem of an inconsistent cutting location is overcome by introducing side electrodes, which pull the liquid perpendicularly to the main fluid path before activating the cutting. All four operations are carried out in air environment at 25 Vdc applied voltage. [862]",
"title": ""
},
{
"docid": "a197c76b06a56bc3d2e0b146434df80d",
"text": "In humans, both aging and GH deficiency are associated with reduced protein synthesis, decreased lean body and bone mass, and increased percent body fat. In healthy individuals, spontaneous and stimulated GH secretion, as well as circulating IGF-I and IGFBP-3 levels, are significantly decreased with advancing age. The extent to which these age-related changes in GH and IGF-I contribute to alterations in body composition and function remains to be elucidated. GH treatment of GH-deficient adults or old men with reduced IGF-I levels with exogenous GH increases plasma IGF-I, nitrogen retention, and lean body mass, decreases percent body fat, and exerts little effect on bone mineral density. Short-term adverse effects of GH therapy have been minimized by using low-dose regimens, but it is still uncertain whether long-term GH supplementation in adult life increases the risk of metabolic abnormalities or malignancy. Administration of GHRH, which has been shown to maintain the pattern of pulsatile GH secretion in old men, may represent another possible physiological approach to therapy. It may be justifiable initially to limit use of GH to certain elderly patients such as those suffering from catabolic illnesses, malnourishment, burns, cachexia, etc. A great deal more research will be necessary to determine whether normalization of GH and IGF-I levels in healthy older persons will lead to improvements in their physical and psychological functional capacity and quality of life.",
"title": ""
},
{
"docid": "cdef5f6a50c1f427e8f37be3c6ebbccf",
"text": "In this article, we summarize the 5G mobile communication requirements and challenges. First, essential requirements for 5G are pointed out, including higher traffic volume, indoor or hotspot traffic, and spectrum, energy, and cost efficiency. Along with these changes of requirements, we present a potential step change for the evolution toward 5G, which shows that macro-local coexisting and coordinating paths will replace one macro-dominated path as in 4G and before. We hereafter discuss emerging technologies for 5G within international mobile telecommunications. Challenges and directions in hardware, including integrated circuits and passive components, are also discussed. Finally, a whole picture for the evolution to 5G is predicted and presented.",
"title": ""
},
{
"docid": "385c7c16af40ae13b965938ac3bce34c",
"text": "The information age has brought a deluge of data. Much of this is in text form, insurmountable in scope for humans and incomprehensible in structure for computers. Text mining is an expanding field of research that seeks to utilize the information contained in vast document collections. General data mining methods based on machine learning face challenges with the scale of text data, posing a need for scalable text mining methods. This thesis proposes a solution to scalable text mining: generative models combined with sparse computation. A unifying formalization for generative text models is defined, bringing together research traditions that have used formally equivalent models, but ignored parallel developments. This framework allows the use of methods developed in different processing tasks such as retrieval and classification, yielding effective solutions across different text mining tasks. Sparse computation using inverted indices is proposed for inference on probabilistic models. This reduces the computational complexity of the common text mining operations according to sparsity, yielding probabilistic models with the scalability of modern search engines. The proposed combination provides sparse generative models: a solution for text mining that is general, effective, and scalable. Extensive experimentation on text classification and ranked retrieval datasets are conducted, showing that the proposed solution matches or outperforms the leading task-specific methods in effectiveness, with a order of magnitude decrease in classification times for Wikipedia article categorization with a million classes. The developed methods were further applied in two 2014 Kaggle data mining prize competitions with over a hundred competing teams, earning first and second places.",
"title": ""
},
{
"docid": "3e845c9a82ef88c7a1f4447d57e35a3e",
"text": "Link prediction is a key problem for network-structured data. Link prediction heuristics use some score functions, such as common neighbors and Katz index, to measure the likelihood of links. They have obtained wide practical uses due to their simplicity, interpretability, and for some of them, scalability. However, every heuristic has a strong assumption on when two nodes are likely to link, which limits their effectiveness on networks where these assumptions fail. In this regard, a more reasonable way should be learning a suitable heuristic from a given network instead of using predefined ones. By extracting a local subgraph around each target link, we aim to learn a function mapping the subgraph patterns to link existence, thus automatically learning a “heuristic” that suits the current network. In this paper, we study this heuristic learning paradigm for link prediction. First, we develop a novel γ-decaying heuristic theory. The theory unifies a wide range of heuristics in a single framework, and proves that all these heuristics can be well approximated from local subgraphs. Our results show that local subgraphs reserve rich information related to link existence. Second, based on the γ-decaying theory, we propose a new method to learn heuristics from local subgraphs using a graph neural network (GNN). Its experimental results show unprecedented performance, working consistently well on a wide range of problems.",
"title": ""
},
{
"docid": "86502e1c68f309bb7676d5b1e9013172",
"text": "In this article, we present the Menpo 2D and Menpo 3D benchmarks, two new datasets for multi-pose 2D and 3D facial landmark localisation and tracking. In contrast to the previous benchmarks such as 300W and 300VW, the proposed benchmarks contain facial images in both semi-frontal and profile pose. We introduce an elaborate semi-automatic methodology for providing high-quality annotations for both the Menpo 2D and Menpo 3D benchmarks. In Menpo 2D benchmark, different visible landmark configurations are designed for semi-frontal and profile faces, thus making the 2D face alignment full-pose. In Menpo 3D benchmark, a united landmark configuration is designed for both semi-frontal and profile faces based on the correspondence with a 3D face model, thus making face alignment not only full-pose but also corresponding to the real-world 3D space. Based on the considerable number of annotated images, we organised Menpo 2D Challenge and Menpo 3D Challenge for face alignment under large pose variations in conjunction with CVPR 2017 and ICCV 2017, respectively. The results of these challenges demonstrate that recent deep learning architectures, when trained with the abundant data, lead to excellent results. We also provide a very simple, yet effective solution, named Cascade Multi-view Hourglass Model, to 2D and 3D face alignment. In our method, we take advantage of all 2D and 3D facial landmark annotations in a joint way. We not only capitalise on the correspondences between the semi-frontal and profile 2D facial landmarks but also employ joint supervision from both 2D and 3D facial landmarks. Finally, we discuss future directions on the topic of face alignment.",
"title": ""
},
{
"docid": "80a205a05f72a2fa9e7d4e1e310f0787",
"text": "The emerging wireless charging technology is a promising alternative to address the power constraint problem in sensor networks. Comparing to existing approaches, this technology can replenish energy in a more controllable manner and does not require accurate location of or physical alignment to sensor nodes. However, little work has been reported on designing and implementing a wireless charging system for sensor networks. In this paper, we design such a system, build a proof-of-concept prototype, conduct experiments on the prototype to evaluate its feasibility and performance in small-scale networks, and conduct extensive simulations to study its performance in large-scale networks. Experimental and simulation results demonstrate that the proposed system can utilize the wireless charging technology effectively to prolong the network lifetime through delivering energy by a robot to where it is needed. The effects of various configuration and design parameters have also been studied, which may serve as useful guidelines in actual deployment of the proposed system in practice.",
"title": ""
},
{
"docid": "09ca86552eede0fe8a62382978043b8a",
"text": "Analyzing the spreading patterns of memes with respect to their topic distributions and the underlying diffusion network structures is an important task in social network analysis. This task in many cases becomes very challenging since the underlying diffusion networks are often hidden, and the topic specific transmission rates are unknown either. In this paper, we propose a continuous time model, TOPICCASCADE, for topicsensitive information diffusion networks, and infer the hidden diffusion networks and the topic dependent transmission rates from the observed time stamps and contents of cascades. One attractive property of the model is that its parameters can be estimated via a convex optimization which we solve with an efficient proximal gradient based block coordinate descent (BCD) algorithm. In both synthetic and real-world data, we show that our method significantly improves over the previous state-of-the-art models in terms of both recovering the hidden diffusion networks and predicting the transmission times of memes.",
"title": ""
},
{
"docid": "73c8978b793d7904264f0e78d9efdc61",
"text": "The aim of this study was (1) to provide behavioral evidence for multimodal feature integration in an object recognition task in humans and (2) to characterize the processing stages and the neural structures where multisensory interactions take place. Event-related potentials (ERPs) were recorded from 30 scalp electrodes while subjects performed a forced-choice reaction-time categorization task: At each trial, the subjects had to indicate which of two objects was presented by pressing one of two keys. The two objects were defined by auditory features alone, visual features alone, or the combination of auditory and visual features. Subjects were more accurate and rapid at identifying multimodal than unimodal objects. Spatiotemporal analysis of ERPs and scalp current densities revealed several auditory-visual interaction components temporally, spatially, and functionally distinct before 200 msec poststimulus. The effects observed were (1) in visual areas, new neural activities (as early as 40 msec poststimulus) and modulation (amplitude decrease) of the N185 wave to unimodal visual stimulus, (2) in the auditory cortex, modulation (amplitude increase) of subcomponents of the unimodal auditory N1 wave around 90 to 110 msec, and (3) new neural activity over the right fronto-temporal area (140 to 165 msec). Furthermore, when the subjects were separated into two groups according to their dominant modality to perform the task in unimodal conditions (shortest reaction time criteria), the integration effects were found to be similar for the two groups over the nonspecific fronto-temporal areas, but they clearly differed in the sensory-specific cortices, affecting predominantly the sensory areas of the nondominant modality. Taken together, the results indicate that multisensory integration is mediated by flexible, highly adaptive physiological processes that can take place very early in the sensory processing chain and operate in both sensory-specific and nonspecific cortical structures in different ways.",
"title": ""
},
{
"docid": "62d86051d5f3f53f59547a98632c1e5c",
"text": "Infantile hemangiomas are the most common benign vascular tumors in infancy and childhood. As hemangioma could regress spontaneously, it generally does not require treatment unless proliferation interferes with normal function or gives rise to risk of serious disfigurement and complications unlikely to resolve without treatment. Various methods for treating infant hemangiomas have been documented, including wait and see policy, laser therapy, drug therapy, sclerotherapy, radiotherapy, surgery and so on, but none of these therapies can be used for all hemangiomas. To obtain the best treatment outcomes, the treatment protocol should be individualized and comprehensive as well as sequential. Based on published literature and clinical experiences, we established a treatment guideline in order to provide criteria for the management of head and neck hemangiomas. This protocol will be renewed and updated to include and reflect any cutting-edge medical knowledge, and provide the newest treatment modalities which will benefit our patients.",
"title": ""
},
{
"docid": "83b79fc95e90a303f29a44ef8730a93f",
"text": "Internet of Things (IoT) is a concept that envisions all objects around us as part of the internet. IoT coverage is very wide and includes a variety of objects such as smart phones, tablets, digital cameras, and sensors. Once all these devices are connected to each other, they enable more and more smart processes and services that support our basic needs, environment, and health. Such an enormous number of devices connected to the internet provides many kinds of services and produces a huge amount of data and information. Cloud computing is a model for on-demand access to a shared pool of configurable resources (computers, networks, servers, storage, applications, services, and software) that can be provisioned as infrastructure, software, and applications. Cloud-based platforms help to connect to the things around us so that we can access anything at any time and any place in a user-friendly manner, using customized portals and built-in applications. Hence, the cloud acts as a front end to access the IoT. Applications that interact with devices like sensors have special requirements: massive storage to store big data, huge computation power to enable real-time processing of the data, and high-speed networks to stream audio or video. Here we describe how the Internet of Things and Cloud computing can work together to address Big Data problems. We also illustrate Sensing as a Service on the cloud using a few applications such as augmented reality, agriculture, and environment monitoring. Finally, we propose a prototype model for providing Sensing as a Service on the cloud.",
"title": ""
},
{
"docid": "6ced60cadf69a3cd73bcfd6a3eb7705e",
"text": "This review article summarizes the current literature regarding the analysis of running gait. It is compared to walking and sprinting. The current state of knowledge is presented as it fits in the context of the history of analysis of movement. The characteristics of the gait cycle and its relationship to potential and kinetic energy interactions are reviewed. The timing of electromyographic activity is provided. Kinematic and kinetic data (including center of pressure measurements, raw force plate data, joint moments, and joint powers) and the impact of changes in velocity on these findings is presented. The status of shoewear literature, alterations in movement strategies, the role of biarticular muscles, and the springlike function of tendons are addressed. This type of information can provide insight into injury mechanisms and training strategies. Copyright 1998 Elsevier Science B.V.",
"title": ""
},
{
"docid": "8aae828a75eb83192e7ac9850f70e7ff",
"text": "Over the past decade, goal models have been used in Computer Science in order to represent software requirements, business objectives and design qualities. Such models extend traditional AI planning techniques for representing goals by allowing for partially defined and possibly inconsistent goals. This paper presents a formal framework for reasoning with such goal models. In particular, the paper proposes a qualitative and a numerical axiomatization for goal modeling primitives and introduces label propagation algorithms that are shown to be sound and complete with respect to their respective axiomatizations. In addition, the paper reports on experimental results on the propagation algorithms applied to a goal model for a US car manufacturer.",
"title": ""
},
{
"docid": "a37cf5db8b48ba78d75d9c1a84803f14",
"text": "A case of diprosopus in a foal is described. This is only the second report of such a deformity in the equine species. Hereditary pathology and pathogenesis are discussed.",
"title": ""
},
{
"docid": "d8e9194a853e65926e199e3e71ac7467",
"text": "OBJECTIVE\nTo determine short-term results and complications of prepubertal gonadectomy in cats and dogs.\n\n\nDESIGN\nProspective randomized study.\n\n\nANIMALS\n775 cats and 1,213 dogs.\n\n\nPROCEDURE\nAnimals undergoing gonadectomy were allotted into 3 groups on the basis of estimated age (group 1, < 12 weeks old; group 2, 12 to 23 weeks old; group 3, > or = 24 weeks old). Complications during anesthesia, surgery, and the immediate postoperative period (7 days) were recorded. Complications were classified as major (required treatment and resulted in an increase in morbidity or mortality) or minor (required little or no treatment and caused a minimal increase in morbidity). An ANOVA was used to detect differences among groups in age, weight, body temperature, and duration of surgery. To detect differences in complication rates among groups, chi 2 analysis was used.\n\n\nRESULTS\nGroup 1 consisted of 723 animals, group 2 consisted of 532, and group 3 consisted of 733. Group-3 animals had a significantly higher overall complication rate (10.8%) than group-1 animals (6.5%), but did not differ from group-2 animals (8.8%). Differences were not detected among the 3 groups regarding major complications (2.9, 3.2, and 3.0% for groups 1, 2, and 3, respectively), but group-3 animals had significantly more minor complications (7.8%) than group-1 animals (3.6%), but not group-2 animals (5.6%).\n\n\nCLINICAL IMPLICATIONS\nIn this study, prepubertal gonadectomy did not increase morbidity or mortality on a short-term basis, compared with gonadectomy performed on animals at the traditional age. These procedures may be performed safely in prepubertal animals, provided that appropriate attention is given to anesthetic and surgical techniques.",
"title": ""
},
{
"docid": "b513d1cbf3b2f649afcea4d0ab6784ac",
"text": "RoboSimian is a quadruped robot inspired by an ape-like morphology, with four symmetric limbs that provide a large dexterous workspace and high torque output capabilities. Advantages of using RoboSimian for rough terrain locomotion include (1) its large, stable base of support, and (2) existence of redundant kinematic solutions, toward avoiding collisions with complex terrain obstacles. However, these same advantages provide significant challenges in experimental implementation of walking gaits. Specifically: (1) a wide support base results in high variability of required body pose and foothold heights, in particular when compared with planning for humanoid robots, (2) the long limbs on RoboSimian have a strong proclivity for self-collision and terrain collision, requiring particular care in trajectory planning, and (3) having rear limbs outside the field of view requires adequate perception with respect to a world map. In our results, we present a tractable means of planning statically stable and collision-free gaits, which combines practical heuristics for kinematics with traditional randomized (RRT) search algorithms. In planning experiments, our method outperforms other tested methodologies. Finally, real-world testing indicates that perception limitations provide the greatest challenge in real-world implementation.",
"title": ""
},
{
"docid": "a2a633c972cb84d9b7d27e347bb59cfa",
"text": "This study investigated three-dimensional (3D) texture as a possible diagnostic marker of Alzheimer’s disease (AD). T1-weighted magnetic resonance (MR) images were obtained from 17 AD patients and 17 age and gender-matched healthy controls. 3D texture features were extracted from the circular 3D ROIs placed using a semi-automated technique in the hippocampus and entorhinal cortex. We found that classification accuracies based on texture analysis of the ROIs varied from 64.3% to 96.4% due to different ROI selection, feature extraction and selection options, and that most 3D texture features selected were correlated with the mini-mental state examination (MMSE) scores. The results indicated that 3D texture could detect the subtle texture differences between tissues in AD patients and normal controls, and texture features of MR images in the hippocampus and entorhinal cortex might be related to the severity of AD cognitive impairment. These results suggest that 3D texture might be a useful aid in AD diagnosis.",
"title": ""
},
{
"docid": "eb59f239621dde59a13854c5e6fa9f54",
"text": "This paper presents a novel application of grammatical inference techniques to the synthesis of behavior models of software systems. This synthesis is used for the elicitation of software requirements. This problem is formulated as a deterministic finite-state automaton induction problem from positive and negative scenarios provided by an end-user of the software-to-be. A query-driven state merging algorithm (QSM) is proposed. It extends the RPNI and Blue-Fringe algorithms by allowing membership queries to be submitted to the end-user. State merging operations can be further constrained by some prior domain knowledge formulated as fluents, goals, domain properties, and models of external software components. The incorporation of domain knowledge both reduces the number of queries and guarantees that the induced model is consistent with such knowledge. The proposed techniques are implemented in the ISIS tool and practical evaluations on standard requirements engineering test cases and synthetic data illustrate the interest of this approach.",
"title": ""
}
] |
scidocsrr
|
d672d48431f917b27d22a703e7b62e6a
|
Fast recommendation on latent collaborative relations
|
[
{
"docid": "9e45bc3ac789fd1343e4e400b7f0218e",
"text": "Due to its successful application in recommender systems, collaborative filtering (CF) has become a hot research topic in data mining and information retrieval. In traditional CF methods, only the feedback matrix, which contains either explicit feedback (also called ratings) or implicit feedback on the items given by users, is used for training and prediction. Typically, the feedback matrix is sparse, which means that most users interact with few items. Due to this sparsity problem, traditional CF with only feedback information will suffer from unsatisfactory performance. Recently, many researchers have proposed to utilize auxiliary information, such as item content (attributes), to alleviate the data sparsity problem in CF. Collaborative topic regression (CTR) is one of these methods which has achieved promising performance by successfully integrating both feedback information and item content information. In many real applications, besides the feedback and item content information, there may exist relations (also known as networks) among the items which can be helpful for recommendation. In this paper, we develop a novel hierarchical Bayesian model called Relational Collaborative Topic Regression (RCTR), which extends CTR by seamlessly integrating the user-item feedback information, item content information, and network structure among items into the same model. Experiments on real-world datasets show that our model can achieve better prediction accuracy than the state-of-the-art methods with lower empirical training time. Moreover, RCTR can learn good interpretable latent structures which are useful for recommendation.",
"title": ""
},
{
"docid": "e26c73004a3f29b1abbadd515a0ca748",
"text": "The situation in which a choice is made is an important information for recommender systems. Context-aware recommenders take this information into account to make predictions. So far, the best performing method for context-aware rating prediction in terms of predictive accuracy is Multiverse Recommendation based on the Tucker tensor factorization model. However this method has two drawbacks: (1) its model complexity is exponential in the number of context variables and polynomial in the size of the factorization and (2) it only works for categorical context variables. On the other hand there is a large variety of fast but specialized recommender methods which lack the generality of context-aware methods.\n We propose to apply Factorization Machines (FMs) to model contextual information and to provide context-aware rating predictions. This approach results in fast context-aware recommendations because the model equation of FMs can be computed in linear time both in the number of context variables and the factorization size. For learning FMs, we develop an iterative optimization method that analytically finds the least-square solution for one parameter given the other ones. Finally, we show empirically that our approach outperforms Multiverse Recommendation in prediction quality and runtime.",
"title": ""
}
] |
[
{
"docid": "36ed684e39877873407efb809f3cd1dc",
"text": "A methodology to obtain wideband scattering diffusion based on periodic artificial surfaces is presented. The proposed surfaces provide scattering towards multiple propagation directions across an extremely wide frequency band. They comprise unit cells with an optimized geometry and arranged in a periodic lattice characterized by a repetition period larger than one wavelength which induces the excitation of multiple Floquet harmonics. The geometry of the elementary unit cell is optimized in order to minimize the reflection coefficient of the fundamental Floquet harmonic over a wide frequency band. The optimization of FSS geometry is performed through a genetic algorithm in conjunction with periodic Method of Moments. The design method is verified through full-wave simulations and measurements. The proposed solution guarantees very good performance in terms of bandwidth-thickness ratio and removes the need of a high-resolution printing process.",
"title": ""
},
{
"docid": "e34ad4339934d9b9b4019fad37f8dd4e",
"text": "This paper presents a technique for estimating the threedimensional velocity vector field that describes the motion of each visible scene point (scene flow). The technique presented uses two consecutive image pairs from a stereo sequence. The main contribution is to decouple the position and velocity estimation steps, and to estimate dense velocities using a variational approach. We enforce the scene flow to yield consistent displacement vectors in the left and right images. The decoupling strategy has two main advantages: Firstly, we are independent in choosing a disparity estimation technique, which can yield either sparse or dense correspondences, and secondly, we can achieve frame rates of 5 fps on standard consumer hardware. The approach provides dense velocity estimates with accurate results at distances up to 50 meters.",
"title": ""
},
{
"docid": "4f15ef7dc7405f22e1ca7ae24154f5ef",
"text": "This position paper addresses current debates about data in general, and big data specifically, by examining the ethical issues arising from advances in knowledge production. Typically ethical issues such as privacy and data protection are discussed in the context of regulatory and policy debates. Here we argue that this overlooks a larger picture whereby human autonomy is undermined by the growth of scientific knowledge. To make this argument, we first offer definitions of data and big data, and then examine why the uses of data-driven analyses of human behaviour in particular have recently experienced rapid growth. Next, we distinguish between the contexts in which big data research is used, and argue that this research has quite different implications in the context of scientific as opposed to applied research. We conclude by pointing to the fact that big data analyses are both enabled and constrained by the nature of data sources available. Big data research will nevertheless inevitably become more pervasive, and this will require more awareness on the part of data scientists, policymakers and a wider public about its contexts and often unintended consequences.",
"title": ""
},
{
"docid": "bb8fe4145e1ea2337f5cc1a18a9a348f",
"text": "Automatic License Plate Recognition (ALPR) has been a frequent topic of research due to many practical applications. However, many of the current solutions are still not robust in real-world situations, commonly depending on many constraints. This paper presents a robust and efficient ALPR system based on the state-of-the-art YOLO object detector. The Convolutional Neural Networks (CNNs) are trained and fine-tuned for each ALPR stage so that they are robust under different conditions (e.g., variations in camera, lighting, and background). Especially for character segmentation and recognition, we design a two-stage approach employing simple data augmentation tricks such as inverted License Plates (LPs) and flipped characters. The resulting ALPR approach achieved impressive results in two datasets. First, in the SSIG dataset, composed of 2,000 frames from 101 vehicle videos, our system achieved a recognition rate of 93.53% and 47 Frames Per Second (FPS), performing better than both Sighthound and OpenALPR commercial systems (89.80% and 93.03%, respectively) and considerably outperforming previous results (81.80%). Second, targeting a more realistic scenario, we introduce a larger public dataset designed for ALPR. This dataset contains 150 videos and 4,500 frames captured when both camera and vehicles are moving, and also contains different types of vehicles (cars, motorcycles, buses and trucks). In our proposed dataset, the trial versions of commercial systems achieved recognition rates below 70%. On the other hand, our system performed better, with a recognition rate of 78.33% at 35 FPS. The UFPR-ALPR dataset is publicly available to the research community at https://web.inf.ufpr.br/vri/databases/ufpr-alpr/ subject to privacy restrictions.",
"title": ""
},
{
"docid": "3b27bb001a7d897a33f50b504508686d",
"text": "In this age, in this nation, public sentiment is everything. With it, nothing can fail; against it, nothing can succeed. Whoever molds public sentiment goes deeper than he who enacts statutes, or pronounces judicial decisions (Abraham Lincoln, 1858) [1]. It is apparent from President Lincoln's well-known quote that legislators understood the force of public opinion long ago. In today's world, the Internet is the main source of information, and an enormous amount of information and opinion online is scattered and unstructured, with no mechanism to organize it. The public's demand to know opinions about specific products and services, political issues, or social questions has motivated the study of Opinion Mining and Sentiment Analysis. These fields have recently attracted significant attention from researchers because analysis of online text is beneficial for market research, political analysis, business intelligence, online shopping, and scientific surveys in psychology. Sentiment Analysis identifies the polarity of extracted public opinions. This paper presents a survey covering Opinion Mining and Sentiment Analysis, along with their techniques, tools, and classification approaches.",
"title": ""
},
{
"docid": "6ed1132aa216e15fe54e8524c9a4f8ee",
"text": "CONTEXT\nWith ageing populations, the prevalence of dementia, especially Alzheimer's disease, is set to soar. Alzheimer's disease is associated with progressive cerebral atrophy, which can be seen on MRI with high resolution. Longitudinal MRI could track disease progression and detect neurodegenerative diseases earlier to allow prompt and specific treatment. Such use of MRI requires accurate understanding of how brain changes in normal ageing differ from those in dementia.\n\n\nSTARTING POINT\nRecently, Henry Rusinek and colleagues, in a 6-year longitudinal MRI study of initially healthy elderly subjects, showed that an increased rate of atrophy in the medial temporal lobe predicted future cognitive decline with a specificity of 91% and sensitivity of 89% (Radiology 2003; 229: 691-96).\n\n\nWHERE NEXT?\nAs understanding of neurodegenerative diseases increases, specific disease-modifying treatments might become available. Serial MRI could help to determine the efficacy of such treatments, which would be expected to slow the rate of atrophy towards that of normal ageing, and might also detect the onset of neurodegeneration. The amount and pattern of excess atrophy might help to predict the underlying pathological process, allowing specific therapies to be started. As the precision of imaging improves, the ability to distinguish healthy ageing from degenerative dementia should improve.",
"title": ""
},
{
"docid": "18851774e598f4cb66dbc770abe4a83f",
"text": "In this paper, we propose a new approach for domain generalization by exploiting the low-rank structure from multiple latent source domains. Motivated by the recent work on exemplar-SVMs, we aim to train a set of exemplar classifiers with each classifier learnt by using only one positive training sample and all negative training samples. While positive samples may come from multiple latent domains, for the positive samples within the same latent domain, their likelihoods from each exemplar classifier are expected to be similar to each other. Based on this assumption, we formulate a new optimization problem by introducing the nuclear-norm based regularizer on the likelihood matrix to the objective function of exemplar-SVMs. We further extend Domain Adaptation Machine (DAM) to learn an optimal target classifier for domain adaptation. The comprehensive experiments for object recognition and action recognition demonstrate the effectiveness of our approach for domain generalization and domain adaptation.",
"title": ""
},
{
"docid": "7674e4dc2e5166f21543e3b0e79cec62",
"text": "Software toolkits play an essential role in information retrieval research. Most open-source toolkits developed by academics are designed to facilitate the evaluation of retrieval models over standard test collections. Efforts are generally directed toward better ranking and less attention is usually given to scalability and other operational considerations. On the other hand, Lucene has become the de facto platform in industry for building search applications (outside a small number of companies that deploy custom infrastructure). Compared to academic IR toolkits, Lucene can handle heterogeneous web collections at scale, but lacks systematic support for evaluation over standard test collections. This paper introduces Anserini, a new information retrieval toolkit that aims to provide the best of both worlds, to better align information retrieval practice and research. Anserini provides wrappers and extensions on top of core Lucene libraries that allow researchers to use more intuitive APIs to accomplish common research tasks. Our initial efforts have focused on three functionalities: scalable, multi-threaded inverted indexing to handle modern web-scale collections, streamlined IR evaluation for ad hoc retrieval on standard test collections, and an extensible architecture for multi-stage ranking. Anserini ships with support for many TREC test collections, providing a convenient way to replicate competitive baselines right out of the box. Experiments verify that our system is both efficient and effective, providing a solid foundation to support future research.",
"title": ""
},
{
"docid": "4d3de2d03431e8f06a5b8b31a784ecaa",
"text": "For medical students, virtual patient dialogue systems can provide useful training opportunities without the cost of employing actors to portray standardized patients. This work utilizes wordand character-based convolutional neural networks (CNNs) for question identification in a virtual patient dialogue system, outperforming a strong wordand characterbased logistic regression baseline. While the CNNs perform well given sufficient training data, the best system performance is ultimately achieved by combining CNNs with a hand-crafted pattern matching system that is robust to label sparsity, providing a 10% boost in system accuracy and an error reduction of 47% as compared to the pattern-matching system alone.",
"title": ""
},
{
"docid": "a28567e108f00e3b251882404f2574b2",
"text": "Sirs: A 46-year-old woman was referred to our hospital because of suspected cerebral ischemia. Two days earlier the patient had recognized a left-sided weakness and clumsiness. On neurological examination we found a mild left-sided hemiparesis and hemiataxia. There was a generalized shrinking violaceous netlike pattering of the skin especially on both legs and arms but also on the trunk and buttocks (Fig. 1). The patient reported the skin changing to be more prominent on cold exposure. The patient’s family remembered this skin finding to be evident since the age of five years. A diagnosis of livedo racemosa had been made 5 years ago. The neuropsychological assessment of this highly educated civil servant revealed a slight cognitive decline. MRI showed a right-sided cerebral ischemia in the middle cerebral artery (MCA) territory. Her medical history was significant for migraine-like headache for many years, a miscarriage 18 years before and a deep vein thrombosis of the left leg six years ago. She had no history of smoking or other cerebrovascular risk factors including no estrogen-containing oral contraceptives. The patient underwent intensive examinations including duplex sonography of extraand intracranial arteries, transesophageal echocardiography, 24-h ECG, 24-h blood pressure monitoring, multimodal evoked potentials, electroencephalography, lumbar puncture and sonography of abdomen. All these tests were negative. Extensive laboratory examinations revealed a heterozygote prothrombin 20210 mutation, which is associated with a slightly increased risk for thrombosis. Antiphospholipid antibodies (aplAB) and other laboratory examinations to exclude vasculitis, toxic metabolic disturbances and other causes for livedo racemosa were negative. Skin biopsy showed vasculopathy with intimal proliferation and an occluding thrombus. 
The patient was diagnosed as having antiphospholipid-antibody-negative Sneddon's syndrome (SS) based on cerebral ischemia combined with wide-spread livedo racemosa associated with a history of miscarriage, deep vein thrombosis, migraine-like headaches and mild cognitive decline. We started long-term prophylactic pharmacological therapy with captopril as a myocyte proliferation agent and with aspirin as an antiplatelet therapy. Furthermore we recommended thrombosis prophylaxis in case of immobilization. One month later the patient experienced vein thrombosis of her right forearm and suffered from dyspnea. Antiphospholipid antibody testing again was negative. EBT and CT of thorax showed an aneurysmatic dilatation of aorta ascendens up to 4.5 cm. After careful consideration of the possible disadvantages we nevertheless decided to start long-term anticoagulation instead of antiplatelet therapy because of the second thrombotic event. The elucidating and interesting issue of this case is the association of miscarriage and two vein thromboses in aplAB-negative SS. Little is known about this phenomenon and there are only a few reports about these symptoms in aplAB-negative patients.",
"title": ""
},
{
"docid": "414146e7b275da7056d6f59e5bb35112",
"text": "Authentication and authorization are critical security layers to protect a wide range of online systems, services and content. However, the increased prevalence of wearable and mobile devices, the expectations of a frictionless experience and the diverse user environments will challenge the way users are authenticated. Consumers demand secure and privacy-aware access from any device, whenever and wherever they are, without any obstacles. This paper reviews emerging trends and challenges with frictionless authentication systems and identifies opportunities for further research related to the enrollment of users, the usability of authentication schemes, as well as security and privacy tradeoffs of mobile and wearable continuous authentication systems. Keywords: Frictionless authentication; Behaviometrics; Security; Privacy; Usability.",
"title": ""
},
{
"docid": "c5129d0acd299dcefb3be08caf7ef0b9",
"text": "Automatically detecting human social intentions and attitudes from spoken conversation is an important task for speech processing and social computing. We describe a system for detecting interpersonal stance: whether a speaker is flirtatious, friendly, awkward, or assertive. We make use of a new spoken corpus of over 1000 4-min speed-dates. Participants rated themselves and their interlocutors for these interpersonal stances, allowing us to build detectors for style both as interpreted by the speaker and as perceived by the hearer. We use lexical, prosodic, and dialog features in an SVM classifier to detect very clear styles (the strongest 10% in each stance) with up to 75% accuracy on previously seen speakers (50% baseline) and up to 59% accuracy on new speakers (48% baseline). A feature analysis suggests that flirtation is marked by joint focus on the woman as a target of the conversation, awkwardness by decreased speaker involvement, and friendliness by a conversational style including other-directed laughter and appreciations. Our work has implications for our understanding of interpersonal stance, their linguistic expression, and their automatic extraction. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7f4196ab2c58feaa1ecac18c0c572446",
"text": "We propose an algorithm to predict room layout from a single image that generalizes across panoramas and perspective images, cuboid layouts and more general layouts (e.g. \"L\"-shape room). Our method operates directly on the panoramic image, rather than decomposing into perspective images as do recent works. Our network architecture is similar to that of RoomNet [15], but we show improvements due to aligning the image based on vanishing points, predicting multiple layout elements (corners, boundaries, size and translation), and fitting a constrained Manhattan layout to the resulting predictions. Our method compares well in speed and accuracy to other existing work on panoramas, achieves among the best accuracy for perspective images, and can handle both cuboid-shaped and more general Manhattan layouts.",
"title": ""
},
{
"docid": "3eea5fa01ddd5bef75de7d0a4184bd30",
"text": "Monodisperse samples of silver nanocubes were synthesized in large quantities by reducing silver nitrate with ethylene glycol in the presence of poly(vinyl pyrrolidone) (PVP). These cubes were single crystals and were characterized by a slightly truncated shape bounded by {100}, {110}, and {111} facets. The presence of PVP and its molar ratio (in terms of repeating unit) relative to silver nitrate both played important roles in determining the geometric shape and size of the product. The silver cubes could serve as sacrificial templates to generate single-crystalline nanoboxes of gold: hollow polyhedra bounded by six {100} and eight {111} facets. Controlling the size, shape, and structure of metal nanoparticles is technologically important because of the strong correlation between these parameters and optical, electrical, and catalytic properties.",
"title": ""
},
{
"docid": "e96cc82ca99adb611b4f11a98dc963fd",
"text": "With the routine use of electronic health records (EHRs) in hospitals, health systems, and physician practices, there has been rapid growth in the availability of health care data over the last decade. In addition to the structured data in EHRs, new methods such as natural language processing can derive meaning from unstructured data, permitting the capture of substantial clinical information embedded in clinical notes. Furthermore, the growth in the availability of registries and claims data and the linkages between all these data sources have created a big data platform in health care, vast in both size and scope. Concurrently, new computational machine learning approaches promise ever-more-accurate prediction. The marvel of Google and of Watson, the inexorability of Moore’s law (ie, computing power doubles every 2 years for the same cost), suggest a future in which medicine will be transformed into an information science, and each clinical decision may be optimized based on a forecasting of outcomes under alternative treatment options, beyond the knowledge and understanding of the individual physician. Yet despite these innovations and those to come, quantitative risk prediction in medicine has been available for several decades, based on more classical",
"title": ""
},
{
"docid": "ecc4f1d5fb66b816daa9ae514bd58b45",
"text": "In this paper, we introduce SLQS, a new entropy-based measure for the unsupervised identification of hypernymy and its directionality in Distributional Semantic Models (DSMs). SLQS is assessed through two tasks: (i.) identifying the hypernym in hyponym-hypernym pairs, and (ii.) discriminating hypernymy among various semantic relations. In both tasks, SLQS outperforms other state-of-the-art measures.",
"title": ""
},
{
"docid": "d15e7e655e7afc86e30e977516de7720",
"text": "We propose a new learning-based method for estimating 2D human pose from a single image, using Dual-Source Deep Convolutional Neural Networks (DS-CNN). Recently, many methods have been developed to estimate human pose by using pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective. In this paper, we propose to integrate both the local (body) part appearance and the holistic view of each local part for more accurate human pose estimation. Specifically, the proposed DS-CNN takes a set of image patches (category-independent object proposals for training and multi-scale sliding windows for testing) as the input and then learns the appearance of each local part by considering their holistic views in the full body. Using DS-CNN, we achieve both joint detection, which determines whether an image patch contains a body joint, and joint localization, which finds the exact location of the joint in the image patch. Finally, we develop an algorithm to combine these joint detection/localization results from all the image patches for estimating the human pose. The experimental results show the effectiveness of the proposed method by comparing to the state-of-the-art human-pose estimation methods based on pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective.",
"title": ""
},
{
"docid": "87748c1fc9dc379c2225c92d2218e278",
"text": "If components (denoted by horizontal and vertical axis in Figure 2a) are correlated, then samples (points in Figure 2a) are in a non-spherical shape, then eigenvalues are mutually different. Hence correlation leads to non-uniformity of eigenvalues. Since the eigenvectors are orthogonal by design, it suffices to focus on eigenvalues only. To reduce correlation, we encourage the eigenvalues to be uniform (Figure 2b). Rotation does not affect eigenvalues or uncorrelation. For a component matrix A and rotation matrix R, A>A equals to A>R>RA and they have the same eigendecomposition (say UEU>). Ensuring the eigenvalue matrix E is close to identity implies the latent components are rotations of the orthonormal (and hence uncorrelated) eigenvectors.",
"title": ""
},
{
"docid": "4fec66381a581c310921be16077e049e",
"text": "Saliva is increasingly recognised as an attractive diagnostic fluid. The presence of various disease signalling salivary biomarkers that accurately reflect normal and disease states in humans and the sampling benefits compared to blood sampling are some of the reasons for this recognition. This explains the burgeoning research field in assay developments and technological advancements for the detection of various salivary biomarkers to improve clinical diagnosis, management, and treatment. This paper reviews the significance of salivary biomarkers for clinical diagnosis and therapeutic applications, with focus on the technologies and biosensing platforms that have been reported for screening these biomarkers.",
"title": ""
},
{
"docid": "51b766b0a7f1e3bc1f49d16df04a69f7",
        "text": "This study reports the results of a biometrical genetical analysis of scores on a personality inventory (The Eysenck Personality Questionnaire, or EPQ), which purports to measure psychoticism, neuroticism, extraversion and dissimulation (Lie Scale). The subjects were 544 pairs of twins, from the Maudsley Twin Register. The purpose of the study was to test the applicability of various genotype-environmental models concerning the causation of P scores. Transformation of the raw scores is required to secure a scale on which the effects of genes and environment are additive. On such a scale 51% of the variation in P is due to environmental differences within families, but the greater part (77%) of this environmental variation is due to random effects which are unlikely to be controllable. The genetical consequences of assortative mating were too slight to be detectable in this study, and the genetical variation is consistent with the hypothesis that gene effects are additive. This is a general finding for traits which have been subjected to stabilizing selection. Our model for P is consistent with those advanced elsewhere to explain the origin of certain kinds of psychopathology. The data provide little support for the view that the \"family environment\" (including the environmental influence of parents) plays a major part in the determination of individual differences in P, though we cite evidence suggesting that sibling competition effects are producing genotype-environmental covariation for the determinants of P in males. The genetical and environmental determinants of the covariation of P with other personality dimensions are considered. Assumptions are discussed and tested where possible.",
"title": ""
}
] |
scidocsrr
|
c24c221108ccd8a800085c817325004b
|
A Strategy for an Uncompromising Incremental Learner
|
[
{
"docid": "35625f248c81ebb5c20151147483f3f6",
        "text": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions [3]. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators [1] have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.",
"title": ""
},
{
"docid": "f4bc0b7aa15de139ddb09e406fc1ce0b",
"text": "This paper reviews the problem of catastrophic forgetting (the loss or disruption of previously learned information when new information is learned) in neural networks, and explores rehearsal mechanisms (the retraining of some of the previously learned information as the new information is added) as a potential solution. We replicate some of the experiments described by Ratcliff (1990), including those relating to a simple “recency” based rehearsal regime. We then develop further rehearsal regimes which are more effective than recency rehearsal. In particular “sweep rehearsal” is very successful at minimising catastrophic forgetting. One possible limitation of rehearsal in general, however, is that previously learned information may not be available for retraining. We describe a solution to this problem, “pseudorehearsal”, a method which provides the advantages of rehearsal without actually requiring any access to the previously learned information (the original training population) itself. We then suggest an interpretation of these rehearsal mechanisms in the context of a function approximation based account of neural network learning. Both rehearsal and pseudorehearsal may have practical applications, allowing new information to be integrated into an existing network with minimum disruption of old information.",
"title": ""
}
] |
[
{
"docid": "aa7114bf0038f2ab4df6908ed7d28813",
"text": "Sematch is an integrated framework for the development, evaluation and application of semantic similarity for Knowledge Graphs. The framework provides a number of similarity tools and datasets, and allows users to compute semantic similarity scores of concepts, words, and entities, as well as to interact with Knowledge Graphs through SPARQL queries. Sematch focuses on knowledge-based semantic similarity that relies on structural knowledge in a given taxonomy (e.g. depth, path length, least common subsumer), and statistical information contents. Researchers can use Sematch to develop and evaluate semantic similarity metrics and exploit these metrics in applications. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dd5073ad01ebf1cfff678d3689c37567",
"text": "Personalized top-N recommendation systems have great impact on many real world applications such as E-commerce platforms and social networks. Most existing methods produce personalized topN recommendations by minimizing a specific uniform loss such as pairwise ranking loss or pointwise recovery loss. In this paper, we propose a novel personalized top-N recommendation approach that minimizes a combined heterogeneous loss based on linear self-recovery models. The heterogeneous loss integrates the strengths of both pairwise ranking loss and pointwise recovery loss to provide more informative recommendation predictions. We formulate the learning problem with heterogeneous loss as a constrained convex minimization problem and develop a projected stochastic gradient descent optimization algorithm to solve it. We evaluate the proposed approach on a set of personalized top-N recommendation tasks. The experimental results show the proposed approach outperforms a number of state-of-the-art methods on top-N recommendation.",
"title": ""
},
{
"docid": "2f3734b49e9d2e6ea7898622dac8a296",
"text": "Dropout prediction in MOOCs is a well-researched problem where we classify which students are likely to persist or drop out of a course. Most research into creating models which can predict outcomes is based on student engagement data. Why these students might be dropping out has only been studied through retroactive exit surveys. This helps identify an important extension area to dropout prediction— how can we interpret dropout predictions at the student and model level? We demonstrate how existing MOOC dropout prediction pipelines can be made interpretable, all while having predictive performance close to existing techniques. We explore each stage of the pipeline as design components in the context of interpretability. Our end result is a layer which longitudinally interprets both predictions and entire classification models of MOOC dropout to provide researchers with in-depth insights of why a student is likely to dropout.",
"title": ""
},
{
"docid": "783a68d5946c1b1a6087ee2c58f1db5b",
"text": "Test-driven development is a discipline of design and programming where every line of new code is written in response to a test the programmer writes just before coding. This special issue of IEEE Software includes seven feature articles on various aspects of TDD and a Point/Counterpoint debate on the use of mock objects in applying it. The articles demonstrate the ways TDD is being used in nontrivial situations (database development, embedded software development, GUI development, performance tuning), signifying an adoption level for the practice beyond the visionary phase and into the early mainstream. In this introduction to the special issue on TDD, the guest editors also summarize selected TDD empirical studies from industry and academia.",
"title": ""
},
{
"docid": "e5500cfa74f231b0ae4ce1c56d59568c",
"text": "We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method “Deep Variational Information Bottleneck”, or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.",
"title": ""
},
{
"docid": "febc387da7c4ee2c576393d54a0c142e",
"text": "Sensors measure physical quantities of the environment for sensing and actuation systems, and are widely used in many commercial embedded systems such as smart devices, drones, and medical devices because they offer convenience and accuracy. As many sensing and actuation systems depend entirely on data from sensors, these systems are naturally vulnerable to sensor spoofing attacks that use fabricated physical stimuli. As a result, the systems become entirely insecure and unsafe. In this paper, we propose a new type of sensor spoofing attack based on saturation. A sensor shows a linear characteristic between its input physical stimuli and output sensor values in a typical operating region. However, if the input exceeds the upper bound of the operating region, the output is saturated and does not change as much as the corresponding changes of the input. Using saturation, our attack can make a sensor to ignore legitimate inputs. To demonstrate our sensor spoofing attack, we target two medical infusion pumps equipped with infrared (IR) drop sensors to control precisely the amount of medicine injected into a patients’ body. Our experiments based on analyses of the drop sensors show that the output of them could be manipulated by saturating the sensors using an additional IR source. In addition, by analyzing the infusion pumps’ firmware, we figure out the vulnerability in the mechanism handling the output of the drop sensors, and implement a sensor spoofing attack that can bypass the alarm systems of the targets. As a result, we show that both over-infusion and under-infusion are possible: our spoofing attack can inject up to 3.33 times the intended amount of fluid or 0.65 times of it for a 10 minute period.",
"title": ""
},
{
"docid": "fb173d15e079fcdf0cc222f558713f9c",
"text": "Structured data summarization involves generation of natural language summaries from structured input data. In this work, we consider summarizing structured data occurring in the form of tables as they are prevalent across a wide variety of domains. We formulate the standard table summarization problem, which deals with tables conforming to a single predefined schema. To this end, we propose a mixed hierarchical attention based encoderdecoder model which is able to leverage the structure in addition to the content of the tables. Our experiments on the publicly available WEATHERGOV dataset show around 18 BLEU (∼ 30%) improvement over the current state-of-the-art.",
"title": ""
},
{
"docid": "0f85ce6afd09646ee1b5242a4d6122d1",
"text": "Environmental concern has resulted in a renewed interest in bio-based materials. Among them, plant fibers are perceived as an environmentally friendly substitute to glass fibers for the reinforcement of composites, particularly in automotive engineering. Due to their wide availability, low cost, low density, high-specific mechanical properties, and eco-friendly image, they are increasingly being employed as reinforcements in polymer matrix composites. Indeed, their complex microstructure as a composite material makes plant fiber a really interesting and challenging subject to study. Research subjects about such fibers are abundant because there are always some issues to prevent their use at large scale (poor adhesion, variability, low thermal resistance, hydrophilic behavior). The choice of natural fibers rather than glass fibers as filler yields a change of the final properties of the composite. One of the most relevant differences between the two kinds of fiber is their response to humidity. Actually, glass fibers are considered as hydrophobic whereas plant fibers have a pronounced hydrophilic behavior. Composite materials are often submitted to variable climatic conditions during their lifetime, including unsteady hygroscopic conditions. However, in humid conditions, strong hydrophilic behavior of such reinforcing fibers leads to high level of moisture absorption in wet environments. This results in the structural modification of the fibers and an evolution of their mechanical properties together with the composites in which they are fitted in. Thereby, the understanding of these moisture absorption mechanisms as well as the influence of water on the final properties of these fibers and their composites is of great interest to get a better control of such new biomaterials. This is the topic of this review paper.",
"title": ""
},
{
"docid": "5bfc5768cf41643a870e3f3dddbbd741",
        "text": "Homomorphic encryption has progressed rapidly in both efficiency and versatility since its emergence in 2009. Meanwhile, a multitude of pressing privacy needs — ranging from cloud computing to healthcare management to the handling of shared databases such as those containing genomics data — call for immediate solutions that apply fully homomorphic encryption (FHE) and somewhat homomorphic encryption (SHE) technologies. Further progress towards these ends requires new ideas for the efficient implementation of algebraic operations on word-based (as opposed to bit-wise) encrypted data. Whereas handling data encrypted at the bit level leads to prohibitively slow algorithms for the arithmetic operations that are essential for cloud computing, the word-based approach hits its bottleneck when operations such as integer comparison are needed. In this work, we tackle this challenging problem, proposing solutions to problems — including comparison and division — in word-based encryption via a leveled FHE scheme. We present concrete performance figures for all proposed primitives.",
"title": ""
},
{
"docid": "0ab4f0cf03c0a2d72b4e9ed079181a67",
"text": "In this paper, we present a method for estimating articulated human poses in videos. We cast this as an optimization problem defined on body parts with spatio-temporal links between them. The resulting formulation is unfortunately intractable and previous approaches only provide approximate solutions. Although such methods perform well on certain body parts, e.g., head, their performance on lower arms, i.e., elbows and wrists, remains poor. We present a new approximate scheme with two steps dedicated to pose estimation. First, our approach takes into account temporal links with subsequent frames for the less-certain parts, namely elbows and wrists. Second, our method decomposes poses into limbs, generates limb sequences across time, and recomposes poses by mixing these body part sequences. We introduce a new dataset \"Poses in the Wild\", which is more challenging than the existing ones, with sequences containing background clutter, occlusions, and severe camera motion. We experimentally compare our method with recent approaches on this new dataset as well as on two other benchmark datasets, and show significant improvement.",
"title": ""
},
{
"docid": "68ad03bca3696f1163ba1d09ae1115e0",
"text": "Manually labeling datasets with object masks is extremely time consuming. In this work, we follow the idea of Polygon-RNN [4] to produce polygonal annotations of objects interactively using humans-in-the-loop. We introduce several important improvements to the model: 1) we design a new CNN encoder architecture, 2) show how to effectively train the model with Reinforcement Learning, and 3) significantly increase the output resolution using a Graph Neural Network, allowing the model to accurately annotate high-resolution objects in images. Extensive evaluation on the Cityscapes dataset [8] shows that our model, which we refer to as Polygon-RNN++, significantly outperforms the original model in both automatic (10% absolute and 16% relative improvement in mean IoU) and interactive modes (requiring 50% fewer clicks by annotators). We further analyze the cross-domain scenario in which our model is trained on one dataset, and used out of the box on datasets from varying domains. The results show that Polygon-RNN++ exhibits powerful generalization capabilities, achieving significant improvements over existing pixel-wise methods. Using simple online fine-tuning we further achieve a high reduction in annotation time for new datasets, moving a step closer towards an interactive annotation tool to be used in practice.",
"title": ""
},
{
"docid": "3bc11fc80cffedb28465b506e3cd17d4",
        "text": "The construction industry has been facing a paradigm shift to (i) increase productivity, efficiency, infrastructure value, quality and sustainability, and (ii) reduce lifecycle costs, lead times and duplications via effective collaboration and communication of stakeholders in construction projects. This paradigm shift is becoming more critical with remote construction projects, which reveal unique and even more complicated challenging problems in relation to communication and management due to the remoteness of the construction sites. On the other hand, Building Information Modelling (BIM) is offered by some as the panacea to addressing the interdisciplinary inefficiencies in construction projects. Although in many cases the adoption of BIM has numerous potential benefits, it also raises interesting challenges with regards to how BIM integrates the business processes of individual practices. This paper aims to show how BIM adoption for an architectural company helps to mitigate the management and communication problems in remote construction projects. The paper adopts a case study methodology, which is a UK Knowledge Transfer Partnership (KTP) project of BIM adoption between the University of Salford, UK and John McCall Architects (JMA), in which the BIM use between the architectural company and the main contractor for a remote construction project is elaborated and justified. Research showed that the key management and communication problems such as poor quality of construction works, unavailability of materials, and ineffective planning and scheduling can largely be mitigated by adopting BIM at the design stage.",
"title": ""
},
{
"docid": "211cf327b65cbd89cf635bbeb5fa9552",
        "text": "BACKGROUND\nAdvanced mobile communications and portable computation are now combined in handheld devices called \"smartphones\", which are also capable of running third-party software. The number of smartphone users is growing rapidly, including among healthcare professionals. The purpose of this study was to classify smartphone-based healthcare technologies as discussed in academic literature according to their functionalities, and summarize articles in each category.\n\n\nMETHODS\nIn April 2011, MEDLINE was searched to identify articles that discussed the design, development, evaluation, or use of smartphone-based software for healthcare professionals, medical or nursing students, or patients. A total of 55 articles discussing 83 applications were selected for this study from 2,894 articles initially obtained from the MEDLINE searches.\n\n\nRESULTS\nA total of 83 applications were documented: 57 applications for healthcare professionals focusing on disease diagnosis (21), drug reference (6), medical calculators (8), literature search (6), clinical communication (3), Hospital Information System (HIS) client applications (4), medical training (2) and general healthcare applications (7); 11 applications for medical or nursing students focusing on medical education; and 15 applications for patients focusing on disease management with chronic illness (6), ENT-related (4), fall-related (3), and two other conditions (2). The disease diagnosis, drug reference, and medical calculator applications were reported as most useful by healthcare professionals and medical or nursing students.\n\n\nCONCLUSIONS\nMany medical applications for smartphones have been developed and widely used by health professionals and patients. The use of smartphones is getting more attention in healthcare day by day. Medical applications make smartphones useful tools in the practice of evidence-based medicine at the point of care, in addition to their use in mobile clinical communication. Also, smartphones can play a very important role in patient education, disease self-management, and remote monitoring of patients.",
"title": ""
},
{
"docid": "a583bbf2deac0bf99e2790c47598cddd",
"text": "We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel without interference of the global interpreter lock. As part of this project, we introduce BatchPPO, an efficient implementation of the proximal policy optimization algorithm. By open sourcing TensorFlow Agents, we hope to provide a flexible starting point for future projects that accelerates future research in the field.",
"title": ""
},
{
"docid": "08775ff321ca341d84abca1a1fac7abd",
"text": "Preterm infants often have difficulties in learning how to suckle from the breast or how to drink from a bottle. As yet, it is unclear whether this is part of their prematurity or whether it is caused by neurological problems. Is it possible to decide on the basis of how an infant learns to suckle or drink whether it needs help and if so, what kind of help? In addition, can any predictions be made regarding the relationship between these difficulties and later neurodevelopmental outcome? We searched the literature for recent insights into the development of sucking and the factors that play a role in acquiring this skill. Our aim was to find a diagnostic tool that focuses on the readiness for feeding or that provides guidelines for interventions. At the same time, we searched for studies on the relationship between early sucking behavior and developmental outcome. It appeared that there is a great need for a reliable, user-friendly and noninvasive diagnostic tool to study sucking in preterm and full-term infants.",
"title": ""
},
{
"docid": "10124ea154b8704c3a6aaec7543ded57",
"text": "Tomato bacterial wilt and canker, caused by Clavibacter michiganensis subsp. michiganensis (Cmm) is considered one of the most important bacterial diseases of tomato worldwide. During the last two decades, severe outbreaks have occurred in greenhouses in the horticultural belt of Buenos Aires-La Plata, Argentina. Cmm strains collected in this area over a period of 14 years (2000–2013) were characterized for genetic diversity by rep-PCR genomic fingerprinting and level of virulence in order to have a better understanding of the source of inoculum and virulence variability. Analyses of BOX-, ERIC- and REP-PCR fingerprints revealed that the strains were genetically diverse; the same three fingerprint types were obtained in all three cases. No relationship could be established between rep-PCR clustering and the year, location or greenhouse origin of isolates, which suggests different sources of inoculum. However, in a few cases, bacteria with identical fingerprint types were isolated from the same greenhouse in different years. Despite strains differing in virulence, particularly within BOX-PCR groups, putative virulence genes located in plasmids (celA, pat-1) or in a pathogenicity island in the chromosome (tomA, chpC, chpG and ppaA) were detected in all strains. Our results suggest that new strains introduced every year via seed importation might be coexisting with others persisting locally. This study highlights the importance of preventive measures to manage tomato bacterial wilt and canker.",
"title": ""
},
{
"docid": "b18bb896338bdfddfd0a3e0a0518e8fe",
"text": "Recent studies have shown that deep neural networks (DNN) are vulnerable to adversarial samples: maliciously-perturbed samples crafted to yield incorrect model outputs. Such attacks can severely undermine DNN systems, particularly in security-sensitive settings. It was observed that an adversary could easily generate adversarial samples by making a small perturbation on irrelevant feature dimensions that are unnecessary for the current classification task. To overcome this problem, we introduce a defensive mechanism called DeepMask. By identifying and removing unnecessary features in a DNN model, DeepMask limits the capacity an attacker can use generating adversarial samples and therefore increase the robustness against such inputs. Comparing with other defensive approaches, DeepMask is easy to implement and computationally efficient. Experimental results show that DeepMask can increase the performance of state-of-the-art DNN models against adversarial samples.",
"title": ""
},
{
"docid": "96607113a8b6d0ca1c043d183420996b",
"text": "Primary retroperitoneal masses include a diverse, and often rare, group of neoplastic and non-neoplastic entities that arise within the retroperitoneum but do not originate from any retroperitoneal organ. Their overlapping appearances on cross-sectional imaging may pose a diagnostic challenge to the radiologist; familiarity with characteristic imaging features, together with relevant clinical information, helps to narrow the differential diagnosis. In this article, a systematic approach to identifying and classifying primary retroperitoneal masses is described. The normal anatomy of the retroperitoneum is reviewed with an emphasis on fascial planes, retroperitoneal compartments, and their contents using cross-sectional imaging. Specific radiologic signs to accurately identify an intra-abdominal mass as primary retroperitoneal are presented, first by confirming the location as retroperitoneal and secondly by excluding an organ of origin. A differential diagnosis based on a predominantly solid or cystic appearance, including neoplastic and non-neoplastic entities, is elaborated. Finally, key diagnostic clues based on characteristic imaging findings are described, which help to narrow the differential diagnosis. This article provides a comprehensive overview of the cross-sectional imaging features of primary retroperitoneal masses, including normal retroperitoneal anatomy, radiologic signs of retroperitoneal masses and the differential diagnosis of solid and cystic, neoplastic and non-neoplastic retroperitoneal masses, with a view to assist the radiologist in narrowing the differential diagnosis.",
"title": ""
},
{
"docid": "74e9377af53582a598b919c2741d04d9",
"text": "PURPOSE\nTo review various types of electroencephalographic activities of the brain and present an overview of brain-computer interface (BCI) systems' history and their applications in rehabilitation.\n\n\nMETHODS\nA scoping review of published English literature on BCI application in the field of rehabilitation was undertaken. IEEE Xplore, ScienceDirect, Google Scholar and Scopus databases were searched since inception up to August 2012. All experimental studies published in English and discussed complete cycle of the BCI process was included in the review.\n\n\nRESULTS AND DISCUSSION\nIn total, 90 articles met the inclusion criteria and were reviewed. Various approaches that improve the accuracy and performance of BCI systems were discussed. Based on BCI's clinical application, reviewed articles were categorized into three groups: motion rehabilitation, speech rehabilitation and virtual reality control (VRC). Almost half of the reviewed papers (48%) concentrated on VRC. Speech rehabilitation and motion rehabilitation made up 33% and 19% of the reviewed papers, respectively. Among different types of electroencephalography signals, P300, steady state visual evoked potentials and motor imagery signals were the most common.\n\n\nCONCLUSIONS\nThis review discussed various applications of BCI in rehabilitation and showed how BCI can be used to improve the quality of life for people with neurological disabilities. It will develop and promote new models of communication and finally, will create an accurate, reliable, online communication between human brain and computer and reduces the negative effects of external stimuli on BCI performance. Implications for Rehabilitation The field of brain-computer interfaces (BCI) is rapidly advancing and it is expected to fulfill a critical role in rehabilitation of neurological disorders and in movement restoration in the forthcoming years. 
In the near future, BCI has notable potential to become a major tool used by people with disabilities to control locomotion and communicate with surrounding environment and, consequently, improve the quality of life for many affected persons. Electrical field recording at the scalp (i.e. electroencephalography) is the most likely method to be of practical value for clinical use as it is simple and non-invasive. However, some aspects need future improvements, such as the ability to separate close imagery signal (motion of extremities and phalanges that are close together).",
"title": ""
},
{
"docid": "364d57031cf64e2f8d1b6ab84409bc2e",
"text": "The ability to influence behaviour is central to many of the key policy challenges in areas such as health, finance and climate change. The usual route to behaviour change in economics and psychology has been to attempt to ‘change minds’ by influencing the way people think through information and incentives. There is, however, increasing evidence to suggest that ‘changing contexts’ by influencing the environments within which people act (in largely automatic ways) can have important effects on behaviour. We present a mnemonic, MINDSPACE, which gathers up the nine most robust effects that influence our behaviour in mostly automatic (rather than deliberate) ways. This framework is being used by policymakers as an accessible summary of the academic literature. To motivate further research and academic scrutiny, we provide some evidence of the effects in action and highlight some of the significant gaps in our knowledge. 2011 Elsevier B.V. All rights reserved. 0167-4870/$ see front matter 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.joep.2011.10.009 ⇑ Corresponding author. Tel.: +44 (0)2033127259. E-mail addresses: p.h.dolan@lse.ac.uk (P. Dolan), michael.hallsworth@instituteforgovernment.org.uk (M. Hallsworth), david.halpern@cabinet-office.x.gsi.gov.uk (D. Halpern), dominic.king05@imperial.ac.uk (D. King), robert.metcalfe@merton.ox.ac.uk (R. Metcalfe), i.vlaev@imperial.ac.uk (I. Vlaev). Journal of Economic Psychology 33 (2012) 264–277",
"title": ""
}
] |
scidocsrr
|
1a6a53f566e217c777d135c4399c876f
|
Even good bots fight: The case of Wikipedia
|
[
{
"docid": "1f4985ca0e188bfbf9145875cd7acfc5",
"text": "Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on ‘mind-less morality’ we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the ‘Method of Abstraction’ for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The ‘Method of Abstraction’ is explained in terms of an ‘interface’ or set of features or observables at a given ‘LoA’. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the ‘transition rules’ by which state is changed) at a given LoA. Morality may be thought of as a ‘threshold’ defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. 
That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary ‘cost’ of this facility is the extension of the class of agents and moral agents to embrace AAs.",
"title": ""
}
] |
[
{
"docid": "5945081c099c883d238dca2a1dfc821e",
"text": "Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5 % of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.",
"title": ""
},
{
"docid": "12507d8475f1628bb3ab3dbcfff5682c",
"text": "This paper presents a 3D model-based tracking suitable for indoor position control of an unmanned aerial vehicle (UAV). Given a 3D model of the edges of its environment, the UAV locates itself thanks to a robust multiple hypothesis tracker. The pose estimation is then fused to inertial data to provide the translational velocity required for the control. A hierarchical control is used to achieve positioning tasks. Experiments on a quad-rotor aerial vehicle validate the proposed approach.",
"title": ""
},
{
"docid": "5183794d8bef2d8f2ee4048d75a2bd3c",
"text": "Uncovering the topics within short texts, such as tweets and instant messages, has become an important task for many content analysis applications. However, directly applying conventional topic models (e.g. LDA and PLSA) on such short texts may not work well. The fundamental reason lies in that conventional topic models implicitly capture the document-level word co-occurrence patterns to reveal topics, and thus suffer from the severe data sparsity in short documents. In this paper, we propose a novel way for modeling topics in short texts, referred as biterm topic model (BTM). Specifically, in BTM we learn the topics by directly modeling the generation of word co-occurrence patterns (i.e. biterms) in the whole corpus. The major advantages of BTM are that 1) BTM explicitly models the word co-occurrence patterns to enhance the topic learning; and 2) BTM uses the aggregated patterns in the whole corpus for learning topics to solve the problem of sparse word co-occurrence patterns at document-level. We carry out extensive experiments on real-world short text collections. The results demonstrate that our approach can discover more prominent and coherent topics, and significantly outperform baseline methods on several evaluation metrics. Furthermore, we find that BTM can outperform LDA even on normal texts, showing the potential generality and wider usage of the new topic model.",
"title": ""
},
{
"docid": "26d06b650cffb1bf50d059087b307119",
"text": "Algorithms and decision making based on Big Data have become pervasive in all aspects of our daily lives lives (offline and online), as they have become essential tools in personal finance, health care, hiring, housing, education, and policies. It is therefore of societal and ethical importance to ask whether these algorithms can be discriminative on grounds such as gender, ethnicity, or health status. It turns out that the answer is positive: for instance, recent studies in the context of online advertising show that ads for high-income jobs are presented to men much more often than to women [Datta et al., 2015]; and ads for arrest records are significantly more likely to show up on searches for distinctively black names [Sweeney, 2013]. This algorithmic bias exists even when there is no discrimination intention in the developer of the algorithm. Sometimes it may be inherent to the data sources used (software making decisions based on data can reflect, or even amplify, the results of historical discrimination), but even when the sensitive attributes have been suppressed from the input, a well trained machine learning algorithm may still discriminate on the basis of such sensitive attributes because of correlations existing in the data. These considerations call for the development of data mining systems which are discrimination-conscious by-design. This is a novel and challenging research area for the data mining community.\n The aim of this tutorial is to survey algorithmic bias, presenting its most common variants, with an emphasis on the algorithmic techniques and key ideas developed to derive efficient solutions. The tutorial covers two main complementary approaches: algorithms for discrimination discovery and discrimination prevention by means of fairness-aware data mining. We conclude by summarizing promising paths for future research.",
"title": ""
},
{
"docid": "8165a77b36b7c7dd26e5f8223e2564a7",
"text": "A novel design method of a wideband dual-polarized antenna is presented by using shorted dipoles, integrated baluns, and crossed feed lines. Simulation and equivalent circuit analysis of the antenna are given. To validate the design method, an antenna prototype is designed, optimized, fabricated, and measured. Measured results verify that the proposed antenna has an impedance bandwidth of 74.5% (from 1.69 to 3.7 GHz) for VSWR < 1.5 at both ports, and the isolation between the two ports is over 30 dB. Stable gain of 8–8.7 dBi and half-power beamwidth (HPBW) of 65°–70° are obtained for 2G/3G/4G base station frequency bands (1.7–2.7 GHz). Compared to the other reported dual-polarized dipole antennas, the presented antenna achieves wide impedance bandwidth, high port isolation, stable antenna gain, and HPBW with a simple structure and compact size.",
"title": ""
},
{
"docid": "6c4d6eff1fb7ef03efc3197726545ed8",
"text": "Gait enjoys advantages over other biometrics in that it can be perceived from a distance and is di/cult to disguise. Current approaches are mostly statistical and concentrate on walking only. By analysing leg motion we show how we can recognise people not only by the walking gait, but also by the running gait. This is achieved by either of two new modelling approaches which employ coupled oscillators and the biomechanics of human locomotion as the underlying concepts. These models give a plausible method for data reduction by providing estimates of the inclination of the thigh and of the leg, from the image data. Both approaches derive a phase-weighted Fourier description gait signature by automated non-invasive means. One approach is completely automated whereas the other requires speci5cation of a single parameter to distinguish between walking and running. Results show that both gaits are potential biometrics, with running being more potent. By its basis in evidence gathering, this new technique can tolerate noise and low resolution. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "68835a12fbb7480c7b797ecc09260c75",
"text": "Spelling correction can assist individuals to input text data with machine using written language to obtain relevant information efficiently and effectively in. By referring to relevant applications such as web search, writing systems, recommend systems, document mining, typos checking before printing is very close to spelling correction. Individuals can input text, keyword, sentence how to interact with an intelligent system according to recommendations of spelling correction. This work presents a novel spelling error detection and correction method based on N-gram ranked inverted index is proposed to achieve this aim, spelling correction. According to the pronunciation and the shape similarity pattern, a dictionary is developed to help detect the possible spelling error detection. The inverted index is used to map the potential spelling error character to the possible corresponding characters either in character or word level. According to the N-gram score, the ranking in the list corresponding to possible character is illustrated. Herein, E-How net is used to be the knowledge representation of tradition Chinese words. The data sets provided by SigHan 7 bakeoff are used to evaluate the proposed method. Experimental results show the proposed methods can achieve accepted performance in subtask one, and outperform other approaches in subtask two.",
"title": ""
},
{
"docid": "a87e78bc603e269c8bdce67715bd3057",
"text": "Linear pulse amplifiers with current mode output are advantageous for driving high voltage capacitive loads like piezoelectric or electrorheological actuators. The new amplifier implements output voltages up to 10 kV with 2 kW peak power and 80 μs rise time. Low quiescent current of 0.6 mA minimizes static power losses. The structure consists of multiple stacked feedback loops in order to reach high transfer linearity and a symmetrical voltage division among the cascaded output power transistors. The linear approach eliminates the inherent ripple of switched mode power amplifiers and improves rise and fall times by at least one order of magnitude.",
"title": ""
},
{
"docid": "5d48cd6c8cc00aec5f7f299c346405c9",
"text": ".................................................................................................................................... iii Acknowledgments..................................................................................................................... iv Table of",
"title": ""
},
{
"docid": "45ef23f40fd4241b58b8cb0810695785",
"text": "Two-wheeled wheelchairs are considered highly nonlinear and complex systems. The systems mimic a double-inverted pendulum scenario and will provide better maneuverability in confined spaces and also to reach higher level of height for pick and place tasks. The challenge resides in modeling and control of the two-wheeled wheelchair to perform comparably to a normal four-wheeled wheelchair. Most common modeling techniques have been accomplished by researchers utilizing the basic Newton's Laws of motion and some have used 3D tools to model the system where the models are much more theoretical and quite far from the practical implementation. This article is aimed at closing the gap between the conventional mathematical modeling approaches where the integrated 3D modeling approach with validation on the actual hardware implementation was conducted. To achieve this, both nonlinear and a linearized model in terms of state space model were obtained from the mathematical model of the system for analysis and, thereafter, a 3D virtual prototype of the wheelchair was developed, simulated, and analyzed. This has increased the confidence level for the proposed platform and facilitated the actual hardware implementation of the two-wheeled wheelchair. Results show that the prototype developed and tested has successfully worked within the specific requirements established.",
"title": ""
},
{
"docid": "854eab1455c6d49b67dc9d0f4864409f",
"text": "We investigate the generalizability of deep learning based on the sensitivity to input perturbation. We hypothesize that the high sensitivity to the perturbation of data degrades the performance on it. To reduce the sensitivity to perturbation, we propose a simple and effective regularization method, referred to as spectral norm regularization, which penalizes the high spectral norm of weight matrices in neural networks. We provide supportive evidence for the abovementioned hypothesis by experimentally confirming that the models trained using spectral norm regularization exhibit better generalizability than other baseline methods.",
"title": ""
},
{
"docid": "13d94a3afd97c4c5f8839652c58ab05f",
"text": "We present an approach for learning to detect objects in still gray images, that is based on a sparse, part-based representation of objects. A vocabulary of information-rich object parts is automatically constructed from a set of sample images of the object class of interest. Images are then represented using parts from this vocabulary, along with spatial relations observed among them. Based on this representation, a feature-efficient learning algorithm is used to learn to detect instances of the object class. The framework developed can be applied to any object with distinguishable parts in a relatively fixed spatial configuration. We report experiments on images of side views of cars. Our experiments show that the method achieves high detection accuracy on a difficult test set of real-world images, and is highly robust to partial occlusion and background variation. In addition, we discuss and offer solutions to several methodological issues that are significant for the research community to be able to evaluate object detection",
"title": ""
},
{
"docid": "e82e44e851486b557948a63366486fef",
"text": "v Combinatorial and algorithmic aspects of identifying codes in graphs Abstract: An identifying code is a set of vertices of a graph such that, on the one hand, each vertex out of the code has a neighbour in the code (the domination property), and, on the other hand, all vertices have a distinct neighbourhood within the code (the separation property). In this thesis, we investigate combinatorial and algorithmic aspects of identifying codes. For the combinatorial part, we rst study extremal questions by giving a complete characterization of all nite undirected graphs having their order minus one as the minimum size of an identifying code. We also characterize nite directed graphs, in nite undirected graphs and in nite oriented graphs having their whole vertex set as the unique identifying code. These results answer open questions that were previously studied in the literature. We then study the relationship between the minimum size of an identifying code and the maximum degree of a graph. In particular, we give several upper bounds for this parameter as a function of the order and the maximum degree. These bounds are obtained using two techniques. The rst one consists in the construction of independent sets satisfying certain properties, and the second one is the combination of two tools from the probabilistic method: the Lovász local lemma and a Cherno bound. We also provide constructions of graph families related to this type of upper bounds, and we conjecture that they are optimal up to an additive constant. We also present new lower and upper bounds for the minimum cardinality of an identifying code in speci c graph classes. We study graphs of girth at least 5 and of given minimum degree by showing that the combination of these two parameters has a strong in uence on the minimum size of an identifying code. We apply these results to random regular graphs. 
Then, we give lower bounds on the size of a minimum identifying code of interval and unit interval graphs. Finally, we prove several lower and upper bounds for this parameter when considering line graphs. The latter question is tackled using the new notion of an edge-identifying code. For the algorithmic part, it is known that the decision problem associated with the notion of an identifying code is NP-complete, even for restricted graph classes. We extend the known results to other classes such as split graphs, co-bipartite graphs, line graphs or interval graphs. To this end, we propose polynomial-time reductions from several classical hard algorithmic problems. These results show that in many graph classes, the identifying code problem is computationally more di cult than related problems (such as the dominating set problem). Furthermore, we extend the knowledge of the approximability of the optimization problem associated to identifying codes. We extend the known result of NP-hardness of approximating this problem within a sub-logarithmic factor (as a function of the instance graph) to bipartite, split and co-bipartite graphs, respectively. We also extend the known result of its APX-hardness for graphs of given maximum degree to a subclass of split graphs, bipartite graphs of maximum degree 4 and line graphs. Finally, we show the existence of a PTAS algorithm for unit interval graphs. An identifying code is a set of vertices of a graph such that, on the one hand, each vertex out of the code has a neighbour in the code (the domination property), and, on the other hand, all vertices have a distinct neighbourhood within the code (the separation property). In this thesis, we investigate combinatorial and algorithmic aspects of identifying codes. For the combinatorial part, we rst study extremal questions by giving a complete characterization of all nite undirected graphs having their order minus one as the minimum size of an identifying code. 
We also characterize nite directed graphs, in nite undirected graphs and in nite oriented graphs having their whole vertex set as the unique identifying code. These results answer open questions that were previously studied in the literature. We then study the relationship between the minimum size of an identifying code and the maximum degree of a graph. In particular, we give several upper bounds for this parameter as a function of the order and the maximum degree. These bounds are obtained using two techniques. The rst one consists in the construction of independent sets satisfying certain properties, and the second one is the combination of two tools from the probabilistic method: the Lovász local lemma and a Cherno bound. We also provide constructions of graph families related to this type of upper bounds, and we conjecture that they are optimal up to an additive constant. We also present new lower and upper bounds for the minimum cardinality of an identifying code in speci c graph classes. We study graphs of girth at least 5 and of given minimum degree by showing that the combination of these two parameters has a strong in uence on the minimum size of an identifying code. We apply these results to random regular graphs. Then, we give lower bounds on the size of a minimum identifying code of interval and unit interval graphs. Finally, we prove several lower and upper bounds for this parameter when considering line graphs. The latter question is tackled using the new notion of an edge-identifying code. For the algorithmic part, it is known that the decision problem associated with the notion of an identifying code is NP-complete, even for restricted graph classes. We extend the known results to other classes such as split graphs, co-bipartite graphs, line graphs or interval graphs. To this end, we propose polynomial-time reductions from several classical hard algorithmic problems. 
These results show that in many graph classes, the identifying code problem is computationally more di cult than related problems (such as the dominating set problem). Furthermore, we extend the knowledge of the approximability of the optimization problem associated to identifying codes. We extend the known result of NP-hardness of approximating this problem within a sub-logarithmic factor (as a function of the instance graph) to bipartite, split and co-bipartite graphs, respectively. We also extend the known result of its APX-hardness for graphs of given maximum degree to a subclass of split graphs, bipartite graphs of maximum degree 4 and line graphs. Finally, we show the existence of a PTAS algorithm for unit interval graphs.",
"title": ""
},
{
"docid": "e409514050c2c822c489d213c4877da8",
"text": "In this paper, a multiinput dc–dc converter is proposed and studied for hybrid electric vehicles. Compared to conventional works, the output gain is enhanced. Fuel cell (FC), photovoltaic panel, and energy storage system are the input sources for the proposed converter. The FC is considered as the main power supply, and roof-top PV is employed to charge the battery, increase the efficiency, and reduce fuel economy. The converter has the capability of providing the demanded power by load in absence of one or two resources. Moreover, the power management strategy is described and applied in a control method. A prototype of the converter is also implemented and tested to verify the analysis.",
"title": ""
},
{
"docid": "dcf9cba8bf8e2cc3f175e63e235f6b81",
"text": "Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped internet of things (IoT) devices permeate into every aspect of modern life, it is increasingly important to run CNN inference, a computationally intensive application, on resource constrained devices. We present a technique for fast and energy-efficient CNN inference on mobile SoC platforms, which are projected to be a major player in the IoT space. We propose techniques for efficient parallelization of CNN inference targeting mobile GPUs, and explore the underlying tradeoffs. Experiments with running Squeezenet on three different mobile devices confirm the effectiveness of our approach. For further study, please refer to the project repository available on our GitHub page: https://github.com/mtmd/Mobile ConvNet.",
"title": ""
},
{
"docid": "fe65dd3bd5f11bea22c5421e84fad8da",
"text": "*This contract was given to the United States Association for Small Business and Entrepreneurship (USASBE) for a best doctoral student paper award, presented to the awardees at the USASBE annual meeting. The opinions and recommendations of the authors of this study do not necessarily reflect official policies of the SBA or other agencies of the U.S. government. Note The 2009 Office of Advocacy Best Doctoral Paper award was presented to Pankaj Patel and Rodney D'Souza, doctoral students at the University of Louisville, at the United States Association for Small Business and Entrepreneurship (USASBE) annual meeting. Purpose Export strategy has become increasingly important for SMEs in recent years. To realize the full potential of export strategy, SMEs must be able to address challenges in export markets successfully. A firm must have adequate capabilities to meet unique challenges in such efforts. However, SMEs are limited by their access to resources and capabilities. While prior studies have looked at the importance of organizational learning in export strategy, they have overlooked the firm capabilities that facilitate the use of the learning. As firms that partake in export activity are entrepreneurial in nature, these firms would benefit by proactively seeking new markets, engaging in innovative action to meet local market needs, and be able and willing to take risks by venturing into previously unknown markets. The authors of this paper propose that SMEs make use of capabilities such as entrepreneurial orientation in an attempt to reduce impediments to exporting, which in turn could lead to enhanced export performance. This study finds that proactivity and risk-taking play a role in enhancing export performance of SMEs. However, it did not find support for innovation as a factor that enhances export performance. These findings could mean that firms that are proactive in nature are better at reducing export impediments. 
This is because these firms are able to bring new products quickly into the marketplace, and are better able to anticipate future demand, creating a first mover advantage. The results of the study also suggest that risk-taking firms might choose strategies that move away from the status quo, thereby increasing the firm's engagement in process enhancements, new product services, innovative marketing techniques , and the like. The data for this report were collected for the National Federation of Independent Business by the executive interviewing group of The Gallup Organization. The survey focused on international trade efforts of small manufacturers …",
"title": ""
},
{
"docid": "fe8a65600caf3bdf3f5a81d2967da945",
"text": "Lane-level digital maps can simplify driving tasks for robotic cars as well as enhance performance and reliability for advanced driver assistance systems (ADAS) by providing strong priors about the driving environment. In this paper, we present a system for automatic generation of precise lane-level maps by using conventional low-cost sensors installed in most of current commercial cars. It mainly consists of two modules, i.e. road orthographic image generation and lane graph construction. First, we divide the global map into fixed local segments based on the road network topology. According to the local map segments, we accumulate the bird's eye view images of the road surface by fusing GPS, INS and visual odometry, and subsequently integrate them into synthetic orthographic images with the reference of the local map segments. Furthermore, the information of the driving lanes is extracted from the orthographic images and a large amount of vehicle trajectories, which is used to construct the lane graph of the map based on the lane models we proposed. Such a system can offer increased value as well as promote the automation level for today's commercial cars without being supplemented additional sensors. Experiments show promising results of the automatic map generation of the real-world roads, which substantiated the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "ad1d572a7ee58c92df5d1547fefba1e8",
"text": "The primary source for the blood supply of the head of the femur is the deep branch of the medial femoral circumflex artery (MFCA). In posterior approaches to the hip and pelvis the short external rotators are often divided. This can damage the deep branch and interfere with perfusion of the head. We describe the anatomy of the MFCA and its branches based on dissections of 24 cadaver hips after injection of neoprene-latex into the femoral or internal iliac arteries. The course of the deep branch of the MFCA was constant in its extracapsular segment. In all cases there was a trochanteric branch at the proximal border of quadratus femoris spreading on to the lateral aspect of the greater trochanter. This branch marks the level of the tendon of obturator externus, which is crossed posteriorly by the deep branch of the MFCA. As the deep branch travels superiorly, it crosses anterior to the conjoint tendon of gemellus inferior, obturator internus and gemellus superior. It then perforates the joint capsule at the level of gemellus superior. In its intracapsular segment it runs along the posterosuperior aspect of the neck of the femur dividing into two to four subsynovial retinacular vessels. We demonstrated that obturator externus protected the deep branch of the MFCA from being disrupted or stretched during dislocation of the hip in any direction after serial release of all other soft-tissue attachments of the proximal femur, including a complete circumferential capsulotomy. Precise knowledge of the extracapsular anatomy of the MFCA and its surrounding structures will help to avoid iatrogenic avascular necrosis of the head of the femur in reconstructive surgery of the hip and fixation of acetabular fractures through the posterior approach.",
"title": ""
},
{
"docid": "8ca30cd6fd335024690837c137f0d1af",
"text": "Non-negative matrix factorization (NMF) is a recently developed technique for finding parts-based, linear representations of non-negative data. Although it has successfully been applied in several applications, it does not always result in parts-based representations. In this paper, we show how explicitly incorporating the notion of ‘sparseness’ improves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF and for our extension. Our hope is that this will further the application of these methods to solving novel data-analysis problems.",
"title": ""
}
] |
scidocsrr
|
0b6a6c88d11505c4f43bac97cb34f9d5
|
Making isovists syntactic: isovist integration analysis
|
[
{
"docid": "cd068158b6bebadfb8242b6412ec5bbb",
"text": "artefacts, 65–67 built environments and, 67–69 object artefacts, 65–66 structuralism and, 66–67 See also Non–discursive technique Asymmetry, 88–89, 91 Asynchronous systems, 187 Autonomous architecture, 336–338",
"title": ""
}
] |
[
{
"docid": "8dde3827552256660089847a547e3c80",
"text": "Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques. Much of the progress in open-domain QA systems has been realized through advances in information retrieval methods and corpus construction. In this paper, we focus on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for gradeschool science exams. These questions are selected to be the most challenging for current QA systems, and current state of the art performance is only slightly better than random chance. We present a system that rewrites a given question into queries that are used to retrieve supporting text from a large corpus of science-related text. Our rewriter is able to incorporate background knowledge from ConceptNet and – in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results – outperforms several strong baselines on the end-to-end QA task despite only being trained to identify essential terms in the original source question. We use a generalizable decision methodology over the retrieved evidence and answer candidates to select the best answer. By combining query rewriting, background knowledge, and textual entailment our system is able to outperform several strong baselines on the ARC dataset.",
"title": ""
},
{
"docid": "ef742ded3107fe9c5812a7c866835117",
"text": "Much commentary has been circulating in academe regarding the research skills, or lack thereof, in members of “Generation Y,” the generation born between 1980 and 1994. The students currently on college campuses, as well as those due to arrive in the next few years, have grown up in front of electronic screens: television, movies, video games, computer monitors. It has been said that student critical thinking and other cognitive skills (as well as their physical well-being) are suffering because of the large proportion of time spent in sedentary pastimes, passively absorbing words and images, rather than in reading. It may be that students’ cognitive skills are not fully developing due to ubiquitous electronic information technologies. However, it may also be that academe, and indeed the entire world, is currently in the middle of a massive and wide-ranging shift in the way knowledge is disseminated and learned.",
"title": ""
},
{
"docid": "7d272ece462c9496d292e2df242ab493",
"text": "This study evaluated phases of adventure experiences by identifying flow and reversal theory states over a 3-day white-water river surfing course. Data were collected with novice river surfers (n = S) via in-depth qualitative interviews using head-mounted video cameras. Findings suggested that \"opposing\" experiential phases (i.e., telic and paratelic) may be symbiotic in adventure experiences and may facilitate flow experiences. These results may account for the dynamic nature of enjoyment, flow, and motivational states within adventure experiences. Future research should seek to validate the phasic models presented herein and evaluate their potential applicability to other adventurous activities.",
"title": ""
},
{
"docid": "a2575a6a0516db2e47aab0388c5e9677",
"text": "Isaac Miller and Mark Campbell Sibley School of Mechanical and Aerospace Engineering Dan Huttenlocher and Frank-Robert Kline Computer Science Department Aaron Nathan, Sergei Lupashin, and Jason Catlin School of Electrical and Computer Engineering Brian Schimpf School of Operations Research and Information Engineering Pete Moran, Noah Zych, Ephrahim Garcia, Mike Kurdziel, and Hikaru Fujishima Sibley School of Mechanical and Aerospace Engineering Cornell University Ithaca, New York 14853 e-mail: itm2@cornell.edu, mc288@cornell.edu, dph@cs.cornell.edu, amn32@cornell.edu, fk36@cornell.edu, pfm24@cornell.edu, ncz2@cornell.edu, bws22@cornell.edu, sv15@cornell.edu, eg84@cornell.edu, jac267@cornell.edu, msk244@cornell.edu, hf86@cornell.edu",
"title": ""
},
{
"docid": "d565270afe051fd6b385fea75023b91b",
"text": "AIM\nTo document the clinicopathological characteristics and analyze the possible reasons for misdiagnosis or missed diagnosis of hepatoid adenocarcinoma of the stomach (HAS), using data from a single center.\n\n\nMETHODS\nWe retrospectively analyzed 19 patients initially diagnosed as HAS and 7 patients initially diagnosed as common gastric cancer with high levels of serum α-fetoprotein (AFP). All had undergone surgical treatment, except 3 patients only had biopsies at our hospital. Immunohistochemistry for AFP and Hepatocyte antigen was performed. Final diagnosis for these 26 patients were made after HE and immunohistochemistry slides reviewed by 2 experienced pathologists. Prognostic factors were determined by univariate analysis.\n\n\nRESULTS\nNineteen cases were confirmed to be HAS. A total of 4 out of 19 cases initially diagnosed as HAS and 4 out of 7 cases initially diagnosed as common gastric adenocarcinoma were misdiagnosed/missed diagnosed, thus, the misdiagnosis/missed diagnosis rate was 30.8% (8/26). The incidence of HAS among gastric cancer in our center was 0.19% (19/9915). Sixteen (84.2%) patients showed T stages greater than T2, 12 (70.6%) patients had positive lymph nodes in 17 available patients and 3 (15.8%) of the patients with tumors presented liver metastasis at the time of diagnosis. Histologically, cytoplasmic staining types included 10 cases of eosinophilic, 1 case of clear, 5 cases of clear mixed with eosinophilic and 3 cases of basophilic. Fourteen (73.7%) patients expressed AFP, whereas only 6 (31.6%) were hepatocyte-positive. Univariate analysis showed that N stage (HR 2.429, P=0.007) and tumor AFP expression (HR 0.428, P=0.036) were significantly associated with disease-free survival. The median overall survival time was 12.0 months, and the median disease-free survival time was 7.0 months. Four (80%) of 5 N0 patients and 2 (50%) of 4 N1 patients survived without progression, but no N2-3 patients survived.\n\n\nCONCLUSION\nHAS remains easily being misdiagnosed/missed diagnosed based on a pathological examination, probably because the condition is rare and has various cytoplasmic types. Although the survival rate for HAS is poor, a curative effect may be achieved for N0 or N1 cases.",
"title": ""
},
{
"docid": "443a4fe9e7484a18aa53a4b142d93956",
"text": "BACKGROUND AND PURPOSE\nFrequency and duration of static stretching have not been extensively examined. Additionally, the effect of multiple stretches per day has not been evaluated. The purpose of this study was to determine the optimal time and frequency of static stretching to increase flexibility of the hamstring muscles, as measured by knee extension range of motion (ROM).\n\n\nSUBJECTS\nNinety-three subjects (61 men, 32 women) ranging in age from 21 to 39 years and who had limited hamstring muscle flexibility were randomly assigned to one of five groups. The four stretching groups stretched 5 days per week for 6 weeks. The fifth group, which served as a control, did not stretch.\n\n\nMETHODS\nData were analyzed with a 5 x 2 (group x test) two-way analysis of variance for repeated measures on one variable (test).\n\n\nRESULTS\nThe change in flexibility appeared to be dependent on the duration and frequency of stretching. Further statistical analysis of the data indicated that the groups that stretched had more ROM than did the control group, but no differences were found among the stretching groups.\n\n\nCONCLUSION AND DISCUSSION\nThe results of this study suggest that a 30-second duration is an effective amount of time to sustain a hamstring muscle stretch in order to increase ROM. No increase in flexibility occurred when the duration of stretching was increased from 30 to 60 seconds or when the frequency of stretching was increased from one to three times per day.",
"title": ""
},
{
"docid": "08b845d6e8770e7f4ee17c977f6878d1",
"text": "PURPOSE\nThe present study describes the results of using a processed nerve allograft, Avance Nerve Graft, as an extracellular matrix scaffold for the reconstruction of lingual nerve (LN) and inferior alveolar nerve (IAN) discontinuities.\n\n\nPATIENTS AND METHODS\nA retrospective analysis of the neurosensory outcomes for 26 subjects with 28 LN and IAN discontinuities reconstructed with a processed nerve allograft was conducted to determine the treatment effectiveness and safety. Sensory assessments were conducted preoperatively and 3, 6, and 12 months after surgical reconstruction. The outcomes population, those with at least 6 months of postoperative follow-up, included 21 subjects with 23 nerve defects. The neurosensory assessments included brush stroke directional sensation, static 2-point discrimination, contact detection, pressure pain threshold, and pressure pain tolerance. Using the clinical neurosensory testing scale, sensory impairment scores were assigned preoperatively and at each follow-up appointment. Improvement was defined as a score of normal, mild, or moderate.\n\n\nRESULTS\nThe neurosensory outcomes from LNs and IANs that had been microsurgically repaired with a processed nerve allograft were promising. Of those with nerve discontinuities treated, 87% had improved neurosensory scores with no reported adverse experiences. Similar levels of improvement, 87% for the LNs and 88% for the IANs, were achieved for both nerve types. Also, 100% sensory improvement was achieved in injuries repaired within 90 days of the injury compared with 77% sensory improvement in injuries repaired after 90 days.\n\n\nCONCLUSIONS\nThese results suggest that processed nerve allografts are an acceptable treatment option for reconstructing trigeminal nerve discontinuities. Additional studies will focus on reviewing the outcomes of additional cases.",
"title": ""
},
{
"docid": "232d020c8b006063151050f3c5a67a3d",
"text": "An experimental approach to cut-mark investigation has proved particularly successful and should arguably be a prerequisite for individuals interested in developing standard methods to study butchery data. This paper offers a brief review of the criteria used to investigate cut marks and subsequently outlines recent research that has integrated results from replication studies of archaeological tools and cut marks with written resources to study historic butchery practices. The case is made for a degree of standardization to be incorporated into the recording of butchery data and for the integration of evidence from the analysis of cut marks and tool signatures. While the call for standardization is not without precedent the process would benefit from a suitable model: one is proposed herein based in large part on experimental replication and personal vocational experience gained in the modern butchery trade. Furthermore, the paper identifies issues that need to be kept at the forefront of an experimental approach to butchery investigation and places emphasis on the use of modern analogy and cultural theory as a means of improving our interpretation of cut-mark data.",
"title": ""
},
{
"docid": "c70c814c8b509b3635089387332fb374",
"text": "We have investigated the electromagnetic properties of a 3D wire mesh in a geometry resembling covalently bonded diamond. The frequency and wave vector dispersion show forbidden bands at frequencies near n0, corresponding to the lattice spacing, just as dielectric photonic crystals do. But wire meshes have a new forbidden band which commences at zero frequency and extends, in our geometry, to 1/2 n0, acting as a type of plasma cutoff frequency. Wire mesh photonic crystals appear to support a longitudinal plane wave, as well as two transverse plane waves. We identify an important new regime for microwave photonic crystals, an effective medium limit, in which electromagnetic waves penetrate deeply into the wire mesh through the aid of an impurity band.",
"title": ""
},
{
"docid": "7aa6b9cb3a7a78ec26aff130a1c9015a",
"text": "As critical infrastructures in the Internet, data centers have evolved to include hundreds of thousands of servers in a single facility to support data- and/or computing-intensive applications. For such large-scale systems, it becomes a great challenge to design an interconnection network that provides high capacity, low complexity, low latency and low power consumption. The traditional approach is to build a hierarchical packet network using switches and routers. This approach suffers from limited scalability in the aspects of power consumption, wiring and control complexity, and delay caused by multi-hop store-and-forwarding. In this paper we tackle the challenge by designing a novel switch architecture that supports direct interconnection of a huge number of server racks and provides switching capacity at the level of Petabit/s. Our design combines the best features of electronics and optics. Exploiting recent advances in optics, we propose to build a bufferless optical switch fabric that includes interconnected arrayed waveguide grating routers (AWGRs) and tunable wavelength converters (TWCs). The optical fabric is integrated with electronic buffering and control to perform high-speed switching with nanosecond-level reconfiguration overhead. In particular, our architecture reduces the wiring complexity from O(N) to O(sqrt(N)). We design a practical and scalable scheduling algorithm to achieve high throughput under various traffic load. We also discuss implementation issues to justify the feasibility of this design. Simulation shows that our design achieves good throughput and delay performance.",
"title": ""
},
{
"docid": "03dc2c32044a41715991d900bb7ec783",
"text": "The analysis of large scale data logged from complex cyber-physical systems, such as microgrids, often entails the discovery of invariants capturing functional as well as operational relationships underlying such large systems. We describe a latent factor approach to infer invariants underlying system variables and how we can leverage these relationships to monitor a cyber-physical system. In particular we illustrate how this approach helps rapidly identify outliers during system operation.",
"title": ""
},
{
"docid": "37b165b08544f801d7f37a926c75d828",
"text": "This study aims to investigate how customers perceive and adopt internet banking (IB) in Jordan. An extended model, based on the Technology Acceptance Model (TAM), was developed. Three more constructs were added to the model, namely: Perceived Risk (PR), Perceived Trust (PT) and Bank Credibility (BC). To empirically test the model’s ability to predict customers’ intention to adopt and use internet banking, a questionnaire was developed and used. A random sample of 500 graduate students at four Jordanian universities was surveyed. An exploratory factor analysis, correlation matrix, and a regression analysis were used to test the robustness of the model as well as to test the hypothesized relationships among variables. The results provide support to the extended TAM model and confirm its robustness in predicting customers’ intention to adopt and use internet banking. This study contributes to the body of literature about internet banking, and its results provide useful information for bank managers on how to deal with internet challenges in Jordan. Since this empirical study was performed with a time constraint, it is not without limitation.",
"title": ""
},
{
"docid": "7ac1412d56f00fd2defb4220938d9346",
"text": "Coingestion of protein with carbohydrate (CHO) during recovery from exercise can affect muscle glycogen synthesis, particularly if CHO intake is suboptimal. Another potential benefit of protein feeding is an increased synthesis rate of muscle proteins, as is well documented after resistance exercise. In contrast, the effect of nutrient manipulation on muscle protein kinetics after aerobic exercise remains largely unexplored. We tested the hypothesis that ingesting protein with CHO after a standardized 2-h bout of cycle exercise would increase mixed muscle fractional synthetic rate (FSR) and whole body net protein balance (WBNB) vs. trials matched for total CHO or total energy intake. We also examined whether postexercise glycogen synthesis could be enhanced by adding protein or additional CHO to a feeding protocol that provided 1.2 g CHO x kg(-1) x h(-1), which is the rate generally recommended to maximize this process. Six active men ingested drinks during the first 3 h of recovery that provided either 1.2 g CHO.kg(-1).h(-1) (L-CHO), 1.2 g CHO + 0.4 g protein x kg(-1) x h(-1) (PRO-CHO), or 1.6 g CHO x kg(-1) x h(-1) (H-CHO) in random order. Based on a primed constant infusion of l-[ring-(2)H(5)]phenylalanine, analysis of biopsies (vastus lateralis) obtained at 0 and 4 h of recovery showed that muscle FSR was higher (P < 0.05) in PRO-CHO (0.09 +/- 0.01%/h) vs. both L-CHO (0.07 +/- 0.01%/h) and H-CHO (0.06 +/- 0.01%/h). WBNB assessed using [1-(13)C]leucine was positive only during PRO-CHO, and this was mainly attributable to a reduced rate of protein breakdown. Glycogen synthesis rate was not different between trials. We conclude that ingesting protein with CHO during recovery from aerobic exercise increased muscle FSR and improved WBNB, compared with feeding strategies that provided CHO only and were matched for total CHO or total energy intake. However, adding protein or additional CHO to a feeding strategy that provided 1.2 g CHO x kg(-1) x h(-1) did not further enhance glycogen resynthesis during recovery.",
"title": ""
},
{
"docid": "1831e2a5a75fc85299588323d68947b2",
"text": "The Transaction Processing Performance Council (TPC) is completing development of TPC-DS, a new generation industry standard decision support benchmark. The TPC-DS benchmark, first introduced in “The Making of TPC-DS” [9] paper at the 32nd International Conference on Very Large Data Bases (VLDB), has now entered the TPC’s “Formal Review” phase for new benchmarks; companies and researchers alike can now download the draft benchmark specification and tools for evaluation. The first paper [9] gave an overview of the TPC-DS data model, workload model, and execution rules. This paper details the characteristics of different phases of the workload, namely: database load, query workload and data maintenance; and also their impact on the benchmark’s performance metric. As with prior TPC benchmarks, this workload will be widely used by vendors to demonstrate their capabilities to support complex decision support systems, by customers as a key factor in purchasing servers and software, and by the database community for research and development of optimization techniques.",
"title": ""
},
{
"docid": "715bdb66da243d731c4a6a5cf56e4711",
"text": "Current PC- and web-based applications provide insufficient security for the information they access, because vulnerabilities anywhere in a large client software stack can compromise confidentiality and integrity. We propose a new architecture for secure applications, Cloud Terminal, in which the only software running on the end host is a lightweight secure thin terminal, and most application logic is in a remote cloud rendering engine. The secure thin terminal has a very small TCB (23 KLOC) and no dependence on the untrusted OS, so it can be easily checked and remotely attested to. The terminal is also general-purpose: it simply supplies a secure display and input path to remote software. The cloud rendering engine runs an off-the-shelf application in a restricted VM hosted by the provider, but resource sharing between VMs lets one server support hundreds of users. We implement a secure thin terminal that runs on standard PC hardware and provides a responsive interface to applications like banking, email, and document editing. We also show that our cloud rendering engine can provide secure online banking for 5–10 cents per user per month.",
"title": ""
},
{
"docid": "375e3e87087290cef4fe9445184f2c91",
"text": "BACKGROUND\nDermal gel extra (DGE) is a new, tightly cross-linked hyaluronic acid (HA)-based dermal filler containing lidocaine engineered to resist gel deformation and degradation.\n\n\nOBJECTIVES\nTo develop a firmer gel product (DGE) and compare the efficacy and safety of DGE with nonanimal stabilized HA (NASHA) for correction of nasolabial folds (NLFs).\n\n\nMETHODS\nDGE physical properties were characterized, and 140 subjects with moderate to deep NLFs were treated with DGE and NASHA in a randomized, multicenter, split-face design study. Efficacy, pain, and satisfaction were measured using appropriate standard instruments. Adverse events were monitored throughout the study.\n\n\nRESULTS\nDGE has a higher modulus and a higher gel:fluid ratio than other HA fillers. Similar optimal correction was observed with DGE and NASHA through 36 weeks (9 months). Study subjects required less volume (p<.001) and fewer touch-ups (p=.005) and reported less injection pain (p<.001) with DGE treatment. Most adverse events were mild to moderate skin reactions.\n\n\nCONCLUSIONS\nDGE is a firm HA gel that required significantly less volume and fewer touch-ups to provide equivalent efficacy to NASHA for NLF correction; both dermal gels were well tolerated. DGE will provide a comfortable and cost-effective dermal filler option for clinicians and patients.",
"title": ""
},
{
"docid": "054cde7ac85562e1f96e69f0d769de29",
"text": "Research on the impact of nocturnal road traffic noise on sleep and the consequences on daytime functioning demonstrates detrimental effects that cannot be ignored. The physiological reactions due to continuing noise processing during night time lead to primary sleep disturbances, which in turn impair daytime functioning. This review focuses on noise processing in general and in relation to sleep, as well as methodological aspects in the study of noise and sleep. More specifically, the choice of a research setting and noise assessment procedure is discussed and the concept of sleep quality is elaborated. In assessing sleep disturbances, we differentiate between objectively measured and subjectively reported complaints, which demonstrates the need for further understanding of the impact of noise on several sleep variables. Hereby, mediating factors such as noise sensitivity appear to play an important role. Research on long term effects of noise intrusion on sleep up till now has mainly focused on cardiovascular outcomes. The domain might benefit from additional longitudinal studies on deleterious effects of noise on mental health and general well-being.",
"title": ""
},
{
"docid": "bf241075beac4fedfb0ad9f8551c652d",
"text": "This paper discloses a new very broadband compact transition between double-ridge waveguide and coaxial line. The transition includes an original waveguide to coaxial mode converter and modified impedance transformer. Very good performance is predicted theoretically and confirmed experimentally over a 3:1 bandwidth.",
"title": ""
},
{
"docid": "af40c4fe439738a72ee6b476aeb75f82",
"text": "Object tracking is still a critical and challenging problem with many applications in computer vision. For this challenge, more and more researchers pay attention to applying deep learning to obtain powerful features for better tracking accuracy. In this paper, a novel triplet loss is proposed to extract expressive deep features for object tracking by adding it into the Siamese network framework instead of the pairwise loss used for training. Without adding any inputs, our approach is able to utilize more elements for training to achieve more powerful features via the combination of original samples. Furthermore, we propose a theoretical analysis, combining comparison of gradients and back-propagation, to prove the effectiveness of our method. In experiments, we apply the proposed triplet loss to three real-time trackers based on the Siamese network. The results on several popular tracking benchmarks show that our variants operate at almost the same frame-rate as the baseline trackers while achieving superior tracking performance, as well as accuracy comparable with recent state-of-the-art real-time trackers.",
"title": ""
},
{
"docid": "782d654c2a5503bcfc9b7b88514606ec",
"text": "This paper presents a completely integrated, low-power 6.3 GHz oscillator transmitter which includes an on-chip antenna suitable for short-range medical sensor applications. The transmitter, implemented in a 1.2 V 0.13 μm CMOS process, utilizes open-loop direct VCO modulation for BFSK data at a rate of 300 kbps. For communicating a 1 kbit packet once per second, an average power consumption of 14 μW is achieved. During a packet transmission, the power consumption of the transmitter is 4.25 mW, enabling a self-powered design using integrated ultracapacitors for an SoC solution. With a radiated power of 0 dBm, the transmitter has a communication range of 2 m.",
"title": ""
}
] |
scidocsrr
|
139fdbeb15f658c3bf1eeda601608b47
|
Natural Language Interface for Databases Using a Dual-Encoder Model
|
[
{
"docid": "db65e9771d00293e21fe96c99a4896c5",
"text": "Synthesizing SQL queries from natural language is a long-standing open problem and has been attracting considerable interest recently. Toward solving the problem, the de facto approach is to employ a sequence-to-sequence-style model. Such an approach will necessarily require the SQL queries to be serialized. Since the same SQL query may have multiple equivalent serializations, training a sequence-to-sequence-style model is sensitive to the choice from one of them. This phenomenon is documented as the “order-matters” problem. Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations. However, we observe that the improvement from reinforcement learning is limited. In this paper, we propose a novel approach, i.e., SQLNet, to fundamentally solve this problem by avoiding the sequence-to-sequence structure when the order does not matter. In particular, we employ a sketch-based approach where the sketch contains a dependency graph so that one prediction can be done by taking into consideration only the previous predictions that it depends on. In addition, we propose a sequence-to-set model as well as the column attention mechanism to synthesize the query based on the sketch. By combining all these novel techniques, we show that SQLNet can outperform the prior art by 9% to 13% on the WikiSQL task.",
"title": ""
}
] |
[
{
"docid": "541075ddb29dd0acdf1f0cf3784c220a",
"text": "Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network for improving the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, we provide a new perspective based on a decision boundary, which is one of the most important components of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundary, so a good classifier bears a good decision boundary. Therefore, transferring information closely related to the decision boundary can be a good attempt for knowledge distillation. To realize this goal, we utilize an adversarial attack to discover samples supporting a decision boundary. Based on this idea, to transfer more accurate information about the decision boundary, the proposed algorithm trains a student classifier based on the adversarial samples supporting the decision boundary. Experiments show that the proposed method indeed improves knowledge distillation and achieves state-of-the-art performance.",
"title": ""
},
{
"docid": "9b18a0a598ad745c5abb08826a700be5",
"text": "The paper draws on in-depth qualitative comments from student evaluation of an e-learning module on an MSc in Information Technologies and Management, to develop a picture of their perspective on the experience. Questionnaires that yielded some basic quantitative data and a rich seam of qualitative data were administered. General questions on satisfaction and dissatisfaction identified the criteria that students used in evaluation, while specific questions of aspects of the module generated some insights into the student learning process. The criteria used by students when expressing satisfaction are: synergy between theory and practice; specific subject themes; discussion forums and other student interaction; and, other learning support. The themes that are associated with dissatisfaction include: robustness and usability of platform; access to resources (such as articles and books); currency of study materials; and, student work scheduling. Aspects of the student learning experience that should inform the development of e-learning include: each student engages differently; printing means that students use the integrated learning environment as a menu; discussion threads and interaction are appreciated, but students are unsure in making contributions; and, expectations about the tutor's role in e-learning are unformed.\n\nINTRODUCTION\nThere has been considerable interest in the potential for the development of e-learning in universities, schools (eg, Crook, 1998; DfES, 2003; Roussos, 1997), further education and the workplace (eg, Hughes & Attwell, 2003; Morgan, 2001; Sambrook, 2001). The development of e-learning products and the provision of e-learning opportunities is one of the most rapidly expanding areas of education and training, in both education and industry (Imel, 2002). Education and training is poised to become one of the largest sectors in the world economy. 
e-Learning is being recognised as having the power to transform the performance, knowledge and skills landscape (Gunasekaran, McNeil & Shaul, 2002). e-Learning is viewed variously as British Journal of Educational Technology Vol 38 No 4 2007 560–573 doi:10.1111/j.1467-8535.2007.00723.x © 2007 The Authors. Journal compilation © 2007 British Educational Communications and Technology Agency. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. having the potential to: improve the quality of learning; improve access to education and training; reduce the cost of education; and, improve the cost-effectiveness of education (Alexander, 2001). The research project reported in this paper is a contribution to the extension of understanding of the student experience of e-learning. Qualitative data was collected from learners to offer insights into their perceptions and expectations of the e-learning experience. The students chosen for this analysis are students on a module: Successful Information Systems on an MSc in Information Technologies and Management that was delivered in e-learning mode. These students are, by disciplinary background, IT (Information Technology) literate, are unlikely to be phased by the platform, and are mature students in work, who are studying part-time. They are typical of the students for whom it is widely proposed that e-learning is the most convenient and appropriate mode of delivery. Nevertheless, despite some very positive reports on the outcomes of the course in terms of its impact to working practice and within students’ organisations, there are a number of aspects of the student engagement with and experience of the course that offer insights into students’ practices that are worthy of further analysis and comment and may be of value to others delivering e-learning to international learning groups or communities. 
This paper starts with a literature review focussing on earlier work on e-learning practice and evaluation. The methodology is described, followed by an analysis of the results. Conclusions and recommendations for future research focus on the development of our understanding the criteria applied by students in evaluating an e-learning experience, and key aspects of the way in which students engage with an e-learning course.",
"title": ""
},
{
"docid": "b9404d66fa6cc759382c73d6ae16fc0c",
"text": "Aspect extraction is an important and challenging task in aspect-based sentiment analysis. Existing works tend to apply variants of topic models on this task. While fairly successful, these methods usually do not produce highly coherent aspects. In this paper, we present a novel neural approach with the aim of discovering coherent aspects. The model improves coherence by exploiting the distribution of word co-occurrences through the use of neural word embeddings. Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space. In addition, we use an attention mechanism to de-emphasize irrelevant words during training, further improving the coherence of aspects. Experimental results on real-life datasets demonstrate that our approach discovers more meaningful and coherent aspects, and substantially outperforms baseline methods on several evaluation tasks.",
"title": ""
},
{
"docid": "8b15435562b287eb97a6c573222797ec",
"text": "Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We give an exact connection to a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs. We show empirically that several learned networks are consistent with our mathematical analysis and then demonstrate that with such a simple theoretical framework, we can obtain reasonable reconstruction results on real images. We also discuss gaps between our model assumptions and the CNN trained for classification in practical scenarios.",
"title": ""
},
{
"docid": "7e884438ee8459a441cbe1500f1bac88",
"text": "We consider the problem of autonomously flying Miniature Aerial Vehicles (MAVs) in indoor environments such as home and office buildings. The primary long range sensor in these MAVs is a miniature camera. While previous approaches first try to build a 3D model in order to do planning and control, our method neither attempts to build nor requires a 3D model. Instead, our method first classifies the type of indoor environment the MAV is in, and then uses vision algorithms based on perspective cues to estimate the desired direction to fly. We test our method on two MAV platforms: a co-axial miniature helicopter and a toy quadrotor. Our experiments show that our vision algorithms are quite reliable, and they enable our MAVs to fly in a variety of corridors and staircases.",
"title": ""
},
{
"docid": "e29296607c63951174a7a5e942f653c7",
"text": "Corresponding author: Douglas Kunda Mulungushi University, School of Science, Engineering and Technology, Kabwe, Zambia Email: dkunda@mu.edu.zm Abstract: Agile development is a software development process that advocates adaptive planning, early delivery, evolutionary development and continuous betterment and supports rapid and flexible response to change. The purpose of Agile development is to minimize project failure through customer interactions and responding to change. However, Agile development is vulnerable to failure because of a number of factors and these factors can be categorized under four dimensions, namely: organizational, people, process and technical. This paper reports the result of a study aimed at identifying factors that influence success and/or failure of Agile development in a developing country, Zambia. A multiple case study approach and grounded theory approach were used for this case study. The study shows that there are challenges that are unique to developing countries and therefore measures should be developed to address these unique problems when implementing Agile projects in developing countries.",
"title": ""
},
{
"docid": "fec3feb40d363535955a9ac4234c4126",
"text": "This article presents metrics from two Hewlett-Packard (HP) reuse programs that document the improved quality, increased productivity, shortened time-to-market, and enhanced economics resulting from reuse. Work products are the products or by-products of the software-development process: for example, code, design, and test plans. Reuse is the use of these work products without modification in the development of other software. Leveraged reuse is modifying existing work products to meet specific system requirements. A producer is a creator of reusable work products, and the consumer is someone who uses them to create other software. Time-to-market is the time it takes to deliver a product from the time it is conceived. Experience with reuse has been largely positive. Because work products are used multiple times, the accumulated defect fixes result in a higher quality work product. Because the work products have already been created, tested, and documented, productivity increases because consumers of reusable work products need to do less work. However, increased productivity from reuse does not necessarily shorten time-to-market. To reduce time-to-market, reuse must be used effectively on the critical path of a development project. Finally, we have found that reuse allows an organization to use personnel more effectively because it leverages expertise. However, software reuse is not free. It requires resources to create and maintain reusable work products, a reuse library, and reuse tools. To help evaluate the costs and benefits of reuse, we have developed an economic analysis method, which we have applied to multiple reuse programs at HP.<<ETX>>",
"title": ""
},
{
"docid": "d2a12e7e0ae8a743e24252829e698d19",
"text": "This paper presents a comparison between RSA and ElGamal based untraceable blind signature (BS) schemes through simulation. The objective is to provide a guideline while selecting either of them to develop an application. A BS scheme is a cryptographic protocol that can be used in cryptographic applications like electronic voting systems, electronic payment systems etc. to conduct their privacy-related transactions anonymously but securely. While a user operates her electronic transactions employing a BS scheme over the internet, the BS scheme ensures the confidentiality of the secret message of the user. Besides, untraceability is a crucial criterion for any BS scheme because thereby the signer of this scheme is unable to link the message-signature pair after the BS has been revealed to the public. Two untraceable BS schemes: one is proposed by Hwang et al. and is based on RSA cryptosystem whereas the other is proposed by Lee et al. and is based on ElGamal cryptosystem have been chosen here for simulation. The outcome of the simulation model is the comparison of computation time requirement of blinding, signing, unblinding and verification phases of the chosen BS schemes.",
"title": ""
},
{
"docid": "6c29713df5186553bee555024bf8c135",
"text": "This paper describes the organization and results of the automatic keyphrase extraction task held at the workshop on Semantic Evaluation 2010 (SemEval-2010). The keyphrase extraction task was specifically geared towards scientific articles. Systems were automatically evaluated by matching their extracted keyphrases against those assigned by the authors as well as the readers to the same documents. We outline the task, present the overall ranking of the submitted systems, and discuss the improvements to the state-of-the-art in keyphrase extraction.",
"title": ""
},
{
"docid": "e281a8dc16b10dff80fad36d149a8a2f",
"text": "We present a tree router for multichip systems that guarantees deadlock-free multicast packet routing without dropping packets or restricting their length. Multicast routing is required to efficiently connect massively parallel systems' computational units when each unit is connected to thousands of others residing on multiple chips, which is the case in neuromorphic systems. Our tree router implements this one-to-many routing by branching recursively-broadcasting the packet within a specified subtree. Within this subtree, the packet is only accepted by chips that have been programmed to do so. This approach boosts throughput because memory look-ups are avoided enroute, and keeps the header compact because it only specifies the route to the subtree's root. Deadlock is avoided by routing in two phases-an upward phase and a downward phase-and by restricting branching to the downward phase. This design is the first fully implemented wormhole router with packet-branching that can never deadlock. The design's effectiveness is demonstrated in Neurogrid, a million-neuron neuromorphic system consisting of sixteen chips. Each chip has a 256 × 256 silicon-neuron array integrated with a full-custom asynchronous VLSI implementation of the router that delivers up to 1.17 G words/s across the sixteen-chip network with less than 1 μs jitter.",
"title": ""
},
{
"docid": "be0033b0f251970f8a8876b28cd2042e",
"text": "A power transformer will yield a frequency response which is unique to its mechanical geometry and electrical properties. Changes in the frequency response of a transformer can be potential indicators of winding deformation as well as other structural and electrical problems. A diagnostic tool which leverages this knowledge in order to detect such changes is frequency-response analysis (FRA). To date, FRA has been used to identify changes in a transformer's frequency response but with limited insight into the underlying cause of the change. However, there is now a growing research interest in specifically identifying the structural change in a transformer directly from its FRA signature. The aim of this paper is to support FRA interpretation through the development of wideband three-phase transformer models which are based on three types of FRA tests. The resulting models can be used as a flexible test bed for parameter sensitivity analysis, leading to greater insight into the effects that geometric change can have on transformer FRA. This paper will demonstrate the applicability of this modeling approach by simultaneously fitting each model to the corresponding FRA data sets without a priori knowledge of the transformer's internal dimensions, and then quantitatively assessing the accuracy of key model parameters.",
"title": ""
},
{
"docid": "92ac3bfdcf5e554152c4ce2e26b77315",
"text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contribution is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.",
"title": ""
},
{
"docid": "d67c55988d7e1fa06579e1f4b4a343ba",
"text": "Web content extraction is a key technology for enabling an array of applications aimed at understanding the web. While automated web extraction has been studied extensively, they often focus on extracting structured data that appear multiple times on a single webpage, like product catalogs. This project aims to extract less structured web content, like news articles, that appear only once in noisy webpages. Our approach classifies text blocks using a mixture of visual and language independent features. In addition, a pipeline is devised to automatically label datapoints through clustering where each cluster is scored based on its relevance to the webpage description extracted from the meta tags, and datapoints in the best cluster are selected as positive training examples.",
"title": ""
},
{
"docid": "050dd71858325edd4c1a42fc1a25de95",
"text": "This paper presents Disco, a prototype for supporting knowledge workers in exploring, reviewing and sorting collections of textual data. The goal is to facilitate, accelerate and improve the discovery of information. To this end, it combines Semantic Relatedness techniques with a review workflow developed in a tangible environment. Disco uses a semantic model that is leveraged on-line in the course of search sessions, and accessed through natural hand-gesture, in a simple and intuitive way.",
"title": ""
},
{
"docid": "b86ea36ee5a3b6c27713de3f809841b8",
"text": "From a group of 1,189 AA patients seen in our dermatology unit, thirteen (3 males, 10 females) experienced hair shedding that started profusely and diffusely over the entire scalp. They were under observation for about 5 years, histopathology and trichograms being performed in all instances. The mean age of the patients was 26.7 years. It took only 2.3 months on average from the onset of hair shedding to total denudation of the scalp. The trichogram at the time of diffuse shedding showed that about 80% had dystrophic roots and the remaining 20% had telogen roots. Histopathological findings and exclamation mark hairs were compatible with alopecia areata. Regrowth of hair was noted 3.2 months after the onset of hair shedding and recovery observed in 4.8 months. All patients were treated by methylprednisolone pulse therapy. During the follow-up period, 53 months on average after recovery, 8 of the 13 patients (61.5%) showed normal scalp hair without recurrence, in 4 patients the recovery was cosmetically acceptable in spite of focal recurrences and only 1 patient showed a severe relapse after recovery. Considering all of the above findings, this group of the patients should be delineated by the term acute alopecia totalis.",
"title": ""
},
{
"docid": "97531e5a9bbe4d6e5e495fbbc380b3cd",
"text": "Nowadays, more and more users keep up with news through information streams coming from real-time micro-blogging activity offered by services such as Twitter. In these sites, information is shared via a followers/followees social network structure in which a follower will receive all the micro-blogs from the users he follows, named followees. Recent research efforts on understanding micro-blogging as a novel form of communication and news spreading medium have identified different categories of users in Twitter: information sources, information seekers and friends. Users acting as information sources are characterized for having a larger number of followers than followees, information seekers subscribe to this kind of users but rarely post tweets and, finally, friends are users exhibiting reciprocal relationships. With information seekers being an important portion of registered users in the system, finding relevant and reliable sources becomes essential. To address this problem, we propose a followee recommender system based on an algorithm that explores the topology of followers/followees network of Twitter considering different factors that allow us to identify users as good information sources. Experimental evaluation conducted with a group of users is reported, demonstrating the potential of the approach.",
"title": ""
},
{
"docid": "923320f7061b9141f2a322a8ff54b0e1",
"text": "Our goal in this paper is to develop a practical framework for obtaining a uniform sample of users in an online social network (OSN) by crawling its social graph. Such a sample allows to estimate any user property and some topological properties as well. To this end, first, we consider and compare several candidate crawling techniques. Two approaches that can produce approximately uniform samples are the Metropolis-Hasting random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the \"ground truth.\" In contrast, using Breadth-First-Search (BFS) or an unadjusted Random Walk (RW) leads to substantially biased results. Second, and in addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. We show how these diagnostics can be used to effectively determine when a random walk sample is of adequate size and quality. Third, as a case study, we apply the above methods to Facebook and we collect the first, to the best of our knowledge, representative sample of Facebook users. We make it publicly available and employ it to characterize several key properties of Facebook.",
"title": ""
},
{
"docid": "15ddb8cb5e82e0efde197908420bb8d0",
"text": "In recent years, there has been much interest in learning Bayesian networks from data. Learning such models is desirable simply because there is a wide array of off-the-shelf tools that can apply the learned models as expert systems, diagnosis engines, and decision support systems. Practitioners also claim that adaptive Bayesian networks have advantages in their own right as a non-parametric method for density estimation, data analysis, pattern classification, and modeling. Among the reasons cited we find: their semantic clarity and understandability by humans, the ease of acquisition and incorporation of prior knowledge, the ease of integration with optimal decision-making methods, the possibility of causal interpretation of learned models, and the automatic handling of noisy and missing data. In spite of these claims, and the initial success reported recently, methods that learn Bayesian networks have yet to make the impact that other techniques such as neural networks and hidden Markov models have made in applications such as pattern and speech recognition. In this paper, we challenge the research community to identify and characterize domains where induction of Bayesian networks makes the critical difference, and to quantify the factors that are responsible for that difference. In addition to formalizing the challenge, we identify research problems whose solution is, in our view, crucial for meeting this challenge.",
"title": ""
},
{
"docid": "745a3278d096c4cea9fb6c15e876931f",
"text": "Much of the success of single agent deep reinforcement learning (DRL) in recent years can be attributed to the use of experience replay memories (ERM), which allow Deep Q-Networks (DQNs) to be trained efficiently through sampling stored state transitions. However, care is required when using ERMs for multi-agent deep reinforcement learning (MA-DRL), as stored transitions can become outdated because agents update their policies in parallel [11]. In this work we apply leniency [23] to MA-DRL. Lenient agents map state-action pairs to decaying temperature values that control the amount of leniency applied towards negative policy updates that are sampled from the ERM. This introduces optimism in the value-function update, and has been shown to facilitate cooperation in tabular fully-cooperative multi-agent reinforcement learning problems. We evaluate our Lenient-DQN (LDQN) empirically against the related Hysteretic-DQN (HDQN) algorithm [22] as well as a modified version we call scheduled-HDQN, that uses average reward learning near terminal states. Evaluations take place in extended variations of the Coordinated Multi-Agent Object Transportation Problem (CMOTP) [8] which include fully-cooperative sub-tasks and stochastic rewards. We find that LDQN agents are more likely to converge to the optimal policy in a stochastic reward CMOTP compared to standard and scheduled-HDQN agents.",
"title": ""
},
{
"docid": "35c7cb1e50059c3e77fcee20ed663234",
"text": "Electronic discovery is an interesting sub problem of information retrieval in which one identifies documents that are potentially relevant to issues and facts of a legal case from an electronically stored document collection (a corpus). In this paper, we consider representing documents in a topic space using the well-known topic models such as latent Dirichlet allocation and latent semantic indexing, and solving the information retrieval problem via finding document similarities in the topic space rather than doing it in the corpus vocabulary space. We also develop an iterative SMART ranking and categorization framework including human-in-the-loop to label a set of seed (training) documents and using them to build a semi-supervised binary document classification model based on Support Vector Machines. To improve this model, we propose a method for choosing seed documents from the whole population via an active learning strategy. We report the results of our experiments on a real dataset in the electronic",
"title": ""
}
] |
scidocsrr
|
b95c9c9d60fd21e0175319ee54a82445
|
Detection of false data injection attacks in smart-grid systems
|
[
{
"docid": "ac222a5f8784d7a5563939077c61deaa",
"text": "Cyber-Physical Systems (CPS) are integrations of computation with physical processes. Embedded computers and networks monitor and control the physical processes, usually with feedback loops where physical processes affect computations and vice versa. In the physical world, the passage of time is inexorable and concurrency is intrinsic. Neither of these properties is present in today’s computing and networking abstractions. I argue that the mismatch between these abstractions and properties of physical processes impedes technical progress, and I identify promising technologies for research and investment. There are technical approaches that partially bridge the abstraction gap today (such as real-time operating systems, middleware technologies, specialized embedded processor architectures, and specialized networks), and there is certainly considerable room for improvement of these technologies. However, it may be that we need a less incremental approach, where new abstractions are built from the ground up. The foundations of computing are built on the premise that the principal task of computers is transformation of data. Yet we know that the technology is capable of far richer interactions with the physical world. I critically examine the foundations that have been built over the last several decades, and determine where the technology and theory bottlenecks and opportunities lie. I argue for a new systems science that is jointly physical and computational.",
"title": ""
},
{
"docid": "002aec0b09bbd2d0e3453c9b3aa8d547",
"text": "It is often appealing to assume that existing solutions can be directly applied to emerging engineering domains. Unfortunately, careful investigation of the unique challenges presented by new domains exposes its idiosyncrasies, thus often requiring new approaches and solutions. In this paper, we argue that the “smart” grid, replacing its incredibly successful and reliable predecessor, poses a series of new security challenges, among others, that require novel approaches to the field of cyber security. We will call this new field cyber-physical security. The tight coupling between information and communication technologies and physical systems introduces new security concerns, requiring a rethinking of the commonly used objectives and methods. Existing security approaches are either inapplicable, not viable, insufficiently scalable, incompatible, or simply inadequate to address the challenges posed by highly complex environments such as the smart grid. A concerted effort by the entire industry, the research community, and the policy makers is required to achieve the vision of a secure smart grid infrastructure.",
"title": ""
}
] |
[
{
"docid": "62bf93deeb73fab74004cb3ced106bac",
"text": "Since the publication of the Design Patterns book, a large number of object-oriented design patterns have been identified and codified. As part of the pattern form, object-oriented design patterns must indicate their relationships with other patterns, but these relationships are typically described very briefly, and different collections of patterns describe different relationships in different ways. In this paper we describe and classify the common relationships between object-oriented design patterns. Practitioners can use these relationships to help them identify those patterns which may be applicable to a particular problem, and pattern writers can use these relationships to help them integrate new patterns into the body of the patterns literature.",
"title": ""
},
{
"docid": "960360bd445566c4581c1ae021ee64d5",
"text": "Artwork is a mode of creative expression and this paper is particularly interested in investigating if machine can learn and synthetically create artwork that are usually nonfigurative and structured abstract. To this end, we propose an extension to the Generative Adversarial Network (GAN), namely as the ArtGAN to synthetically generate high quality artwork. This is in contrast to most of the current solutions that focused on generating structural images such as birds, flowers and faces. The key innovation of our work is to allow back-propagation of the loss function w.r.t. the labels (randomly assigned to each generated images) to the generator from the categorical autoencoder-based discriminator that incorporates an autoencoder into the categorical discriminator for additional complementary information. In order to synthesize a high resolution artwork, we include a novel magnified learning strategy to improve the correlations between neighbouring pixels. Based on visual inspection and Inception scores, we demonstrate that ArtGAN is able to draw high resolution and realistic artwork, as well as generate images of much higher quality in four other datasets (i.e. CIFAR-10, STL-10, Oxford-102 and CUB-200).",
"title": ""
},
{
"docid": "21122ab1659629627c46114cc5c3b838",
"text": "The introduction of more onboard autonomy in future single and multi-satellite missions is both a question of limited onboard resources and of how far we can actually trust the autonomous functionalities deployed on board. In-flight experience with NASA's Deep Space 1 and Earth Observing 1 has shown how difficult it is to design, build and test reliable software for autonomy. The degree to which system-level onboard autonomy will be deployed in the single and multi satellite systems of tomorrow will depend, among other things, on the progress made in two key software technologies: autonomous onboard planning and robust execution. Parallel to the developments in these two areas, the actual integration of planning and execution engines is still nowadays a crucial issue in practical application. This paper presents an onboard autonomous model-based executive for execution of time-flexible plans. It describes its interface with an apsi-based timeline-based planner, its control approaches, architecture and its modelling language as an extension of apsi's ddl. In addition, it introduces a modified version of the classical blocks world toy planning problem which has been extended in scope and with a runtime environment for evaluation of integrated planning and executive engines.",
"title": ""
},
{
"docid": "55d7db89621dc57befa330c6dea823bf",
"text": "In this paper we propose CUDA-based implementations of two 3D point sets registration algorithms: Softassign and EM-ICP. Both algorithms are known for being time demanding, even on modern multi-core CPUs. Our GPU-based implementations vastly outperform CPU ones. For instance, our CUDA EM-ICP aligns 5000 points in less than 7 seconds on a GeForce 8800GT, while the same implementation in OpenMP on an Intel Core 2 Quad would take 7 minutes.",
"title": ""
},
{
"docid": "285a1c073ec4712ac735ab84cbcd1fac",
"text": "During a survey of black yeasts of marine origin, some isolates of Hortaea werneckii were recovered from scuba diving equipment, such as silicone masks and snorkel mouthpieces, which had been kept under poor storage conditions. These yeasts were unambiguously identified by phenotypic and genotypic methods. Phylogenetic analysis of both the D1/D2 regions of 26S rRNA gene and ITS-5.8S rRNA gene sequences showed three distinct genetic types. This species is the agent of tinea nigra which is a rarely diagnosed superficial mycosis in Europe. In fact this mycosis is considered an imported fungal infection being much more prevalent in warm, humid parts of the world such as the Central and South Americas, Africa, and Asia. Although H. werneckii has been found in hypersaline environments in Europe, this is the first instance of the isolation of this halotolerant species from scuba diving equipment made with silicone rubber which is used in close contact with human skin and mucous membranes. The occurrence of this fungus in Spain is also an unexpected finding because cases of tinea nigra in this country are practically not seen.",
"title": ""
},
{
"docid": "448285428c6b6cfca8c2937d8393eee5",
"text": "Swarm robotics is a novel approach to the coordination of large numbers of robots and has emerged as the application of swarm intelligence to multi-robot systems. Different from other swarm intelligence studies, swarm robotics puts emphases on the physical embodiment of individuals and realistic interactions among the individuals and between the individuals and the environment. In this chapter, we present a brief review of this new approach. We first present its definition, discuss the main motivations behind the approach, as well as its distinguishing characteristics and major coordination mechanisms. Then we present a brief review of swarm robotics research along four axes; namely design, modelling and analysis, robots and problems.",
"title": ""
},
{
"docid": "6b4fcc3075d2fcf02b7d570fa5a88a58",
"text": "Vehicular Ad-hoc Network (VANET) is a new application of Mobile Ad-hoc Network (MANET) in the field of Inter-vehicle communication. As the high mobility of vehicles, some traditional MANET routing protocols may not fit the VANET. In this paper, we propose a cluster-based directional routing protocol (CBDRP) for highway scenarios, in which the header of a cluster selects another header according to the moving direction of vehicle to forward packets. Simulation results shows the CBDRP can solve the problem of link stability in VANET, realizing reliable and rapid data transmission.",
"title": ""
},
{
"docid": "3f1ab17fb722d5a2612675673b200a82",
"text": "In this paper, we show that the recent integration of statistical models with deep recurrent neural networks provides a new way of formulating volatility (the degree of variation of time series) models that have been widely used in time series analysis and prediction in finance. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observables. Our focus here is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Experiments on real-world stock price datasets demonstrate that the proposed model generates a better volatility estimation and prediction that outperforms mainstream methods, e.g., deterministic models such as GARCH and its variants, and stochastic models namely the MCMC-based model stochvol as well as the Gaussian process volatility model GPVol, on average negative log-likelihood.",
"title": ""
},
{
"docid": "35d7da51ad184250d4cd219ab32f0b5e",
"text": "This paper describes an algorithm for verification of signatures written on a pen-input tablet. The algorithm is based on a novel, artificial neural network, called a \"Siamese\" neural network. This network consists of two identical sub-networks joined at their outputs. During training the two sub-networks extract features from two signatures, while the joining neuron measures the distance between the two feature vectors. Verification consists of comparing an extracted feature vector with a stored feature vector for the signer. Signatures closer to this stored representation than a chosen threshold are accepted, all other signatures are rejected as forgeries.",
"title": ""
},
{
"docid": "b2d749c5b27e065922433fe6fb6462ee",
"text": "In this paper, a fast adaptive neural network classifier named FANNC is proposed. FANNC exploits the advantages of both adaptive resonance theory and field theory. It needs only one-pass learning, and achieves not only high predictive accuracy but also fast learning speed. Besides, FANNC has incremental learning ability. When new instances are fed, it does not need to retrain the whole training set. Instead, it could learn the knowledge encoded in those instances through slightly adjusting the network topology when necessary, that is, adaptively appending one or two hidden units and corresponding connections to the existing network. This characteristic makes FANNC fit for real-time online learning tasks. Moreover, since the network architecture is adaptively set up, the disadvantage of manually determining the number of hidden units of most feed-forward neural networks is overcome. Benchmark tests show that FANNC is a preferable neural network classifier, which is superior to several other neural algorithms on both predictive accuracy and learning speed.",
"title": ""
},
{
"docid": "90a7849b9e71df0cb9c4b77c369592db",
"text": "Social networking and microblogging services such as Twitter provide a continuous source of data from which useful information can be extracted. The detection and characterization of bursty words play an important role in processing such data, as bursty words might hint to events or trending topics of social importance upon which actions can be triggered. While there are several approaches to extract bursty words from the content of messages, there is only little work that deals with the dynamics of continuous streams of messages, in particular messages that are geo-tagged.\n In this paper, we present a framework to identify bursty words from Twitter text streams and to describe such words in terms of their spatio-temporal characteristics. Using a time-aware word usage baseline, a sliding window approach over incoming tweets is proposed to identify words that satisfy some burstiness threshold. For these words then a time-varying, spatial signature is determined, which primarily relies on geo-tagged tweets. In order to deal with the noise and the sparsity of geo-tagged tweets, we propose a novel graph-based regularization procedure that uses spatial cooccurrences of bursty words and allows for computing sound spatial signatures. We evaluate the functionality of our online processing framework using two real-world Twitter datasets. The results show that our framework can efficiently and reliably extract bursty words and describe their spatio-temporal evolution over time.",
"title": ""
},
{
"docid": "b9e8dc2492a0d91f1f7b9866f38235ab",
"text": "As the interconnect cross-sections are ever scaled down, a particular care must be taken on the tradeoff between increase of current density in the back end of line and reliability to prevent electromigration (EM). Some lever exists as the well-known Blech effect [1]. One can take advantage of the EM induced backflow flux that counters the EM flux. As a consequence, the total net flux in the line is reduced and additional current density in designs can be allowed in short lines. However, the immortality condition is most of the time addressed with a standard test structures ended by two vias [2]–[3]. Designs present complex configurations far from this typical case and the Blech product (jL)c can be deteriorated or enhanced [4]. In the present paper, we present our study of EM performances of short lines ended by an inactive end of line (EOL) at one end of the test structure. Significant differences on the median time to failure (MTF) are observed with respect to the current direction, from a quasi deletion of failure to a significant reduction of the Blech effect. Based on the resistance saturation, a method is proposed to determine effective lengths of inactive EOL configurations corresponding to the standard case.",
"title": ""
},
{
"docid": "843ea8a700adf545288175c1062107bb",
"text": "Stress is a natural reaction to various stress-inducing factors which can lead to physiological and behavioral changes. If persists for a longer period, stress can cause harmful effects on our body. The body sensors along with the concept of the Internet of Things can provide rich information about one's mental and physical health. The proposed work concentrates on developing an IoT system which can efficiently detect the stress level of a person and provide a feedback which can assist the person to cope with the stressors. The system consists of a smart band module and a chest strap module which can be worn around wrist and chest respectively. The system monitors the parameters such as Electro dermal activity and Heart rate in real time and sends the data to a cloud-based ThingSpeak server serving as an online IoT platform. The computation of the data is performed using a ‘MATLAB Visualization’ application and the stress report is displayed. The authorized person can log in, view the report and take actions such as consulting a medical person, perform some meditation or yoga exercises to cope with the condition.",
"title": ""
},
{
"docid": "4b18d2665f1bc6e9576237d88e15c74e",
"text": "Beta Regression, an extension of generalized linear models, can estimate the effect of explanatory variables on data falling within the (0,1) interval. Recent developments in Beta Regression theory extend the support interval to now include 0 and 1. The %Beta_Regression macro is updated to now allow for Zero-One Inflated Beta Regression.",
"title": ""
},
{
"docid": "ec673efa5f837ba4c997ee7ccd845ce1",
"text": "Deep Neural Networks (DNNs) are hierarchical nonlinear architectures that have been widely used in artificial intelligence applications. However, these models are vulnerable to adversarial perturbations which add changes slightly and are crafted explicitly to fool the model. Such attacks will cause the neural network to completely change its classification of data. Although various defense strategies have been proposed, existing defense methods have two limitations. First, the discovery success rate is not very high. Second, existing methods depend on the output of a particular layer in a specific learning structure. In this paper, we propose a powerful method for adversarial samples using Large Margin Cosine Estimate (LMCE). By iteratively calculating the large-margin cosine uncertainty estimates between the model predictions, the results can be regarded as a novel measurement of model uncertainty estimation and is available to detect adversarial samples by training using a simple machine learning algorithm. Comparing it with the way in which adversarial samples are generated, it is confirmed that this measurement can better distinguish hostile disturbances. We modeled deep neural network attacks and established defense mechanisms against various types of adversarial attacks. Classifier gets better performance than the baseline model. The approach is validated on a series of standard datasets including MNIST and CIFAR-10, outperforming previous ensemble method with strong statistical significance. Experiments indicate that our approach generalizes better across different architectures and attacks.",
"title": ""
},
{
"docid": "0b01870332dd93897fbcecb9254c40b9",
"text": "Computer-aided detection or decision support systems aim to improve breast cancer screening programs by helping radiologists to evaluate digital mammography (DM) exams. Commonly such methods proceed in two steps: selection of candidate regions for malignancy, and later classification as either malignant or not. In this study, we present a candidate detection method based on deep learning to automatically detect and additionally segment soft tissue lesions in DM. A database of DM exams (mostly bilateral and two views) was collected from our institutional archive. In total, 7196 DM exams (28294 DM images) acquired with systems from three different vendors (General Electric, Siemens, Hologic) were collected, of which 2883 contained malignant lesions verified with histopathology. Data was randomly split on an exam level into training (50%), validation (10%) and testing (40%) of deep neural network with u-net architecture. The u-net classifies the image but also provides lesion segmentation. Free receiver operating characteristic (FROC) analysis was used to evaluate the model, on an image and on an exam level. On an image level, a maximum sensitivity of 0.94 at 7.93 false positives (FP) per image was achieved. Similarly, per exam a maximum sensitivity of 0.98 at 7.81 FP per image was achieved. In conclusion, the method could be used as a candidate selection model with high accuracy and with the additional information of lesion segmentation.",
"title": ""
},
{
"docid": "eb6f055399614a4e0876ffefae8d6a28",
"text": "For accurate recognition of protein folds, a deep learning network method (DN-Fold) was developed to predict if a given query-template protein pair belongs to the same structural fold. The input used stemmed from the protein sequence and structural features extracted from the protein pair. We evaluated the performance of DN-Fold along with 18 different methods on Lindahl's benchmark dataset and on a large benchmark set extracted from SCOP 1.75 consisting of about one million protein pairs, at three different levels of fold recognition (i.e., protein family, superfamily, and fold) depending on the evolutionary distance between protein sequences. The correct recognition rate of ensembled DN-Fold for Top 1 predictions is 84.5%, 61.5%, and 33.6% and for Top 5 is 91.2%, 76.5%, and 60.7% at family, superfamily, and fold levels, respectively. We also evaluated the performance of single DN-Fold (DN-FoldS), which showed the comparable results at the level of family and superfamily, compared to ensemble DN-Fold. Finally, we extended the binary classification problem of fold recognition to real-value regression task, which also show a promising performance. DN-Fold is freely available through a web server at http://iris.rnet.missouri.edu/dnfold.",
"title": ""
},
{
"docid": "75e794b731685064820c79f4d68ed79b",
"text": "Graph visualizations encode relationships between objects. Abstracting the objects into group structures provides an overview of the data. Groups can be disjoint or overlapping, and might be organized hierarchically. However, the underlying graph still needs to be represented for analyzing the data in more depth. This work surveys research in visualizing group structures as part of graph diagrams. A particular focus is the explicit visual encoding of groups, rather than only using graph layout to implicitly indicate groups. We introduce a taxonomy of visualization techniques structuring the field into four main categories: visual node attributes vary properties of the node representation to encode the grouping, juxtaposed approaches use two separate visualizations, superimposed techniques work with two aligned visual layers, and embedded visualizations tightly integrate group and graph representation. We discuss results from evaluations of those techniques as well as main areas of application. Finally, we report future challenges based on interviews we conducted with leading researchers of the field.",
"title": ""
},
{
"docid": "b85e9ef3652a99e55414d95bfed9cc0d",
"text": "Regulatory T cells (Tregs) prevail as a specialized cell lineage that has a central role in the dominant control of immunological tolerance and maintenance of immune homeostasis. Thymus-derived Tregs (tTregs) and their peripherally induced counterparts (pTregs) are imprinted with unique Forkhead box protein 3 (Foxp3)-dependent and independent transcriptional and epigenetic characteristics that bestows on them the ability to suppress disparate immunological and non-immunological challenges. Thus, unidirectional commitment and the predominant stability of this regulatory lineage is essential for their unwavering and robust suppressor function and has clinical implications for the use of Tregs as cellular therapy for various immune pathologies. However, recent studies have revealed considerable heterogeneity or plasticity in the Treg lineage, acquisition of alternative effector or hybrid fates, and promotion rather than suppression of inflammation in extreme contexts. In addition, the absolute stability of Tregs under all circumstances has been questioned. Since these observations challenge the safety and efficacy of human Treg therapy, the issue of Treg stability versus plasticity continues to be enthusiastically debated. In this review, we assess our current understanding of the defining features of Foxp3(+) Tregs, the intrinsic and extrinsic cues that guide development and commitment to the Treg lineage, and the phenotypic and functional heterogeneity that shapes the plasticity and stability of this critical regulatory population in inflammatory contexts.",
"title": ""
}
] |
scidocsrr
|
f873879d7ab04fc97d9d16d9a84fbb4a
|
Excessive Long-Time Deflections of Prestressed Box Girders . I : Record-Span Bridge in Palau and Other Paradigms
|
[
{
"docid": "40533c0a32bd67ae4e63ddd5f0a92506",
"text": "Synopsis: The present paper presents in chapter 1 a model for the characterization of concrete creep and shrinkage in design of concrete structures (Model B3), which is simpler, agrees better with the experimental data and is better theoretically justified than the previous models. The model complies with the general guidelines recently formulated by RILEM TC-107ß1. Justifications of various aspects of the model and diverse refinements are given in Chapter 2, and many simple explanations are appended in the commentary at the end of Chapter 1 (these parts do not need to be read by those who merely want to apply the model). The prediction model B3 is calibrated by a computerized data bank comprising practically all the relevant test data obtained in various laboratories throughout the world. The coefficients of variation of the deviations of the model from the data are distinctly smaller than those for the latest CEB model (1990), and much smaller than those for the previous model in ACI 209 (which was developed in the mid-1960’s). The model is simpler than the previous models (BP and BPKX) developed at Northwestern University, yet it has comparable accuracy and is more rational. The effect of concrete composition and design strength on the model parameters is the main source of error of the model. A method to reduce this error by updating one or two model parameters on the basis of short-time creep tests is given. The updating of model parameters is particularly important for high-strength concretes and other special concretes containing various admixtures, superplasticizers, water-reducing agents and pozzolanic materials. For the updating of shrinkage prediction, a new method in which the shrinkage half-time is calibrated by simultaneous measurements of water loss is presented. This approach circumvents the large sensitivity of the shrinkage extrapolation problem to small changes in the material parameters.
The new model allows a more realistic assessment of the creep and shrinkage effects in concrete structures, which significantly affect the durability and long-time serviceability of civil engineering infrastructure.",
"title": ""
}
] |
[
{
"docid": "6858c559b78c6f2b5000c22e2fef892b",
"text": "Graph clustering is one of the key techniques for understanding the structures present in graphs. Besides cluster detection, identifying hubs and outliers is also a key task, since they have important roles to play in graph data mining. The structural clustering algorithm SCAN, proposed by Xu et al., is successfully used in many application because it not only detects densely connected nodes as clusters but also identifies sparsely connected nodes as hubs or outliers. However, it is difficult to apply SCAN to large-scale graphs due to its high time complexity. This is because it evaluates the density for all adjacent nodes included in the given graphs. In this paper, we propose a novel graph clustering algorithm named SCAN++. In order to reduce time complexity, we introduce new data structure of directly two-hop-away reachable node set (DTAR). DTAR is the set of two-hop-away nodes from a given node that are likely to be in the same cluster as the given node. SCAN++ employs two approaches for efficient clustering by using DTARs without sacrificing clustering quality. First, it reduces the number of the density evaluations by computing the density only for the adjacent nodes such as indicated by DTARs. Second, by sharing a part of the density evaluations for DTARs, it offers efficient density evaluations of adjacent nodes. As a result, SCAN++ detects exactly the same clusters, hubs, and outliers from large-scale graphs as SCAN with much shorter computation time. Extensive experiments on both real-world and synthetic graphs demonstrate the performance superiority of SCAN++ over existing approaches.",
"title": ""
},
{
"docid": "1643d808d96ac237a8e1d17704888f16",
"text": "Change is crucial for organizations in growing, highly competitive business environments. Theories of change describe the effectiveness with which organizations are able to modify their strategies, processes, and structures. The action research model, the positive model, and Lewin’s change model indicate the stages of organizational change. This study examined the three stages of Lewin’s model: unfreezing, movement, and refreezing. Although this model establishes general steps, additional information must be considered to adapt these steps to specific situations. This article presents a critical review of change theories for different stages of organizational change. In this critical review, change management offers a constructive framework for managing organizational change throughout different stages of the process. This review has theoretical and practical implications, which are discussed in this article. Immunity to change is also discussed. © 2016 Journal of Innovation & Knowledge. Published by Elsevier España, S.L.U. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Introduction and research questions The purpose of the study is to craft the relation between process model and change, this relation describes the ways of implementing change process by leader’s knowledge sharing, and this sharing identifies the stages of change process, and these stages delineate the functional significance between organizational change and change implementation.
The organizational life has been made inevitable feature by global, technological and economic pace, and many models of organizational change have acknowledged the influence of implicit dimensions at one stage or more stages of organizational change process (Burke, 2008; Wilkins & Dyer, 1988), and these models imitate different granular levels affecting the process of organizational change, and each level of them identifies distinctive change implementation stages (By, 2005). A model of organizational change in Kurt Lewin’s three steps change process context was introduced in this study; which reflects momentous stages in change implementation process. Kurt Lewin’s model is the early fundamental planned change models explaining the striving forces to maintain the status quo and pushing for change (Lewin, 1947). To change the “quasi-stationary equilibrium” stage, ∗ Corresponding author. E-mail address: talib 14@yahoo.com (S.T. Hussain). one may increase the striving forces for change, or decrease the forces maintaining the status quo, or the combination of both forces for proactive and reactive organizational change through knowledge sharing of individual willingness with the help of stimulating change leadership style. The Lewin’s model was used from an ethnographic study assumed for the investigation of the Lewin’s model for change development, mediates implementation and leadership initiatives for change in complex organizations. The focus of this research on (i) how Lewin’s change model granulates change, (ii) how knowledge sharing affects the change implementation process, (iii) how employees involve in change and willingness to change, and (iv) how leadership style affects the organizational change process in organization. Model of organizational change",
"title": ""
},
{
"docid": "0e16b00e2d9059f3b50754fa8c07cc9d",
"text": "It is a combination of three components: 1) a collection of data structure types (the building blocks of any database that conforms to the model);\n 2) a collection of operators or inferencing rules, which can be applied to any valid instances of the data types listed in (1), to retrieve or derive data from any parts of those structures in any combinations desired;\n 3) a collection of general integrity rules, which implicitly or explicitly define the set of consistent database states or changes of state or both—these rules may sometimes be expressed as insert-update-delete rules.",
"title": ""
},
{
"docid": "bdf3417010f59745e4aaa1d47b71c70e",
"text": "Recent studies witness the success of Bag-of-Features (BoF) frameworks for video based human action recognition. The detection and description of local interest regions are two fundamental problems in BoF framework. In this paper, we propose a motion boundary based sampling strategy and spatialtemporal (3D) co-occurrence descriptors for action video representation and recognition. Our sampling strategy is partly inspired by the recent success of dense trajectory (DT) based features [1] for action recognition. Compared with DT, we densely sample spatial-temporal cuboids along motion boundary which can greatly reduce the number of valid trajectories while preserve the discriminative power. Moreover, we develop a set of 3D co-occurrence descriptors which take account of the spatial-temporal context within local cuboids and deliver rich information for recognition. Furthermore, we decompose each 3D co-occurrence descriptor at pixel level and bin level and integrate the decomposed components with a multi-channel framework, which can improve the performance significantly. To evaluate the proposed methods, we conduct extensive experiments on three benchmarks including KTH, YouTube and HMDB51. The results show that our sampling strategy significantly reduces the computational cost of point tracking without degrading performance. Meanwhile, we achieve superior performance than the state-ofthe-art methods. We report 95.6% on KTH, 87.6% on YouTube and 51.8% on HMDB51.",
"title": ""
},
{
"docid": "867d6a1aa9699ba7178695c45a10d23e",
"text": "A study of different on-line adaptive classifiers, using various feature types is presented. Motor imagery brain computer interface (BCI) experiments were carried out with 18 naive able-bodied subjects. Experiments were done with three two-class, cue-based, electroencephalogram (EEG)-based systems. Two continuously adaptive classifiers were tested: adaptive quadratic and linear discriminant analysis. Three feature types were analyzed, adaptive autoregressive parameters, logarithmic band power estimates and the concatenation of both. Results show that all systems are stable and that the concatenation of features with continuously adaptive linear discriminant analysis classifier is the best choice of all. Also, a comparison of the latter with a discontinuously updated linear discriminant analysis, carried out in on-line experiments with six subjects, showed that on-line adaptation performed significantly better than a discontinuous update. Finally a static subject-specific baseline was also provided and used to compare performance measurements of both types of adaptation",
"title": ""
},
{
"docid": "84a7592ccf4c79cb5cb4ed7dbbcc1af7",
"text": "AIM\nTo examine the relationships between workplace bullying, destructive leadership and team conflict, and physical health, strain, self-reported performance and intentions to quit among veterinarians in New Zealand, and how these relationships could be moderated by psychological capital and perceived organisational support.\n\n\nMETHODS\nData were collected by means of an online survey, distributed to members of the New Zealand Veterinary Association. Participation was voluntary and all responses were anonymous and confidential. Scores for the variables measured were based on responses to questions or statements with responses categorised on a linear scale. A series of regression analyses were used to assess mediation or moderation by intermediate variables on the relationships between predictor variables and dependent variables.\n\n\nRESULTS\nCompleted surveys were provided by 197 veterinarians, of which 32 (16.2%) had been bullied at work, i.e. they had experienced two or more negative acts at least weekly over the previous 6 months, and nine (4.6%) had experienced cyber-bullying. Mean scores for workplace bullying were higher for female than male respondents, and for non-managers than managers (p<0.01). Scores for workplace bullying were positively associated with scores for destructive leadership and team conflict, physical health, strain, and intentions to quit (p<0.001). Workplace bullying and team conflict mediated the relationship between destructive leadership and strain, physical health and intentions to quit. Perceived organisational support moderated the effects of workplace bullying on strain and self-reported job performance (p<0.05).\n\n\nCONCLUSIONS\nRelatively high rates of negative behaviour were reported by veterinarians in this study, with 16% of participants meeting an established criterion for having been bullied.
The negative effects of destructive leadership on strain, physical health and intentions to quit were mediated by team conflict and workplace bullying. It should be noted that the findings of this study were based on a survey of self-selected participants and the findings may not represent the wider population of New Zealand veterinarians.",
"title": ""
},
{
"docid": "0368698acbd67accbb06e9a6d2559985",
"text": "Coreference resolution is one of the first stages in deep language understanding and its importance has been well recognized in the natural language processing community. In this paper, we propose a generative, unsupervised ranking model for entity coreference resolution by introducing resolution mode variables. Our unsupervised system achieves 58.44% F1 score of the CoNLL metric on the English data from the CoNLL-2012 shared task (Pradhan et al., 2012), outperforming the Stanford deterministic system (Lee et al., 2013) by 3.01%.",
"title": ""
},
{
"docid": "8aafa283b228bbaa7ff3e37e7ca0a861",
"text": "In order to meet the continuously increasing demands for high throughput in wireless networks, IEEE 802 LAN/MAN Standard Committee is developing IEEE 802.11ax: a new amendment for the Wi-Fi standard. This amendment provides various ways to improve the efficiency of Wi-Fi. The most revolutionary one is OFDMA. Apart from obvious advantages, such as decreasing overhead for short packet transmission at high rates and improving robustness to frequency selective interference, being used for uplink transmission, OFDMA can increase power spectral density and, consequently, user data rates. However, the gain of OFDMA mainly depends on the resource scheduling between users. The peculiarities of OFDMA implementation in Wi-Fi completely change properties of classic schedulers used in other OFDMA systems, e.g. LTE. In the paper, we consider the usage of OFDMA in Wi-Fi for uplink transmission. We study peculiarities of OFDMA in Wi-Fi, adapt classic schedulers to Wi-Fi, explaining why they do not perform well. Finally we develop a novel scheduler, MUTAX, and evaluate its performance with simulation.",
"title": ""
},
{
"docid": "4dd403bbecb8d03ebdd8de9923ee629b",
"text": "Phishing is a major problem on the Web. Despite the significant attention it has received over the years, there has been no definitive solution. While the state-of-the-art solutions have reasonably good performance, they require a large amount of training data and are not adept at detecting phishing attacks against new targets. In this paper, we begin with two core observations: (a) although phishers try to make a phishing webpage look similar to its target, they do not have unlimited freedom in structuring the phishing webpage, and (b) a webpage can be characterized by a small set of key terms, how these key terms are used in different parts of a webpage is different in the case of legitimate and phishing webpages. Based on these observations, we develop a phishing detection system with several notable properties: it requires very little training data, scales well to much larger test data, is language-independent, fast, resilient to adaptive attacks and implemented entirely on client-side. In addition, we developed a target identification component that can identify the target website that a phishing webpage is attempting to mimic. The target detection component is faster than previously reported systems and can help minimize false positives in our phishing detection system.",
"title": ""
},
{
"docid": "0bd7956dbee066a5b7daf4cbd5926f35",
"text": "Computer networks lack a general control paradigm, as traditional networks do not provide any networkwide management abstractions. As a result, each new function (such as routing) must provide its own state distribution, element discovery, and failure recovery mechanisms. We believe this lack of a common control platform has significantly hindered the development of flexible, reliable and feature-rich network control planes. To address this, we present Onix, a platform on top of which a network control plane can be implemented as a distributed system. Control planes written within Onix operate on a global view of the network, and use basic state distribution primitives provided by the platform. Thus Onix provides a general API for control plane implementations, while allowing them to make their own trade-offs among consistency, durability, and scalability.",
"title": ""
},
{
"docid": "6b73e2bf2c8de87e9ab749b1d72d3515",
"text": "We present a robust framework for estimating non-rigid 3D shape and motion in video sequences. Given an input video sequence, and a user-specified region to reconstruct, the algorithm automatically solves for the 3D time-varying shape and motion of the object, and estimates which pixels are outliers, while learning all system parameters, including a PDF over non-rigid deformations. There are no user-tuned parameters (other than initialization); all parameters are learned by maximizing the likelihood of the entire image stream. We apply our method to both rigid and non-rigid shape reconstruction, and demonstrate it in challenging cases of occlusion and variable illumination.",
"title": ""
},
{
"docid": "f47ff71a0fb0363c5c27d2579ee1961a",
"text": "The advent of 4G LTE has ushered in a growing demand for embedded antennas that can cover a wide range of frequency bands from 698 MHz to 2.69 GHz. A novel active antenna design is presented in this paper that is capable of covering a wide range of LTE bands while being constrained to a 1.8 cm3 volume. The antenna structure utilizes Ethertronics EtherChip 2.0 to add tunability to the antenna structure. The paper details the motivation behind developing the antenna and further discusses the fabrication of the active antenna architecture on an evaluation board and presents the measured results.",
"title": ""
},
{
"docid": "44f1016cb2dfebbb8500a35985dddac0",
"text": "Classification of entities based on the underlying network structure is an important problem. Networks encountered in practice are sparse and have many missing and noisy links. Statistical learning techniques have been used in intra-network classification; however, they typically exploit only the local neighborhood, so may not perform well. In this paper, we propose a novel structural neighborhood-based classifier learning using a random walk. For classifying a node, we take a random walk from the node and make a decision based on how nodes in the respective k^th-level neighborhood are labeled. We observe that random walks of short length are helpful in classification. Emphasizing role of longer random walks may cause the underlying Markov chain to converge to a stationary distribution. Considering this, we take a lazy random walk based approach with variable termination probability for each node, based on the node's structural properties including its degree. Our experimental study on real world datasets demonstrates the superiority of the proposed approach over the existing state-of-the-art approaches.",
"title": ""
},
{
"docid": "c62a2280367b4d7c6a715c92a9696bae",
"text": "OBJECTIVES\nPain assessment is essential to tailor intensive care of neonates. The present focus is on acute procedural pain; assessment of pain of longer duration remains a challenge. We therefore tested a modified version of the COMFORT-behavior scale-named COMFORTneo-for its psychometric qualities in the Neonatal Intensive Care Unit setting.\n\n\nMETHODS\nIn a clinical observational study, nurses assessed patients with COMFORTneo and Numeric Rating Scales (NRS) for pain and distress, respectively. Interrater reliability, concurrent validity, and sensitivity to change were calculated as well as sensitivity and specificity for different cut-off scores for subsets of patients.\n\n\nRESULTS\nInterrater reliability was good: median linearly weighted Cohen kappa 0.79. Almost 3600 triple ratings were obtained for 286 neonates. Internal consistency was good (Cronbach alpha 0.84 and 0.88). Concurrent validity was demonstrated by adequate and good correlations, respectively, with NRS-pain and NRS-distress: r=0.52 (95% confidence interval 0.44-0.59) and r=0.70 (95% confidence interval 0.64-0.75). COMFORTneo cut-off scores of 14 or higher (score range is 6 to 30) had good sensitivity and specificity (0.81 and 0.90, respectively) using NRS-pain or NRS-distress scores of 4 or higher as criterion.\n\n\nDISCUSSION\nThe COMFORTneo showed preliminary reliability. No major differences were found in cut-off values for low birth weight, small for gestational age, neurologic impairment risk levels, or sex. Multicenter studies should focus on establishing concurrent validity with other instruments in a patient group with a high probability of ongoing pain.",
"title": ""
},
{
"docid": "8f83c7efb262f996f67424412f6b2ddb",
"text": "Apache ZooKeeper is a distributed data storage that is highly concurrent and asynchronous due to network communication, testing such a system is very challenging. Our solution using the tool \"Modbat\" generates test cases for concurrent client sessions, and processes results from synchronous and asynchronous callbacks. We use an embedded model checker to compute the test oracle for non-deterministic outcomes, the oracle model evolves dynamically with each new test step. Our work has detected multiple previously unknown defects in ZooKeeper. Finally, a thorough coverage evaluation of the core classes show how code and branch coverage strongly relate to feature coverage in the model, and hence modeling effort.",
"title": ""
},
{
"docid": "a6dff88ee5b1bfa2c7a4db85cd052815",
"text": "OBJECTIVE\nTo determine the effectiveness of 3-dimensional therapy in the treatment of adolescent idiopathic scoliosis.\n\n\nMETHODS\nWe carried out this study with 50 patients whose average age was 14.15 +/-1.69 years at the Physical Therapy and Rehabilitation School, Hacettepe University, Ankara, Turkey, from 1999 to 2004. We treated them as outpatients, 5 days a week, in a 4-hour program for the first 6 weeks. After that, they continued with the same program at home. We evaluated the Cobb angle, vital capacity and muscle strength of the patients before treatment, and after 6 weeks, 6 months and one year, and compared all the results.\n\n\nRESULTS\nThe average Cobb angle, which was 26.10 degrees on average before treatment, was 23.45 degrees after 6 weeks, 19.25 degrees after 6 months and 17.85 degrees after one year (p<0.01). The vital capacities, which were on average 2795 ml before treatment, reached 2956 ml after 6 weeks, 3125 ml after 6 months and 3215 ml after one year (p<0.01). Similarly, according to the results of evaluations after 6 weeks, 6 months and one year, we observed an increase in muscle strength and recovery of the postural defects in all patients (p<0.01).\n\n\nCONCLUSION\nSchroth`s technique positively influenced the Cobb angle, vital capacity, strength and postural defects in outpatient adolescents.",
"title": ""
},
{
"docid": "43db0f06e3de405657996b46047fa369",
"text": "Given two or more objects of general topology, intermediate objects are constructed by a distance field metamorphosis. In the presented method the interpolation of the distance field is guided by a warp function controlled by a set of corresponding anchor points. Some rules for defining a smooth least-distorting warp function are given. To reduce the distortion of the intermediate shapes, the warp function is decomposed into a rigid rotational part and an elastic part. The distance field interpolation method is modified so that the interpolation is done in correlation with the warp function. The method provides the animator with a technique that can be used to create a set of models forming a smooth transition between pairs of a given sequence of keyframe models. The advantage of the new approach is that it is capable of morphing between objects having a different topological genus where no correspondence between the geometric primitives of the models needs to be established. The desired correspondence is defined by an animator in terms of a relatively small number of anchor points",
"title": ""
},
{
"docid": "9e2db834da4eb5d226afec4f8dd58c4c",
"text": "This paper introduces a new hand gesture recognition technique to recognize Arabic sign language alphabet and converts it into voice correspondences to enable Arabian deaf people to interact with normal people. The proposed technique captures a color image for the hand gesture and converts it into YCbCr color space that provides an efficient and accurate way to extract skin regions from colored images under various illumination changes. Prewitt edge detector is used to extract the edges of the segmented hand gesture. Principal Component Analysis algorithm is applied to the extracted edges to form the predefined feature vectors for signs and gestures library. The Euclidean distance is used to measure the similarity between the signs feature vectors. The nearest sign is selected and the corresponding sound clip is played. The proposed technique is used to recognize Arabic sign language alphabets and the most common Arabic gestures. Specifically, we applied the technique to more than 150 signs and gestures with accuracy near to 97% at real time test for three different signers. The detailed of the proposed technique and the experimental results are discussed in this paper.",
"title": ""
},
{
"docid": "67c74094c42c06d88401ae81b1429956",
"text": "Research, first published over a decade ago, has shown that every 10% increase in the number of registered nurses (RNs) educated with the Bachelor of Science in Nursing (BSN) in hospital staff is associated with a 4 % decrease in the risk of death for patients.' Nurse staffs with higher proportions of BSN and Master of Science in Nursing (MSN) prepared nurses demonstrate increased productivity and better patient outcomes.^-^''''^' ' Therefore, in 2008 the American Nurses Association (ANA) House of Delegates resolved to support initiatives that require new diploma and associate degree (AD) prepared RNs to complete the BSN within ten years after initial licensure, exempting those individuals who are already licensed or enrolled as students in diploma or AD programs when legislation is enacted.' The Ohio Nurses Association (ONA) adopted this resolution in 2009 and the Ohio State Nursing Students'Association (OSNA) has endorsed the BSN in Ten initiative.",
"title": ""
},
{
"docid": "db2ebec1eeec213a867b10fe9550bfc7",
"text": "Photovoltaic method is very popular for generating electrical power. Its energy production depends on solar radiation at that location and orientation. Shadow rapidly decreases performance of the photovoltaic system. In this research, it is being investigated how exactly real-time shadow can be detected. In principle, 3D city models containing roof structure, vegetation, thematically differentiated surfaces and texture are suitable to simulate exact real-time shadow. An automated procedure to measure the exact shadow effect from the 3D city models and a long-term simulation model to determine the produced energy from the photovoltaic system is being developed here. In this paper, a method for detecting shadow from direct radiation has been discussed with its result, using a 3D city model to perform a solar energy potentiality analysis. Figure 1. Partial Shadow on PV array (Reisa 2011). The former military area Scharnhauser Park shown in figure 2 has been chosen as the case study area for this research. It is an urban conversion and development area of 150 hectares in the community of Ostfildern on the southern border near Stuttgart with 7000 inhabitants. About 80% of the heating energy demand of the whole area is supplied by renewable energies and a small portion of electricity is delivered by the existing rooftop photovoltaic system (Tereci et al, 2009). This area has been selected for this research because of the availability of CityGML and LIDAR data, building footprints and existing photovoltaic cells on roofs and façades. The Land Survey Office Baden-Württemberg provides the laser scanning data with a density of 4 points per square meter at a high resolution of 0.2 meter. The paper is organized with a brief introduction explaining the background of photovoltaic energy and the motivation for this research, followed by the effect of shadow on photovoltaic cells and a methodology for detecting shadow from direct radiation. Then the result of applying the methodology is shown, and a brief outline of the future work of this research is presented.",
"title": ""
}
] |
scidocsrr
|
75c4d8c3856225f755041b9a6d7e763a
|
Improving Knowledge Distillation with Supporting Adversarial Samples
|
[
{
"docid": "1f46ea05e58da0885805247a1f107f83",
"text": "Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures. Code and models for our experiments are available at https://github.com/szagoruyko/attention-transfer.",
"title": ""
},
{
"docid": "0a8c009d1bccbaa078f95cc601010af3",
"text": "Deep neural networks (DNNs) have transformed several artificial intelligence research areas including computer vision, speech recognition, and natural language processing. However, recent studies demonstrated that DNNs are vulnerable to adversarial manipulations at testing time. Specifically, suppose we have a testing example, whose label can be correctly predicted by a DNN classifier. An attacker can add a small carefully crafted noise to the testing example such that the DNN classifier predicts an incorrect label, where the crafted testing example is called adversarial example. Such attacks are called evasion attacks. Evasion attacks are one of the biggest challenges for deploying DNNs in safety and security critical applications such as self-driving cars.\n In this work, we develop new DNNs that are robust to state-of-the-art evasion attacks. Our key observation is that adversarial examples are close to the classification boundary. Therefore, we propose region-based classification to be robust to adversarial examples. Specifically, for a benign/adversarial testing example, we ensemble information in a hypercube centered at the example to predict its label. In contrast, traditional classifiers are point-based classification, i.e., given a testing example, the classifier predicts its label based on the testing example alone. Our evaluation results on MNIST and CIFAR-10 datasets demonstrate that our region-based classification can significantly mitigate evasion attacks without sacrificing classification accuracy on benign examples. Specifically, our region-based classification achieves the same classification accuracy on testing benign examples as point-based classification, but our region-based classification is significantly more robust than point-based classification to state-of-the-art evasion attacks.",
"title": ""
},
{
"docid": "88a1549275846a4fab93f5727b19e740",
"text": "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.",
"title": ""
}
] |
[
{
"docid": "34f0a6e303055fc9cdefa52645c27ed5",
"text": "Purpose – The purpose of this paper is to identify the factors that influence people to play socially interactive games on mobile devices. Based on network externalities and theory of uses and gratifications (U&G), it seeks to provide direction for further academic research on this timely topic. Design/methodology/approach – Based on 237 valid responses collected from online questionnaires, structural equation modeling technology was employed to examine the research model. Findings – The results reveal that both network externalities and individual gratifications significantly influence the intention to play social games on mobile devices. Time flexibility, however, which is one of the mobile device features, appears to contribute relatively little to the intention to play mobile social games. Originality/value – This research successfully applies a combination of network externalities theory and U&G theory to investigate the antecedents of players’ intentions to play mobile social games. This study is able to provide a better understanding of how two dimensions – perceived number of users/peers and individual gratification – influence mobile game playing, an insight that has not been examined previously in the mobile apps literature.",
"title": ""
},
{
"docid": "c6daad10814bafb3453b12cfac30b788",
"text": "In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows to capture fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture limited number of semantic alignments which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer image-text similarity. Our approach achieves the state-of-the-art results on the MSCOCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from image query, and 18.2% relatively in image retrieval with text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https: //github.com/kuanghuei/SCAN.",
"title": ""
},
{
"docid": "006ea5f44521c42ec513edc1cbff1c43",
"text": "In 2004 we published in this journal an article describing OntoLearn, one of the first systems to automatically induce a taxonomy from documents and Web sites. Since then, OntoLearn has continued to be an active area of research in our group and has become a reference work within the community. In this paper we describe our next-generation taxonomy learning methodology, which we name OntoLearn Reloaded. Unlike many taxonomy learning approaches in the literature, our novel algorithm learns both concepts and relations entirely from scratch via the automated extraction of terms, definitions, and hypernyms. This results in a very dense, cyclic and potentially disconnected hypernym graph. The algorithm then induces a taxonomy from this graph via optimal branching and a novel weighting policy. Our experiments show that we obtain high-quality results, both when building brand-new taxonomies and when reconstructing sub-hierarchies of existing taxonomies.",
"title": ""
},
{
"docid": "a55422a96369797c7d42cb77dc99c6dc",
"text": "In order to store massive image data in real-time system, a high performance Serial Advanced Technology Attachment[1] (SATA) controller is proposed in this paper. RocketIO GTX transceiver[2] realizes physical layer of SATA protocol. Link layer and transport layers are implemented in VHDL with programmable logic resources. Application layer is developed on POWERPC440 embedded in Xilinx Virtex-5 FPGA. The whole SATA protocol implement in a platform FPGA has better features in expansibility, scalability, improvability and in-system programmability comparing with realizing it using Application Specific Integrated Circuit (ASIC). The experiment results shown that the controller works accurately and stably and the maximal sustained orderly data transfer rate up to 110 MB/s when connect to SATA hard disk. The high performance of the host SATA controller makes it possible that cheap SATA hard disk instead expensive Small Computer System Interface (SCSI) hard disk in some application. The controller is very suited for high speed mass data storage in embedded system.",
"title": ""
},
{
"docid": "5c46e5fc52797636bf389c8196deea86",
"text": "An efficient single-phase Transformerless grid-connected voltage source inverter topology by using the proposed active virtual ground (AVG) technique is presented. With the AVG, the conventional output L filter can be reconfigured to LCL structure without adding additional inductor. High-frequency differential mode current ripple can be significantly suppressed comparing to the available single-phase grid-connected inverter topologies. Additionally, strong attenuation to the high-frequency common-mode current is achieved. It is particularly important for some applications such as photovoltaic and motor drives. High efficiency can be achieved due to fewer components involved in the conduction loss. Cost of the magnetic device can be reduced since the required inductance of the filter becomes smaller. Performance of the proposed inverter has been evaluated analytically. Experimental verification is performed on a 1-kW, 400-V input, and 110-V/60-Hz output prototype.",
"title": ""
},
{
"docid": "c56c45405e0a943e63ab035b11b9fd93",
"text": "We present a simple, but expressive type system that supports strong updates—updating a memory cell to hold values of unrelated types at different points in time. Our formulation is based upon a standard linear lambda calculus and, as a result, enjoys a simple semantic interpretation for types that is closely related to models for spatial logics. The typing interpretation is strong enough that, in spite of the fact that our core programming language supports shared, mutable references and cyclic graphs, every well-typed program terminates. We then consider extensions needed to model ML-style references, where the capability to access a reference cell is unrestricted, but strong updates are disallowed. Our extensions include a thaw primitive for re-gaining the capability to perform strong updates on unrestricted references. The thaw primitive is closely related to other mechanisms that support strong updates, such as CQUAL’s restrict.",
"title": ""
},
{
"docid": "48ea93efe1a1219bfb1a6b48c20bab99",
"text": "Understanding the content of user's image posts is a particularly interesting problem in social networks and web settings. Current machine learning techniques focus mostly on curated training sets of image-label pairs, and perform image classification given the pixels within the image. In this work we instead leverage the wealth of information available from users: firstly, we employ user hashtags to capture the description of image content; and secondly, we make use of valuable contextual information about the user. We show how user metadata (age, gender, etc.) combined with image features derived from a convolutional neural network can be used to perform hashtag prediction. We explore two ways of combining these heterogeneous features into a learning framework: (i) simple concatenation; and (ii) a 3-way multiplicative gating, where the image model is conditioned on the user metadata. We apply these models to a large dataset of de-identified Facebook posts and demonstrate that modeling the user can significantly improve the tag prediction quality over current state-of-the-art methods.",
"title": ""
},
{
"docid": "db4ea0aca8add80d8674abb2ecf2276f",
"text": "We combine polynomial techniques with some geometric arguments to obtain restrictions of the structure of spherical designs with fixed odd strength and odd cardinality. Our bounds for the extreme inner products of such designs allow us to prove nonexistence results in many cases. Applications are shown for 7-designs. DOI: 10.1134/S0032946009020033",
"title": ""
},
{
"docid": "4324a73e1d771e927632f3089cad3911",
"text": "Generating polygonal maps from RGB-D data is an active field of research in robotic mapping. Kinect Fusion and related algorithms provide means to generate reconstructions of large environments. However, most available implementations generate topological artifacts like redundant vertices and triangles. In this paper we present a novel data structure that allows to generate topologically consistent triangle meshes from RGB-D data without additional filtering.",
"title": ""
},
{
"docid": "fddf65bce6abf403cf4f7d7cfcdd835f",
"text": "Photorealistic image stylization concerns transferring style of a reference photo to a content photo with the constraint that the stylized photo should remain photorealistic. While several photorealistic image stylization methods exist, they tend to generate spatially inconsistent stylizations with noticeable artifacts. In this paper, we propose a method to address these issues. The proposed method consists of a stylization step and a smoothing step. While the stylization step transfers the style of the reference photo to the content photo, the smoothing step ensures spatially consistent stylizations. Each of the steps has a closed-form solution and can be computed efficiently. We conduct extensive experimental validations. The results show that the proposed method generates photorealistic stylization outputs that are more preferred by human subjects as compared to those by the competing methods while running much faster. Source code and additional results are available at https://github.com/NVIDIA/FastPhotoStyle.",
"title": ""
},
{
"docid": "9c832a2f70b4ff39c9572b73e739a409",
"text": "We investigate the behavior of convolutional neural networks (CNN) in the presence of label noise. We show empirically that CNN prediction for a given test sample depends on the labels of the training samples in its local neighborhood. This is similar to the way that the K-nearest neighbors (K-NN) classifier works. With this understanding, we derive an analytical expression for the expected accuracy of a KNN, and hence a CNN, classifier for any level of noise. In particular, we show that K-NN, and CNN, are resistant to label noise that is randomly spread across the training set, but are very sensitive to label noise that is concentrated. Experiments on real datasets validate our analytical expression by showing that they match the empirical results for varying degrees of label noise.",
"title": ""
},
{
"docid": "7085517a7d02d98bd9ab52602e1bd25b",
"text": "Many analysis and modeling problems done today for information technology applications lead to the solution of system problems. In the development of these solutions, reasoning is a major component. The reasoning component, which is normally neglected, can be captured in Rationale Models. Rationale Models represent the reasoning that leads to the system solution. This reasoning is defined as Design Rationale (DR). There have been a number of research studies into DR; however, in this research, it was found that industry has neglected DR in their system analysis because of the increased time and effort required to capture and implement DR. Some of the benefits of DR are: 1) maintenance is more efficient and effective, 2) system scalability is increased, and 3) training of users and developers is easier. This paper proposes a systematic approach to the capture of argumentative DR and an integration of argumentative DR with the Object-Oriented system development lifecycle. Change is a constant in the implementation and use of systems; hence, this paper also raises the issue of “how should argumentative DR be stored and integrated with the system to maximize its utility to the system.”",
"title": ""
},
{
"docid": "d414d5c5cfe60ba24b62f2b94fccc973",
"text": "In recent years there have been multiple successful attempts tackling document processing problems separately by designing task specific hand-tuned strategies. We argue that the diversity of historical document processing tasks prohibits to solve them one at a time and shows a need for designing generic approaches in order to handle the variability of historical series. In this paper, we address multiple tasks simultaneously such as page extraction, baseline extraction, layout analysis or multiple typologies of illustrations and photograph extraction. We propose an open-source implementation of a CNN-based pixel-wise predictor coupled with task dependent post-processing blocks. We show that a single CNN-architecture can be used across tasks with competitive results. Moreover most of the task-specific post-precessing steps can be decomposed in a small number of simple and standard reusable operations, adding to the flexibility of our approach.",
"title": ""
},
{
"docid": "60cfdc554e1078263370514ec3f04a90",
"text": "Stylistic variation is critical to render the utterances generated by conversational agents natural and engaging. In this paper, we focus on sequence-to-sequence models for open-domain dialogue response generation and propose a new method to evaluate the extent to which such models are able to generate responses that reflect different personality traits.",
"title": ""
},
{
"docid": "2a827ddb30be8cdc3ecaf09da2e898de",
"text": "There is an increasing interest on accelerating neural networks for real-time applications. We study the studentteacher strategy, in which a small and fast student network is trained with the auxiliary information learned from a large and accurate teacher network. We propose to use conditional adversarial networks to learn the loss function to transfer knowledge from teacher to student. The proposed method is particularly effective for relatively small student networks. Moreover, experimental results show the effect of network size when the modern networks are used as student. We empirically study the trade-off between inference time and classification accuracy, and provide suggestions on choosing a proper student network.",
"title": ""
},
{
"docid": "f5ba29303b141801411ae07de79a2afd",
"text": "Information Security has become an important issue in modern world as the popularity and infiltration of internet commerce and communication technologies has emerged, making them a prospective medium to the security threats. To surmount these security threats modern data communications uses cryptography an effective, efficient and essential component for secure transmission of information by implementing security parameter counting Confidentiality, Authentication, accountability, and accuracy. To achieve data security different cryptographic algorithms (Symmetric & Asymmetric) are used that jumbles data in to scribbled format that can only be reversed by the user that have to desire key. This paper presents a comprehensive comparative analysis of different existing cryptographic algorithms (symmetric) based on their Architecture, Scalability, Flexibility, Reliability, Security and Limitation that are essential for secure communication (Wired or Wireless).",
"title": ""
},
{
"docid": "4b2e6f5a0ce30428377df72d8350d637",
"text": "Sentence matching is widely used in various natural language tasks such as natural language inference, paraphrase identification, and question answering. For these tasks, understanding logical and semantic relationship between two sentences is required but it is yet challenging. Although attention mechanism is useful to capture the semantic relationship and to properly align the elements of two sentences, previous methods of attention mechanism simply use a summation operation which does not retain original features enough. Inspired by DenseNet, a densely connected convolutional network, we propose a densely-connected co-attentive recurrent neural network, each layer of which uses concatenated information of attentive features as well as hidden features of all the preceding recurrent layers. It enables preserving the original and the co-attentive feature information from the bottommost word embedding layer to the uppermost recurrent layer. To alleviate the problem of an ever-increasing size of feature vectors due to dense concatenation operations, we also propose to use an autoencoder after dense concatenation. We evaluate our proposed architecture on highly competitive benchmark datasets related to sentence matching. Experimental results show that our architecture, which retains recurrent and attentive features, achieves state-of-the-art performances for most of the tasks.",
"title": ""
},
{
"docid": "ea525c15c1cbb4a4a716e897287fd770",
"text": "This study explored student teachers’ cognitive presence and learning achievements by integrating the SOP Model in which self-study (S), online group discussion (O) and double-stage presentations (P) were implemented in the flipped classroom. The research was conducted at a university in Taiwan with 31 student teachers. Pre- and post-worksheets measuring knowledge of educational issues were administered before and after group discussion. Quantitative content analysis and behavior sequential analysis were used to evaluate cognitive presence, while a paired-samples t-test analyzed learning achievement. The results showed that the participants had the highest proportion of “Exploration,” the second largest rate of “Integration,” but rarely reached “Resolution.” The participants’ achievements were greatly enhanced using the SOP Model in terms of the scores of the pre- and post-worksheets. Moreover, the groups with a higher proportion of “Integration” (I) and “Resolution” (R) performed best in the post-worksheets and were also the most progressive groups. Both high- and low-rated groups had significant correlations between the “I” and “R” phases, with “I”→“R” in the low-rated groups but “R”→“I” in the high-rated groups. The instructional design of the SOP Model can be a reference for future pedagogical implementations in the higher educational context.",
"title": ""
},
{
"docid": "fc2a0f6979c2520cee8f6e75c39790a8",
"text": "In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires to generate semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates contents for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global contents consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.",
"title": ""
},
{
"docid": "be079999e630df22254e7aa8a9ecdcae",
"text": "Strokes are one of the leading causes of death and disability in the UK. There are two main types of stroke: ischemic and hemorrhagic, with the majority of stroke patients suffering from the former. During an ischemic stroke, parts of the brain lose blood supply, and if not treated immediately, can lead to irreversible tissue damage and even death. Ischemic lesions can be detected by diffusion weighted magnetic resonance imaging (DWI), but localising and quantifying these lesions can be a time consuming task for clinicians. Work has already been done in training neural networks to segment these lesions, but these frameworks require a large amount of manually segmented 3D images, which are very time consuming to create. We instead propose to use past examinations of stroke patients which consist of DWIs, corresponding radiological reports and diagnoses in order to develop a learning framework capable of localising lesions. This is motivated by the fact that the reports summarise the presence, type and location of the ischemic lesion for each patient, and thereby provide more context than a single diagnostic label. Acute lesions prediction is aided by an attention mechanism which implicitly learns which regions within the DWI are most relevant to the classification.",
"title": ""
}
] |
scidocsrr
|
008e4caf64e9d155ec29e8b7ce4f2aaf
|
Effective summarization method of text documents
|
[
{
"docid": "64fc1433249bb7aba59e0a9092aeee5e",
"text": "In this paper, we propose two generic text summarization methods that create text summaries by ranking and extracting sentences from the original documents. The first method uses standard IR methods to rank sentence relevances, while the second method uses the latent semantic analysis technique to identify semantically important sentences, for summary creations. Both methods strive to select sentences that are highly ranked and different from each other. This is an attempt to create a summary with a wider coverage of the document's main content and less redundancy. Performance evaluations on the two summarization methods are conducted by comparing their summarization outputs with the manual summaries generated by three independent human evaluators. The evaluations also study the influence of different VSM weighting schemes on the text summarization performances. Finally, the causes of the large disparities in the evaluators' manual summarization results are investigated, and discussions on human text summarization patterns are presented.",
"title": ""
}
] |
[
{
"docid": "f66609f826cae05b1b330f138c6e556a",
"text": "We describe pke, an open source python-based keyphrase extraction toolkit. It provides an end-to-end keyphrase extraction pipeline in which each component can be easily modified or extended to develop new approaches. pke also allows for easy benchmarking of state-of-the-art keyphrase extraction approaches, and ships with supervised models trained on the SemEval-2010 dataset (Kim et al., 2010).",
"title": ""
},
{
"docid": "c7daf28d656a9e51e5a738e70beeadcf",
"text": "We present a taxonomy for Information Visualization (IV) that characterizes it in terms of data, task, skill and context, as well as a number of dimensions that relate to the input and output hardware, the software tools, as well as user interactions and human perceptual abilities. We illustrate the utility of the taxonomy by focusing particularly on the information retrieval task and the importance of taking into account human perceptual capabilities and limitations. Although the relevance of Psychology to IV is often recognised, we have seen relatively little translation of psychological results and theory to practical IV applications. This paper targets the better development of information visualizations through the introduction of a framework delineating the major factors in interface development. We believe that higher quality visualizations will result from structured developments that take into account these considerations and that the framework will also serve to assist the development of effective evaluation and assessment processes.",
"title": ""
},
{
"docid": "c694936a9b8f13654d06b72c077ed8f4",
"text": "Druid is an open source data store designed for real-time exploratory analytics on large data sets. The system combines a column-oriented storage layout, a distributed, shared-nothing architecture, and an advanced indexing structure to allow for the arbitrary exploration of billion-row tables with sub-second latencies. In this paper, we describe Druid’s architecture, and detail how it supports fast aggregations, flexible filters, and low latency data ingestion.",
"title": ""
},
{
"docid": "5bbd4675eb1b408895f29340c3cd074a",
"text": "We performed underground real-time tests to obtain alpha particle-induced soft error rates (α-SER) with high accuracies for SRAMs with 180 nm – 90 nm technologies and studied the scaling trend of α-SERs. In order to estimate the maximum permissive rate of alpha emission from package resin, the α-SER was compared to the neutron-induced soft error rate (n-SER) obtained from accelerated tests. We found that as devices are scaled down, the α-SER increased while the n-SER slightly decreased, and that the α-SER could be greater than the n-SER in 90 nm technology even when the ultra-low-alpha (ULA) grade, with the alpha emission rate ≤ 1 × 10<sup>−3</sup> cm<sup>−2</sup>h<sup>−1</sup>, was used for package resin. We also performed computer simulations to estimate scaling trends of both α-SER and n-SER up to 45 nm technologies, and noticed that the α-SER decreased from 65 nm technology while the n-SER increased from 45 nm technology due to direct ionization from the protons generated in the n + Si nuclear reaction.",
"title": ""
},
{
"docid": "de38fa4dc01bd1ef779f377cfcbc52f7",
"text": "Like all software, mobile applications (\"apps\") must be adequately tested to gain confidence that they behave correctly. Therefore, in recent years, researchers and practitioners alike have begun to investigate ways to automate apps testing. In particular, because of Android's open source nature and its large share of the market, a great deal of research has been performed on input generation techniques for apps that run on the Android operating systems. At this point in time, there are in fact a number of such techniques in the literature, which differ in the way they generate inputs, the strategy they use to explore the behavior of the app under test, and the specific heuristics they use. To better understand the strengths and weaknesses of these existing approaches, and get general insight on ways they could be made more effective, in this paper we perform a thorough comparison of the main existing test input generation tools for Android. In our comparison, we evaluate the effectiveness of these tools, and their corresponding techniques, according to four metrics: ease of use, ability to work on multiple platforms, code coverage, and ability to detect faults. Our results provide a clear picture of the state of the art in input generation for Android apps and identify future research directions that, if suitably investigated, could lead to more effective and efficient testing tools for Android.",
"title": ""
},
{
"docid": "7240d65e0bc849a569d840a461157b2c",
"text": "Deep convolutional neural networks have achieved great success on image recognition tasks. Yet, it is non-trivial to transfer the state-of-the-art image recognition networks to videos as per-frame evaluation is too slow and unaffordable. We present deep feature flow, a fast and accurate framework for video recognition. It runs the expensive convolutional sub-network only on sparse key frames and propagates their deep feature maps to other frames via a flow field. It achieves significant speedup as flow computation is relatively fast. The end-to-end training of the whole architecture significantly boosts the recognition accuracy. Deep feature flow is flexible and general. It is validated on two recent large scale video datasets. It makes a large step towards practical video recognition. Code would be released.",
"title": ""
},
{
"docid": "884c269755bb19bd92e1add39156914a",
"text": "Stress is a well-known risk factor in the development of addiction and in addiction relapse vulnerability. A series of population-based and epidemiological studies have identified specific stressors and individual-level variables that are predictive of substance use and abuse. Preclinical research also shows that stress exposure enhances drug self-administration and reinstates drug seeking in drug-experienced animals. The deleterious effects of early life stress, child maltreatment, and accumulated adversity on alterations in the corticotropin releasing factor and hypothalamic-pituitary-adrenal axis (CRF/HPA), the extrahypothalamic CRF, the autonomic arousal, and the central noradrenergic systems are also presented. The effects of these alterations on the corticostriatal-limbic motivational, learning, and adaptation systems that include mesolimbic dopamine, glutamate, and gamma-amino-butyric acid (GABA) pathways are discussed as the underlying pathophysiology associated with stress-related risk of addiction. The effects of regular and chronic drug use on alterations in these stress and motivational systems are also reviewed, with specific attention to the impact of these adaptations on stress regulation, impulse control, and perpetuation of compulsive drug seeking and relapse susceptibility. Finally, research gaps in furthering our understanding of the association between stress and addiction are presented, with the hope that addressing these unanswered questions will significantly influence new prevention and treatment strategies to address vulnerability to addiction.",
"title": ""
},
{
"docid": "5350af2d42f9321338e63666dcd42343",
"text": "Robot-aided physical therapy should encourage subject's voluntary participation to achieve rapid motor function recovery. In order to enhance subject's cooperation during training sessions, the robot should allow deviation in the prescribed path depending on the subject's modified limb motions subsequent to the disability. In the present work, an interactive training paradigm based on the impedance control was developed for a lightweight intrinsically compliant parallel ankle rehabilitation robot. The parallel ankle robot is powered by pneumatic muscle actuators (PMAs). The proposed training paradigm allows the patients to modify the robot imposed motions according to their own level of disability. The parallel robot was operated in four training modes namely position control, zero-impedance control, nonzero-impedance control with high compliance, and nonzero-impedance control with low compliance to evaluate the performance of proposed control scheme. The impedance control scheme was evaluated on 10 neurologically intact subjects. The experimental results show that an increase in robotic compliance encouraged subjects to participate more actively in the training process. This work advances the current state of the art in the compliant actuation of parallel ankle rehabilitation robots in the context of interactive training.",
"title": ""
},
{
"docid": "f99fe9c7aaf417a3893c264b2602a9f3",
"text": "A male infant was brought to hospital aged eight weeks. He was born at full term via normal vaginal home delivery without any complications. The delivery was conducted by a traditional birth attendant and Apgar scores at birth were unrecorded. One week after the birth, the parents noticed an increase in size of the baby’s breasts. In accordance with cultural practice, they massaged the breasts in order to express milk, hoping that by doing so the size of the breasts would return to normal. However, the size of the breasts increased. They also reported that milk was being discharged spontaneously through the nipples. There was no history of drug intake neither by the mother nor the baby. The infant appeared clinically well and showed no signs of irritability. On examination, bilateral breast enlargement was observed of approximate diameter 6 cm. No tenderness, purulent discharge or any sign of inflammation were observed (Figure 1). Systemic and genital examination were unremarkable. Routine blood investigations were normal. Firm advice was given not to massage the breasts of the baby.",
"title": ""
},
{
"docid": "82857fedec78e8317498e3c66268d965",
"text": "In this paper, we provide an improved evolutionary algorithm for bilevel optimization. It is an extension of a recently proposed Bilevel Evolutionary Algorithm based on Quadratic Approximations (BLEAQ). Bilevel optimization problems are known to be difficult and computationally demanding. The recently proposed BLEAQ approach has been able to bring down the computational expense significantly as compared to the contemporary approaches. The strategy proposed in this paper further improves the algorithm by incorporating archiving and local search. Archiving is used to store the feasible members produced during the course of the algorithm that provide a larger pool of members for better quadratic approximations of optimal lower level solutions. Frequent local searches at upper level supported by the quadratic approximations help in faster convergence of the algorithm. The improved results have been demonstrated on two different sets of test problems, and comparison results against the contemporary approaches are also provided.",
"title": ""
},
{
"docid": "18f877aff5ed5cc5711d92089e4c8d3e",
"text": "The purpose of this paper is twofold: (i) we argue that the structure of commonsense knowledge must be discovered, rather than invented; and (ii) we argue that natural language, which is the best known theory of our (shared) commonsense knowledge, should itself be used as a guide to discovering the structure of commonsense knowledge. In addition to suggesting a systematic method to the discovery of the structure of commonsense knowledge, the method we propose seems to also provide an explanation for a number of phenomena in natural language, such as metaphor, intensionality, and the semantics of nominal compounds. Admittedly, our ultimate goal is quite ambitious, and it is no less than the systematic ‘discovery’ of a well-typed ontology of commonsense knowledge, and the subsequent formulation of the long-awaited goal of a meaning algebra.",
"title": ""
},
{
"docid": "d1ad10c873fd5a02d1ce072b4ffc788c",
"text": "Zero-shot learning for visual recognition, e.g., object and action recognition, has recently attracted a lot of attention. However, it still remains challenging in bridging the semantic gap between visual features and their underlying semantics and transferring knowledge to semantic categories unseen during learning. Unlike most of the existing zero-shot visual recognition methods, we propose a stagewise bidirectional latent embedding framework of two subsequent learning stages for zero-shot visual recognition. In the bottom–up stage, a latent embedding space is first created by exploring the topological and labeling information underlying training data of known classes via a proper supervised subspace learning algorithm and the latent embedding of training data are used to form landmarks that guide embedding semantics underlying unseen classes into this learned latent space. In the top–down stage, semantic representations of unseen-class labels in a given label vocabulary are then embedded to the same latent space to preserve the semantic relatedness between all different classes via our proposed semi-supervised Sammon mapping with the guidance of landmarks. Thus, the resultant latent embedding space allows for predicting the label of a test instance with a simple nearest-neighbor rule. To evaluate the effectiveness of the proposed framework, we have conducted extensive experiments on four benchmark datasets in object and action recognition, i.e., AwA, CUB-200-2011, UCF101 and HMDB51. The experimental results under comparative studies demonstrate that our proposed approach yields the state-of-the-art performance under inductive and transductive settings.",
"title": ""
},
{
"docid": "5b0530f94f476754034c92292e02b390",
"text": "Many seemingly simple questions that individual users face in their daily lives may actually require substantial number of computing resources to identify the right answers. For example, a user may want to determine the right thermostat settings for different rooms of a house based on a tolerance range such that the energy consumption and costs can be maximally reduced while still offering comfortable temperatures in the house. Such answers can be determined through simulations. However, some simulation models as in this example are stochastic, which require the execution of a large number of simulation tasks and aggregation of results to ascertain if the outcomes lie within specified confidence intervals. Some other simulation models, such as the study of traffic conditions using simulations may need multiple instances to be executed for a number of different parameters. Cloud computing has opened up new avenues for individuals and organizations with limited resources to obtain answers to problems that hitherto required expensive and computationally-intensive resources. This paper presents SIMaaS, which is a cloud-based Simulation-as-a-Service to address these challenges. We demonstrate how lightweight solutions using Linux containers (e.g., Docker) are better suited to support such services instead of heavyweight hypervisor-based solutions, which are shown to incur substantial overhead in provisioning virtual machines on-demand. Empirical results validating our claims are presented in the context of two",
"title": ""
},
{
"docid": "aa2ddbfc3bb1aa854d1c576927dc2d30",
"text": "B-scan ultrasound provides a non-invasive low-cost imaging solution to primary care diagnostics. The inherent speckle noise in the images produced by this technique introduces uncertainty in the representation of their textural characteristics. To cope with the uncertainty, we propose a novel fuzzy feature extraction method to encode local texture. The proposed method extends the Local Binary Pattern (LBP) approach by incorporating fuzzy logic in the representation of local patterns of texture in ultrasound images. Fuzzification allows a Fuzzy Local Binary Pattern (FLBP) to contribute to more than a single bin in the distribution of the LBP values used as a feature vector. The proposed FLBP approach was experimentally evaluated for supervised classification of nodular and normal samples from thyroid ultrasound images. The results validate its effectiveness over LBP and other common feature extraction methods.",
"title": ""
},
{
"docid": "e6d5f3c9a58afcceae99ff522d6dfa81",
"text": "Strategic information systems planning (SISP) is a key concern facing top business and information systems executives. Observers have suggested that both too little and too much SISP can prove ineffective. Hypotheses examine the expected relationship between comprehensiveness and effectiveness in five SISP planning phases. They predict a nonlinear, inverted-U relationship thus suggesting the existence of an optimal level of comprehensiveness. A survey collected data from 161 US information systems executives. After an extensive validation of the constructs, the statistical analysis supported the hypothesis in a Strategy Implementation Planning phase, but not in terms of the other four SISP phases. Managers may benefit from the knowledge that both too much and too little implementation planning may hinder SISP success. Future researchers should investigate why the hypothesis was supported for that phase, but not the others.",
"title": ""
},
{
"docid": "4d1dfdfa04b60f1e649d5f234e8b417f",
"text": "One way hash functions are a major tool in cryptography. DES is the best known and most widely used encryption function in the commercial world today. Generating a one-way hash function which is secure if DES is a “good” block cipher would therefore be useful. We show three such functions which are secure if DES is a good random block cipher.",
"title": ""
},
{
"docid": "1eb415cae9b39655849537cdc007f51f",
"text": "Aesthetics has been the subject of long-standing debates by philosophers and psychologists alike. In psychology, it is generally agreed that aesthetic experience results from an interaction between perception, cognition, and emotion. By experimental means, this triad has been studied in the field of experimental aesthetics, which aims to gain a better understanding of how aesthetic experience relates to fundamental principles of human visual perception and brain processes. Recently, researchers in computer vision have also gained interest in the topic, giving rise to the field of computational aesthetics. With computing hardware and methodology developing at a high pace, the modeling of perceptually relevant aspects of aesthetic stimuli has a huge potential. In this review, we present an overview of recent developments in computational aesthetics and how they relate to experimental studies. In the first part, we cover topics such as the prediction of ratings, style and artist identification as well as computational methods in art history, such as the detection of influences among artists or forgeries. We also describe currently used computational algorithms, such as classifiers and deep neural networks. In the second part, we summarize results from the field of experimental aesthetics and cover several isolated image properties that are believed to have an effect on the aesthetic appeal of visual stimuli. Their relation to each other and to findings from computational aesthetics are discussed. Moreover, we compare the strategies in the two fields of research and suggest that both fields would greatly profit from a joint research effort. We hope to encourage researchers from both disciplines to work more closely together in order to understand visual aesthetics from an integrated point of view.",
"title": ""
},
{
"docid": "959ba9c0929e36a8ef4a22a455ed947a",
"text": "The discovery of causal relationships between a set of observed variables is a fundamental problem in science. For continuous-valued data linear acyclic causal models with additive noise are often used because these models are well understood and there are well-known methods to fit them to data. In reality, of course, many causal relationships are more or less nonlinear, raising some doubts as to the applicability and usefulness of purely linear methods. In this contribution we show that the basic linear framework can be generalized to nonlinear models. In this extended framework, nonlinearities in the data-generating process are in fact a blessing rather than a curse, as they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms to be identified. In addition to theoretical results we show simulations and some simple real data experiments illustrating the identification power provided by nonlinearities.",
"title": ""
},
{
"docid": "42e7083e287bebc0a8bde367e4d4b352",
"text": "This paper proposes a framework for security services using Software-Defined Networking (SDN) and Interface to Network Security Functions (I2NSF). It specifies requirements for such a framework for security services based on network virtualization. It describes two representative security systems, such as (i) centralized firewall system and (ii) DDoS-attack mitigation system. For each service, this paper discusses the limitations of existing systems and presents a possible SDN-based system to protect network resources by controlling suspicious and dangerous network traffic.",
"title": ""
}
] |
scidocsrr
|
7eef08a056a8837dc6ac34c2bc28d054
|
A Survey of the Stream Processing Landscape
|
[
{
"docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0",
"text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.",
"title": ""
},
{
"docid": "b206a5f5459924381ef6c46f692c7052",
"text": "The Konstanz Information Miner is a modular environment, which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables simple integration of new algorithms and tools as well as data manipulation or visualization methods in the form of new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture, briefly sketch how new nodes can be incorporated, and highlight some of the new features of version 2.0.",
"title": ""
},
{
"docid": "0de1e9759b4c088a15d84a108ba21c33",
"text": "MillWheel is a framework for building low-latency data-processing applications that is widely used at Google. Users specify a directed computation graph and application code for individual nodes, and the system manages persistent state and the continuous flow of records, all within the envelope of the framework’s fault-tolerance guarantees. This paper describes MillWheel’s programming model as well as its implementation. The case study of a continuous anomaly detector in use at Google serves to motivate how many of MillWheel’s features are used. MillWheel’s programming model provides a notion of logical time, making it simple to write time-based aggregations. MillWheel was designed from the outset with fault tolerance and scalability in mind. In practice, we find that MillWheel’s unique combination of scalability, fault tolerance, and a versatile programming model lends itself to a wide variety of problems at Google.",
"title": ""
}
] |
[
{
"docid": "b7f85441bc39452f7f128d93ec823eb9",
"text": "We investigate the problem of learning to rank with document retrieval from the perspective of learning for multiple objective functions. We present solutions to two open problems in learning to rank: first, we show how multiple measures can be combined into a single graded measure that can be learned. This solves the problem of learning from a 'scorecard' of measures by making such scorecards comparable, and we show results where a standard web relevance measure (NDCG) is used for the top-tier measure, and a relevance measure derived from click data is used for the second-tier measure; the second-tier measure is shown to significantly improve while leaving the top-tier measure largely unchanged. Second, we note that the learning-to-rank problem can itself be viewed as changing as the ranking model learns: for example, early in learning, adjusting the rank of all documents can be advantageous, but later during training, it becomes more desirable to concentrate on correcting the top few documents for each query. We show how an analysis of these problems leads to an improved, iteration-dependent cost function that interpolates between a cost function that is more appropriate for early learning, with one that is more appropriate for late-stage learning. The approach results in a significant improvement in accuracy with the same size models. We investigate these ideas using LambdaMART, a state-of-the-art ranking algorithm.",
"title": ""
},
{
"docid": "669962069f0e6ce1a0edafe8e81197bc",
"text": "Associations of Machiavellianism (Mach) with self-report and performance emotional intelligence (EI) and with personality were examined. The possible existence of an emotional manipulation capability, not covered within current EI measures, was also examined by constructing an emotional manipulation scale. Mach was found to be negatively correlated with self-report and performance EI, and also with Agreeableness and Conscientiousness. Emotional manipulation was positively correlated with Mach but unrelated to EI. Thus high Machs endorse emotionally-manipulative behaviour, although the extent to which they are successful in this behaviour, given the negative Mach/EI association, remains to be established.",
"title": ""
},
{
"docid": "fbb164c5c0b4db853b92e0919c260331",
"text": "The dielectric properties of tissues have been extracted from the literature of the past five decades and presented in a graphical format. The purpose is to assess the current state of knowledge, expose the gaps there are and provide a basis for the evaluation and analysis of corresponding data from an on-going measurement programme.",
"title": ""
},
{
"docid": "9d979b8cf09dd54b28e314e2846f02a6",
"text": "Purpose – The objective of this paper is to analyse whether individuals’ socioeconomic characteristics – age, gender and income – influence their online shopping behaviour. The individuals analysed are experienced e-shoppers i.e. individuals who often make purchases on the internet. Design/methodology/approach – The technology acceptance model was broadened to include previous use of the internet and perceived self-efficacy. The perceptions and behaviour of e-shoppers are based on their own experiences. The information obtained has been tested using causal and multi-sample analyses. Findings – The results show that socioeconomic variables moderate neither the influence of previous use of the internet nor the perceptions of e-commerce; in short, they do not condition the behaviour of the experienced e-shopper. Practical implications – The results obtained help to determine that once individuals attain the status of experienced e-shoppers their behaviour is similar, independently of their socioeconomic characteristics. The internet has become a marketplace suitable for all ages and incomes and both genders, and thus the prejudices linked to the advisability of selling certain products should be revised. Originality/value – Previous research related to the socioeconomic variables affecting e-commerce has been aimed at forecasting who is likely to make an initial online purchase. In contrast to the majority of existing studies, it is considered that the current development of the online environment should lead to analysis of a new kind of e-shopper (experienced purchaser), whose behaviour differs from that studied at the outset of this research field. The experience acquired with online shopping nullifies the importance of socioeconomic characteristics.",
"title": ""
},
{
"docid": "db2cd0762b560faf3aaf5e27ad3e13a1",
"text": "Soil is an excellent niche of growth of many microorganisms: protozoa, fungi, viruses, and bacteria. Some microorganisms are able to colonize soil surrounding plant roots, the rhizosphere, making them come under the influence of plant roots (Hiltner 1904; Kennedy 2005). These bacteria are named rhizobacteria. Rhizobacteria are rhizosphere competent bacteria able to multiply and colonize plant roots at all stages of plant growth, in the presence of a competing microflora (Antoun and Kloepper 2001) where they are in contact with other microorganisms. This condition is wildly encountered in natural, non-autoclaved soils. Generally, interactions between plants and microorganisms can be classified as pathogenic, saprophytic, and beneficial (Lynch 1990). Beneficial interactions involve plant growth promoting rhizobacteria (PGPR), generally refers to a group of soil and rhizosphere free-living bacteria colonizing roots in a competitive environment and exerting a beneficial effect on plant growth (Kloepper and Schroth 1978; Lazarovits and Nowak 1997; Kloepper et al. 1989; Kloepper 2003; Bakker et al. 2007). However, numerous researchers tend to enlarge this restrictive definition of rhizobacteria as any root-colonizing bacteria and consider endophytic bacteria in symbiotic association: Rhizobia with legumes and the actinomycete Frankia associated with some phanerogams as PGPR genera. Among PGPRs are representatives of the following genera: Acinetobacter, Agrobacterium, Arthrobacter, Azoarcus, Azospirillum, Azotobacter, Bacillus, Burkholderia, Enterobacter, Klebsiella, Pseudomonas, Rhizobium, Serratia, and Thiobacillus. Some of these genera such as Azoarcus spp., Herbaspirillum, and Burkholderia include endophytic species.",
"title": ""
},
{
"docid": "288f32db8af5789e6e6049fa4cec0334",
"text": "Trusted execution environments, and particularly the Software Guard eXtensions (SGX) included in recent Intel x86 processors, gained significant traction in recent years. A long track of research papers, and increasingly also realworld industry applications, take advantage of the strong hardware-enforced confidentiality and integrity guarantees provided by Intel SGX. Ultimately, enclaved execution holds the compelling potential of securely offloading sensitive computations to untrusted remote platforms. We present Foreshadow, a practical software-only microarchitectural attack that decisively dismantles the security objectives of current SGX implementations. Crucially, unlike previous SGX attacks, we do not make any assumptions on the victim enclave’s code and do not necessarily require kernel-level access. At its core, Foreshadow abuses a speculative execution bug in modern Intel processors, on top of which we develop a novel exploitation methodology to reliably leak plaintext enclave secrets from the CPU cache. We demonstrate our attacks by extracting full cryptographic keys from Intel’s vetted architectural enclaves, and validate their correctness by launching rogue production enclaves and forging arbitrary local and remote attestation responses. The extracted remote attestation keys affect millions of devices.",
"title": ""
},
{
"docid": "f1018166da0922b5428bd1b37e2120ee",
"text": "In many water distribution systems, a significant amount of water is lost because of leakage during transit from the water treatment plant to consumers. As a result, water leakage detection and localization have been a consistent focus of research. Typically, diagnosis or detection systems based on sensor signals incur significant computational and time costs, whereas the system performance depends on the features selected as input to the classifier. In this paper, to solve this problem, we propose a novel, fast, and accurate water leakage detection system with an adaptive design that fuses a one-dimensional convolutional neural network and a support vector machine. We also propose a graph-based localization algorithm to determine the leakage location. An actual water pipeline network is represented by a graph network and it is assumed that leakage events occur at virtual points on the graph. The leakage location at which costs are minimized is estimated by comparing the actual measured signals with the virtually generated signals. The performance was validated on a wireless sensor network based test bed, deployed on an actual WDS. Our proposed methods achieved 99.3% leakage detection accuracy and a localization error of less than 3 m.",
"title": ""
},
{
"docid": "c187a6ad17503d269fe4c3a03fc4fd89",
"text": "Despite the widespread support for live migration of Virtual Machines (VMs) in current hypervisors, these have significant shortcomings when it comes to migration of certain types of VMs. More specifically, with existing algorithms, there is a high risk of service interruption when migrating VMs with high workloads and/or over low-bandwidth networks. In these cases, VM memory pages are dirtied faster than they can be transferred over the network, which leads to extended migration downtime. In this contribution, we study the application of delta compression during the transfer of memory pages in order to increase migration throughput and thus reduce downtime. The delta compression live migration algorithm is implemented as a modification to the KVM hypervisor. Its performance is evaluated by migrating VMs running different type of workloads and the evaluation demonstrates a significant decrease in migration downtime in all test cases. In a benchmark scenario the downtime is reduced by a factor of 100. In another scenario a streaming video server is live migrated with no perceivable downtime to the clients while the picture is frozen for eight seconds using standard approaches. Finally, in an enterprise application scenario, the delta compression algorithm successfully live migrates a very large system that fails after migration using the standard algorithm. Finally, we discuss some general effects of delta compression on live migration and analyze when it is beneficial to use this technique.",
"title": ""
},
{
"docid": "838bd8a38f9d67d768a34183c72da07d",
"text": "Jacobsen syndrome (JS), a rare disorder with multiple dysmorphic features, is caused by the terminal deletion of chromosome 11q. Typical features include mild to moderate psychomotor retardation, trigonocephaly, facial dysmorphism, cardiac defects, and thrombocytopenia, though none of these features are invariably present. The estimated occurrence of JS is about 1/100,000 births. The female/male ratio is 2:1. The patient admitted to our clinic at 3.5 years of age with a cardiac murmur and facial anomalies. Facial anomalies included trigonocephaly with bulging forehead, hypertelorism, telecanthus, downward slanting palpebral fissures, and a carp-shaped mouth. The patient also had strabismus. An echocardiogram demonstrated perimembranous aneurysmatic ventricular septal defect and a secundum atrial defect. The patient was <3rd percentile for height and weight and showed some developmental delay. Magnetic resonance imaging (MRI) showed hyperintensive gliotic signal changes in periventricular cerebral white matter, and leukodystrophy was suspected. Chromosomal analysis of the patient showed terminal deletion of chromosome 11. The karyotype was designated 46, XX, del(11) (q24.1). A review of published reports shows that the severity of the observed clinical abnormalities in patients with JS is not clearly correlated with the extent of the deletion. Most of the patients with JS had short stature, and some of them had documented growth hormone deficiency, or central or primary hypothyroidism. In patients with the classical phenotype, the diagnosis is suspected on the basis of clinical findings: intellectual disability, facial dysmorphic features and thrombocytopenia. The diagnosis must be confirmed by cytogenetic analysis. For patients who survive the neonatal period and infancy, the life expectancy remains unknown. In this report, we describe a patient with the clinical features of JS without thrombocytopenia. 
To our knowledge, this is the first case reported from Turkey.",
"title": ""
},
{
"docid": "bf8ff16c84997fa12e1ae8bee1000565",
"text": "The demand for cloud computing is increasing dramatically due to the high computational requirements of business, social, web and scientific applications. Nowadays, applications and services are hosted on the cloud in order to reduce the costs of hardware, software and maintenance. To satisfy this high demand, the number of large-scale data centers has increased, which consumes a high volume of electrical power, has a negative impact on the environment, and comes with high operational costs. In this paper, we discuss many ongoing or implemented energy aware resource allocation techniques for cloud environments. We also present a comprehensive review on the different energy aware resource allocation and selection algorithms for virtual machines in the cloud. Finally, we come up with further research issues and challenges for future cloud environments.",
"title": ""
},
{
"docid": "39015b81ea406c59ed38681137e5e18f",
"text": "DNA probes with conjugated minor groove binder (MGB) groups form extremely stable duplexes with single-stranded DNA targets, allowing shorter probes to be used for hybridization based assays. In this paper, sequence specificity of 3'-MGB probes was explored. In comparison with unmodified DNA, MGB probes had higher melting temperature (T(m)) and increased specificity, especially when a mismatch was in the MGB region of the duplex. To exploit these properties, fluorogenic MGB probes were prepared and investigated in the 5'-nuclease PCR assay (real-time PCR assay, TaqMan assay). A 12mer MGB probe had the same T(m)(65 degrees C) as a no-MGB 27mer probe. The fluorogenic MGB probes were more specific for single base mismatches and fluorescence quenching was more efficient, giving increased sensitivity. A/T rich duplexes were stabilized more than G/C rich duplexes, thereby leveling probe T(m)and simplifying design. In summary, MGB probes were more sequence specific than standard DNA probes, especially for single base mismatches at elevated hybridization temperatures.",
"title": ""
},
{
"docid": "864eb2ec039336758f76aa5e8b44cfcf",
"text": "Fish is a valuable source of nutrition, and many people would benefit from eating fish regularly. But some people eat a lot of fish, every day or several meals per week, and thus can run a significant risk of overexposure to methylmercury. Current advice regarding methylmercury from fish consumption is targeted to protect the developing brain and nervous system but adverse health effects are increasingly associated with adult chronic low-level methylmercury exposure. Manifestations of methylmercury poisoning are variable and may be difficult to detect unless one considers this specific diagnosis and does an appropriate test (blood or hair analysis). We provide information to physicians to recognize and prevent overexposure to methylmercury from fish and seafood consumption. Physicians are urged to ask patients if they eat fish: how often, how much, and what kinds. People who eat fish frequently (once a week or more often) and pregnant women are advised to choose low mercury fish.",
"title": ""
},
{
"docid": "8f85901b4577e310036ac7ef8dedc3d5",
"text": "State-of-the-art Chinese word segmentation systems typically exploit supervised models trained on a standard manually-annotated corpus, achieving performances over 95% on a similar standard testing corpus. However, the performances may drop significantly when the same models are applied onto Chinese microtext. One major challenge is the issue of informal words in the microtext. Previous studies show that informal word detection can be helpful for microtext processing. In this work, we investigate it under the neural setting, by proposing a joint segmentation model that integrates the detection of informal words simultaneously. In addition, we generate training corpus for the joint model by using existing corpus automatically. Experimental results show that the proposed model is highly effective for segmentation of Chinese microtext.",
"title": ""
},
{
"docid": "8400fd3ffa3cdfd54e92370b8627c7e8",
"text": "A number of computer vision problems such as human age estimation, crowd density estimation and body/face pose (view angle) estimation can be formulated as a regression problem by learning a mapping function between a high dimensional vector-formed feature input and a scalar-valued output. Such a learning problem is made difficult due to sparse and imbalanced training data and large feature variations caused by both uncertain viewing conditions and intrinsic ambiguities between observable visual features and the scalar values to be estimated. Encouraged by the recent success in using attributes for solving classification problems with sparse training data, this paper introduces a novel cumulative attribute concept for learning a regression model when only sparse and imbalanced data are available. More precisely, low-level visual features extracted from sparse and imbalanced image samples are mapped onto a cumulative attribute space where each dimension has clearly defined semantic interpretation (a label) that captures how the scalar output value (e.g. age, people count) changes continuously and cumulatively. Extensive experiments show that our cumulative attribute framework gains notable advantage on accuracy for both age estimation and crowd counting when compared against conventional regression models, especially when the labelled training data is sparse with imbalanced sampling.",
"title": ""
},
{
"docid": "596fa75533d4d31a49efbeb24f5fa7f0",
"text": "High cohesion is a desirable property of software as it positively impacts understanding, reuse, and maintenance. Currently proposed measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. Existing approaches are largely based on using the structural information from the source code, such as attribute references, in methods to measure cohesion. This paper proposes a new measure for the cohesion of classes in OO software systems based on the analysis of the unstructured information embedded in the source code, such as comments and identifiers. The measure, named the Conceptual Cohesion of Classes (C3), is inspired by the mechanisms used to measure textual coherence in cognitive psychology and computational linguistics. This paper presents the principles and the technology that stand behind the C3 measure. A large case study on three open source software systems is presented which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. The case study shows that the novel measure captures different aspects of class cohesion compared to any of the existing cohesion measures. In addition, combining C3 with existing structural cohesion metrics proves to be a better predictor of faulty classes when compared to different combinations of structural cohesion metrics.",
"title": ""
},
{
"docid": "c7a55c0588c1cdccb5b01193a863eee0",
"text": "Hypothyroidism is a very common, yet often overlooked disease. It can have a myriad of signs and symptoms, and is often nonspecific. Identification requires analysis of thyroid hormones circulating in the bloodstream, and treatment is simply replacement with exogenous hormone, usually levothyroxine (Synthroid). The deadly manifestation of hypothyroidism is myxedema coma. Similarly nonspecific and underrecognized, treatment with exogenous hormone is necessary to decrease the high mortality rate.",
"title": ""
},
{
"docid": "583d2f754a399e8446855b165407f6ee",
"text": "In this work, classification of cellular structures in the high resolutional histopathological images and the discrimination of cellular and non-cellular structures have been investigated. The cell classification is a very exhaustive and time-consuming process for pathologists in medicine. The development of digital imaging in histopathology has enabled the generation of reasonable and effective solutions to this problem. Morever, the classification of digital data provides easier analysis of cell structures in histopathological data. Convolutional neural network (CNN), constituting the main theme of this study, has been proposed with different spatial window sizes in RGB color spaces. Hence, to improve the accuracies of classification results obtained by supervised learning methods, spatial information must also be considered. So, spatial dependencies of cell and non-cell pixels can be evaluated within different pixel neighborhoods in this study. In the experiments, the CNN performs superior than other pixel classification methods including SVM and k-Nearest Neighbour (k-NN). At the end of this paper, several possible directions for future research are also proposed.",
"title": ""
},
{
"docid": "6ea4d639ba9924edf15a262fea0d3dc9",
"text": "Accurate and efficient foreground detection is an important task in video surveillance system. The task becomes more critical when the background scene shows more variations, such as water surface, waving trees, varying illumination conditions, etc. Recently, Robust Principal Components Analysis (RPCA) shows a very nice framework for moving object detection. The background sequence is modeled by a low-dimensional subspace called low-rank matrix and sparse error constitutes the foreground objects. But RPCA presents the limitations of computational complexity and memory storage due to batch optimization methods, as a result it is difficult to apply for real-time system. To handle these challenges, this paper presents a robust foreground detection algorithm via Online Robust PCA (OR-PCA) using image decomposition along with continuous constraint such as Markov Random Field (MRF). OR-PCA with good initialization scheme using image decomposition approach improves the accuracy of foreground detection and the computation time as well. Moreover, solving MRF with graph-cuts exploits structural information using spatial neighborhood system and similarities to further improve the foreground segmentation in highly dynamic backgrounds. Experimental results on challenging datasets such as Wallflower, I2R, BMC 2012 and Change Detection 2014 dataset demonstrate that our proposed scheme significantly outperforms the state of the art approaches and works effectively on a wide range of complex background scenes.",
"title": ""
},
{
"docid": "a2a8228b27b066fca497ddc2fa8b323e",
"text": "Digital Image Processing has found to be useful in many domains. In sports, it can either be used as an analytical tool to determine strategic instances in a game or can be used in the broadcast of video to television viewers. Modern day coverage of sports involves multiple cameras and an array of technologies to support it, since manually going through every video coming to a station would be a near-impossible task, a wide range of Digital Image Processing algorithms are applied to do the same. Highlight Generation and Event Detection are the foremost areas in sports where a multitude of DIP algorithms exist. This study provides an insight into the applications of Digital Image Processing in Sports, concentrating on algorithms related to video broadcast while listing their advantages and drawbacks.",
"title": ""
},
{
"docid": "a3add1c3190decbc773e0d45a0563cab",
"text": "Despite the relatively recent emergence of the Unified Theory of Acceptance and Use of Technology (UTAUT), the originating article has already been cited by a large number of studies, and hence it appears to have become a popular theoretical choice within the field of information system (IS)/information technology (IT) adoption and diffusion. However, as yet there have been no attempts to analyse the reasons for citing the originating article. Such a systematic review of citations may inform researchers and guide appropriate future use of the theory. This paper therefore presents the results of a systematic review of 450 citations of the originating article in an attempt to better understand the reasons for citation, use and adaptations of the theory. Findings revealed that although a large number of studies have cited the originating article since its appearance, only 43 actually utilised the theory or its constructs in their empirical research for examining IS/IT related issues. This chapter also classifies and discusses these citations and explores the limitations of UTAUT use in existing research.",
"title": ""
}
] |
scidocsrr
|
c5a91c3e31de1b8d082bc6e0cc38095f
|
Wideband low-profile circular polarization slot antenna based on metasurface
|
[
{
"docid": "d48529ec9487fab939bc8120c44499d0",
"text": "A new wideband circularly polarized antenna using metasurface superstrate for C-band satellite communication application is proposed in this letter. The proposed antenna consists of a planar slot coupling antenna with an array of metallic rectangular patches that can be viewed as a polarization-dependent metasurface superstrate. The metasurface is utilized to adjust axial ratio (AR) for wideband circular polarization. Furthermore, the proposed antenna has a compact structure with a low profile of 0.07λ0 ( λ0 stands for the free-space wavelength at 5.25 GHz) and ground size of 34.5×28 mm2. Measured results show that the -10-dB impedance bandwidth for the proposed antenna is 33.7% from 4.2 to 5.9 GHz, and 3-dB AR bandwidth is 16.5% from 4.9 to 5.9 GHz with an average gain of 5.8 dBi. The simulated and measured results are in good agreement to verify the good performance of the proposed antenna.",
"title": ""
}
] |
[
{
"docid": "f2af256af6a405a3b223abc5d9a276ac",
"text": "Traditional execution environments deploy Address Space Layout Randomization (ASLR) to defend against memory corruption attacks. However, Intel Software Guard Extension (SGX), a new trusted execution environment designed to serve security-critical applications on the cloud, lacks such an effective, well-studied feature. In fact, we find that applying ASLR to SGX programs raises non-trivial issues beyond simple engineering for a number of reasons: 1) SGX is designed to defeat a stronger adversary than the traditional model, which requires the address space layout to be hidden from the kernel; 2) the limited memory uses in SGX programs present a new challenge in providing a sufficient degree of entropy; 3) remote attestation conflicts with the dynamic relocation required for ASLR; and 4) the SGX specification relies on known and fixed addresses for key data structures that cannot be randomized. This paper presents SGX-Shield, a new ASLR scheme designed for SGX environments. SGX-Shield is built on a secure in-enclave loader to secretly bootstrap the memory space layout with a finer-grained randomization. To be compatible with SGX hardware (e.g., remote attestation, fixed addresses), SGX-Shield is designed with a software-based data execution protection mechanism through an LLVM-based compiler. We implement SGX-Shield and thoroughly evaluate it on real SGX hardware. It shows a high degree of randomness in memory layouts and stops memory corruption attacks with a high probability. SGX-Shield shows 7.61% performance overhead in running common microbenchmarks and 2.25% overhead in running a more realistic workload of an HTTPS server.",
"title": ""
},
{
"docid": "b70032a5ca8382ac6853535b499f4937",
"text": "Centroid and spread are commonly used approaches in ranking fuzzy numbers. Some experts rank fuzzy numbers using centroid or spread alone while others tend to integrate them together. Although a lot of methods for ranking fuzzy numbers that are related to both approaches have been presented, there are still limitations whereby the ranking obtained is inconsistent with human intuition. This paper proposes a novel method for ranking fuzzy numbers that integrates the centroid point and the spread approaches and overcomes the limitations and weaknesses of most existing methods. Proves and justifications with regard to the proposed ranking method are also presented. 5",
"title": ""
},
{
"docid": "1997b8a0cac1b3beecfd79b3e206d7e4",
"text": "Scatterplots are well established means of visualizing discrete data values with two data variables as a collection of discrete points. We aim at generalizing the concept of scatterplots to the visualization of spatially continuous input data by a continuous and dense plot. An example of a continuous input field is data defined on an n-D spatial grid with respective interpolation or reconstruction of in-between values. We propose a rigorous, accurate, and generic mathematical model of continuous scatterplots that considers an arbitrary density defined on an input field on an n-D domain and that maps this density to m-D scatterplots. Special cases are derived from this generic model and discussed in detail: scatterplots where the n-D spatial domain and the m-D data attribute domain have identical dimension, 1-D scatterplots as a way to define continuous histograms, and 2-D scatterplots of data on 3-D spatial grids. We show how continuous histograms are related to traditional discrete histograms and to the histograms of isosurface statistics. Based on the mathematical model of continuous scatterplots, respective visualization algorithms are derived, in particular for 2-D scatterplots of data from 3-D tetrahedral grids. For several visualization tasks, we show the applicability of continuous scatterplots. Since continuous scatterplots do not only sample data at grid points but interpolate data values within cells, a dense and complete visualization of the data set is achieved that scales well with increasing data set size. Especially for irregular grids with varying cell size, improved results are obtained when compared to conventional scatterplots. Therefore, continuous scatterplots are a suitable extension of a statistics visualization technique to be applied to typical data from scientific computation.",
"title": ""
},
{
"docid": "bcd7670066a69ad5603a80b72898a566",
"text": "A broadband circularly polarized (CP) antenna with compact size is proposed. The antenna is composed of a loop feeding structure which provides sequential phase, four driven patches, and four parasitic patches. The driven patches, which are capacitively coupled by the feeding loop, generate one CP mode due to the sequentially rotated structure and four parasitic patches are introduced to produce additional CP mode. By combining with the CP mode of the feeding loop, the axial ratio (AR) bandwidth is greatly broadened. An antenna prototype is fabricated to validate the simulated results. Experimental results show that the antenna achieves a broad impedance bandwidth of 19.5% from 5.13 to 6.24 GHz and a 3-dB AR bandwidth of 12.9% (5.38–6.12 GHz). In addition, the proposed antenna also has a flat gain within the operating frequency band and a compact size of $0.92\\lambda _{0}\\times 0.92\\lambda _{0}\\times 0.028\\lambda _{0}$ at 5.5 GHz.",
"title": ""
},
{
"docid": "1102e06f7dfcb6749e3e01a671501c52",
"text": "Past behavior guides future responses through 2 processes. Well-practiced behaviors in constant contexts recur because the processing that initiates and controls their performance becomes automatic. Frequency of past behavior then reflects habit strength and has a direct effect on future performance. Alternately, when behaviors are not well learned or when they are performed in unstable or difficult contexts, conscious decision making is likely to be necessary to initiate and carry out the behavior. Under these conditions, past behavior (along with attitudes and subjective norms) may contribute to intentions, and behavior is guided by intentions. These relations between past behavior and future behavior are substantiated in a meta-analytic synthesis of prior research on behavior prediction and in a primary research investigation.",
"title": ""
},
{
"docid": "ca7e4eafed84f5dbe5f996ac7c795c91",
"text": "This paper examines the effects of review arousal on perceived helpfulness of online reviews, and on consumers’ emotional responses elicited by the reviews. Drawing on emotion theories in psychology and neuroscience, we focus on four emotions – anger, anxiety, excitement, and enjoyment that are common in the context of online reviews. The effects of the four emotions embedded in online reviews were examined using a controlled experiment. Our preliminary results show that reviews embedded with the four emotions (arousing reviews) are perceived to be more helpful than reviews without the emotions embedded (non-arousing reviews). However, reviews embedded with anxiety and enjoyment (low-arousal reviews) are perceived to be more helpfulness that reviews embedded with anger and excitement (high-arousal reviews). Furthermore, compared to reviews embedded with anger, reviews embedded with anxiety are associated with a higher EEG activity that is generally linked to negative emotions. The results suggest a non-linear relationship between review arousal and perceived helpfulness, which can be explained by the consumers’ emotional responses elicited by the reviews.",
"title": ""
},
{
"docid": "837a68575b84782a252f8bd49ad654a0",
"text": "We explore contemporary, data-driven techniques for solving math word problems over recent large-scale datasets. We show that well-tuned neural equation classifiers can outperform more sophisticated models such as sequence to sequence and self-attention across these datasets. Our error analysis indicates that, while fully data driven models show some promise, semantic and world knowledge is necessary for further advances.",
"title": ""
},
{
"docid": "cf7988eb4042f85dad95245e88848457",
"text": "Purpose – Despite an extensive body of knowledge on the importance of customer orientation in the marketing and management literature, the impact of customer orientation and interactive system infrastructure throughout enterprise networks is not fully understood. The purpose of this paper is to present a model linking customer orientation, interactive system infrastructure, value chain practices, and network performance outcomes. Design/methodology/approach – The prior literature on customer orientation and supply chains is reviewed and a framework is presented which shows the relationship between customer orientation and network performance outcomes, along with other variables. Findings – The conclusion supports the importance of customer orientation in the context of the proposed value chain framework. Research limitations/implications – The framework introduced in this paper provides a review of customer orientation in the enterprise network and a basis for further empirical validation. Practical implications – The research framework suggests that customer orientation practices may have a positive impact on network infrastructure design, practices, and performance outcomes. Implementation of customer orientation practices and outcomes within this research framework may allow management to meet customer requirements more effectively. Originality/value – This paper expands the concept of customer-orientation in the extended enterprise network context.",
"title": ""
},
{
"docid": "2b743ba2f607f75bb7e1d964c39cbbcf",
"text": "The demand and growth of indoor positioning has increased rapidly in the past few years for a diverse range of applications. Various innovative techniques and technologies have been introduced but precise and reliable indoor positioning still remains a challenging task due to dependence on a large number of factors and limitations of the technologies. Positioning technologies based on radio frequency (RF) have many advantages over the technologies utilizing ultrasonic, optical and infrared devices. Both narrowband and wideband RF systems have been implemented for short range indoor positioning/real-time locating systems. Ultra wideband (UWB) technology has emerged as a viable candidate for precise indoor positioning due its unique characteristics. This article presents a comparison of UWB and narrowband RF technologies in terms of modulation, throughput, transmission time, energy efficiency, multipath resolving capability and interference. Secondly, methods for measurement of the positioning parameters are discussed based on a generalized measurement model and, in addition, widely used position estimation algorithms are surveyed. Finally, the article provides practical UWB positioning systems and state-of-the-art implementations. We believe that the review presented in this article provides a structured overview and comparison of the positioning methods, algorithms and implementations in the field of precise UWB indoor positioning, and will be helpful for practitioners as well as for researchers to keep abreast of the recent developments in the field.",
"title": ""
},
{
"docid": "081b09442d347a4a29d8cc3978079f79",
"text": "The major challenge in designing wireless sensor networks (WSNs) is the support of the functional, such as data latency, and the non-functional, such as data integrity, requirements while coping with the computation, energy and communication constraints. Careful node placement can be a very effective optimization means for achieving the desired design goals. In this paper, we report on the current state of the research on optimized node placement in WSNs. We highlight the issues, identify the various objectives and enumerate the different models and formulations. We categorize the placement strategies into static and dynamic depending on whether the optimization is performed at the time of deployment or while the network is operational, respectively. We further classify the published techniques based on the role that the node plays in the network and the primary performance objective considered. The paper also highlights open problems in this area of research.",
"title": ""
},
{
"docid": "94c892deba68fb7eed5eef34bc13b272",
"text": "In this paper, we examined the effectiveness of deep convolutional neural network (DCNN) for food photo recognition task. Food recognition is a kind of fine-grained visual recognition which is relatively harder problem than conventional image recognition. To tackle this problem, we sought the best combination of DCNN-related techniques such as pre-training with the large-scale ImageNet data, fine-tuning and activation features extracted from the pre-trained DCNN. From the experiments, we concluded the fine-tuned DCNN which was pre-trained with 2000 categories in the ImageNet including 1000 food-related categories was the best method, which achieved 78.77% as the top-1 accuracy for UEC-FOOD100 and 67.57% for UEC-FOOD256, both of which were the best results so far. In addition, we applied the food classifier employing the best combination of the DCNN techniques to Twitter photo data. We have achieved the great improvements on food photo mining in terms of both the number of food photos and accuracy. In addition to its high classification accuracy, we found that DCNN was very suitable for large-scale image data, since it takes only 0.03 seconds to classify one food photo with GPU.",
"title": ""
},
{
"docid": "30520912723d67f7d07881aa33cdf229",
"text": "OBJECTIVE\nA study to examine the incidence and characteristics of concussions among Canadian university athletes during 1 full year of football and soccer participation.\n\n\nDESIGN\nRetrospective survey.\n\n\nPARTICIPANTS\nThree hundred eighty Canadian university football and 240 Canadian university soccer players reporting to 1999 fall training camp. Of these, 328 football and 201 soccer players returned a completed questionnaire.\n\n\nMAIN OUTCOME MEASURES\nBased on self-reported symptoms, calculations were made to determine the number of concussions experienced during the previous full year of football or soccer participation, the duration of symptoms, the time for return to play, and any associated risk factors for concussions.\n\n\nRESULTS\nOf all the athletes who returned completed questionnaires, 70.4% of the football players and 62.7% of the soccer players had experienced symptoms of a concussion during the previous year. Only 23.4% of the concussed football players and 19.8% of the concussed soccer players realized they had suffered a concussion. More than one concussion was experienced by 84.6% of the concussed football players and 81.7% of the concussed soccer players. Examining symptom duration, 27.6% of all concussed football players and 18.8% of all concussed soccer players experienced symptoms for at least 1 day or longer. Tight end and defensive lineman were the positions most commonly affected in football, while goalies were the players most commonly affected in soccer. Variables that increased the odds of suffering a concussion during the previous year for football players included a history of a traumatic loss of consciousness or a recognized concussion in the past. 
Variables that increased the odds of suffering a concussion during the previous year for soccer players included a past history of a recognized concussion while playing soccer and being female.\n\n\nCONCLUSIONS\nUniversity football and soccer players seem to be experiencing a significant amount of concussions while participating in their respective sports. Variables that seem to increase the odds of suffering a concussion during the previous year for football and soccer players include a history of a recognized concussion. Despite being relatively common, symptoms of concussion may not be recognized by many players.",
"title": ""
},
{
"docid": "faa70d7d0bb9097abae6e93f23c42efe",
"text": "あらまし 直線位相特性を有するディジタルフィルタは信号処理の多くの応用において必要である.本論文で は,通過域に近似的直線位相特性を有するチェビシェフ型 IIRフィルタの設計について述べる.まず,阻止域の 指定された周波数点に多重零点を配置することにより,フィルタの平たんな阻止域が容易に実現できることを示 す.次に,通過域に複素 Remez アルゴリズムを適用し,フィルタの設計問題を固有値問題として定式化する. よって,固有値問題を解くことにより,フィルタ係数が容易に求められる.更に,反復計算を行い,通過域にお ける誤差関数の等リプル特性を得る.最後に,提案したチェビシェフ型フィルタと遅延器を並列接続し,近似的 直線位相特性を有する逆チェビシェフ型 IIRフィルタも同時に得られることを示す. キーワード IIR ディジタルフィルタ,チェビシェフ型フィルタ,近似的直線位相特性,固有値問題,複素 Remezアルゴリズム",
"title": ""
},
{
"docid": "07eb6616cec9d319b6d867de98ec577e",
"text": "We propose a new witness encryption based on Subset-Sum which achieves extractable security without relying on obfuscation and is more efficient than the existing ones. Our witness encryption employs multilinear maps of arbitrary order and it is independent of the implementations of multilinear maps. As an application, we construct a new timed-release encryption based on the Bitcoin protocol and extractable witness encryption. The novelty of our scheme is that the decryption key will be automatically revealed in the bitcoin block-chain when the block-chain reaches a certain length.",
"title": ""
},
{
"docid": "c3182fada2dc486fb338654b885cbbfe",
"text": "Traditional syllogisms involve sentences of the following simple forms: All X are Y , Some X are Y , No X are Y ; similar sentences with proper names as subjects, and identities between names. These sentences come with the natural semantics using subsets of a given universe, and so it is natural to ask about complete proof systems. Logical systems are important in this area due to the prominence of syllogistic arguments in human reasoning, and also to the role they have played in logic from Aristotle onwards. We present complete systems for the entire syllogistic fragment and many sub-fragments. These begin with the fragment of All sentences, for which we obtain one of the easiest completeness theorems in logic. The last system extends syllogistic reasoning with the classical boolean operations and cardinality comparisons.",
"title": ""
},
{
"docid": "3c13399d0c869e58830a7efb8f6832a8",
"text": "The use of supply frequencies above 50-60 Hz allows for an increase in the power density applied to the ozonizer electrode surface and an increase in ozone production for a given surface area, while decreasing the necessary peak voltage. Parallel-resonant converters are well suited for supplying the high capacitive load of ozonizers. Therefore, in this paper the current-fed parallel-resonant push-pull inverter is proposed as a good option to implement high-voltage high-frequency power supplies for ozone generators. The proposed converter is analyzed and some important characteristics are obtained. The design and implementation of the complete power supply are also shown. The UC3872 integrated circuit is proposed in order to operate the converter at resonance, allowing us to maintain a good response disregarding the changes in electric parameters of the transformer-ozonizer pair. Experimental results for a 50-W prototype are also provided.",
"title": ""
},
{
"docid": "4e8a27fd2e56dbc33e315bc9cb462239",
"text": "Traditionally, the visual analogue scale (VAS) has been proposed to overcome the limitations of ordinal measures from Likert-type scales. However, the function of VASs to overcome the limitations of response styles to Likert-type scales has not yet been addressed. Previous research using ranking and paired comparisons to compensate for the response styles of Likert-type scales has suffered from limitations, such as that the total score of ipsative measures is a constant that cannot be analyzed by means of many common statistical techniques. In this study we propose a new scale, called the Visual Analogue Scale for Rating, Ranking, and Paired-Comparison (VAS-RRP), which can be used to collect rating, ranking, and paired-comparison data simultaneously, while avoiding the limitations of each of these data collection methods. The characteristics, use, and analytic method of VAS-RRPs, as well as how they overcome the disadvantages of Likert-type scales, ranking, and VASs, are discussed. On the basis of analyses of simulated and empirical data, this study showed that VAS-RRPs improved reliability, response style bias, and parameter recovery. Finally, we have also designed a VAS-RRP Generator for researchers' construction and administration of their own VAS-RRPs.",
"title": ""
},
{
"docid": "8e23ef656b501814fc44c609feebe823",
"text": "This paper proposes an approach for segmentation and semantic labeling of RGBD data based on the joint usage of geometrical clues and deep learning techniques. An initial oversegmentation is performed using spectral clustering and a set of NURBS surfaces is then fitted on the extracted segments. The input data are then fed to a Convolutional Neural Network (CNN) together with surface fitting parameters. The network is made of nine convolutional stages followed by a softmax classifier and produces a per-pixel descriptor vector for each sample. An iterative merging procedure is then used to recombine the segments into the regions corresponding to the various objects and surfaces. The couples of adjacent segments with higher similarity according to the CNN features are considered for merging and the NURBS surface fitting accuracy is used in order to understand if the selected couples correspond to a single surface. By combining the obtained segmentation with the descriptors from the CNN a set of labeled segments is obtained. The comparison with state-of-the-art methods shows how the proposed method provides an accurate and reliable scene segmentation and labeling.",
"title": ""
},
{
"docid": "8d0baafd435c44d8e2c1dcfccb755cd8",
"text": "Bayesian optimization is an efficient way to optimize expensive black-box functions such as designing a new product with highest quality or tuning hyperparameter of a machine learning algorithm. However, it has a serious limitation when the parameter space is high-dimensional as Bayesian optimization crucially depends on solving a global optimization of a surrogate utility function in the same sized dimensions. The surrogate utility function, known commonly as acquisition function is a continuous function but can be extremely sharp at high dimension having only a few peaks marooned in a large terrain of almost flat surface. Global optimization algorithms such as DIRECT are infeasible at higher dimensions and gradient-dependent methods cannot move if initialized in the flat terrain. We propose an algorithm that enables local gradient-dependent algorithms to move through the flat terrain by using a sequence of gross-tofiner Gaussian process priors on the objective function as we leverage two underlying facts a) there exists a large enough length-scales for which the acquisition function can be made to have a significant gradient at any location in the parameter space, and b) the extrema of the consecutive acquisition functions are close although they are different only due to a small difference in the length-scales. Theoretical guarantees are provided and experiments clearly demonstrate the utility of the proposed method on both benchmark test functions and real-world case studies.",
"title": ""
},
{
"docid": "1ed692fd2da9c4f6d75fe3c15c7a3492",
"text": "The objective of this preliminary study is to investigate whether educational video games can be integrated into a classroom with positive effects for the teacher and students. The challenges faced when introducing a video game into a classroom are twofold: overcoming the notion that a \"toy\" does not belong in the school and developing software that has real educational value while stimulating the learner. We conducted an initial pilot study with 39 second grade students using our mathematic drill software Skills Arena. Early data from the pilot suggests that not only do teachers and students enjoy using Skills Arena, students have exceeded our expectations by doing three times more math problems in 19 days than they would have using traditional worksheets. Based on this encouraging qualitative study, future work that focuses on quantitative benefits should likely uncover additional positive results.",
"title": ""
}
] |
scidocsrr
|
a1a91a598d7b604d5f69f20319a077d0
|
Developing Supply Chains in Disaster Relief Operations through Cross-sector Socially Oriented Collaborations: A Theoretical Model
|
[
{
"docid": "978c1712bf6b469059218697ea552524",
"text": "Project-based cross-sector partnerships to address social issues (CSSPs) occur in four “arenas”: business-nonprofit, business-government, government-nonprofit, and trisector. Research on CSSPs is multidisciplinary, and different conceptual “platforms” are used: resource dependence, social issues, and societal sector platforms. This article consolidates recent literature on CSSPs to improve the potential for cross-disciplinary fertilization and especially to highlight developments in various disciplines for organizational researchers. A number of possible directions for future research on the theory, process, practice, method, and critique of CSSPs are highlighted. The societal sector platform is identified as a particularly promising framework for future research.",
"title": ""
},
{
"docid": "ee045772d55000b6f2d3f7469a4161b1",
"text": "Although prior research has addressed the influence of corporate social responsibility (CSR) on perceived customer responses, it is not clear whether CSR affects market value of the firm. This study develops and tests a conceptual framework, which predicts that (1) customer satisfaction partially mediates the relationship between CSR and firm market value (i.e., Tobin’s q and stock return), (2) corporate abilities (innovativeness capability and product quality) moderate the financial returns to CSR, and (3) these moderated relationships are mediated by customer satisfaction. Based on a large-scale secondary dataset, the results show support for this framework. Interestingly, it is found that in firms with low innovativeness capability, CSR actually reduces customer satisfaction levels and, through the lowered satisfaction, harms market value. The uncovered mediated and asymmetrically moderated results offer important implications for marketing theory and practice. In today’s competitive market environment, corporate social responsibility (CSR) represents a high-profile notion that has strategic importance to many companies. As many as 90% of the Fortune 500 companies now have explicit CSR initiatives (Kotler and Lee 2004; Lichtenstein et al. 2004). According to a recent special report by BusinessWeek (2005a, p.72), large companies disclosed substantial investments in CSR initiatives (i.e., Target’s donation of $107.8 million in CSR represents 3.6% of its pretax profits, with GM $51.2 million at 2.7%, General Mills $60.3 million at 3.2%, Merck $921million at 11.3%, HCA $926 million at 43.3%). By dedicating everincreasing amounts to cash donations, in-kind contributions, cause marketing, and employee volunteerism programs, companies are acting on the premise that CSR is not merely the “right thing to do,” but also “the smart thing to do” (Smith 2003). 
Importantly, along with increasing media coverage of CSR issues, companies themselves are also taking direct and visible steps to communicate their CSR initiatives to various stakeholders including consumers. A decade ago, Drumwright (1996) observed that advertising with a social dimension was on the rise. The trend seems to continue. Many companies, including the likes of Target and Walmart, have funded large national ad campaigns promoting their good works. The October 2005 issue of In Style magazine alone carried more than 25 “cause” ads. Indeed, consumers seem to be taking notice: whereas in 1993 only 26% of individuals surveyed by Cone Communications could name a company as a strong corporate citizen, by 2004, the percentage surged to as high as 80% (BusinessWeek 2005a). Motivated, in part, by this mounting importance of CSR in practice, several marketing studies have found that social responsibility programs have a significant influence on a number of customer-related outcomes (Bhattacharya and Sen 2004). More specifically, based on lab experiments, CSR is reported to directly or indirectly impact consumer product responses",
"title": ""
}
] |
[
{
"docid": "eff844ffdf2ef5408e23d98564d540f0",
"text": "The motions of wheeled mobile robots are largely governed by contact forces between the wheels and the terrain. Inasmuch as future wheel-terrain interactions are unpredictable and unobservable, high performance autonomous vehicles must ultimately learn the terrain by feel and extrapolate, just as humans do. We present an approach to the automatic calibration of dynamic models of arbitrary wheeled mobile robots on arbitrary terrain. Inputs beyond our control (disturbances) are assumed to be responsible for observed differences between what the vehicle was initially predicted to do and what it was subsequently observed to do. In departure from much previous work, and in order to directly support adaptive and predictive controllers, we concentrate on the problem of predicting candidate trajectories rather than measuring the current slip. The approach linearizes the nominal vehicle model and then calibrates the perturbative dynamics to explain the observed prediction residuals. Both systematic and stochastic disturbances are used, and we model these disturbances as functions over the terrain, the velocities, and the applied inertial and gravitational forces. In this way, we produce a model which can be used to predict behavior across all of state space for arbitrary terrain geometry. Results demonstrate that the approach converges quickly and produces marked improvements in the prediction of trajectories for multiple vehicle classes throughout the performance envelope of the platform, including during aggressive maneuvering.",
"title": ""
},
{
"docid": "43e39433013ca845703af053e5ef9e11",
"text": "This paper presents the proposed design of high power and high efficiency inverter for wireless power transfer systems operating at 13.56 MHz using multiphase resonant inverter and GaN HEMT devices. The high efficiency and the stable of inverter are the main targets of the design. The module design, the power loss analysis and the drive circuit design have been addressed. In experiment, a 3 kW inverter with the efficiency of 96.1% is achieved that significantly improves the efficiency of 13.56 MHz inverter. In near future, a 10 kW inverter with the efficiency of over 95% can be realizable by following this design concept.",
"title": ""
},
{
"docid": "4a3496a835d3948299173b4b2767d049",
"text": "We describe an augmented reality (AR) system that allows multiple participants to interact with 2D and 3D data using tangible user interfaces. The system features face-to-face communication, collaborative viewing and manipulation of 3D models, and seamless access to 2D desktop applications within the shared 3D space. All virtual content, including 3D models and 2D desktop windows, is attached to tracked physical objects in order to leverage the efficiencies of natural two-handed manipulation. The presence of 2D desktop space within 3D facilitates data exchange between the two realms, enables control of 3D information by 2D applications, and generally increases productivity by providing access to familiar tools. We present a general concept for a collaborative tangible AR system, including a comprehensive set of interaction techniques, a distributed hardware setup, and a component-based software architecture that can be flexibly configured using XML. We show the validity of our concept with an implementation of an application scenario from the automotive industry.",
"title": ""
},
{
"docid": "e86ad4e9b61df587d9e9e96ab4eb3978",
"text": "This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.",
"title": ""
},
{
"docid": "7161122eaa9c9766e9914ba0f2ee66ef",
"text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.",
"title": ""
},
{
"docid": "e30d6fd14f091e188e6a6b86b6286609",
"text": "Assessing the spatio-temporal variations of surface water quality is important for water environment management. In this study, surface water samples are collected from 2008 to 2015 at 17 stations in the Ying River basin in China. The two pollutants i.e. chemical oxygen demand (COD) and ammonia nitrogen (NH3-N) are analyzed to characterize the river water quality. Cluster analysis and the seasonal Kendall test are used to detect the seasonal and inter-annual variations in the dataset, while the Moran's index is utilized to understand the spatial autocorrelation of the variables. The influence of natural factors such as hydrological regime, water temperature and etc., and anthropogenic activities with respect to land use and pollutant load are considered as driving factors to understand the water quality evolution. The results of cluster analysis present three groups according to the similarity in seasonal pattern of water quality. The trend analysis indicates an improvement in water quality during the dry seasons at most of the stations. Further, the spatial autocorrelation of water quality shows great difference between the dry and wet seasons due to sluices and dams regulation and local nonpoint source pollution. The seasonal variation in water quality is found associated with the climatic factors (hydrological and biochemical processes) and flow regulation. The analysis of land use indicates a good explanation for spatial distribution and seasonality of COD at the sub-catchment scale. Our results suggest that an integrated water quality measures including city sewage treatment, agricultural diffuse pollution control as well as joint scientific operations of river projects is needed for an effective water quality management in the Ying River basin.",
"title": ""
},
{
"docid": "6e5e6b361d113fa68b2ca152fbf5b194",
"text": "Spectral learning algorithms have recently become popular in data-rich domains, driven in part by recent advances in large scale randomized SVD, and in spectral estimation of Hidden Markov Models. Extensions of these methods lead to statistical estimation algorithms which are not only fast, scalable, and useful on real data sets, but are also provably correct. Following this line of research, we propose four fast and scalable spectral algorithms for learning word embeddings – low dimensional real vectors (called Eigenwords) that capture the “meaning” of words from their context. All the proposed algorithms harness the multi-view nature of text data i.e. the left and right context of each word, are fast to train and have strong theoretical properties. Some of the variants also have lower sample complexity and hence higher statistical power for rare words. We provide theory which establishes relationships between these algorithms and optimality criteria for the estimates they provide. We also perform thorough qualitative and quantitative evaluation of Eigenwords showing that simple linear approaches give performance comparable to or superior than the state-of-the-art non-linear deep learning based methods.",
"title": ""
},
{
"docid": "c04ae9e3721f23b8b0a5b8306c25becb",
"text": "A transmission-line model is developed for predicting the response of a twisted-wire pair (TWP) circuit in the presence of a ground plane, illuminated by a plane-wave electromagnetic field. The twisted pair is modeled as an ideal bifilar helix, the total coupling is separated into differential- (DM) and common-mode (CM) contributions, and closed-form expressions are derived for the equivalent induced sources. Approximate upper bounds to the terminal response of electrically long lines are obtained, and a simplified low-frequency circuit model is used to explain the mechanism of field-to-wire coupling in a TWP above ground, as well as the role of load balancing on the DM and CM electromagnetic noise induced in the terminal loads.",
"title": ""
},
{
"docid": "1d9b1ce73d8d2421092bb5a70016a142",
"text": "Social networks have the surprising property of being \"searchable\": Ordinary people are capable of directing messages through their network of acquaintances to reach a specific but distant target person in only a few steps. We present a model that offers an explanation of social network searchability in terms of recognizable personal identities: sets of characteristics measured along a number of social dimensions. Our model defines a class of searchable networks and a method for searching them that may be applicable to many network search problems, including the location of data files in peer-to-peer networks, pages on the World Wide Web, and information in distributed databases.",
"title": ""
},
{
"docid": "6a23480588ca47b9e53de0fd4ff1ecb1",
"text": "We present the nested Chinese restaurant process (nCRP), a stochastic process that assigns probability distributions to ensembles of infinitely deep, infinitely branching trees. We show how this stochastic process can be used as a prior distribution in a Bayesian nonparametric model of document collections. Specifically, we present an application to information retrieval in which documents are modeled as paths down a random tree, and the preferential attachment dynamics of the nCRP leads to clustering of documents according to sharing of topics at multiple levels of abstraction. Given a corpus of documents, a posterior inference algorithm finds an approximation to a posterior distribution over trees, topics and allocations of words to levels of the tree. We demonstrate this algorithm on collections of scientific abstracts from several journals. This model exemplifies a recent trend in statistical machine learning—the use of Bayesian nonparametric methods to infer distributions on flexible data structures.",
"title": ""
},
{
"docid": "097da6ee2d13e0b4b2f84a26752574f4",
"text": "Objective A sound theoretical foundation to guide practice is enhanced by the ability of nurses to critique research. This article provides a structured route to questioning the methodology of nursing research. Primary Argument Nurses may find critiquing a research paper a particularly daunting experience when faced with their first paper. Knowing what questions the nurse should be asking is perhaps difficult to determine when there may be unfamiliar research terms to grasp. Nurses may benefit from a structured approach which helps them understand the sequence of the text and the subsequent value of a research paper. Conclusion A framework is provided within this article to assist in the analysis of a research paper in a systematic, logical order. The questions presented in the framework may lead the nurse to conclusions about the strengths and weaknesses of the research methods presented in a research article. The framework does not intend to separate quantitative or qualitative paradigms but to assist the nurse in making broad observations about the nature of the research.",
"title": ""
},
{
"docid": "be06fc67973751b98dd07599e29e4b01",
"text": "The contactless version of the air-filled substrate integrated waveguide (AF-SIW) is introduced for the first time. The conventional AF-SIW configuration requires a pure and flawless connection of the covering layers to the intermediate substrate. To operate efficiently at high frequencies, this requires a costly fabrication process. In the proposed configuration, the boundary condition on both sides around the AF guiding medium is modified to obtain artificial magnetic conductor (AMC) boundary conditions. The AMC surfaces on both sides of the waveguide substrate are realized by a single-periodic structure with the new type of unit cells. The PEC–AMC parallel plates prevent the leakage of the AF guiding region. The proposed contactless AF-SIW shows low-loss performance in comparison with the conventional AF-SIW at millimeter-wave frequencies when the layers of both waveguides are connected poorly.",
"title": ""
},
{
"docid": "4283c9b6b679913648f758abeba2ab93",
"text": "A significant goal of natural language processing (NLP) is to devise a system capable of machine understanding of text. A typical system can be tested on its ability to answer questions based on a given context document. One appropriate dataset for such a system is the Stanford Question Answering Dataset (SQuAD), a crowdsourced dataset of over 100k (question, context, answer) triplets. In this work, we focused on creating such a question answering system through a neural net architecture modeled after the attentive reader and sequence attention mix models.",
"title": ""
},
{
"docid": "285587e0e608d8bafa0962b5cf561205",
"text": "BACKGROUND\nGeneralized Additive Model (GAM) provides a flexible and effective technique for modelling nonlinear time-series in studies of the health effects of environmental factors. However, GAM assumes that errors are mutually independent, while time series can be correlated in adjacent time points. Here, a GAM with Autoregressive terms (GAMAR) is introduced to fill this gap.\n\n\nMETHODS\nParameters in GAMAR are estimated by maximum partial likelihood using modified Newton's method, and the difference between GAM and GAMAR is demonstrated using two simulation studies and a real data example. GAMM is also compared to GAMAR in simulation study 1.\n\n\nRESULTS\nIn the simulation studies, the bias of the mean estimates from GAM and GAMAR are similar but GAMAR has better coverage and smaller relative error. While the results from GAMM are similar to GAMAR, the estimation procedure of GAMM is much slower than GAMAR. In the case study, the Pearson residuals from the GAM are correlated, while those from GAMAR are quite close to white noise. In addition, the estimates of the temperature effects are different between GAM and GAMAR.\n\n\nCONCLUSIONS\nGAMAR incorporates both explanatory variables and AR terms so it can quantify the nonlinear impact of environmental factors on health outcome as well as the serial correlation between the observations. It can be a useful tool in environmental epidemiological studies.",
"title": ""
},
{
"docid": "17953a3e86d3a4396cbd8a911c477f07",
"text": "We introduce Deep Semantic Embedding (DSE), a supervised learning algorithm which computes semantic representation for text documents by respecting their similarity to a given query. Unlike other methods that use singlelayer learning machines, DSE maps word inputs into a lowdimensional semantic space with deep neural network, and achieves a highly nonlinear embedding to model the human perception of text semantics. Through discriminative finetuning of the deep neural network, DSE is able to encode the relative similarity between relevant/irrelevant document pairs in training data, and hence learn a reliable ranking score for a query-document pair. We present test results on datasets including scientific publications and user-generated knowledge base.",
"title": ""
},
{
"docid": "184d34ef560809aad938c0e08939a1bb",
"text": "Mechanical engineers apply principles of motion, energy, force, materials, and mathematics to design and analyze a wide variety of products and systems. The field requires an understanding of core concepts including mechanics, kinematics, thermodynamics, heat transfer, materials science and controls. Mechanical engineers use these core principles along with tools like computer-aided engineering and product life cycle management to design and analyze manufacturing plants, industrial equipment and machinery, heating and cooling systems, automotive systems, aircraft, robotics, medical devices, and more. Today, mechanical engineers are pursuing developments in such fields as composites, mechatronics, and nanotechnology, and are helping to create a more sustainable future.",
"title": ""
},
{
"docid": "69dea04dc13754f7f89a1e7b7d973659",
"text": "The nature of congestion feedback largely governs the behavior of congestion control. In datacenter networks, where RTTs are in hundreds of microseconds, accurate feedback is crucial to achieve both high utilization and low queueing delay. Proposals for datacenter congestion control predominantly leverage ECN or even explicit in-network feedback (e.g., RCP-type feedback) to minimize the queuing delay. In this work we explore latency-based feedback as an alternative and show its advantages over ECN. Against the common belief that such implicit feedback is noisy and inaccurate, we demonstrate that latency-based implicit feedback is accurate enough to signal a single packet’s queuing delay in 10 Gbps networks. DX enables accurate queuing delay measurements whose error falls within 1.98 and 0.53 microseconds using software-based and hardware-based latency measurements, respectively. This enables us to design a new congestion control algorithm that performs fine-grained control to adjust the congestion window just enough to achieve very low queuing delay while attaining full utilization. Our extensive evaluation shows that 1) the latency measurement accurately reflects the one-way queuing delay at the single-packet level; 2) the latency feedback can be used to perform practical and fine-grained congestion control in high-speed datacenter networks; and 3) DX outperforms DCTCP with 5.33x smaller median queueing delay at 1 Gbps and 1.57x at 10 Gbps.",
"title": ""
},
{
"docid": "3d2060ef33910ef1c53b0130f3cc3ffc",
"text": "Recommender systems help users deal with information overload and enjoy a personalized experience on the Web. One of the main challenges in these systems is the item cold-start problem which is very common in practice since modern online platforms have thousands of new items published every day. Furthermore, in many real-world scenarios, the item recommendation tasks are based on users’ implicit preference feedback such as whether a user has interacted with an item. To address the above challenges, we propose a probabilistic modeling approach called Neural Semantic Personalized Ranking (NSPR) to unify the strengths of deep neural network and pairwise learning. Specifically, NSPR tightly couples a latent factor model with a deep neural network to learn a robust feature representation from both implicit feedback and item content, consequently allowing our model to generalize to unseen items. We demonstrate NSPR’s versatility to integrate various pairwise probability functions and propose two variants based on the Logistic and Probit functions. We conduct a comprehensive set of experiments on two real-world public datasets and demonstrate that NSPR significantly outperforms the state-of-the-art baselines.",
"title": ""
},
{
"docid": "836f0a9a843802dda2b9ca7b166ef5f8",
"text": "Article history: Available online xxxx",
"title": ""
}
] |
scidocsrr
|
60f63d99f7e8b5b0cbd892a65ccb2833
|
Fetus-in-fetu: a pediatric rarity
|
[
{
"docid": "d1be704e4d81ab1466482a4924f00474",
"text": "Fetus-in-fetu (FIF) is a rare congenital condition in which a fetiform mass is detected in the host abdomen and also in other sites such as the intracranium, thorax, head, and neck. This condition has been rarely reported in the literature. Herein, we report the case of a fetus presenting with abdominal cystic mass and ascites and prenatally diagnosed as meconium pseudocyst. Explorative laparotomy revealed an irregular fetiform mass in the retroperitoneum within a fluid-filled cyst. The mass contained intestinal tract, liver, pancreas, and finger. Fetal abdominal cystic mass has been identified in a broad spectrum of diseases. However, as in our case, FIF is often overlooked during differential diagnosis. FIF should also be differentiated from other conditions associated with fetal abdominal masses.",
"title": ""
},
{
"docid": "972288070e8950cdb38410c30758d708",
"text": "INTRODUCTION\nFetus in fetu is an extremely rare condition wherein a malformed fetus is found in the abdomen of its twin. This entity is differentiated from teratoma by its embryological origin, its unusual location in the retroperitoneal space, and the presence of vertebral organization with limb buds and well-developed organ systems. The literature cites less than 100 cases worldwide of twin fetus in fetu.\n\n\nCASE PRESENTATION\nA two-and-a-half-month-old Asian Indian baby boy had two malformed fetuses in his abdomen. The pre-operative diagnosis was made by performing an ultrasound and a 64-slice computer tomography scan of the baby's abdomen. Two fetoid-like masses were successfully excised from the retroperitoneal area of his abdomen. A macroscopic examination, an X-ray of the specimen after operation, and the histological features observed were suggestive of twin fetus in fetu.\n\n\nCONCLUSION\nFetus in fetu is an extremely rare condition. Before any operation is carried out on a patient, imaging studies should first be conducted to differentiate this condition from teratoma. Surgical excision is a curative procedure, and a macroscopic examination of the sac should be done after twin or multiple fetus in fetu are excised.",
"title": ""
}
] |
[
{
"docid": "e7ecd827a48414f1f533fb30de203a6a",
"text": "Followership has been an understudied topic in the academic literature and an underappreciated topic among practitioners. Although it has always been important, the study of followership has become even more crucial with the advent of the information age and dramatic changes in the workplace. This paper provides a fresh look at followership by providing a synthesis of the literature and presents a new model for matching followership styles to leadership styles. The model’s practical value lies in its usefulness for describing how leaders can best work with followers, and how followers can best work with leaders.",
"title": ""
},
{
"docid": "3a91fef8ea690b5027e70ae1051ad136",
"text": "We consider words as a network of interacting letters, and approximate the probability distribution of states taken on by this network. Despite the intuition that the rules of English spelling are highly combinatorial (and arbitrary), we find that maximum entropy models consistent with pairwise correlations among letters provide a surprisingly good approximation to the full statistics of four letter words, capturing ∼ 92% of the multi–information among letters and even ‘discovering’ real words that were not represented in the data from which the pairwise correlations were estimated. The maximum entropy model defines an energy landscape on the space of possible words, and local minima in this landscape account for nearly two–thirds of words used in written English.",
"title": ""
},
{
"docid": "d437d71047b70736f5a6cbf3724d62a9",
"text": "We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoderdecoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) “fool” pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.",
"title": ""
},
{
"docid": "13a23fe61319bc82b8b3e88ea895218c",
"text": "A new generation of robots is being designed for human occupied workspaces where safety is of great concern. This research demonstrates the use of a capacitive skin sensor for collision detection. Tests demonstrate that the sensor reduces impact forces and can detect and characterize collision events, providing information that may be used in the future for force reduction behaviors. Various parameters that affect collision severity, including interface friction, interface stiffness, end tip velocity and joint stiffness irrespective of controller bandwidth are also explored using the sensor to provide information about the contact force at the site of impact. Joint stiffness is made independent of controller bandwidth limitations using passive torsional springs of various stiffnesses. Results indicate a positive correlation between peak impact force and joint stiffness, skin friction and interface stiffness, with implications for future skin and robot link designs and post-collision behaviors.",
"title": ""
},
{
"docid": "d9791131cefcf0aa18befb25c12b65b2",
"text": "Medical record linkage is becoming increasingly important as clinical data is distributed across independent sources. To improve linkage accuracy we studied different name comparison methods that establish agreement or disagreement between corresponding names. In addition to exact raw name matching and exact phonetic name matching, we tested three approximate string comparators. The approximate comparators included the modified Jaro-Winkler method, the longest common substring, and the Levenshtein edit distance. We also calculated the combined root-mean square of all three. We tested each name comparison method using a deterministic record linkage algorithm. Results were consistent across both hospitals. At a threshold comparator score of 0.8, the Jaro-Winkler comparator achieved the highest linkage sensitivities of 97.4% and 97.7%. The combined root-mean square method achieved sensitivities higher than the Levenshtein edit distance or longest common substring while sustaining high linkage specificity. Approximate string comparators increase deterministic linkage sensitivity by up to 10% compared to exact match comparisons and represent an accurate method of linking to vital statistics data.",
"title": ""
},
{
"docid": "4453c85d0fc1513e9657731d84896864",
"text": "A number of studies have looked at the prevalence rates of psychiatric disorders in the community in Pakistan over the last two decades. However, very little information is available on psychiatric morbidity in primary health care. We therefore decided to measure the prevalence of psychiatric disorders and their correlates among women from primary health care facilities in Lahore. We interviewed 650 women in primary health care settings in Lahore. We used a semi-structured interview and questionnaires to collect information during face-to-face interviews. Nearly two-thirds of the women (64.3%) in our study were diagnosed to have a psychiatric problem, while one-third (30.4%) suffered from Major Depressive Disorder. Stressful life events, verbal violence and battering were positively correlated with psychiatric morbidity, while social support, using reasoning to resolve conflicts, and education were negatively correlated with psychiatric morbidity. The prevalence of psychiatric disorders is in line with the prevalence figures found in community studies. Domestic violence is an important correlate which can be the focus of interventions.",
"title": ""
},
{
"docid": "367782d15691c3c1dfd25220643752f0",
"text": "Music streaming services increasingly incorporate additional music taxonomies (i.e., mood, activity, and genre) to provide users different ways to browse through music collections. However, these additional taxonomies can distract the user from reaching their music goal, and influence choice satisfaction. We conducted an online user study with an application called \"Tune-A-Find,\" where we measured participants' music taxonomy choice (mood, activity, and genre). Among 297 participants, we found that the chosen taxonomy is related to personality traits. We found that openness to experience increased the choice for browsing music by mood, while conscientiousness increased the choice for browsing music by activity. In addition, those high in neuroticism were most likely to browse for music by activity or genre. Our findings can support music streaming services to further personalize user interfaces. By knowing the user's personality, the user interface can adapt to the user's preferred way of music browsing.",
"title": ""
},
{
"docid": "2ee9ed8260e63721b8525724b0d65d5e",
"text": "Deep neural network classifiers are vulnerable to small input perturbations carefully generated by the adversaries. Injecting adversarial inputs during training, known as adversarial training, can improve robustness against one-step attacks, but not for unknown iterative attacks. To address this challenge, we propose to utilize embedding space for both classification and low-level (pixel-level) similarity learning to ignore unknown pixel level perturbation. During training, we inject adversarial images without replacing their corresponding clean images and penalize the distance between the two embeddings (clean and adversarial). This additional regularization encourages two similar images (clean and perturbed versions) to produce the same outputs, not necessarily the true labels, enhancing classifier’s robustness against pixel level perturbation. Next, we show iteratively generated adversarial images easily transfer between networks trained with the same strategy. Inspired by this observation, we also propose cascade adversarial training, which transfers the knowledge of the end results of adversarial training. We train a network from scratch by injecting iteratively generated adversarial images crafted from already defended networks in addition to one-step adversarial images from the network being trained. Experimental results show that cascade adversarial training together with our proposed low-level similarity learning efficiently enhance the robustness against iterative attacks, but at the expense of decreased robustness against one-step attacks. We show that combining those two techniques can also improve robustness under the worst case black box attack scenario.",
"title": ""
},
{
"docid": "3a5ef0db1fbbebd7c466a3b657e5e173",
"text": "Fully homomorphic encryption currently faces two problems. One is that candidate fully homomorphic encryption schemes are few. Another is that the efficiency of fully homomorphic encryption is a big question. In this paper, we propose a fully homomorphic encryption scheme based on LWE, which has better key size. Our main contributions are: (1) Following the recent binary-LWE work, we choose the secret key from a binary set and modify the basic encryption scheme proposed by Lindner and Peikert in 2010. We propose a fully homomorphic encryption scheme based on the new basic encryption scheme. We analyze the correctness and give the proof of the security of our scheme. The public key, evaluation keys and tensored ciphertext have better size in our scheme. (2) Estimating parameters for a fully homomorphic encryption scheme is an important work. We estimate the concrete parameters for our scheme and compare them with the Bra12 scheme. Our scheme has public and private keys that are smaller by a factor of about log q than in the Bra12 scheme. The tensored ciphertext in our scheme is smaller by a factor of about log^2 q than in the Bra12 scheme. The key switching matrix in our scheme is smaller by a factor of about log^3 q than in the Bra12 scheme.",
"title": ""
},
{
"docid": "c313450c7a72941060432d4e000d8ba0",
"text": "We propose an approach to generate geometric theorems from electronic images of diagrams automatically. The approach makes use of techniques of Hough transform to recognize geometric objects and their labels and of numeric verification to mine basic geometric relations. Candidate propositions are generated from the retrieved information by using six strategies and geometric theorems are obtained from the candidates via algebraic computation. Experiments with a preliminary implementation illustrate the effectiveness and efficiency of the proposed approach for generating nontrivial theorems from images of diagrams. This work demonstrates the feasibility of automated discovery of profound geometric knowledge from simple image data and has potential applications in geometric knowledge management and education.",
"title": ""
},
{
"docid": "b4002e27c1c656d71dc4277ea0cca9a9",
"text": "This paper proposes a distributionally robust approach to logistic regression. We use the Wasserstein distance to construct a ball in the space of probability distributions centered at the uniform distribution on the training samples. If the radius of this ball is chosen judiciously, we can guarantee that it contains the unknown data-generating distribution with high confidence. We then formulate a distributionally robust logistic regression model that minimizes a worst-case expected log-loss function, where the worst case is taken over all distributions in the Wasserstein ball. We prove that this optimization problem admits a tractable reformulation and encapsulates the classical as well as the popular regularized logistic regression problems as special cases. We further propose a distributionally robust approach based on Wasserstein balls to compute upper and lower confidence bounds on the misclassification probability of the resulting classifier. These bounds are given by the optimal values of two highly tractable linear programs. We validate our theoretical out-of-sample guarantees through simulated and empirical experiments.",
"title": ""
},
{
"docid": "eefb6ec5984b6641baedecc0bf3b44c4",
"text": "Gradient descent is prevalent for large-scale optimization problems in machine learning; in particular, it nowadays plays a major role in computing and correcting the connection strengths of neural networks in deep learning. However, many gradient-based optimization methods contain sensitive hyper-parameters that require endless configuration. In this paper, we present a novel adaptive mechanism called adaptive exponential decay rate (AEDR). AEDR uses an adaptive exponential decay rate rather than a fixed and preconfigured one, allowing us to eliminate one otherwise tuning-sensitive hyper-parameter. AEDR calculates the exponential decay rate adaptively by employing the moving average of both gradients and squared gradients over time. The mechanism is then applied to Adadelta and Adam; it reduces the number of their hyper-parameters to only a single one to be tuned. We use neural networks of long short-term memory and LeNet to demonstrate how the learning rate adapts dynamically. We show promising results compared with other state-of-the-art methods on four data sets: IMDB (movie reviews), SemEval-2016 (sentiment analysis in Twitter), CIFAR-10 and Pascal VOC-2012.",
"title": ""
},
{
"docid": "2a13609a94050c4477d94cf0d89cbdd3",
"text": "In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MATk learning on the classification calibration of the ATk loss and the error bounds of ATk-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets.",
"title": ""
},
{
"docid": "ea411e1666cf9f9e1220b0ec642d45de",
"text": "The night sky remains a largely unexplored frontier for biologists studying the behavior and physiology of free-ranging, nocturnal organisms. Conventional imaging tools and techniques such as night-vision scopes, infrared-reflectance cameras, flash cameras, and radar provide insufficient detail for the scale and resolution demanded by field researchers. A new tool is needed that is capable of imaging noninvasively in the dark at high-temporal and spatial resolution. Thermal infrared imaging represents the most promising such technology that is poised to revolutionize our ability to observe and document the behavior of free-ranging organisms in the dark. Herein we present several examples from our research on free-ranging bats that highlight the power and potential of thermal infrared imaging for the study of animal behavior, energetics and censusing of large colonies, among others. Using never-before-seen video footage and data, we have begun to answer questions that have puzzled biologists for decades, as well as to generate new hypotheses and insight. As we begin to appreciate the functional significance of the aerosphere as a dynamic environment that affects organisms at different spatial and temporal scales, thermal infrared imaging can be at the forefront of the effort to explore this next frontier.",
"title": ""
},
{
"docid": "bbe503ddce5f16bd968e4419d74e805b",
"text": "The financial industry has been strongly influenced by digitalization in the past few years, reflected by the emergence of “FinTech,” which represents the marriage of “finance” and “information technology.” FinTech provides opportunities for the creation of new services and business models and poses challenges to traditional financial service providers. Therefore, FinTech has become a subject of debate among practitioners, investors, and researchers and is highly visible in the popular media. In this study, we unveil the drivers motivating the FinTech phenomenon as perceived by the English and German popular press, including the subjects discussed in the context of FinTech. This study is the first one to reflect the media perspective on the FinTech phenomenon in research. In doing so, we extend the growing knowledge on FinTech and contribute to a common understanding in the financial and digital innovation literature. This study contributes to research in the areas of information systems, finance and interdisciplinary social sciences. Moreover, it brings value to practitioners (entrepreneurs, investors, regulators, etc.), who explore the field of FinTech.",
"title": ""
},
{
"docid": "d12d51010fcf4433c5a74a6fbead5cb5",
"text": "This paper introduces the power-density and temperature induced issues in the modern on-chip systems. In particular, the emerging Dark Silicon problem is discussed along with critical research challenges. Afterwards, an overview of key research efforts and concepts is presented that leverage dark silicon for performance and reliability optimization. In case temperature constraints are violated, an efficient dynamic thermal management technique is employed.",
"title": ""
},
{
"docid": "9d8b0a97eb195c972c1c0d989625a600",
"text": "Emerging millimeter-wave frequency applications require high performance, low-cost and compact devices and circuits. This is the reason why the Substrate Integrated Waveguide (SIW) technology, which combines some advantages of planar circuits and metallic waveguides, has focused a lot of attention in recent years. However, not all three-dimensional metallic waveguide devices and circuit are integrable in planar form. In its first section, this paper reviews recently proposed three-dimensional SIW devices that are taking advantages of the third-dimension to achieve either more compact or multidimensional circuits at millimeter wave frequencies. Also, in a second section, special interest is oriented to recent development of air-filled SIW based on low-cost multilayer printed circuit board (PCB) for high performance millimeter-wave substrate integrated circuits and systems.",
"title": ""
},
{
"docid": "6341ff36d4cdbc10f4bd864c95c89be2",
"text": "OBJECTIVE\nThe aim of this study was to evaluate the antibiotic resistance pattern of Pseudomonas aeruginosa and its prevalence in patients with urinary tract infections (UTI) for effective treatment in a developing country like Pakistan.\n\n\nMETHODS\nThis is an observational study conducted for a period of ten months which ended in December 2013 at the Dr. Essa Laboratory and Diagnostic Centre in Karachi. A total of 4668 urine samples of UTI patients were collected and standard microbiological techniques were performed to identify the organisms in urine cultures. Antibiotic susceptibility testing was performed by the Kirby-Bauer technique for twenty five commonly used antimicrobials and then analyzed on SPSS version 17.\n\n\nRESULTS\nP. aeruginosa was isolated in 254 cultures (5.4%). The most resistant drugs included Ceclor (100%) and Cefizox (100%), followed by Amoxil/Ampicillin (99.6%), Ceflixime (99.6%), Doxycycline (99.6%), Cefuroxime (99.2%), Cephradine (99.2%), Cotrimoxazole (99.2%), Nalidixic acid (98.8%), Pipemidic acid (98.6%) and Augmentin (97.6%).\n\n\nCONCLUSION\nEmerging resistant strains of Pseudomonas aeruginosa are potentially linked to injudicious use of drugs, leading to ineffective empirical therapy and, in turn, the appearance of even more resistant strains of the bacterium. Therefore, we recommend culture and sensitivity testing to determine the presence of P. aeruginosa prior to specific antimicrobial therapy.",
"title": ""
},
{
"docid": "19f1f1156ca9464759169dd2d4005bf6",
"text": "We first consider the problem of partitioning the edges of a graph G into bipartite cliques such that the total order of the cliques is minimized, where the order of a clique is the number of vertices in it. It is shown that the problem is NP-complete. We then prove the existence of a partition of small total order in a sufficiently dense graph and devise an efficient algorithm to compute such a partition. It turns out that our algorithm exhibits a trade-off between the total order of the partition and the running time. Next, we define the notion of a compression of a graph G and use the result on graph partitioning to efficiently compute an optimal compression for graphs of a given size. An interesting application of the graph compression result arises from the fact that several graph algorithms can be adapted to work with the compressed representation of the input graph, thereby improving the bound on their running times particularly on dense graphs. This makes use of the trade-off result we obtain from our partitioning algorithm. The algorithms analyzed include those for matchings, vertex connectivity, edge connectivity and shortest paths. In each case, we improve upon the running times of the best-known algorithms for these problems.",
"title": ""
},
{
"docid": "5d52830a1f24dfb74f9425dbc376728e",
"text": "In this paper, the performance of air-cored (ironless) stator axial flux permanent magnet machines with different types of concentrated-coil nonoverlapping windings is evaluated. The evaluation is based on theoretical analysis and is confirmed by finite-element analysis and measurements. It is shown that concentrated-coil winding machines can have a similar performance as that of normal overlapping winding machines using less copper.",
"title": ""
}
] |
scidocsrr
|
dfbec835ba0f612d07adc904ec5d3aa5
|
Reduced-Complexity Delayed-Decision Algorithm for Context-Based Image Processing Systems
|
[
{
"docid": "c6a44d2313c72e785ae749f667d5453c",
"text": "Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0, 1] from noisy data d_i = f(t_i) + z_i, i = 0, ..., n-1, t_i = i/n, z_i iid N(0, 1). The reconstruction f̂_n is defined in the wavelet domain by translating all the empirical wavelet coefficients of d towards 0 by an amount √(2 log n)/√n. We prove two results about that estimator. [Smooth]: With high probability f̂_n is at least as smooth as f, in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.",
"title": ""
}
] |
[
{
"docid": "fbaf790dd8a59516bc4d1734021400fd",
"text": "With the spread of social networks and their unfortunate use for hate speech, automatic detection of the latter has become a pressing problem. In this paper, we reproduce seven state-of-the-art hate speech detection models from prior work, and show that they perform well only when tested on the same type of data they were trained on. Based on these results, we argue that for successful hate speech detection, model architecture is less important than the type of data and labeling criteria. We further show that all proposed detection techniques are brittle against adversaries who can (automatically) insert typos, change word boundaries or add innocuous words to the original hate speech. A combination of these methods is also effective against Google Perspective - a cutting-edge solution from industry. Our experiments demonstrate that adversarial training does not completely mitigate the attacks, and using character-level features makes the models systematically more attack-resistant than using word-level features.",
"title": ""
},
{
"docid": "6e36103ba9f21103252141ad4a53b4ac",
"text": "In this paper, we describe the binary classification of sentences into idiomatic and non-idiomatic. Our idiom detection algorithm is based on linear discriminant analysis (LDA). To obtain a discriminant subspace, we train our model on a small number of randomly selected idiomatic and non-idiomatic sentences. We then project both the training and the test data on the chosen subspace and use the three nearest neighbor (3NN) classifier to obtain accuracy. The proposed approach is more general than the previous algorithms for idiom detection — neither does it rely on target idiom types, lexicons, or large manually annotated corpora, nor does it limit the search space by a particular linguistic con-",
"title": ""
},
{
"docid": "dbca7415a584b3a8b9348c47d5ab2fa4",
"text": "The shared nature of the medium in wireless networks makes it easy for an adversary to launch a Wireless Denial of Service (WDoS) attack. Recent studies, demonstrate that such attacks can be very easily accomplished using off-the-shelf equipment. To give a simple example, a malicious node can continually transmit a radio signal in order to block any legitimate access to the medium and/or interfere with reception. This act is called jamming and the malicious nodes are referred to as jammers. Jamming techniques vary from simple ones based on the continual transmission of interference signals, to more sophisticated attacks that aim at exploiting vulnerabilities of the particular protocol used. In this survey, we present a detailed up-to-date discussion on the jamming attacks recorded in the literature. We also describe various techniques proposed for detecting the presence of jammers. Finally, we survey numerous mechanisms which attempt to protect the network from jamming attacks. We conclude with a summary and by suggesting future directions.",
"title": ""
},
{
"docid": "5d8f93576d40b638c16ccbd9db8062a2",
"text": "Point-of-interest (POI) denotes the category of a location people visit, reflecting the daily life and personal interests. Through a massive dataset of check-in records, this paper discloses that the POI visiting pattern of human spatial mobility follows both Zipf's law and Heaps' law. It is found that people always have three potential choices when selecting a particular POI as their next travel destination, namely, exploring a new POI, returning to a visited one, or staying at the current one. We calculate the probabilities of these scenarios and propose a statistical model based on random walk to represent such phenomena. Simulation results show that our model can well reproduce the heterogeneous Zipf's law and Heaps' law of human mobility.",
"title": ""
},
{
"docid": "5da804fa4c1474e27a1c91fcf5682e20",
"text": "We present an overview of Candide, a system for automatic translation of French text to English text. Candide uses methods of information theory and statistics to develop a probability model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. Introduction Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to English text. Our goal is to perform fully-automatic, high-quality text-to-text translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator's-assistant modes. Our approach is founded upon the statistical analysis of language. Our chief tools are the source-channel model of communication, parametric probability models of language and translation, and an assortment of numerical algorithms for training such models from examples. This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Candide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probability theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the ARPA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2. Statistical Translation Consider the problem of translating French text to English text. 
Given a French sentence f, we imagine that it was originally rendered as an equivalent English sentence e. To obtain the French, the English was transmitted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. Figure 1: The Source-Channel Formalism of Translation. Here f is the French text to be translated, e is the putative original English rendering, and ê is the English translation. This formalism can be exploited to yield French-to-English translations as follows. Let us write Pr(e | f) for the probability that e was the original English rendering of the French f. Given a French sentence f, the problem of automatic translation reduces to finding the English sentence that maximizes Pr(e | f). That is, we seek ê = argmax_e Pr(e | f). By virtue of Bayes' Theorem, we have ê = argmax_e Pr(e | f) = argmax_e Pr(f | e) Pr(e) (1). The term Pr(f | e) models the probability that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr(e) models the a priori probability that e was supplied as the channel input. We call this function the language model. Each of these factors, the translation model and the language model, independently produces a score for a candidate English translation e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. Candide selects as its translation the e that maximizes their product. This discussion begs two important questions. 
First, where do the models Pr(f | e) and Pr(e) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find ê? These questions are addressed in the next two sections. 2.1. Probability Models We begin with a brief detour into probability theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model better match some body of data. Let us write c for a body of data to be modeled, and θ for a vector of parameters. The quantity Pr_θ(c), computed according to some formula involving c and θ, is called the likelihood. (*Current address: Renaissance Technologies, Stony Brook, NY.) [Human Language Technology, Plainsboro, 1994]",
"title": ""
},
{
"docid": "0084faef0e08c4025ccb3f8fd50892f1",
"text": "Steganography is a method of hiding secret messages in a cover object while communication takes place between sender and receiver. Security of confidential information has always been a major issue from the past times to the present time. It has always been the interested topic for researchers to develop secure techniques to send data without revealing it to anyone other than the receiver. Therefore from time to time researchers have developed many techniques to fulfill secure transfer of data and steganography is one of them. In this paper we have proposed a new technique of image steganography i.e. Hash-LSB with RSA algorithm for providing more security to data as well as our data hiding method. The proposed technique uses a hash function to generate a pattern for hiding data bits into LSB of RGB pixel values of the cover image. This technique makes sure that the message has been encrypted before hiding it into a cover image. If in any case the cipher text got revealed from the cover image, the intermediate person other than receiver can't access the message as it is in encrypted form.",
"title": ""
},
{
"docid": "f958c7d3d27ee79c9dee944716139025",
"text": "We present a tunable flipflop-based frequency divider and a fully differential push-push VCO designed in a 200GHz fT SiGe BiCMOS technology. A new technique for tuning the sensitivity of the divider in the frequency range of interest is presented. The chip works from 60GHz up to 113GHz. The VCO is based on a new topology which allows generating differential push-push outputs. The VCO shows a tuning range larger than 7GHz. The phase noise is 75dBc/Hz at 100kHz offset. The chip shows a frequency drift of 12.3MHz/C. The fundamental signal suppression is larger than 50dB. The output power is 2×5dBm. At a 3.3V supply, the circuits consume 35mA and 65mA, respectively.",
"title": ""
},
{
"docid": "dc2c952b5864a167c19b34be6db52389",
"text": "Data mining is popularly used to combat frauds because of its effectiveness. It is a well-defined procedure that takes data as input and produces models or patterns as output. Neural network, a data mining technique was used in this study. The design of the neural network (NN) architecture for the credit card detection system was based on unsupervised method, which was applied to the transactions data to generate four clusters of low, high, risky and high-risk clusters. The self-organizing map neural network (SOMNN) technique was used for solving the problem of carrying out optimal classification of each transaction into its associated group, since a prior output is unknown. The receiver-operating curve (ROC) for credit card fraud (CCF) detection watch detected over 95% of fraud cases without causing false alarms unlike other statistical models and the two-stage clusters. This shows that the performance of CCF detection watch is in agreement with other detection software, but performs better.",
"title": ""
},
{
"docid": "a16be992aa947c8c5d2a7c9899dfbcd8",
"text": "The effect of the Eureka Spring (ES) appliance was investigated on 37 consecutively treated, noncompliant patients with bilateral Class II malocclusions. Lateral cephalographs were taken at the start of orthodontic treatment (T1), at insertion of the ES (T2), and at removal of the ES (T3). The average treatment interval between T2 and T3 was four months. The Class II correction occurred almost entirely by dentoalveolar movement and was almost equally distributed between the maxillary and mandibular dentitions. The rate of molar correction was 0.7 mm/mo. There was no change in anterior face height, mandibular plane angle, palatal plane angle, or gonial angle with treatment. There was a 2 degrees change in the occlusal plane resulting from intrusion of the maxillary molar and the mandibular incisor. Based on the results in this sample, the ES appliance was very effective in correcting Class II malocclusions in noncompliant patients without increasing the vertical dimension.",
"title": ""
},
{
"docid": "004b9c1adb0e217c89b2266348d9bd88",
"text": "Branch-and-bound implicit enumeration algorithms for permutation problems (discrete optimization problems where the set of feasible solutions is the permutation group S_n) are characterized in terms of a sextuple (B_p, S, E, D, L, U), where (1) B_p is the branching rule for permutation problems, (2) S is the next node selection rule, (3) E is the set of node elimination rules, (4) D is the node dominance function, (5) L is the node lower-bound cost function, and (6) U is an upper-bound solution cost. A general algorithm based on this characterization is presented and the dependence of the computational requirements on the choice of algorithm parameters, S, E, D, L, and U, is investigated theoretically. The results verify some intuitive notions but disprove others.",
"title": ""
},
{
"docid": "45ec4615b6cc593011eb9a7b714fb325",
"text": "There has been a drive recently to make sensor data accessible on the Web. However, because of the vast number of sensors collecting data about our environment, finding relevant sensors on the Web is a non-trivial challenge. In this paper, we present an approach to discovering sensors through a standard service interface over Linked Data. This is accomplished with a semantic sensor network middleware that includes a sensor registry on Linked Data and a sensor discovery service that extends the OGC Sensor Web Enablement. With this approach, we are able to access and discover sensors that are positioned near named-locations of interest.",
"title": ""
},
{
"docid": "61225cc75aac3bd6b61d7a45ad4ceb1f",
"text": "We present a pipeline of algorithms that decomposes a given polygon model into parts such that each part can be 3D printed with high (outer) surface quality. For this we exploit the fact that most 3D printing technologies have an anisotropic resolution and hence the surface smoothness varies significantly with the orientation of the surface. Our pipeline starts by segmenting the input surface into patches such that their normals can be aligned perpendicularly to the printing direction. A 3D Voronoi diagram is computed such that the intersections of the Voronoi cells with the surface approximate these surface patches. The intersections of the Voronoi cells with the input model’s volume then provide an initial decomposition. We further present an algorithm to compute an assembly order for the parts and generate connectors between them. A post processing step further optimizes the seams between segments to improve the visual quality. We run our pipeline on a wide range of 3D models and experimentally evaluate the obtained improvements in terms of numerical, visual, and haptic quality.",
"title": ""
},
{
"docid": "f3a4f5bd47e978d3c74aa5dbfe93f9f9",
"text": "We study the problem of analyzing tweets with Universal Dependencies (UD; Nivre et al., 2016). We extend the UD guidelines to cover special constructions in tweets that affect tokenization, part-of-speech tagging, and labeled dependencies. Using the extended guidelines, we create a new tweet treebank for English (TWEEBANK V2) that is four times larger than the (unlabeled) TWEEBANK V1 introduced by Kong et al. (2014). We characterize the disagreements between our annotators and show that it is challenging to deliver consistent annotation due to ambiguity in understanding and explaining tweets. Nonetheless, using the new treebank, we build a pipeline system to parse raw tweets into UD. To overcome annotation noise without sacrificing computational efficiency, we propose a new method to distill an ensemble of 20 transition-based parsers into a single one. Our parser achieves an improvement of 2.2 in LAS over the un-ensembled baseline and outperforms parsers that are state-of-the-art on other treebanks in both accuracy and speed.",
"title": ""
},
{
"docid": "4ecd27822fee036150b1c8f3db70c679",
"text": "Despite the proliferation of e-services, they are still characterized by uncertainties. As result, consumer trust beliefs are considered an important determinant of e-service adoption. Past work has not however considered the potentially dynamic nature of these trust beliefs, and how early-stage trust might influence later-stage adoption and use. To address this gap, this study draws on the theory of reasoned action and expectation-confirmation theory to carry out a longitudinal study of trust in eservices. Specifically, we examine how trust interacts with other consumer beliefs, such as perceived usefulness, and how together these beliefs influence consumer intentions and actual behaviours toward e-services at both initial and later stages of use. The empirical context is online health information services. Data collection was carried out at two time periods, approximately 7 weeks apart using a student population. The results show that perceived usefulness and trust are important at both initial and later stages in consumer acceptance of online health services. Consumers’ actual usage experiences modify perceptions of usefulness and influence the confirmation of their initial expectations. These results have implications for our understanding of the dynamic nature of trust and perceived usefulness, and their roles in long term success of e-services.",
"title": ""
},
{
"docid": "2ee8910adbdff2111d64b9a06242050f",
"text": "Current technologies to allow continuous monitoring of vital signs in pre-term infants in the hospital require adhesive electrodes or sensors to be in direct contact with the patient. These can cause stress, pain, and also damage the fragile skin of the infants. It has been established previously that the colour and volume changes in superficial blood vessels during the cardiac cycle can be measured using a digital video camera and ambient light, making it possible to obtain estimates of heart rate or breathing rate. Most of the papers in the literature on non-contact vital sign monitoring report results on adult healthy human volunteers in controlled environments for short periods of time. The authors' current clinical study involves the continuous monitoring of pre-term infants, for at least four consecutive days each, in the high-dependency care area of the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford. The authors have further developed their video-based, non-contact monitoring methods to obtain continuous estimates of heart rate, respiratory rate and oxygen saturation for infants nursed in incubators. In this Letter, it is shown that continuous estimates of these three parameters can be computed with an accuracy which is clinically useful. During stable sections with minimal infant motion, the mean absolute error between the camera-derived estimates of heart rate and the reference value derived from the ECG is similar to the mean absolute error between the ECG-derived value and the heart rate value from a pulse oximeter. Continuous non-contact vital sign monitoring in the NICU using ambient light is feasible, and the authors have shown that clinically important events such as a bradycardia accompanied by a major desaturation can be identified with their algorithms for processing the video signal.",
"title": ""
},
{
"docid": "441f80a25e7a18760425be5af1ab981d",
"text": "This paper proposes efficient algorithms for group sparse optimization with mixed ℓ2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. The ℓ2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional ℓ1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the ℓ2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.",
"title": ""
},
{
"docid": "4df6bbfaa8842d88df0b916946c59ea3",
"text": "Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. \n We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.",
"title": ""
},
{
"docid": "e786d22cd1c30014d1a1dcdc655a56fb",
"text": "Chemical fingerprints are used to represent chemical molecules by recording the presence or absence, or by counting the number of occurrences, of particular features or substructures, such as labeled paths in the 2D graph of bonds, of the corresponding molecule. These fingerprint vectors are used to search large databases of small molecules, currently containing millions of entries, using various similarity measures, such as the Tanimoto or Tversky's measures and their variants. Here, we derive simple bounds on these similarity measures and show how these bounds can be used to considerably reduce the subset of molecules that need to be searched. We consider both the case of single-molecule and multiple-molecule queries, as well as queries based on fixed similarity thresholds or aimed at retrieving the top K hits. We study the speedup as a function of query size and distribution, fingerprint length, similarity threshold, and database size |D| and derive analytical formulas that are in excellent agreement with empirical values. The theoretical considerations and experiments show that this approach can provide linear speedups of one or more orders of magnitude in the case of searches with a fixed threshold, and achieve sublinear speedups in the range of O(|D|0.6) for the top K hits in current large databases. This pruning approach yields subsecond search times across the 5 million compounds in the ChemDB database, without any loss of accuracy.",
"title": ""
},
{
"docid": "727a53dad95300ee9749c13858796077",
"text": "Device-to-device (D2D) communication underlaying LTE can be used to distribute the traffic load of eNBs. However, a conventional D2D link is controlled by an eNB, which still places a burden on the eNB. We propose a completely distributed power allocation method for D2D communication underlaying LTE using deep learning. In the proposed scheme, a D2D transmitter can decide the transmit power without any help from other nodes, such as an eNB or another D2D device. Also, the power set, which is delivered from each D2D node independently, can optimize the overall cell throughput. We suggest a distributed deep learning architecture in which the devices are trained as a group, but operate independently. The deep learning can optimize total cell throughput while keeping constraints such as interference to the eNB. The proposed scheme, implemented using TensorFlow, can provide the same throughput as the conventional method even though it operates in a completely distributed manner.",
"title": ""
},
{
"docid": "857a2098e5eb48340699c6b7a29ec293",
"text": "Compressibility of individual sequences by the class of generalized finite-state information-lossless encoders is investigated. These encoders can operate in a variable-rate mode as well as a fixed-rate one, and they allow for any finite-state scheme of variable-length-to-variable-length coding. For every individual infinite sequence x a quantity p(x) is defined, called the compressibility of x, which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for x by any finite-state encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, also provide useful performance criteria for finite and practical data-compression tasks. The proposed concept of compressibility is also shown to play a role analogous to that of entropy in classical information theory, where one deals with probabilistic ensembles of sequences rather than with individual sequences. While the definition of p(x) allows a different machine for each different sequence to be compressed, the constructive coding theorem leads to a universal algorithm that is asymptotically optimal for all sequences. Manuscript received June 10, 1977; revised February 20, 1978. J. Ziv is with Bell Laboratories, Murray Hill, NJ 07974, on leave from the Department of Electrical Engineering, Technion-Israel Institute of Technology, Haifa, Israel. A. Lempel is with Sperry Research Center, Sudbury, MA 01776, on leave from the Department of Electrical Engineering, Technion-Israel Institute of Technology, Haifa, Israel.",
"title": ""
}
] |
scidocsrr
|
b757e6effd0a6ac1b860669d62f0b730
|
Temporal Relational Ranking for Stock Prediction
|
[
{
"docid": "a13788dcda6ba9caa99e3b6b5dab73f9",
"text": "Our research examines a predictive machine learning approach for financial news articles analysis using several different textual representations: bag of words, noun phrases, and named entities. Through this approach, we investigated 9,211 financial news articles and 10,259,042 stock quotes covering the S&P 500 stocks during a five week period. We applied our analysis to estimate a discrete stock price twenty minutes after a news article was released. Using a support vector machine (SVM) derivative specially tailored for discrete numeric prediction and models containing different stock-specific variables, we show that the model containing both article terms and stock price at the time of article release had the best performance in closeness to the actual future stock price (MSE 0.04261), the same direction of price movement as the future price (57.1% directional accuracy) and the highest return using a simulated trading engine (2.06% return). We further investigated the different textual representations and found that a Proper Noun scheme performs better than the de facto standard of Bag of Words in all three metrics.",
"title": ""
},
{
"docid": "9fe198a6184a549ff63364e9782593d8",
"text": "Node embedding techniques have gained prominence since they produce continuous and low-dimensional features, which are effective for various tasks. Most existing approaches learn node embeddings by exploring the structure of networks and are mainly focused on static non-attributed graphs. However, many real-world applications, such as stock markets and public review websites, involve bipartite graphs with dynamic and attributed edges, called attributed interaction graphs. Different from conventional graph data, attributed interaction graphs involve two kinds of entities (e.g. investors/stocks and users/businesses) and edges of temporal interactions with attributes (e.g. transactions and reviews). In this paper, we study the problem of node embedding in attributed interaction graphs. Learning embeddings in interaction graphs is highly challenging due to the dynamics and heterogeneous attributes of edges. Different from conventional static graphs, in attributed interaction graphs, each edge can have totally different meanings when the interaction is at different times or associated with different attributes. We propose a deep node embedding method called IGE (Interaction Graph Embedding). IGE is composed of three neural networks: an encoding network is proposed to transform attributes into a fixed-length vector to deal with the heterogeneity of attributes; then encoded attribute vectors interact with nodes multiplicatively in two coupled prediction networks that investigate the temporal dependency by treating incident edges of a node as the analogy of a sentence in word embedding methods. The encoding network can be specifically designed for different datasets as long as it is differentiable, in which case it can be trained together with prediction networks by back-propagation. We evaluate our proposed method and various comparing methods on four real-world datasets. 
The experimental results prove the effectiveness of the learned embeddings by IGE on both node clustering and classification tasks.",
"title": ""
}
] |
[
{
"docid": "273153d0cf32162acb48ed989fa6d713",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "26095dbc82b68c32881ad9316256bc42",
"text": "BACKGROUND\nSchizophrenia causes great suffering for patients and families. Today, patients are treated with medications, but unfortunately many still have persistent symptoms and an impaired quality of life. During the last 20 years of research in cognitive behavioral therapy (CBT) for schizophrenia, evidence has been found that the treatment is good for patients but it is not satisfactory enough, and more studies are being carried out hopefully to achieve further improvement.\n\n\nPURPOSE\nClinical trials and meta-analyses are being used to try to prove the efficacy of CBT. In this article, we summarize recent research using the cognitive model for people with schizophrenia.\n\n\nMETHODS\nA systematic search was carried out in PubMed (Medline). Relevant articles were selected if they contained a description of cognitive models for schizophrenia or psychotic disorders.\n\n\nRESULTS\nThere is now evidence that positive and negative symptoms exist in a continuum, from normality (mild form and few symptoms) to fully developed disease (intensive form with many symptoms). Delusional patients have reasoning bias such as jumping to conclusions, and those with hallucination have impaired self-monitoring and experience their own thoughts as voices. Patients with negative symptoms have negative beliefs such as low expectations regarding pleasure and success. In the entire patient group, it is common to have low self-esteem.\n\n\nCONCLUSIONS\nThe cognitive model integrates very well with the aberrant salience model. It takes into account neurobiology, cognitive, emotional and social processes. The therapist uses this knowledge when he or she chooses techniques for treatment of patients.",
"title": ""
},
{
"docid": "ac740402c3e733af4d690e34e567fabe",
"text": "We address the problem of semantic segmentation: classifying each pixel in an image according to the semantic class it belongs to (e.g. dog, road, car). Most existing methods train from fully supervised images, where each pixel is annotated by a class label. To reduce the annotation effort, recently a few weakly supervised approaches emerged. These require only image labels indicating which classes are present. Although their performance reaches a satisfactory level, there is still a substantial gap between the accuracy of fully and weakly supervised methods. We address this gap with a novel active learning method specifically suited for this setting. We model the problem as a pairwise CRF and cast active learning as finding its most informative nodes. These nodes induce the largest expected change in the overall CRF state, after revealing their true label. Our criterion is equivalent to maximizing an upper-bound on accuracy gain. Experiments on two data-sets show that our method achieves 97% percent of the accuracy of the corresponding fully supervised model, while querying less than 17% of the (super-)pixel labels.",
"title": ""
},
{
"docid": "ef98936202fea16571be47ee629b0955",
"text": "Macro tree transducers are a combination of top-down tree transducers and macro grammars. They serve as a model for syntax-directed semantics in which context information can be handled. In this paper the formal model of macro tree transducers is studied by investigating typical automata-theoretical topics like composition, decomposition, domains, and ranges of the induced translation classes. The extension with regular look-ahead is considered. © 1985 Academic Press, Inc.",
"title": ""
},
{
"docid": "84499d49c5e2d7ed9f30b754329d5175",
"text": "The evolution of natural ecosystems is controled by a high level of biodiversity, In sharp contrast, intensive agricultural systems involve monocultures associated with high input of chemical fertilisers and pesticides. Intensive agricultural systems have clearly negative impacts on soil and water quality and on biodiversity conservation. Alternatively, cropping systems based on carefully designed species mixtures reveal many potential advantages under various conditions, both in temperate and tropical agriculture. This article reviews those potential advantages by addressing the reasons for mixing plant species; the concepts and tools required for understanding and designing cropping systems with mixed species; and the ways of simulating multispecies cropping systems with models. Multispecies systems are diverse and may include annual and perennial crops on a gradient of complexity from 2 to n species. A literature survey shows potential advantages such as (1) higher overall productivity, (2) better control of pests and diseases, (3) enhanced ecological services and (4) greater economic profitability. Agronomic and ecological conceptual frameworks are examined for a clearer understanding of cropping systems, including the concepts of competition and facilitation, above- and belowground interactions and the types of biological interactions between species that enable better pest management in the system. After a review of existing models, future directions in modelling plant mixtures are proposed. We conclude on the need to enhance agricultural research on these multispecies systems, combining both agronomic and ecological concepts and tools.",
"title": ""
},
{
"docid": "192e1bd5baa067b563edb739c05decfa",
"text": "This paper presents a simple and accurate design methodology for LLC resonant converters, based on a semi-empirical approach to model steady-state operation in the \"below-resonance\" region. This model is framed in a design strategy that aims to design a converter capable of operating with soft-switching in the specified input voltage range with a load ranging from zero up to the maximum specified level.",
"title": ""
},
{
"docid": "4fa13d98d3d4347b4759a334e9e6298e",
"text": "OBJECTIVE\nTo present estimates of the lifetime prevalence of DSM-IV mental disorders with and without severe impairment, their comorbidity across broad classes of disorder, and their sociodemographic correlates.\n\n\nMETHOD\nThe National Comorbidity Survey-Adolescent Supplement NCS-A is a nationally representative face-to-face survey of 10,123 adolescents aged 13 to 18 years in the continental United States. DSM-IV mental disorders were assessed using a modified version of the fully structured World Health Organization Composite International Diagnostic Interview.\n\n\nRESULTS\nAnxiety disorders were the most common condition (31.9%), followed by behavior disorders (19.1%), mood disorders (14.3%), and substance use disorders (11.4%), with approximately 40% of participants with one class of disorder also meeting criteria for another class of lifetime disorder. The overall prevalence of disorders with severe impairment and/or distress was 22.2% (11.2% with mood disorders, 8.3% with anxiety disorders, and 9.6% behavior disorders). The median age of onset for disorder classes was earliest for anxiety (6 years), followed by 11 years for behavior, 13 years for mood, and 15 years for substance use disorders.\n\n\nCONCLUSIONS\nThese findings provide the first prevalence data on a broad range of mental disorders in a nationally representative sample of U.S. adolescents. Approximately one in every four to five youth in the U.S. meets criteria for a mental disorder with severe impairment across their lifetime. The likelihood that common mental disorders in adults first emerge in childhood and adolescence highlights the need for a transition from the common focus on treatment of U.S. youth to that of prevention and early intervention.",
"title": ""
},
{
"docid": "5e24b62458331cf88e9e606ae0b381b6",
"text": "People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a \"second-order\" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. (PsycINFO Database Record",
"title": ""
},
{
"docid": "226f84ed038a4509d9f3931d7df8b977",
"text": "Physically Asynchronous/Logically Synchronous (PALS) is an architecture pattern that allows developers to design and verify a system as though all nodes executed synchronously. The correctness of PALS protocol was formally verified. However, the implementation of PALS adds additional code that is otherwise not needed. In our case, we have a middleware (PALSWare) that supports PALS systems. In this paper, we introduce a verification framework that shows how we can apply Software Model Checking (SMC) to verify a PALS system at the source code level. SMC is an automated and exhaustive source code checking technology. Compared to verifying (hardware or software) models, verifying the actual source code is more useful because it minimizes any chance of false interpretation and eliminates the possibility of missing software bugs that were absent in the model but introduced during implementation. In other words, SMC reduces the semantic gap between what is verified and what is executed. Our approach is compositional, i.e., the verification of PALSWare is done separately from applications. Since PALSWare is inherently concurrent, to verify it via SMC we must overcome the statespace explosion problem, which arises from concurrency and asynchrony. To this end, we develop novel simplification abstractions, prove their soundness, and then use these abstractions to reduce the verification of a system with many threads to verifying a system with a relatively small number of threads. When verifying an application, we leverage the (already verified) synchronicity guarantees provided by the PALSWare to reduce the verification complexity significantly. Thus, our approach uses both “abstraction” and “composition”, the two main techniques to reduce statespace explosion. This separation between verification of PALSWare and applications also provides better management against upgrades to either. We validate our approach by verifying the current PALSWare implementation, and several PALSWare-based distributed real time applications.",
"title": ""
},
{
"docid": "af9f4dc24ca90a884ca85e94daa2547e",
"text": "Congenital web neck is a deformity hardly ever reported in the English literature. It is usually associated with Ullrich-Turner syndrome. There are several options to correct this deformity, but in severe cases complete correction of the web and the abnormal back hair is not always possible. We present our experience with a secondary case where a previous butterfly method had been employed; a combined procedure was used, achieving a satisfactory result. We consider that this technique is useful and offers an important improvement of the contour.",
"title": ""
},
{
"docid": "8e6677e03f964984e87530afad29aef3",
"text": "University of Jyväskylä, Department of Computer Science and Information Systems, PO Box 35, FIN-40014, Finland; Agder University College, Department of Information Systems, PO Box 422, 4604, Kristiansand, Norway; University of Toronto, Faculty of Information Studies, 140 St. George Street, Toronto, ON M5S 3G6, Canada; University of Oulu, Department of Information Processing Science, University of Oulu, PO Box 3000, FIN-90014, Finland Abstract Innovations in network technologies in the 1990s have provided new ways to store and organize information to be shared by people and various information systems. The term Enterprise Content Management (ECM) has been widely adopted by software product vendors and practitioners to refer to technologies used to manage the content of assets like documents, web sites, intranets, and extranets in organizational or inter-organizational contexts. Despite this practical interest, ECM has received only little attention in the information systems research community. This editorial argues that ECM provides an important and complex subfield of Information Systems. It provides a framework to stimulate and guide future research, and outlines research issues specific to the field of ECM. European Journal of Information Systems (2006) 15, 627–634. doi:10.1057/palgrave.ejis.3000648",
"title": ""
},
{
"docid": "d49825f64cda7772717d6e1f9c40d002",
"text": "The huge variance of human pose and the misalignment of detected human images significantly increase the difficulty of person Re-Identification (Re-ID). Moreover, efficient Re-ID systems are required to cope with the massive visual data being produced by video surveillance systems. Targeting to solve these problems, this work proposes a Global-Local-Alignment Descriptor (GLAD) and an efficient indexing and retrieval framework, respectively. GLAD explicitly leverages the local and global cues in human body to generate a discriminative and robust representation. It consists of part extraction and descriptor learning modules, where several part regions are first detected and then deep neural networks are designed for representation learning on both the local and global regions. A hierarchical indexing and retrieval framework is designed to eliminate the huge redundancy in the gallery set, and accelerate the online Re-ID procedure. Extensive experimental results show GLAD achieves competitive accuracy compared to the state-of-the-art methods. Our retrieval framework significantly accelerates the online Re-ID procedure without loss of accuracy. Therefore, this work has potential to work better on person Re-ID tasks in real scenarios.",
"title": ""
},
{
"docid": "228a777c356591c4d1944e645c04a106",
"text": "Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, there is a lack of practical solutions for more complex deformations such as affine transformations because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.",
"title": ""
},
{
"docid": "f4be6b2bf1cd462ec758fe37b098eef1",
"text": "Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in nonstationary tasks. Here, we extend the idea in three directions, addressing proper minibatch parallelization, including reweighted updates for sparse or orthogonal gradients, improving robustness on non-smooth loss functions, in the process replacing the diagonal Hessian estimation procedure that may not always be available by a robust finite-difference approximation. The final algorithm integrates all these components, has linear complexity and is hyper-parameter free.",
"title": ""
},
{
"docid": "54d61b3720be1a6a4aa236a51af72e0d",
"text": "In 2008 Bitcoin was introduced as the first decentralized electronic cash system and it has seen widespread adoption since it became fully functional in 2009. This thesis describes the Bitcoin system, anonymity aspects for Bitcoin and how we can use cryptography to improve anonymity by a scheme called Zerocoin. The Bitcoin system will be described with focus on transactions and the blockchain where all transactions are recorded. We look more closely into anonymity in terms of address unlinkability and illustrate how the anonymity provided is insufficient by clustering addresses. Further we describe Zerocoin, a decentralized electronic cash scheme designed to cryptographically improve the anonymity guarantees in Bitcoin by breaking the link between individual Bitcoin transactions. We detail the construction of Zerocoin, provide security analysis and describe how it integrates into Bitcoin.",
"title": ""
},
{
"docid": "5640d9307fa3d1b611358d3f14d5fb4c",
"text": "An N-LDMOS ESD protection device with drain back and PESD optimization design is proposed. With PESD layer enclosing the N+ drain region, a parasitic SCR is created to achieve high ESD level. When PESD is close to gate, the turn-on efficiency can be further improved (Vt1: 11.2 V reduced to 7.2 V) by the punch-through path from N+/PESD to PW. The proposed ESD N-LDMOS can sustain over 8 kV HBM with low trigger behavior without extra area cost.",
"title": ""
},
{
"docid": "9fdb04de801698a56ebb9acf80e15109",
"text": "To cope with the increasing difference between processor and main memory speeds, modern computer systems use deep memory hierarchies. In the presence of such hierarchies, the performance attained by an application is largely determined by its memory reference behavior—if most references hit in the cache, the performance is significantly higher than if most references have to go to main memory. Frequently, it is possible for the programmer to restructure the data or code to achieve better memory reference behavior. Unfortunately, most existing performance debugging tools do not assist the programmer in this component of the overall performance tuning task.\nThis paper describes MemSpy, a prototype tool that helps programmers identify and fix memory bottlenecks in both sequential and parallel programs. A key aspect of MemSpy is that it introduces the notion of data oriented, in addition to code oriented, performance tuning. Thus, for both source level code objects and data objects, MemSpy provides information such as cache miss rates, causes of cache misses, and in multiprocessors, information on cache invalidations and local versus remote memory misses. MemSpy also introduces a concise matrix presentation to allow programmers to view both code and data oriented statistics at the same time. This paper presents design and implementation issues for MemSpy, and gives a detailed case study using MemSpy to tune a parallel sparse matrix application. It shows how MemSpy helps pinpoint memory system bottlenecks, such as poor spatial locality and interference among data structures, and suggests paths for improvement.",
"title": ""
},
{
"docid": "9d04b10ebe8a65777aacf20fe37b55cb",
"text": "Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.",
"title": ""
},
{
"docid": "1a45d5e0ccc4816c0c64c7e25e7be4e3",
"text": "The interpolation of correspondences (EpicFlow) was widely used for optical flow estimation in most-recent works. It has the advantage of edge-preserving and efficiency. However, it is vulnerable to input matching noise, which is inevitable in modern matching techniques. In this paper, we present a Robust Interpolation method of Correspondences (called RicFlow) to overcome the weakness. First, the scene is over-segmented into superpixels to revitalize an early idea of piecewise flow model. Then, each model is estimated robustly from its support neighbors based on a graph constructed on superpixels. We propose a propagation mechanism among the pieces in the estimation of models. The propagation of models is significantly more efficient than the independent estimation of each model, yet retains the accuracy. Extensive experiments on three public datasets demonstrate that RicFlow is more robust than EpicFlow, and it outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "d8a6dd65e7b0af45466aba2d7dcff317",
"text": "The aim of this paper is to analyze advanced solar dynamic space power systems for electrical space power generation. Space-based solar power [1] (SBSP) is a system for the collection of solar power in space, to meet the ever increasing demand for energy on Earth. SBSP differs from the usual method of solar power collection on Earth. In Earth-based solar power collection, arrays of panels are placed on the ground facing the sun, collecting the sun’s energy during the daytime alone. In SBSP, huge solar panels fitted on a large satellite collect the solar energy available in orbit and beam it down to Earth. In space, the collection of the Sun’s energy is unaffected by the day/night cycle, weather, seasonal changes and the filtering effect of Earth’s atmospheric gases. A major interest in SBSP stems from the fact that solar collection panels can consistently be exposed to a high amount of solar radiation. SBSP offers a complete displacement of fossil fuel, nuclear and biological sources of energy. It is the only energy technology that is clean, renewable, constant and capable of providing power to virtually any location on Earth. Keywords: Space-based solar power (SBSP), Solar power satellite (SPS), Rectifying Antenna (Rectenna)",
"title": ""
}
] |
scidocsrr
|
c0f9613a47a2e040b1107e75586a7d6c
|
Analysis of eye-tracking experiments performed on a Tobii T60
|
[
{
"docid": "dd6413e898cd84ba48b9c27564b1eb49",
"text": "The objective of the tutorial is to give an overview on how eye tracking is currently used and how it can be used as a method in human computer interaction research and especially in usability research. An eye tracking system records how the eyes move while a subject is completing a task for example on a web site. By analyzing these eye movements we are able to gain an objective insight into the behavior of that person.",
"title": ""
}
] |
[
{
"docid": "04e4c1b80bcf1a93cafefa73563ea4d3",
"text": "The last decade has produced an explosion in neuroscience research examining young children's early processing of language. Noninvasive, safe functional brain measurements have now been proven feasible for use with children starting at birth. The phonetic level of language is especially accessible to experimental studies that document the innate state and the effect of learning on the brain. The neural signatures of learning at the phonetic level can be documented at a remarkably early point in development. Continuity in linguistic development from infants' earliest brain responses to phonetic stimuli is reflected in their language and prereading abilities in the second, third, and fifth year of life, a finding with theoretical and clinical impact. There is evidence that early mastery of the phonetic units of language requires learning in a social context. Neuroscience on early language learning is beginning to reveal the multiple brain systems that underlie the human language faculty.",
"title": ""
},
{
"docid": "c2fe863aba72df9df8405329c36046b6",
"text": "Feature learning for 3D shapes is challenging due to the lack of natural parameterization for 3D surface models. We adopt the multi-view depth image representation and propose Multi-View Deep Extreme Learning Machine (MVD-ELM) to achieve fast and quality projective feature learning for 3D shapes. In contrast to existing multi-view learning approaches, our method ensures the feature maps learned for different views are mutually dependent via shared weights and in each layer, their unprojections together form a valid 3D reconstruction of the input 3D shape through using normalized convolution kernels. These lead to a more accurate 3D feature learning as shown by the encouraging results in several applications. Moreover, the 3D reconstruction property enables clear visualization of the learned features, which further demonstrates the meaningfulness of our feature learning.",
"title": ""
},
{
"docid": "a3256a02981c661f47bb498487bf601c",
"text": "Normative theorists of the public sphere, such as Jürgen Habermas, have been very critical of the ‘old’ mass media, which were seen as unable to promote free and plural societal communication. The advent of the internet, in contrast, gave rise to hopes that it would make previously marginalized actors and arguments more visible to a broader public. To assess these claims, this article compares the internet and mass media communication. It distinguishes three levels of both the offline and the online public sphere, which differ in their structural prerequisites, in their openness for participation and in their influence on the wider society. Using this model, the article compares the levels that are most strongly structured and most influential for the wider society: the mass media and communication as organized by search engines. Using human genome research and analysing Germany and the USA, the study looks at which actors, evaluations and frames are present in the print mass media and on websites, and finds that internet communication does not differ significantly from the offline debate in the print media.",
"title": ""
},
{
"docid": "2c8966dd8374df0fa9a9bfb15cd8fbbe",
"text": "Conversation agents present a challenging agenda for research and application. We describe the development, evaluation, and application of Baldi, a computer animated talking head. Baldi’s existence is justified by the important contribution of the face in spoken dialog. His actions are evaluated and modified to mimic natural actions as much as possible. Baldi has the potential to enrich human-machine interactions and serve as a tutor in a wide variety of educational domains. We describe one current application of language tutoring with children with hearing loss.",
"title": ""
},
{
"docid": "8d046c8468102edd57ba30d9d1992c55",
"text": "In this paper, we present a LinkNet-based architecture with SE-ResNeXt-50 encoder and a novel training strategy that strongly relies on image preprocessing and incorporating distorted network outputs. The architecture combines a pre-trained convolutional encoder and a symmetric expanding path that enables precise localization. We show that such a network can be trained on plain RGB images with a composite loss function and achieves competitive results on the DeepGlobe challenge on building extraction from satellite images",
"title": ""
},
{
"docid": "c7c40106a804061b96b6243cff85d317",
"text": "In this paper, we describe a system for detecting duplicate images and videos in a large collection of multimedia data. Our system consists of three major elements: Local-Difference-Pattern (LDP) as the unified feature to describe both images and videos, Locality-Sensitive-Hashing (LSH) as the core indexing structure to assure the most frequent data access occurred in the main memory, and multi-steps verification for queries to best exclude false positives and to increase the precision. The experimental results, validated on two public datasets, demonstrate that the proposed method is robust against the common image-processing tricks used to produce duplicates. In addition, the memory requirement has been addressed in our system to handle large-scale database.",
"title": ""
},
{
"docid": "ac24254a08f447f1090dc39f79298302",
"text": "The 3 most often-used performance measures in the cognitive and decision sciences are choice, response or decision time, and confidence. We develop a random walk/diffusion theory-2-stage dynamic signal detection (2DSD) theory-that accounts for all 3 measures using a common underlying process. The model uses a drift diffusion process to account for choice and decision time. To estimate confidence, we assume that evidence continues to accumulate after the choice. Judges then interrupt the process to categorize the accumulated evidence into a confidence rating. The model explains all known interrelationships between the 3 indices of performance. Furthermore, the model also accounts for the distributions of each variable in both a perceptual and general knowledge task. The dynamic nature of the model also reveals the moderating effects of time pressure on the accuracy of choice and confidence. Finally, the model specifies the optimal solution for giving the fastest choice and confidence rating for a given level of choice and confidence accuracy. Judges are found to act in a manner consistent with the optimal solution when making confidence judgments.",
"title": ""
},
{
"docid": "a7c9d58c49f1802b94395c6f12c2d6dd",
"text": "Signature-based network intrusion detection systems (NIDSs) have been widely deployed in current network security infrastructure. However, these detection systems suffer from some limitations such as network packet overload, expensive signature matching and massive false alarms in a large-scale network environment. In this paper, we aim to develop an enhanced filter mechanism (named EFM) to comprehensively mitigate these issues, which consists of three major components: a context-aware blacklist-based packet filter, an exclusive signature matching component and a KNN-based false alarm filter. The experiments, which were conducted with two data sets and in a network environment, demonstrate that our proposed EFM can overall enhance the performance of a signature-based NIDS such as Snort in the aspects of packet filtration, signature matching improvement and false alarm reduction without affecting network security. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c7077adc05163c63df2e7af5008a3c97",
"text": "In this paper, we present a methodology to understand GPU microarchitectural features and improve performance for compute-intensive kernels. The methodology relies on a reverse engineering approach to crack the GPU ISA encodings in order to build a GPU assembler. An assembly microbenchmark suite correlates microarchitectural features with their performance factors to uncover instruction-level and memory hierarchy preferences. We use SGEMM as a running example to show the ways to achieve bare-metal performance tuning. The performance boost is achieved by tuning FFMA throughput by activating dual-issue, eliminating register bank conflicts, adding non-FFMA instructions with little penalty, and choosing proper width of global/shared load instructions. On NVIDIA Kepler K20m, we develop a faster SGEMM with 3.1Tflop/s performance and 88% efficiency; the performance is 15% higher than cuBLAS7.0. Applying these optimizations to convolution, the implementation gains 39%-62% performance improvement compared with cuDNN4.0. The toolchain is an attempt to automatically crack different GPU ISA encodings and build an assembler adaptively for the purpose of performance enhancements to applications on GPUs.",
"title": ""
},
{
"docid": "6724f1e8a34a6d9f64a30061ce7f67c0",
"text": "Mental contrasting with implementation intentions (MCII) has been found to improve self-regulation across many life domains. The present research investigates whether MCII can benefit time management. In Study 1, we asked students to apply MCII to a pressing academic problem and assessed how they scheduled their time for the upcoming week. MCII participants scheduled more time than control participants who in their thoughts either reflected on similar contents using different cognitive procedures (content control group) or applied the same cognitive procedures on different contents (format control group). In Study 2, students were taught MCII as a metacognitive strategy to be used on any upcoming concerns of the subsequent week. As compared to the week prior to the training, students in the MCII (vs. format control) condition improved in self-reported time management. In Study 3, MCII (vs. format control) helped working mothers who enrolled in a vocational business program to attend classes more regularly. The findings suggest that performing MCII on one’s everyday concerns improves time management.",
"title": ""
},
{
"docid": "4c3e28c59bf205a32fce34d7ad7c665f",
"text": "Much of the world's supply of data is in the form of time series. In the last decade, there has been an explosion of interest in mining time series data. A number of new algorithms have been introduced to classify, cluster, segment, index, discover rules, and detect anomalies/novelties in time series. While the many algorithms used to solve these problems employ a multitude of different techniques, they all have one common factor: they require some high-level representation of the data, rather than the original raw data. These high-level representations are necessary as a feature extraction step, or simply to make the storage, transmission, and computation of massive datasets feasible. A multitude of representations have been proposed in the literature, including spectral transforms, wavelet transforms, piecewise polynomials, eigenfunctions, and symbolic mappings. This chapter gives a high-level survey of time series Data Mining tasks, with an emphasis on time series representations.",
"title": ""
},
{
"docid": "aaaa90a881f6d52b02f14a05faa25f4e",
"text": "Studies on human motion have attracted a lot of attention. Human motion capture data, which much more precisely records human motion than videos do, has been widely used in many areas. Motion segmentation is an indispensable step for many related applications, but current segmentation methods for motion capture data do not effectively model some important characteristics of motion capture data, such as Riemannian manifold structure and containing non-Gaussian noise. In this paper, we convert the segmentation of motion capture data into a temporal subspace clustering problem. Under the framework of sparse subspace clustering, we propose to use the geodesic exponential kernel to model the Riemannian manifold structure, use correntropy to measure the reconstruction error, use the triangle constraint to guarantee temporal continuity in each cluster and use multi-view reconstruction to extract the relations between different joints. Therefore, exploiting some special characteristics of motion capture data, we propose a new segmentation method, which is robust to non-Gaussian noise, since correntropy is a localized similarity measure. We also develop an efficient optimization algorithm based on block coordinate descent method to solve the proposed model. Our optimization algorithm has a linear complexity while sparse subspace clustering is originally a quadratic problem. Extensive experiment results both on simulated noisy data set and real noisy data set demonstrate the advantage of the proposed method.",
"title": ""
},
{
"docid": "157c084aa6622c74449f248f98314051",
"text": "A magnetically-tuned multi-mode VCO featuring an ultra-wide frequency tuning range is presented. By changing the magnetic coupling coefficient between the primary and secondary coils in the transformer tank, the frequency tuning range of a dual-band VCO is greatly increased to continuously cover the whole E-band. Fabricated in a 65-nm CMOS process, the presented VCO measures a tuning range of 44.2% from 57.5 to 90.1 GHz while consuming 7mA to 9mA at 1.2V supply. The measured phase noises at 10MHz offset from carrier frequencies of 72.2, 80.5 and 90.1 GHz are -111.8, -108.9 and -105 dBc/Hz, respectively, which corresponds to a FOMT between -192.2 and -184.2dBc/Hz.",
"title": ""
},
{
"docid": "6024570104e8791e7f916a6e0479819c",
"text": "This paper presents a new database suitable for both 2-D and 3-D face recognition based on photometric stereo (PS): the Photoface database. The database was collected using a custom-made four-source PS device designed to enable data capture with minimal interaction necessary from the subjects. The device, which automatically detects the presence of a subject using ultrasound, was placed at the entrance to a busy workplace and captured 1839 sessions of face images with natural pose and expression. This meant that the acquired data is more realistic for everyday use than existing databases and is, therefore, an invaluable test bed for state-of-the-art recognition algorithms. The paper also presents experiments of various face recognition and verification algorithms using the albedo, surface normals, and recovered depth maps. Finally, we have conducted experiments in order to demonstrate how different methods in the pipeline of PS (i.e., normal field computation and depth map reconstruction) affect recognition and verification performance. These experiments help to 1) demonstrate the usefulness of PS, and our device in particular, for minimal-interaction face recognition, and 2) highlight the optimal reconstruction and recognition algorithms for use with natural-expression PS data. The database can be downloaded from http://www.uwe.ac.uk/research/Photoface.",
"title": ""
},
{
"docid": "13c0a0dcae1d267b945f1dfb720db85c",
"text": "In order to process complex and large-scale graph data numerous distributed graph-parallel computing platforms have been proposed. However, excessive communications among computing nodes in these systems not only aggravate the network I/O workload of the underlying computing hardware systems but may also cause a decrease in runtime performance and scalability. In this paper, we propose and implement a system called Ligraph, which computes large-scale graph data in distributed mode with lightweight communication overhead. Ligraph is similar to the PowerGraph system with three new features: (1) a Gather partial sum difference based computing model; (2) a corresponding lightweight Gather communication mechanism; (3) for PageRank-like algorithms Ligraph additionally employs a lightweight synchronizing communication mechanism and an edge direction-aware graph partition strategy proposed by our former work LightGraph, which is specially designed for PageRank-like algorithms. We have conducted extensive experiments using real-world data sets, and our results verified the effectiveness of Ligraph in reducing the communication overhead and improving the runtime performance and the scalability compared with PowerGraph and LightGraph. For example, compared with PowerGraph under the Random partition scenario Ligraph can not only reduce up to 35.2 percent of the communication overhead but also cut up to 21.8 percent of the runtime for the PageRank algorithm while processing the Twitter data set. Our experiment results also demonstrate that compared with several other representative existing systems Ligraph also outperforms them in graph computing rate.",
"title": ""
},
{
"docid": "f2742f6876bdede7a67f4ec63d73ead9",
"text": "Momentum methods play a central role in optimization. Several momentum methods are provably optimal, and all use a technique called estimate sequences to analyze their convergence properties. The technique of estimate sequences has long been considered difficult to understand, leading many researchers to generate alternative, “more intuitive” methods and analyses. In this paper we show there is an equivalence between the technique of estimate sequences and a family of Lyapunov functions in both continuous and discrete time. This framework allows us to develop a simple and unified analysis of many existing momentum algorithms, introduce several new algorithms, and most importantly, strengthen the connection between algorithms and continuous-time dynamical systems.",
"title": ""
},
{
"docid": "27cf1715f4cf77f098ea4f64b690ff0d",
"text": "Existing mechanical circuit breakers cannot satisfy the requirements of fast operation in power system due to noise, electric arc and long switching response time. Moreover the non-grid-connected wind power system is based on the Flexible Direct Current Transmission (FDCT) technique. It is especially necessary to research the Solid-State Circuit Breakers (SSCB) to realize the rapid and automatic control for the circuit breakers in the system. Meanwhile, the newly-developed Solid-State Circuit Breakers (SSCB) operating at the natural zero-crossing point of AC system is not suitable for a DC system. Based on the characteristics of the DC system, a novel circuit scheme has been proposed in this paper. The new scheme makes full use of the concepts of soft-switching and current commutation forced by resonance. This scheme successfully realizes the soft turn-on and fast turn-off. In this paper, the topology of the current limiter is presented and analytical mathematical models are derived through comprehensive analysis. Finally, normal turn-on and turn-off experiments and overload delay protection test were conducted. The results show the reliability of the novel theory and feasibility of proposed topology. The proposed scheme can be applied in the grid-connected and non-grid-connected DC transmission and distribution systems.",
"title": ""
},
{
"docid": "99196d6c559d31f465ea5c64d165c283",
"text": "The United States Supreme Court recently ruled that execution by a commonly used protocol of drug administration does not represent cruel or unusual punishment. Various medical journals have editorialized on this drug protocol, the death penalty in general and the role that physicians play. Many physicians, and societies of physicians, express the opinion that it is unethical for doctors to participate in executions. This Target Article explores the harm that occurs to murder victims' relatives when an execution is delayed or indefinitely postponed. By using established principles in psychiatry and the science of the brain, it is shown that victims' relatives can suffer brain damage when justice is not done. Conversely, adequate justice can reverse some of those changes in the brain. Thus, physician opposition to capital punishment may be contributing to significant harm. In this context, the ethics of physician involvement in lethal injection is complex.",
"title": ""
},
{
"docid": "07f6caa52c73e87e68980c5d2ab75989",
"text": "Over the past decades, the rapid development of the Internet and information technologies has profoundly impacted every aspect of organizational and social activities. Many business organizations, including small and medium-sized enterprises (SMEs), have started to adopt business process digitalization (hereafter “BPD”) as a tool to gain market and operational efficiency (e.g., BarNir, Gallaugher & Auger, 2003; Bharadwaj & Soni, 2007; Johnston, Wade & McClean, 2007). Business process digitalization, in this study, is defined as",
"title": ""
}
] |
scidocsrr
|
78ee5adf7c83ef050cd7c59c56bd370d
|
Business Process Analytics Using a Big Data Approach
|
[
{
"docid": "59a480f6e29c0e4919a8e26393b8eb8d",
"text": "monitoring: a proof-of-concept of event-driven business activity management. Structured Abstract: Purpose. The purpose of this paper is to show how to employ complex event processing (CEP) for the observation and management of business processes. It proposes a conceptual architecture of BPM event producers, processors, and consumers and describes technical implications for the application with standard software in a perfect order scenario. Design/methodology/approach. The authors discuss business process analytics as the technological background. The capabilities of CEP in a BPM context are outlined, and an architecture design is proposed. A sophisticated proof-of-concept demonstrates its applicability. Findings. The results overcome the separation and data latency issues of process controlling, monitoring, and simulation. Distinct analyses of past, present, and future blur into a holistic real-time approach. The authors highlight the necessity for configurable event producers in BPM engines, process event support in CEP engines, a common process event format, connectors to visualizers, notifiers, and return channels to the BPM engine. Research limitations. Further research will thoroughly evaluate the approach in a variety of business settings. New concepts and standards for the architecture's building blocks will be needed to improve maintainability and operability. Practical implications. Managers learn how CEP can yield insights into business processes' operations. The paper illustrates a path to overcome inflexibility, latency, and missing feedback mechanisms of current process modeling and control solutions. Software vendors might be interested in the conceptualization and the described needs for further development. Originality/value. So far, there is no commercial CEP-based BPM solution which facilitates a round trip from insight to action as outlined. As major software vendors have begun developing solutions (BPM/BPA solutions), this paper will stimulate a debate between research and practice on suitable design and technology.",
"title": ""
},
{
"docid": "57ca7842e7ab21b51c4069e76121fc26",
"text": "This paper surveys and investigates the strengths and weaknesses of a number of recent approaches to advanced workflow modelling. Rather than inventing just another workflow language, we briefly describe recent workflow languages, and we analyse them with respect to their support for advanced workflow topics. Object Coordination Nets, Workflow Graphs, WorkFlow Nets, and an approach based on Workflow Evolution are described as dedicated workflow modelling approaches. In addition, the Unified Modelling Language as the de facto standard in objectoriented modelling is also investigated. These approaches are discussed with respect to coverage of workflow perspectives and support for flexibility and analysis issues in workflow management, which are today seen as two major areas for advanced workflow support. Given the different goals and backgrounds of the approaches mentioned, it is not surprising that each approach has its specific strengths and weaknesses. We clearly identify these strengths and weaknesses, and we conclude with ideas for combining their best features.",
"title": ""
}
] |
[
{
"docid": "bd518e3748cc5af8d7ef5b686d2f3c5b",
"text": "Authorship identification is the task of identifying the author of a given text from a set of suspects. The main concern of this task is to define an appropriate characterization of texts that captures the writing style of authors. Although deep learning was recently used in different natural language processing tasks, it has not been used in author identification (to the best of our knowledge). In this paper, deep learning is used for feature extraction of documents represented using variable-size character n-grams. We apply a Stacked Denoising Auto-Encoder (SDAE) for extracting document features with different settings, and then a support vector machine classifier is used for classification. The results show that the proposed system outperforms its counterparts.",
"title": ""
},
{
"docid": "709707d1ca7155380743335a288aabe4",
"text": "Following the onset of maturation, female athletes have a significantly higher risk for anterior cruciate ligament (ACL) injury compared with male athletes. While multiple sex differences in lower-extremity neuromuscular control and biomechanics have been identified as potential risk factors for ACL injury in females, the majority of these studies have focused specifically on the knee joint. However, increasing evidence in the literature indicates that lumbo-pelvic (core) control may have a large effect on knee-joint control and injury risk. This review examines the published evidence on the contributions of the trunk and hip to knee-joint control. Specifically, the sex differences in potential proximal controllers of the knee as risk factors for ACL injury are identified and discussed. Sex differences in trunk and hip biomechanics have been identified in all planes of motion (sagittal, coronal and transverse). Essentially, female athletes show greater lateral trunk displacement, altered trunk and hip flexion angles, greater ranges of trunk motion, and increased hip adduction and internal rotation during sport manoeuvres, compared with their male counterparts. These differences may increase the risk of ACL injury among female athletes. Prevention programmes targeted towards trunk and hip neuromuscular control may decrease the risk for ACL injuries.",
"title": ""
},
{
"docid": "c0ee14083f779e3f4115f8b5fd822f67",
"text": "The booming popularity of smartphones is partly a result of application markets where users can easily download wide range of third-party applications. However, due to the open nature of markets, especially on Android, there have been several privacy and security concerns with these applications. On Google Play, as with most other markets, users have direct access to natural-language descriptions of those applications, which give an intuitive idea of the functionality including the security-related information of those applications. Google Play also provides the permissions requested by applications to access security and privacy-sensitive APIs on the devices. Users may use such a list to evaluate the risks of using these applications. To best assist the end users, the descriptions should reflect the need for permissions, which we term description-to-permission fidelity. In this paper, we present a system AutoCog to automatically assess description-to-permission fidelity of applications. AutoCog employs state-of-the-art techniques in natural language processing and our own learning-based algorithm to relate description with permissions. In our evaluation, AutoCog outperforms other related work on both performance of detection and ability of generalization over various permissions by a large extent. On an evaluation of eleven permissions, we achieve an average precision of 92.6% and an average recall of 92.0%. Our large-scale measurements over 45,811 applications demonstrate the severity of the problem of low description-to-permission fidelity. AutoCog helps bridge the long-lasting usability gap between security techniques and average users.",
"title": ""
},
{
"docid": "f56bac3cb4ea99626afa51907e909fa3",
"text": "An overview of technologies concerned with distributing the execution of simulation programs across multiple processors is presented. Here, particular emphasis is placed on discrete event simulations. The High Level Architecture (HLA) developed by the Department of Defense in the United States is first described to provide a concrete example of a contemporary approach to distributed simulation. The remainder of this paper is focused on time management, a central issue concerning the synchronization of computations on different processors. Time management algorithms broadly fall into two categories, termed conservative and optimistic synchronization. A survey of both conservative and optimistic algorithms is presented focusing on fundamental principles and mechanisms. Finally, time management in the HLA is discussed as a means to illustrate how this standard supports both approaches to synchronization.",
"title": ""
},
{
"docid": "27beef0016282d21eeb95c0f830c6fc2",
"text": "Static analysis has been successfully used in many areas, from verifying mission-critical software to malware detection. Unfortunately, static analysis often produces false positives, which require significant manual effort to resolve. In this paper, we show how to overlay a probabilistic model, trained using domain knowledge, on top of static analysis results, in order to triage static analysis results. We apply this idea to analyzing mobile applications. Android application components can communicate with each other, both within single applications and between different applications. Unfortunately, techniques to statically infer Inter-Component Communication (ICC) yield many potential inter-component and inter-application links, most of which are false positives. At large scales, scrutinizing all potential links is simply not feasible. We therefore overlay a probabilistic model of ICC on top of static analysis results. Since computing the inter-component links is a prerequisite to inter-component analysis, we introduce a formalism for inferring ICC links based on set constraints. We design an efficient algorithm for performing link resolution. We compute all potential links in a corpus of 11,267 applications in 30 minutes and triage them using our probabilistic approach. We find that over 95.1% of all 636 million potential links are associated with probability values below 0.01 and are thus likely unfeasible links. Thus, it is possible to consider only a small subset of all links without significant loss of information. This work is the first significant step in making static inter-application analysis more tractable, even at large scales.",
"title": ""
},
{
"docid": "8589f7b0b2d1cbea479e97b0aa6b1498",
"text": "Distributed publish/subscribe systems are naturally suited for processing events in distributed systems. However, support for expressing patterns about distributed events and algorithms for detecting correlations among these events are still largely unexplored. Inspired from the requirements of decentralized, event-driven workflow processing, we design a subscription language for expressing correlations among distributed events. We illustrate the potential of our approach with a workflow management case study. The language is validated and implemented in PADRES. In this paper we present an overview of PADRES, highlighting some of its novel features, including the composite subscription language, the coordination patterns, the composite event detection algorithms, the rule-based router design, and a detailed case study illustrating the decentralized processing of workflows. Our experimental evaluation shows that rule-based brokers are a viable and powerful alternative to existing, special-purpose, content-based routing algorithms. The experiments also show that the use of composite subscriptions in PADRES significantly reduces the load on the network. Complex workflows can be processed in a decentralized fashion with a gain of 40% in message dissemination cost. All processing is realized entirely in the publish/subscribe paradigm.",
"title": ""
},
{
"docid": "7a72f69ad4926798e12f6fa8e598d206",
"text": "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed ‘DeepLabv3’ system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.",
"title": ""
},
{
"docid": "3240607824a6dace92925e75df92cc09",
"text": "We propose a framework to model general guillotine restrictions in two-dimensional cutting problems formulated as Mixed Integer Linear Programs (MIP). The modeling framework requires a pseudo-polynomial number of variables and constraints, which can be effectively enumerated for medium-size instances. Our modeling of general guillotine cuts is the first one that, once it is implemented within a state-of-the-art MIP solver, can tackle instances of challenging size. We mainly concentrate our analysis on the Guillotine Two Dimensional Knapsack Problem (G2KP), for which a model, and an exact procedure able to significantly improve the computational performance, are given. We also show how the modeling of general guillotine cuts can be extended to other relevant problems such as the Guillotine Two Dimensional Cutting Stock Problem (G2CSP) and the Guillotine Strip Packing Problem (GSPP). Finally, we conclude the paper discussing an extensive set of computational experiments on G2KP and GSPP benchmark instances from the literature.",
"title": ""
},
{
"docid": "c13aff70c3b080cfd5d374639e5ec0e9",
"text": "Contemporary vehicles are getting equipped with an increasing number of Electronic Control Units (ECUs) and wireless connectivities. Although these have enhanced vehicle safety and efficiency, they are accompanied by new vulnerabilities. In this paper, we unveil a new important vulnerability applicable to several in-vehicle networks including Controller Area Network (CAN), the de facto standard in-vehicle network protocol. Specifically, we propose a new type of Denial-of-Service (DoS), called the bus-off attack, which exploits the error-handling scheme of in-vehicle networks to disconnect or shut down good/uncompromised ECUs. This is an important attack that must be thwarted, since the attack, once an ECU is compromised, is easy to be mounted on safety-critical ECUs while its prevention is very difficult. In addition to the discovery of this new vulnerability, we analyze its feasibility using actual in-vehicle network traffic, and demonstrate the attack on a CAN bus prototype as well as on two real vehicles. Based on our analysis and experimental results, we also propose and evaluate a mechanism to detect and prevent the bus-off attack.",
"title": ""
},
{
"docid": "718cf9a405a81b9a43279a1d02f5e516",
"text": "In cross-cultural psychology, one of the major sources of the development and display of human behavior is the contact between cultural populations. Such intercultural contact results in both cultural and psychological changes. At the cultural level, collective activities and social institutions become altered, and at the psychological level, there are changes in an individual's daily behavioral repertoire and sometimes in experienced stress. The two most common research findings at the individual level are that there are large variations in how people acculturate and in how well they adapt to this process. Variations in ways of acculturating have become known by the terms integration, assimilation, separation, and marginalization. Two variations in adaptation have been identified, involving psychological well-being and sociocultural competence. One important finding is that there are relationships between how individuals acculturate and how well they adapt: Often those who integrate (defined as being engaged in both their heritage culture and in the larger society) are better adapted than those who acculturate by orienting themselves to one or the other culture (by way of assimilation or separation) or to neither culture (marginalization). Implications of these findings for policy and program development and for future research are presented.",
"title": ""
},
{
"docid": "a2f4005c681554cc422b11a6f5087793",
"text": "Emerged as salient in the recent home appliance consumer market is a new generation of home cleaning robot featuring the capability of Simultaneous Localization and Mapping (SLAM). SLAM allows a cleaning robot not only to self-optimize its work paths for efficiency but also to self-recover from kidnappings for user convenience. By kidnapping, we mean that a robot is displaced, in the middle of cleaning, without its SLAM aware of where it moves to. This paper presents a vision-based kidnap recovery with SLAM for home cleaning robots, the first of its kind, using a wheel drop switch and an upward-looking camera for low-cost applications. In particular, a camera with a wide-angle lens is adopted for a kidnapped robot to be able to recover its pose on a global map with only a single image. First, the kidnapping situation is effectively detected based on a wheel drop switch. Then, for an efficient kidnap recovery, a coarse-to-fine approach to matching the image features detected with those associated with a large number of robot poses or nodes, built as a map in graph representation, is adopted. The pose ambiguity, e.g., due to symmetry is taken care of, if any. The final robot pose is obtained with high accuracy from the fine level of the coarse-to-fine hierarchy by fusing poses estimated from a chosen set of matching nodes. The proposed method was implemented as an embedded system with an ARM11 processor on a real commercial home cleaning robot and tested extensively. Experimental results show that the proposed method works well even in the situation in which the cleaning robot is suddenly kidnapped during the map building process.",
"title": ""
},
{
"docid": "c6dc69296f1cf7b4c86f5f9bcd1bea97",
"text": "A singly fed, electrically small, planar antenna that generates a quasi-isotropic radiation pattern is investigated. The antenna consists of a folded dipole, a pair of capacitively loaded loops (CLLs), and a coplanar stripline (CPS), which are printed on the top and bottom surfaces of a single-layer printed circuit board. Through near-field coupling with the driven CPS, the folded dipole and CLLs are both effectively excited and behave like an electric dipole and a magnetic dipole, respectively. A quasi-isotropic radiation pattern can therefore be obtained by combining the two orthogonal dipoles with the same radiation intensities and quadrature phases. To verify the idea, a prototype operating at 2.4 GHz is designed, fabricated, and measured. It has been shown that this electrically small antenna (0.165 × 0.164 × 0.006 λ3, ka = 0.73) has a −10 dB impedance bandwidth of 0.99%, a total efficiency of ∼90%, and a nearly isotropic pattern with the difference between the maximum and minimum radiated power densities given by ∼3 dB over the entire spherical radiating surface.",
"title": ""
},
{
"docid": "7350c0433fe1330803403e6aa03a2f26",
"text": "An introduction is provided to Multi-Entity Bayesian Networks (MEBN), a logic system that integrates First Order Logic (FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated sub-structures. Knowledge is encoded as a collection of Bayesian network fragments (MFrags) that can be instantiated and combined to form highly complex situation-specific Bayesian networks. A MEBN theory (MTheory) implicitly represents a joint probability distribution over possibly unbounded numbers of hypotheses, and uses Bayesian learning to refine a knowledge base as observations accrue. MEBN provides a logical foundation for the emerging collection of highly expressive probability-based languages. A running example illustrates the representation and reasoning power of the MEBN formalism.",
"title": ""
},
{
"docid": "72f59a5342e3dc9d9c038fae8b9d4844",
"text": "Borromean rings or links are topologically complex assemblies of three entangled rings where no two rings are interlinked in a chain-like catenane, yet the three rings cannot be separated. We report here a metallacycle complex whose crystalline network forms the first example of a new class of entanglement. The complex is formed from the self-assembly of CuBr2 with the cyclotriveratrylene-scaffold ligand (±)-tris(iso-nicotinoyl)cyclotriguaiacylene. Individual metallacycles are interwoven into a two-dimensional chainmail network where each metallacycle exhibits multiple Borromean-ring-like associations with its neighbours. This only occurs in the solid state, and also represents the first example of a crystalline infinite chainmail two-dimensional network. Crystals of the complex were twinned and have an unusual hollow tubular morphology that is likely to result from a localized dissolution-recrystallization process.",
"title": ""
},
{
"docid": "56b2d8ffe74108d5b757c62eb7a7d31d",
"text": "Multi-label classification is an important machine learning task wherein one assigns a subset of candidate labels to an object. In this paper, we propose a new multi-label classification method based on Conditional Bernoulli Mixtures. Our proposed method has several attractive properties: it captures label dependencies; it reduces the multi-label problem to several standard binary and multi-class problems; it subsumes the classic independent binary prediction and power-set subset prediction methods as special cases; and it exhibits accuracy and/or computational complexity advantages over existing approaches. We demonstrate two implementations of our method using logistic regressions and gradient boosted trees, together with a simple training procedure based on Expectation Maximization. We further derive an efficient prediction procedure based on dynamic programming, thus avoiding the cost of examining an exponential number of potential label subsets. Experimental results show the effectiveness of the proposed method against competitive alternatives on benchmark datasets.",
"title": ""
},
{
"docid": "1323c06ef61451c87e302939a3b0d4bd",
"text": "BACKGROUND\nLean and Six Sigma are improvement methodologies developed in the manufacturing industry and have been applied to healthcare settings since the 1990 s. They use a systematic and reproducible approach to provide Quality Improvement (QI), with a flexible process that can be applied to a range of outcomes across different patient groups. This review assesses the literature with regard to the use and utility of Lean and Six Sigma methodologies in surgery.\n\n\nMETHODS\nMEDLINE, Embase, PsycINFO, Allied and Complementary Medicine Database, British Nursing Index, Cumulative Index to Nursing and Allied Health Literature, Health Business Elite and the Health Management Information Consortium were searched in January 2014. Experimental studies were included if they assessed the use of Lean or Six Sigma on the ability to improve specified outcomes in surgical patients.\n\n\nRESULTS\nOf the 124 studies returned, 23 were suitable for inclusion with 11 assessing Lean, 6 Six Sigma and 6 Lean Six Sigma. The broad range of outcomes can be collated into six common aims: to optimise outpatient efficiency, to improve operating theatre efficiency, to decrease operative complications, to reduce ward-based harms, to reduce mortality and to limit unnecessary cost and length of stay. The majority of studies (88%) demonstrate improvement; however high levels of systematic bias and imprecision were evident.\n\n\nCONCLUSION\nLean and Six Sigma QI methodologies have the potential to produce clinically significant improvement for surgical patients. However there is a need to conduct high-quality studies with low risk of systematic bias in order to further understand their role.",
"title": ""
},
{
"docid": "6cc99565a0e9081a94e82be93a67482e",
"text": "The existing shortage of therapists and caregivers assisting physically disabled individuals at home is expected to increase and become serious problem in the near future. The patient population needing physical rehabilitation of the upper extremity is also constantly increasing. Robotic devices have the potential to address this problem as noted by the results of recent research studies. However, the availability of these devices in clinical settings is limited, leaving plenty of room for improvement. The purpose of this paper is to document a review of robotic devices for upper limb rehabilitation including those in developing phase in order to provide a comprehensive reference about existing solutions and facilitate the development of new and improved devices. In particular the following issues are discussed: application field, target group, type of assistance, mechanical design, control strategy and clinical evaluation. This paper also includes a comprehensive, tabulated comparison of technical solutions implemented in various systems.",
"title": ""
},
{
"docid": "31b449b209beaadbbcc36c485517c3cf",
"text": "While a number of information visualization software frameworks exist, creating new visualizations, especially those that involve novel visualization metaphors, interaction techniques, data analysis strategies, and specialized rendering algorithms, is still often a difficult process. To facilitate the creation of novel visualizations we present a new software framework, behaviorism, which provides a wide range of flexibility when working with dynamic information on visual, temporal, and ontological levels, but at the same time providing appropriate abstractions which allow developers to create prototypes quickly which can then easily be turned into robust systems. The core of the framework is a set of three interconnected graphs, each with associated operators: a scene graph for high-performance 3D rendering, a data graph for different layers of semantically-linked heterogeneous data, and a timing graph for sophisticated control of scheduling, interaction, and animation. In particular, the timing graph provides a unified system to add behaviors to both data and visual elements, as well as to the behaviors themselves. To evaluate the framework we look briefly at three different projects all of which required novel visualizations in different domains, and all of which worked with dynamic data in different ways: an interactive ecological simulation, an information art installation, and an information visualization technique.",
"title": ""
},
{
"docid": "17287942eaf5c590b0d48b73eac7bc7c",
"text": "The successof the Particle Swarm Optimization (PSO) algorithm as a single-objective optimizer (mainly when dealing with continuous search spaces) hasmotivated researchers to extend the useof this bioinspired techniqueto other areas.One of them is multiobjective optimization. Despite the fact that the first proposalof a Multi-Objecti veParticle SwarmOptimizer (MOPSO) is over six years old, a considerable number of other algorithms have beenproposedsincethen. This paper presentsa comprehensi ve review of the various MOPSOsreported in the specializedliteratur e. As part of this review, we include a classificationof the approaches,and weidentify the main featuresof eachproposal. In the last part of the paper, we list someof the topicswithin this field that weconsideraspromisingareasof futur e research.",
"title": ""
},
{
"docid": "741e0f73b414b5eef1ce44bbfdb33646",
"text": "Organizing Web services into functionally similar clusters, is an efficient approach to discovering Web services efficiently. An important aspect of the clustering process is calculating the semantic similarity of Web services. Most current clustering approaches are based on similarity-distance measurement, including keyword, ontology and information-retrieval-based methods. Problems with these approaches include a shortage of high quality ontologies and a loss of semantic information. In addition, there has been little fine-grained improvement in existing approaches to service clustering. In this paper, we present a new approach to grouping Web services into functionally similar clusters by mining Web service documents and generating an ontology via hidden semantic patterns present within the complex terms used in service features to measure similarity. If calculating the similarity using the generated ontology fails, the similarity is calculated by using an information-retrieval-based term-similarity method that adopts term-similarity measuring techniques used by thesaurus and search engines. Another important aspect of high performance in clustering is identifying the most suitable cluster center. To improve the utility of clusters, we propose an approach to identifying the cluster center that combines service similarity with the term frequency-inverse document frequency values of service names. Experimental results show that our clustering approach performs better than existing approaches.",
"title": ""
}
] |
scidocsrr
|
384677e7af84cc597cb9bcd78eaacd91
|
Vision and GPS-based autonomous vehicle navigation using templates and artificial neural networks
|
[
{
"docid": "264521c7fa8f281f0f72484e8dad4de0",
"text": "Autonomous navigation is a fundamental task in mobile robotics. In the last years, several approaches have been addressing the autonomous navigation in outdoor environments. Lately it has also been extended to robotic vehicles in urban environments. This paper presents a vehicle control system capable of learning behaviors based on examples from human driver and analyzing different levels of memory of the templates, which are an important capability to autonomous vehicle drive. Our approach is based on image processing, template matching classification, finite state machine, and template memory. The proposed system allows training an image segmentation algorithm and a neural network to work with levels of memory of the templates in order to identify navigable and non-navigable regions. As an output, it generates the steering control and speed for the Intelligent Robotic Car for Autonomous Navigation (CaRINA). Several experimental tests have been carried out under different environmental conditions to evaluate the proposed techniques.",
"title": ""
}
] |
[
{
"docid": "f86d2e40eabe4067da73070db337d9ce",
"text": "Despite tremendous efforts to develop stimuli-responsive enzyme delivery systems, their efficacy has been mostly limited to in vitro applications. Here we introduce, by using an approach of combining biomolecules with artificial compartments, a biomimetic strategy to create artificial organelles (AOs) as cellular implants, with endogenous stimuli-triggered enzymatic activity. AOs are produced by inserting protein gates in the membrane of polymersomes containing horseradish peroxidase enzymes selected as a model for natures own enzymes involved in the redox homoeostasis. The inserted protein gates are engineered by attaching molecular caps to genetically modified channel porins in order to induce redox-responsive control of the molecular flow through the membrane. AOs preserve their structure and are activated by intracellular glutathione levels in vitro. Importantly, our biomimetic AOs are functional in vivo in zebrafish embryos, which demonstrates the feasibility of using AOs as cellular implants in living organisms. This opens new perspectives for patient-oriented protein therapy. The efficacy of stimuli-responsive enzyme delivery systems is usually limited to in vitro applications. Here the authors form artificial organelles by inserting stimuli-responsive protein gates in membranes of polymersomes loaded with enzymes and obtain a triggered functionality both in vitro and in vivo.",
"title": ""
},
{
"docid": "cc379f31d87bce8ec46829f227458059",
"text": "In this paper we exemplify how information visualization supports speculative thinking, hypotheses testing, and preliminary interpretation processes as part of literary research. While InfoVis has become a buzz topic in the digital humanities, skepticism remains about how effectively it integrates into and expands on traditional humanities research approaches. From an InfoVis perspective, we lack case studies that show the specific design challenges that make literary studies and humanities research at large a unique application area for information visualization. We examine these questions through our case study of the Speculative W@nderverse, a visualization tool that was designed to enable the analysis and exploration of an untapped literary collection consisting of thousands of science fiction short stories. We present the results of two empirical studies that involved general-interest readers and literary scholars who used the evolving visualization prototype as part of their research for over a year. Our findings suggest a design space for visualizing literary collections that is defined by (1) their academic and public relevance, (2) the tension between qualitative vs. quantitative methods of interpretation, (3) result-vs. process-driven approaches to InfoVis, and (4) the unique material and visual qualities of cultural collections. Through the Speculative W@nderverse we demonstrate how visualization can bridge these sometimes contradictory perspectives by cultivating curiosity and providing entry points into literary collections while, at the same time, supporting multiple aspects of humanities research processes.",
"title": ""
},
{
"docid": "1996fa0ce1c4dcf45c160bc0c2ebe403",
"text": "In this paper we present a framework that allows a human and a robot to perform simultaneous manipulation tasks safely in close proximity. The proposed framework is based on early prediction of the human's motion. The prediction system, which builds on previous work in the area of gesture recognition, generates a prediction of human workspace occupancy by computing the swept volume of learned human motion trajectories. The motion planner then plans robot trajectories that minimize a penetration cost in the human workspace occupancy while interleaving planning and execution. Multiple plans are computed in parallel, one for each robot task available at the current time, and the trajectory with the least cost is selected for execution. We test our framework in simulation using recorded human motions and a simulated PR2 robot. Our results show that our framework enables the robot to avoid the human while still accomplishing the robot's task, even in cases where the initial prediction of the human's motion is incorrect. We also show that taking into account the predicted human workspace occupancy in the robot's motion planner leads to safer and more efficient interactions between the user and the robot than only considering the human's current configuration.",
"title": ""
},
{
"docid": "85cf57239ae6aed49877e150d3231b43",
"text": "A novel attack model is proposed against the existing wireless link-based source identification, which classifies packet sources according to the physical-layer link signatures. A link signature is believed to be a more reliable indicator than an IP or MAC address for identifying packet source, as it is generally harder to modify/forge. It is therefore expected to be a future authentication against impersonation and DoS attacks. However, if an attacker is equipped with the same capability/hardware as the authenticator to process physical-layer signals, a link signature can be easily manipulated by any nearby wireless device during the training phase. Based on this finding, we propose an attack model, called the analog man-in-the-middle (AMITM) attack, which utilizes the latest full-duplex relay technology to inject semi-controlled link signatures into authorized packets and reproduce the injected signature in the fabricated packets. Our experimental evaluation shows that with a proper parameter setting, 90% of fabricated packets are classified as those sent from an authorized transmitter. A countermeasure against this new attack is also proposed for the authenticator to inject link-signature noise by the same attack methodology.",
"title": ""
},
{
"docid": "5ea366c59a6cd57ac2311a027084b566",
"text": "Shape changing interfaces give physical shapes to digital data so that users can feel and manipulate data with their hands and bodies. However, physical objects in our daily life not only have shape but also various material properties. In this paper, we propose an interaction technique to represent material properties using shape changing interfaces. Specifically, by integrating the multi-modal sensation techniques of haptics, our approach builds a perceptive model for the properties of deformable materials in response to direct manipulation. As a proof-of-concept prototype, we developed preliminary physics algorithms running on pin-based shape displays. The system can create computationally variable properties of deformable materials that are visually and physically perceivable. In our experiments, users identify three deformable material properties (flexibility, elasticity and viscosity) through direct touch interaction with the shape display and its dynamic movements. In this paper, we describe interaction techniques, our implementation, future applications and evaluation on how users differentiate between specific properties of our system. Our research shows that shape changing interfaces can go beyond simply displaying shape allowing for rich embodied interaction and perceptions of rendered materials with the hands and body.",
"title": ""
},
{
"docid": "a0d2ea9b5653d6ca54983bb3d679326e",
"text": "A dynamic reasoning system (DRS) is an adaptation of a conventional formal logical system that explicitly portrays reasoning as a temporal activity, with each extralogical input to the system and each inference rule application being viewed as occurring at a distinct timestep. Every DRS incorporates some well-defined logic together with a controller that serves to guide the reasoning process in response to user inputs. Logics are generic, whereas controllers are application specific. Every controller does, nonetheless, provide an algorithm for nonmonotonic belief revision. The general notion of a DRS comprises a framework within which one can formulate the logic and algorithms for a given application and prove that the algorithms are correct, that is, that they serve to (1) derive all salient information and (2) preserve the consistency of the belief set. This article illustrates the idea with ordinary first-order predicate calculus, suitably modified for the present purpose, and two examples. The latter example revisits some classic nonmonotonic reasoning puzzles (Opus the Penguin, Nixon Diamond) and shows how these can be resolved in the context of a DRS, using an expanded version of first-order logic that incorporates typed predicate symbols. All concepts are rigorously defined and effectively computable, thereby providing the foundation for a future software implementation.",
"title": ""
},
{
"docid": "486bd67781bb1067aa4ff6009cdeecb7",
"text": "BACKGROUND\nThere was less than satisfactory progress, especially in sub-Saharan Africa, towards child and maternal mortality targets of Millennium Development Goals (MDGs) 4 and 5. The main aim of this study was to describe the prevalence and determinants of essential new newborn care practices in the Lawra District of Ghana.\n\n\nMETHODS\nA cross-sectional study was carried out in June 2014 on a sample of 422 lactating mothers and their children aged between 1 and 12 months. A systematic random sampling technique was used to select the study participants who attended post-natal clinic in the Lawra district hospital.\n\n\nRESULTS\nOf the 418 newborns, only 36.8% (154) was judged to have had safe cord care, 34.9% (146) optimal thermal care, and 73.7% (308) were considered to have had adequate neonatal feeding. The overall prevalence of adequate new born care comprising good cord care, optimal thermal care and good neonatal feeding practices was only 15.8%. Mothers who attained at least Senior High Secondary School were 20.5 times more likely to provide optimal thermal care [AOR 22.54; 95% CI (2.60-162.12)], compared to women had no formal education at all. Women who received adequate ANC services were 4.0 times (AOR = 4.04 [CI: 1.53, 10.66]) and 1.9 times (AOR = 1.90 [CI: 1.01, 3.61]) more likely to provide safe cord care and good neonatal feeding as compared to their counterparts who did not get adequate ANC. However, adequate ANC services was unrelated to optimal thermal care. Compared to women who delivered at home, women who delivered their index baby in a health facility were 5.6 times more likely of having safe cord care for their babies (AOR = 5.60, Cl: 1.19-23.30), p = 0.03.\n\n\nCONCLUSIONS\nThe coverage of essential newborn care practices was generally low. 
Essential newborn care practices were positively associated with high maternal educational attainment, adequate utilization of antenatal care services and high maternal knowledge of newborn danger signs. Therefore, greater improvement in essential newborn care practices could be attained through proven low-cost interventions such as effective ANC services, health and nutrition education that should span from community to health facility levels.",
"title": ""
},
{
"docid": "9ce5d15c444d91f8db50a781f438fa29",
"text": "In this paper, we explore the relationship between Facebook users’ privacy concerns, relationship maintenance strategies, and social capital outcomes. Previous research has found a positive relationship between various measures of Facebook use and perceptions of social capital, i.e., one’s access to social and information-based resources. Other research has found that social network site users with high privacy concerns modify their disclosures on the site. However, no research to date has empirically tested how privacy concerns and disclosure strategies interact to influence social capital outcomes. To address this gap in the literature, we explored these questions with survey data (N=230). Findings indicate that privacy concerns and behaviors predict disclosures on Facebook, but not perceptions of social capital. In addition, when looking at predictors of social capital, we identify interaction effects between users’ network composition and their use of privacy features.",
"title": ""
},
{
"docid": "cb33570878c6c66601fb0c73b148a6f3",
"text": "Für die automatisierte Bewertung von Lösungen zu Programmieraufgaben wurde mittlerweile eine Vielzah an Grader-Programmen zu unterschiedlichen Programmiersprachen entwickelt. U m Lernenden wie Lehrenden Zugang zur möglichst vielen Gradern über das gewohn te LMS zu ermöglichen wird das Konzept einer generischen Web-Serviceschni ttstelle (Grappa) vorgestellt, welches im Kontext einer Lehrveranstaltung evaluier t wurde.",
"title": ""
},
{
"docid": "eaf2a943ca3cf2b837eb5c1cae29a37a",
"text": "The natural immune system is a subject of great research interest because of its powerful information processing capabilities. From an informationprocessing perspective, the immune system is a highly parallel system. It provides an excellent model of adaptive processes operating at the local level and of useful behavior emerging at the global level. Moreover, it uses learning, memory, and assodative retrieval to salve recognition and classification tasks. This chapter illustrates different immunological mechanisms and their relation to information processing, and provides an overview of the rapidly emerging field called Artificial Immune Systems. These techniques have been successfully used in pattern recognition, fault detection and diagnosis, computer security, and a variety of other applications.",
"title": ""
},
{
"docid": "b44c6f387fb8ae7084854e0eca27a6fa",
"text": "Static memory management replaces runtime garbage collection with compile-time annotations that make all memory allocation and deallocation explicit in a program. We improve upon the Tofte/Talpin region-based scheme for compile-time memory management[TT94]. In the Tofte/Talpin approach, all values, including closures, are stored in regions. Region lifetimes coincide with lexical scope, thus forming a runtime stack of regions and eliminating the need for garbage collection. We relax the requirement that region lifetimes be lexical. Rather, regions are allocated late and deallocated as early as possible by explicit memory operations. The placement of allocation and deallocation annotations is determined by solving a system of constraints that expresses all possible annotations. Experiments show that our approach reduces memory requirements significantly, in some cases asymptotically.",
"title": ""
},
{
"docid": "5cb5698cd97daa9da2f94f88dc59e8e7",
"text": "Inadvertent exposure of sensitive data is a major concern for potential cloud customers. Much focus has been on other data leakage vectors, such as side channel attacks, while issues of data disposal and assured deletion have not received enough attention to date. However, data that is not properly destroyed may lead to unintended disclosures, in turn, resulting in heavy financial penalties and reputational damage. In non-cloud contexts, issues of incomplete deletion are well understood. To the best of our knowledge, to date, there has been no systematic analysis of assured deletion challenges in public clouds.\n In this paper, we aim to address this gap by analysing assured deletion requirements for the cloud, identifying cloud features that pose a threat to assured deletion, and describing various assured deletion challenges. Based on this discussion, we identify future challenges for research in this area and propose an initial assured deletion architecture for cloud settings. Altogether, our work offers a systematization of requirements and challenges of assured deletion in the cloud, and a well-founded reference point for future research in developing new solutions to assured deletion.",
"title": ""
},
{
"docid": "93447fe368f3d9b8361531ae4c53e082",
"text": "This document has the purpose to present some results obtained in the pilot experience, play and program with Bee-Bot. The activities were developed in the framework of the doctoral research project whose purpose is the design and integration of learning activities with robotics to foster programming skills and computational thinking in the classroom of early childhood. Teachers and students of the second cycle of early childhood education of a concerted school participated in the experience during 2016-2017 academic period. School is in Salamanca, Spain. The activity allowed students to solve programming challenges using the Bee-Bot floor robot. Instruments were used to collect data, such as: questionnaires, interviews, rubrics and field diary. In general terms, the results obtained were positive. The technical, pedagogical and social aspects proposed in this research have received the favorable acceptance of teachers and students. Therefore, the information generated allowed to strengthen the design, structure and evaluation of the robotics program would be used in later stages of the investigation.",
"title": ""
},
{
"docid": "c971a7ced186851f370e0cc6b490a139",
"text": "Point-of-Interest (POI) recommendation has become an important means to help people discover attractive and interesting locations, especially when users travel out of town. However, extreme sparsity of user-POI matrix creates a severe challenge. To cope with this challenge, a growing line of research has exploited the temporal effect, geographical-social influence, content effect and word-of-mouth effect. However, current research lacks an integrated analysis of the joint effect of the above factors to deal with the issue of data-sparsity, especially in the out-of-town recommendation scenario which has been ignored by most existing work.\n In light of the above, we propose a joint probabilistic generative model to mimic user check-in behaviors in a process of decision making, which strategically integrates the above factors to effectively overcome the data sparsity, especially for out-of-town users. To demonstrate the applicability and flexibility of our model, we investigate how it supports two recommendation scenarios in a unified way, i.e., home-town recommendation and out-of-town recommendation. We conduct extensive experiments to evaluate the performance of our model on two real large-scale datasets in terms of both recommendation effectiveness and efficiency, and the experimental results show its superiority over other competitors.",
"title": ""
},
{
"docid": "e35282992a1f5ad3cd4677fb3b35cbed",
"text": "We investigate the use of two visual descriptors: Local Binary Patterns-Three Orthogonal Planes(LBP-TOP) and Dense Trajectories for depression assessment on the AVEC 2014 challenge dataset. We encode the visual information generated by the two descriptors using Fisher Vector encoding which has been shown to be one of the best performing methods to encode visual data for image classification. We also incorporate audio features in the final system to introduce multiple input modalities. The results produced using Linear Support Vector regression outperform the baseline method.",
"title": ""
},
{
"docid": "dc424d2dc407e504d962c557325f035e",
"text": "Document image classification is an important step in Office Automation, Digital Libraries, and other document image analysis applications. There is great diversity in document image classifiers: they differ in the problems they solve, in the use of training data to construct class models, and in the choice of document features and classification algorithms. We survey this diverse literature using three components: the problem statement, the classifier architecture, and performance evaluation. This brings to light important issues in designing a document classifier, including the definition of document classes, the choice of document features and feature representation, and the choice of classification algorithm and learning mechanism. We emphasize techniques that classify single-page typeset document images without using OCR results. Developing a general, adaptable, high-performance classifier is challenging due to the great variety of documents, the diverse criteria used to define document classes, and the ambiguity that arises due to ill-defined or fuzzy document classes.",
"title": ""
},
{
"docid": "26b0038c375eaa619ff584360f401674",
"text": "We examine the code base of the OpenBSD operating system to determine whether its security is increasing over time. We measure the rate at which new code has been introduced and the rate at which vulnerabilities have been reported over the last 7.5 years and fifteen versions. We learn that 61% of the lines of code in today’s OpenBSD are foundational: they were introduced prior to the release of the initial version we studied and have not been altered since. We also learn that 62% of reported vulnerabilities were present when the study began and can also be considered to be foundational. We find strong statistical evidence of a decrease in the rate at which foundational vulnerabilities are being reported. However, this decrease is anything but brisk: foundational vulnerabilities have a median lifetime of at least 2.6 years. Finally, we examined the density of vulnerabilities in the code that was altered/introduced in each version. The densities ranged from 0 to 0.033 vulnerabilities reported per thousand lines of code. These densities will increase as more vulnerabilities are reported. ∗This work is sponsored by the I3P under Air Force Contract FA8721-05-0002. Opinions, interpretations, conclusions and recommendations are those of the author(s) and are not necessarily endorsed by the United States Government. †This work was produced under the auspices of the Institute for Information Infrastructure Protection (I3P) research program. The I3P is managed by Dartmouth College, and supported under Award number 2003-TK-TX-0003 from the U.S. Department of Homeland Security, Science and Technology Directorate. Points of view in this document are those of the authors and do not necessarily represent the official position of the U.S. Department of Homeland Security, the Science and Technology Directorate, the I3P, or Dartmouth College. ‡Currently at the University of Cambridge",
"title": ""
},
{
"docid": "58ba2ac85d041626d6fe361bd0578c2f",
"text": "This paper concerns open-world classification, where the classifier not only needs to classify test examples into seen classes that have appeared in training but also reject examples from unseen or novel classes that have not appeared in training. Specifically, this paper focuses on discovering the hidden unseen classes of the rejected examples. Clearly, without prior knowledge this is difficult. However, we do have the data from the seen training classes, which can tell us what kind of similarity/difference is expected for examples from the same class or from different classes. It is reasonable to assume that this knowledge can be transferred to the rejected examples and used to discover the hidden unseen classes in them. This paper aims to solve this problem. It first proposes a joint open classification model with a sub-model for classifying whether a pair of examples belongs to the same or different classes. This sub-model can serve as a distance function for clustering to discover the hidden classes of the rejected examples. Experimental results show that the proposed model is highly promising.",
"title": ""
},
{
"docid": "5260bef54f0499fbf7c4edc416c75f89",
"text": "The security of computer systems has become essential especially in front of the critical issues of cyber-attacks that can result in the compromise of these systems because any act on a system intends to harm one of the security properties (Confidentiality, Integrity and availability). Studying computer attacks is also essential to design models of attacks aiming to protect against these attacks by modeling the last ones. The development of taxonomies leads to characterize and classify the attacks which lead to understand them. In computer's security taxonomy, we can have two broad categories: cyber-attack's taxonomy and cyber-security taxonomy. In this paper, we propose taxonomy for cyber-attacks according to an attacker vision and the aspect of achieving an attack. This taxonomy is based on 4 dimensions: Attack vector, Result, Type and Target. To generalize our approach, we have used the framework of the Discrete EVent system Specification DEVS. This framework depicts the overall vision of cyber-attacks. To partially validate our work, a simulation is done on a case study of buffer overflow. A DEVS model is described and a simulation is done via this formalism. This case study aims to reinforce our proposal.",
"title": ""
},
{
"docid": "452a0765f74fd4301938fb8461cce563",
"text": "Falls are the primary cause of accidents among the elderly and frequently cause fatal and non-fatal injuries associated with a large amount of medical costs. Fall detection using wearable wireless sensor nodes has the potential of improving elderly telecare. This investigation proposes a ZigBee-based location-aware fall detection system for elderly telecare that provides an unobstructed communication between the elderly and caregivers when falls happen. The system is based on ZigBee-based sensor networks, and the sensor node consists of a motherboard with a tri-axial accelerometer and a ZigBee module. A wireless sensor node worn on the waist continuously detects fall events and starts an indoor positioning engine as soon as a fall happens. In the fall detection scheme, this study proposes a three-phase threshold-based fall detection algorithm to detect critical and normal falls. The fall alarm can be canceled by pressing and holding the emergency fall button only when a normal fall is detected. On the other hand, there are three phases in the indoor positioning engine: path loss survey phase, Received Signal Strength Indicator (RSSI) collection phase and location calculation phase. Finally, the location of the faller will be calculated by a k-nearest neighbor algorithm with weighted RSSI. The experimental results demonstrate that the fall detection algorithm achieves 95.63% sensitivity, 73.5% specificity, 88.62% accuracy and 88.6% precision. Furthermore, the average error distance for indoor positioning is 1.15 ± 0.54 m. The proposed system successfully delivers critical information to remote telecare providers who can then immediately help a fallen person.",
"title": ""
}
] |
scidocsrr
|
28d3b4dc16f47c32f28420a2dadd1e5e
|
Gorillas in our midst: sustained inattentional blindness for dynamic events.
|
[
{
"docid": "e997f8468d132f1e28e0d6a8801f6fb1",
"text": "Change-blindness occurs when large changes are missed under natural viewing conditions because they occur simultaneously with a brief visual disruption, perhaps caused by an eye movement, a flicker, a blink, or a camera cut in a film sequence. We have found that this can occur even when the disruption does not cover or obscure the changes. When a few small, high-contrast shapes are briefly spattered over a picture, like mudsplashes on a car windscreen, large changes can be made simultaneously in the scene without being noticed. This phenomenon is potentially important in driving, surveillance or navigation, as dangerous events occurring in full view can go unnoticed if they coincide with even very small, apparently innocuous, disturbances. It is also important for understanding how the brain represents the world.",
"title": ""
}
] |
[
{
"docid": "4aa1e87816ea5850339611d242edb1f4",
"text": "A scientific understanding of emotion experience requires information on the contexts in which the emotion is induced. Moreover, as one of the primary functions of music is to regulate the listener's mood, the individual's short-term music preference may reveal the emotional state of the individual. In light of these observations, this paper presents the first scientific study that exploits the online repository of social data to investigate the connections between a blogger's emotional state, user context manifested in the blog articles, and the content of the music titles the blogger attached to the post. A number of computational models are developed to evaluate the accuracy of different content or context cues in predicting emotional state, using 40,000 pieces of music listening records collected from the social blogging website LiveJournal. Our study shows that it is feasible to computationally model the latent structure underlying music listening and mood regulation. The average area under the receiver operating characteristic curve (AUC) for the content-based and context-based models attains 0.5462 and 0.6851, respectively. The association among user mood, music emotion, and individual's personality is also identified.",
"title": ""
},
{
"docid": "30ffdf90936f4b3c8feba45ae1449691",
"text": "Given a graph with node attributes, what neighborhoods are anomalous? To answer this question, one needs a quality score that utilizes both structure and attributes. Popular existing measures either quantify the structure only and ignore the attributes (e.g., conductance), or only consider the connectedness of the nodes inside the neighborhood and ignore the cross-edges at the boundary (e.g., density). In this work we propose normality, a new quality measure for attributed neighborhoods. Normality utilizes structure and attributes together to quantify both internal consistency and external separability. It exhibits two key advantages over other measures: (1) It allows many boundary edges as long as they can be “exonerated”; i.e., either (i) are expected under a null model, and/or (ii) the boundary nodes do not exhibit the subset of attributes shared by the neighborhood members. Existing measures, in contrast, penalize boundary edges irrespectively. (2) Normality can be efficiently maximized to automatically infer the shared attribute subspace (and respective weights) that characterize a neighborhood. This efficient optimization allows us to process graphs with millions of attributes. We capitalize on our measure to present a novel approach for Anomaly Mining of Entity Neighborhoods (AMEN). Experiments on real-world attributed graphs illustrate the effectiveness of our measure at anomaly detection, outperforming popular approaches including conductance, density, OddBall, and SODA. In addition to anomaly detection, our qualitative analysis demonstrates the utility of normality as a powerful tool to contrast the correlation between structure and attributes across different graphs.",
"title": ""
},
{
"docid": "e79a335fb5dc6e2169484f8ac4130b35",
"text": "We obtained expressions for TE and TM modes of the planar hyperbolic secant (HS) waveguide. We found waveguide parameters for which the fundamental mode has minimal width. By FDTD-simulation we show propagation of TE-modes and periodical reconstruction of non-modal fields in bounded HS-waveguides. We show that truncated HS-waveguide focuses plane wave into spot with diameter 0.132 of wavelength.",
"title": ""
},
{
"docid": "1e5202850748b0f613807b0452eb89a2",
"text": "This paper introduces a hierarchical image merging scheme based on a multiresolution contrast decomposition (the ratio of low-pass pyramid). The composite images produced by this scheme preserve those details from the input images that are most relevant to visual perception. Some applications of the method are indicated.",
"title": ""
},
{
"docid": "eba769c6246b44d8ed7e5f08aac17731",
"text": "One hundred men, living in three villages in a remote region of the Eastern Highlands of Papua New Guinea were asked to judge the attractiveness of photographs of women who had undergone micrograft surgery to reduce their waist-to-hip ratios (WHRs). Micrograft surgery involves harvesting adipose tissue from the waist and reshaping the buttocks to produce a low WHR and an \"hourglass\" female figure. Men consistently chose postoperative photographs as being more attractive than preoperative photographs of the same women. Some women gained, and some lost weight, postoperatively, with resultant changes in body mass index (BMI). However, changes in BMI were not related to men's judgments of attractiveness. These results show that the hourglass female figure is rated as attractive by men living in a remote, indigenous community, and that when controlling for BMI, WHR plays a crucial role in their attractiveness judgments.",
"title": ""
},
{
"docid": "3b1addbef50c5020b88ae2e55c197085",
"text": "In this paper, we present a novel wide-band envelope detector comprising a fully-differential operational transconductance amplifier (OTA), a full-wave rectifier and a peak detector. To enhance the frequency performance of the envelope detector, we utilize a gyrator-C active inductor load in the OTA for wider bandwidth. Additionally, it is shown that the high-speed rectifier of the envelope detector requires high bias current instead of the sub-threshold bias condition. The experimental results show that the proposed envelope detector can work from 100-Hz to 1.6-GHz with an input dynamic range of 50-dB at 100-Hz and 40-dB at 1.6-GHz, respectively. The envelope detector was fabricated on the TSMC 0.18-um CMOS process with an active area of 0.652 mm2.",
"title": ""
},
{
"docid": "4162c6bbaac397ff24e337fa4af08abd",
"text": "We present a new model called LATTICERNN, which generalizes recurrent neural networks (RNNs) to process weighted lattices as input, instead of sequences. A LATTICERNN can encode the complete structure of a lattice into a dense representation, which makes it suitable to a variety of problems, including rescoring, classifying, parsing, or translating lattices using deep neural networks (DNNs). In this paper, we use LATTICERNNs for a classification task: each lattice represents the output from an automatic speech recognition (ASR) component of a spoken language understanding (SLU) system, and we classify the intent of the spoken utterance based on the lattice embedding computed by a LATTICERNN. We show that making decisions based on the full ASR output lattice, as opposed to 1-best or n-best hypotheses, makes SLU systems more robust to ASR errors. Our experiments yield improvements of 13% over a baseline RNN system trained on transcriptions and 10% over an nbest list rescoring system for intent classification.",
"title": ""
},
{
"docid": "ced98c32f887001d40e783ab7b294e1a",
"text": "This paper proposes a two-layer High Dynamic Range (HDR) coding scheme using a new tone mapping. Our tone mapping method transforms an HDR image onto a Low Dynamic Range (LDR) image by using a base map that is a smoothed version of the HDR luminance. In our scheme, the HDR image can be reconstructed from the tone mapped LDR image. Our method makes use of this property to realize a two-layer HDR coding by encoding both of the tone mapped LDR image and the base map. This paper validates its effectiveness of our approach through some experiments.",
"title": ""
},
{
"docid": "226392eec365706465eb9937b07f16b1",
"text": "Current evidence suggests that all of the major events in hominin evolution have occurred in East Africa. Over the last two decades, there has been intensive work undertaken to understand African palaeoclimate and tectonics in order to put together a coherent picture of how the environment of East Africa has varied in the past. The landscape of East Africa has altered dramatically over the last 10 million years. It has changed from a relatively flat, homogenous region covered with mixed tropical forest, to a varied and heterogeneous environment, with mountains over 4 km high and vegetation ranging from desert to cloud forest. The progressive rifting of East Africa has also generated numerous lake basins, which are highly sensitive to changes in the local precipitation-evaporation regime. There is now evidence that the presence of precession-driven, ephemeral deep-water lakes in East Africa were concurrent with major events in hominin evolution. It seems the unusual geology and climate of East Africa created periods of highly variable local climate, which, it has been suggested could have driven hominin speciation, encephalisation and dispersal out of Africa. One example is the significant hominin speciation and brain expansion event at ~1.8 Ma that seems to have been coeval with the occurrence of highly variable, extensive, deep-water lakes. This complex, climatically very variable setting inspired first the variability selection hypothesis, which was then the basis for the pulsed climate variability hypothesis. The newer of the two suggests that the long-term drying trend in East Africa was punctuated by episodes of short, alternating periods of extreme humidity and aridity. Both hypotheses, together with other key theories of climate-evolution linkages, are discussed in this paper. Though useful the actual evolution mechanisms, which led to early hominins are still unclear and continue to be debated. 
However, it is clear that an understanding of East African lakes and their palaeoclimate history is required to understand the context within which humans evolved and eventually left East Africa.",
"title": ""
},
{
"docid": "46326a60018e55397ecdc23a67afdc01",
"text": "Human communication includes information, opinions and reactions. Reactions are often captured by the affective-messages in written as well as verbal communications. While there has been work in affect modeling and to some extent affective content generation, the area of affective word distributions is not well studied. Synsets and lexica capture semantic relationships across words. These models, however, lack in encoding affective or emotional word interpretations. Our proposed model, Aff2Vec, provides a method for enriched word embeddings that are representative of affective interpretations of words. Aff2Vec outperforms the state-of-the-art in intrinsic word-similarity tasks. Further, the use of Aff2Vec representations outperforms baseline embeddings in downstream natural language understanding tasks including sentiment analysis, personality detection, and frustration prediction.",
"title": ""
},
{
"docid": "c7a15659f2fe5f67da39b77a3eb19549",
"text": "Privacy breaches and their regulatory implications have attracted corporate attention in recent times. An often overlooked cause of privacy breaches is human error. In this study, we first apply a model based on the widely accepted GEMS error typology to analyze publicly reported privacy breach incidents within the U.S. Then, based on an examination of the causes of the reported privacy breach incidents, we propose a defense-in-depth solution strategy founded on error avoidance, error interception, and error correction. Finally, we illustrate the application of the proposed strategy to managing human error in the case of the two leading causes of privacy breach incidents. This study finds that mistakes in the information processing stage constitute most of the cases of human error-related privacy breach incidents, clearly highlighting the need for effective policies and their enforcement in organizations.",
"title": ""
},
{
"docid": "e4227748f8fd9704aba160669dcdef52",
"text": "Broadly, artificial intelligence (AI) mainly entails technology constellations such as machine learning, natural language processing, perception, and reasoning since it is difficult to define [1]. Even though the field’s application and principles have undergone investigation for more than sixty-five years, modern improvements, attendant society excitement, and uses ensured its return to focus. The influence of the previous artificial intelligence systems is evident, introducing both opportunities and challenges, which enables the integration of future AI advances into the economic and social environments. It is apparent that most people today view AI as a robotics concept but it essentially incorporates broader technology ranges that are used widely [2]. From search engines to speech recognition, to learning/gaming structures and object detection, AI application has the potential to intensify in the human daily lives. The application is already experiencing use in the world of business as companies seek to study the needs of the consumers, as well as, other fields including healthcare and crime investigation. In this paper, I will discuss the perceptions of consumers regarding artificial intelligence and outline its impact in retail, healthcare, crime investigation, and employment.",
"title": ""
},
{
"docid": "4791e1e3ccde1260887d3a80ea4577b6",
"text": "The fabulous results of Deep Convolution Neural Networks in computer vision and image analysis have recently attracted considerable attention from researchers of other application domains as well. In this paper we present NgramCNN, a neural network architecture we designed for sentiment analysis of long text documents. It uses pretrained word embeddings for dense feature representation and a very simple single-layer classifier. The complexity is encapsulated in feature extraction and selection parts that benefit from the effectiveness of convolution and pooling layers. For evaluation we utilized different kinds of emotional text datasets and achieved an accuracy of 91.2 % on the popular IMDB movie reviews. NgramCNN is more accurate than similar shallow convolution networks or deeper recurrent networks that were used as baselines. In the future, we intend to generalize the architecture for state of the art results in sentiment analysis of variable-length texts.",
"title": ""
},
{
"docid": "8787335d8f5a459dc47b813fd385083b",
"text": "Human papillomavirus infection can cause a variety of benign or malignant oral lesions, and the various genotypes can cause distinct types of lesions. To our best knowledge, there has been no report of 2 different human papillomavirus-related oral lesions in different oral sites in the same patient before. This paper reported a patient with 2 different oral lesions which were clinically and histologically in accord with focal epithelial hyperplasia and oral papilloma, respectively. Using DNA extracted from these 2 different lesions, tissue blocks were tested for presence of human papillomavirus followed by specific polymerase chain reaction testing for 6, 11, 13, 16, 18, and 32 subtypes in order to confirm the clinical diagnosis. Finally, human papillomavirus-32-positive focal epithelial hyperplasia accompanying human papillomavirus-16-positive oral papilloma-like lesions were detected in different sites of the oral mucosa. Nucleotide sequence sequencing further confirmed the results. So in our clinical work, if the simultaneous occurrences of different human papillomavirus associated lesions are suspected, the multiple biopsies from different lesions and detection of human papillomavirus genotype are needed to confirm the diagnosis.",
"title": ""
},
{
"docid": "9abd7aedf336f32abed7640dd3f4d619",
"text": "BACKGROUND\nAlthough evidence-based and effective treatments are available for people with depression, a substantial number does not seek or receive help. Therefore, it is important to gain a better understanding of the reasons why people do or do not seek help. This study examined what predisposing and need factors are associated with help-seeking among people with major depression.\n\n\nMETHODS\nA cross-sectional study was conducted in 102 subjects with major depression. Respondents were recruited from the general population in collaboration with three Municipal Health Services (GGD) across different regions in the Netherlands. Inclusion criteria were: being aged 18 years or older, a high score on a screening instrument for depression (K10 > 20), and a diagnosis of major depression established through the Composite International Diagnostic Interview (CIDI 2.1).\n\n\nRESULTS\nOf the total sample, 65 % (n = 66) had received help in the past six months. Results showed that respondents with a longer duration of symptoms and those with lower personal stigma were more likely to seek help. Other determinants were not significantly related to help-seeking.\n\n\nCONCLUSIONS\nLonger duration of symptoms was found to be an important determinant of help-seeking among people with depression. It is concerning that stigma was related to less help-seeking. Knowledge and understanding of depression should be promoted in society, hopefully leading to reduced stigma and increased help-seeking.",
"title": ""
},
{
"docid": "c1672220aef9aa7a6257d8ff644ae378",
"text": "We present Component-Based Simplex Architecture (CBSA), a new framework for assuring the runtime safety of component-based cyber-physical systems (CPSs). CBSA integrates Assume-Guarantee (A-G) reasoning with the core principles of the Simplex control architecture to allow component-based CPSs to run advanced, uncertified controllers while still providing runtime assurance that A-G contracts and global properties are satisfied. In CBSA, multiple Simplex instances, which can be composed in a nested, serial or parallel manner, coordinate to assure system-wide properties. Combining A-G reasoning and the Simplex architecture is a challenging problem that yields significant benefits. By utilizing A-G contracts, we are able to compositionally determine the switching logic for CBSAs, thereby alleviating the state explosion encountered by other approaches. Another benefit is that we can use A-G proof rules to decompose the proof of system-wide safety assurance into sub-proofs corresponding to the component-based structure of the system architecture. We also introduce the notion of coordinated switching between Simplex instances, a key component of our compositional approach to reasoning about CBSA switching logic. We illustrate our framework with a component-based control system for a ground rover. We formally prove that the CBSA for this system guarantees energy safety (the rover never runs out of power), and collision freedom (the rover never collides with a stationary obstacle). We also consider a CBSA for the rover that guarantees mission completion: all target destinations visited within a prescribed amount of time.",
"title": ""
},
{
"docid": "84e8986eff7cb95808de8df9ac286e37",
"text": "The purpose of this thesis is to describe one-shot-learning gesture recognition systems developed on the ChaLearn Gesture Dataset [3]. We use RGB and depth images and combine appearance (Histograms of Oriented Gradients) and motion descriptors (Histogram of Optical Flow) for parallel temporal segmentation and recognition. The Quadratic-Chi distance family is used to measure differences between histograms to capture cross-bin relationships. We also propose a new algorithm for trimming videos — to remove all the unimportant frames from videos. Our two methods both outperform other published methods and help narrow down the gap between human performance and algorithms on this task. The code has been made publicly available in the MLOSS repository.",
"title": ""
},
{
"docid": "670ad989fb45d87b898aafe571bac3a9",
"text": "As an emerging technology to support scalable content-based image retrieval (CBIR), hashing has recently received great attention and became a very active research domain. In this study, we propose a novel unsupervised visual hashing approach called semantic-assisted visual hashing (SAVH). Distinguished from semi-supervised and supervised visual hashing, its core idea is to effectively extract the rich semantics latently embedded in auxiliary texts of images to boost the effectiveness of visual hashing without any explicit semantic labels. To achieve the target, a unified unsupervised framework is developed to learn hash codes by simultaneously preserving visual similarities of images, integrating the semantic assistance from auxiliary texts on modeling high-order relationships of inter-images, and characterizing the correlations between images and shared topics. Our performance study on three publicly available image collections: Wiki, MIR Flickr, and NUS-WIDE indicates that SAVH can achieve superior performance over several state-of-the-art techniques.",
"title": ""
},
{
"docid": "5325beaeca7307b20d18b0ce79a2819e",
"text": "It is becoming increasingly necessary for organizations to build a Cyber Threat Intelligence (CTI) platform to fight against sophisticated attacks. To reduce the risk of cyber attacks, security administrators and/or analysts can use a CTI platform to aggregate relevant threat information about adversaries, targets and vulnerabilities, analyze it and share key observations from the analysis with collaborators. In this paper, we introduce CyTIME (Cyber Threat Intelligence ManagEment framework) which is a framework for managing CTI data. CyTIME can periodically collect CTI data from external CTI data repositories via standard interfaces such as Trusted Automated Exchange of Indicator Information (TAXII). In addition, CyTIME is designed to automatically generate security rules without human intervention to mitigate discovered new cybersecurity threats in real time. To show the feasibility of CyTIME, we performed experiments to measure the time to complete the task of generating the security rule corresponding to a given CTI data. We used 1,000 different CTI files related to network attacks. Our experiment results demonstrate that CyTIME automatically generates security rules and store them into the internal database within 12.941 seconds on average (max = 13.952, standard deviation = 0.580).",
"title": ""
},
{
"docid": "0745755e5347c370cdfbeca44dc6d288",
"text": "For many decades correlation and power spectrum have been primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence; which is sufficient for complete statistical descriptions of Gaussian signals of known means. However, there are practical situations where one needs to look beyond autocorrelation of a signal to extract information regarding deviation from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in the signal. Most of the biomedical signals are non-linear, non-stationary and non-Gaussian in nature and therefore it can be more advantageous to analyze them with HOS compared to the use of second-order correlations and power spectra. In this paper we have discussed the application of HOS for different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal and applications to other signals are reviewed.",
"title": ""
}
] |
scidocsrr
|
69cbaef53a5a576da73958ed652fc884
|
Data-aware process mining: discovering decisions in processes using alignments
|
[
{
"docid": "b3112fd3f8bfb5e4a235e17287a2ed50",
"text": "The growing complexity of processes in many organizations stimulates the adoption of business process analysis techniques. Typically, such techniques are based on process models and assume that the operational processes in reality conform to these models. However, experience shows that reality often deviates from hand-made models. Therefore, the problem of checking to what extent the operational process conforms to the process model is important for process management, process improvement, and compliance. In this paper, we present a robust replay analysis technique that is able to measure the conformance of an event log for a given process model. The approach quantifies conformance and provides intuitive diagnostics (skipped and inserted activities). Our technique has been implemented in the ProM 6 framework. Comparative evaluations show that the approach overcomes many of the limitations of existing conformance checking techniques.",
"title": ""
},
{
"docid": "86b12f890edf6c6561536a947f338feb",
"text": "Looking for qualified reading resources? We have process mining discovery conformance and enhancement of business processes to check out, not only review, yet also download them or even read online. Discover this great publication written by now, simply right here, yeah just right here. Obtain the data in the sorts of txt, zip, kindle, word, ppt, pdf, as well as rar. Once again, never ever miss out on to read online as well as download this publication in our site here. Click the link. Our goal is always to offer you an assortment of cost-free ebooks too as aid resolve your troubles. We have got a considerable collection of totally free of expense Book for people from every single stroll of life. We have got tried our finest to gather a sizable library of preferred cost-free as well as paid files.",
"title": ""
}
] |
[
{
"docid": "157b5612644d4d7e1818932108d9119b",
"text": "This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNN). We consider learning binary semantic attributes through a multi-task CNN model, where each CNN will predict one binary attribute. The multi-task learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN will generate attribute-specific feature representations, and then we apply multi-task learning on the features to predict their attributes. In our multi-task framework, we propose a method to decompose the overall model's parameters into a latent task matrix and combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets.",
"title": ""
},
{
"docid": "39862224afffd60e8ef93e070ceec67e",
"text": "In the last decade, pattern recognition methods using neuroimaging data for the diagnosis of Alzheimer's disease (AD) have been the subject of extensive research. Deep learning has recently attracted great interest in AD classification. Previous works were mostly done on single-modality datasets, such as Magnetic Resonance Imaging (MRI) or Positron Emission Tomography (PET), and showed high performance. However, identifying the distinctions between Alzheimer's brain data and healthy brain data in older adults (age > 75) is challenging due to highly similar brain patterns and image intensities. The incorporation of multiple modalities can solve this issue since it discovers and uses complementary hidden biomarkers from other modalities, which a single modality by itself cannot provide. We therefore propose a deep learning method on fused multimodalities. In detail, our approach includes a Sparse Autoencoder (SAE) and a convolutional neural network (CNN) trained and tested on combined PET-MRI data to diagnose the disease status of a patient. We focus on the advantages of multiple modalities in providing complementary information, leading to improved classification accuracy. We conducted experiments on a dataset of 1272 scans from the ADNI study; the proposed method can achieve a classification accuracy of 90% between AD patients and healthy controls, demonstrating an improvement over using only one modality.",
"title": ""
},
{
"docid": "b06701739f8aa8c163101a91863cb523",
"text": "This paper presents a new approach for hiding information in speech signals. In this method, the silence intervals of speech are found and the length (number of samples) of these intervals is changed to hide information. This method can be used simultaneously with other methods.",
"title": ""
},
{
"docid": "88a15c0efdfeba3e791ea88862aee0c3",
"text": "Logic-based approaches to legal problem solving model the rule-governed nature of legal argumentation, justification, and other legal discourse but suffer from two key obstacles: the absence of efficient, scalable techniques for creating authoritative representations of legal texts as logical expressions; and the difficulty of evaluating legal terms and concepts in terms of the language of ordinary discourse. Data-centric techniques can be used to finesse the challenges of formalizing legal rules and matching legal predicates with the language of ordinary parlance by exploiting knowledge latent in legal corpora. However, these techniques typically are opaque and unable to support the rule-governed discourse needed for persuasive argumentation and justification. This paper distinguishes representative legal tasks to which each approach appears to be particularly well suited and proposes a hybrid model that exploits the complementarity of each.",
"title": ""
},
{
"docid": "1a9e2481abf23501274e67575b1c9be6",
"text": "The multiple criteria decision making (MCDM) methods VIKOR and TOPSIS are based on an aggregating function representing “closeness to the idealâ€, which originated in the compromise programming method. In VIKOR linear normalization and in TOPSIS vector normalization is used to eliminate the units of criterion functions. The VIKOR method of compromise ranking determines a compromise solution, providing a maximum “group utility†for the “majority†and a minimum of an individual regret for the “opponentâ€. The TOPSIS method determines a solution with the shortest distance to the ideal solution and the greatest distance from the negative-ideal solution, but it does not consider the relative importance of these distances. A comparative analysis of these two methods is illustrated with a numerical example, showing their similarity and some differences. a, 1 b Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
},
{
"docid": "576091bb08f9a37e0be8c38294e155e3",
"text": "This research will demonstrate hacking techniques on the modern automotive network and describe the design and implementation of a benchtop simulator. In currently-produced vehicles, the primary network is based on the Controller Area Network (CAN) bus described in the ISO 11898 family of protocols. The CAN bus performs well in the electronically noisy environment found in the modern automobile. While the CAN bus is ideal for the exchange of information in this environment, when the protocol was designed security was not a priority due to the presumed isolation of the network. That assumption has been invalidated by recent, well-publicized attacks where hackers were able to remotely control an automobile, leading to a product recall that affected more than a million vehicles. The automobile has a multitude of electronic control units (ECUs) which are interconnected with the CAN bus to control the various systems which include the infotainment, light, and engine systems. The CAN bus allows the ECUs to share information along a common bus which has led to improvements in fuel and emission efficiency, but has also introduced vulnerabilities by giving access on the same network to cyber-physical systems (CPS). These CPS systems include the anti-lock braking systems (ABS) and on late model vehicles the ability to turn the steering wheel and control the accelerator. Testing functionality on an operational vehicle can be dangerous and place others in harm's way, but simulating the vehicle network and functionality of the ECUs on a bench-top system provides a safe way to test for vulnerabilities and to test possible security solutions to prevent CPS access over the CAN bus network. This paper will describe current research on the automotive network, provide techniques in capturing network traffic for playback, and demonstrate the design and implementation of a benchtop system for continued research on the CAN bus.",
"title": ""
},
{
"docid": "90b21e8edcb993f472fe516dff22ae84",
"text": "Urticaria is a kind of skin rash that sometimes caused by allergic reactions. Acute viral infection, stress, pressure, exercise and sunlight are some other causes of urticaria. However, chronic urticaria and angioedema could be either idiopathic or caused by autoimmune reaction. They last more than six weeks and could even persist for a very long time. It is thought that the level of C-reactive protein CRP increases and the level of Erythrocyte sedimentation rate (ESR) decreases in patients with chronic urticaria. Thirty four patients with chronic or recurrent urticaria were selected for the treatment with wet cupping. Six of them, because of having a history of recent infection/cold urticaria, were eliminated and the remaining 28 were chosen for this study. ESR and CRP were measured in these patients aged 21-59, comprising 12 females and 16 males, ranged from 5-24 mm/h for ESR with a median 11 mm/h and 3.3-31.2 mg/L with a median of 11.95 mg/L for CRP before and after phlebotomy (250-450mL) which was performed as a control for wet cupping therapy. Three weeks after phlebotomy, wet cupping was performed on the back of these patients between two shoulders and the levels of ESR and CRP were measured again three weeks after wet cupping. The changes were observed in the level of CRP and ESR after phlebotomy being negligible. However, the level of CRP with a median 11.95 before wet cupping dramatically dropped to 1.1 after wet cupping. The level ESR also with a median 11 before wet cupping rose to 15.5 after wet cupping therapy. The clear correlation between the urticaria/angioedema and the rise of CRP was observed as was anticipated. No recurrence has been observed on twenty five of these patients and three of them are still recovering from the lesions.",
"title": ""
},
{
"docid": "95d1a35068e7de3293f8029e8b8694f9",
"text": "Botnet is one of the major threats on the Internet for committing cybercrimes, such as DDoS attacks, stealing sensitive information, spreading spams, etc. It is a challenging issue to detect modern botnets that are continuously improving for evading detection. In this paper, we propose a machine learning based botnet detection system that is shown to be effective in identifying P2P botnets. Our approach extracts convolutional version of effective flow-based features, and trains a classification model by using a feed-forward artificial neural network. The experimental results show that the accuracy of detection using the convolutional features is better than the ones using the traditional features. It can achieve 94.7% of detection accuracy and 2.2% of false positive rate on the known P2P botnet datasets. Furthermore, our system provides an additional confidence testing for enhancing performance of botnet detection. It further classifies the network traffic of insufficient confidence in the neural network. The experiment shows that this stage can increase the detection accuracy up to 98.6% and decrease the false positive rate up to 0.5%.",
"title": ""
},
{
"docid": "0798ed2ff387823bcd7572a9ddf6a5e1",
"text": "We present a novel algorithm for point cloud segmentation using group convolutions. Our approach uses a radial basis function (RBF) based variational autoencoder (VAE) network. We transform unstructured point clouds into regular voxel grids and use subvoxels within each voxel to encode the local geometry using a VAE architecture. In order to handle sparse distribution of points within each voxel, we use RBF to compute a local, continuous representation within each subvoxel. We extend group equivariant convolutions to 3D point cloud processing and increase the expressive capacity of the neural network. The combination of RBF and VAE results in a good volumetric representation that can handle noisy point cloud datasets and is more robust for learning. We highlight the performance on standard benchmarks and compare with prior methods. In practice, our approach outperforms state-of-the-art segmentation algorithms on the ShapeNet and S3DIS datasets.",
"title": ""
},
{
"docid": "54ec681832cd276b6641f7e7e08205a7",
"text": "In this paper, we proposed PRPRS (Personalized Research Paper Recommendation System) that designed expansively and implemented a UserProfile-based algorithm for extracting keyword by keyword extraction and keyword inference. If the papers don't have keyword section, we consider the title and text as an argument of keyword and execute the algorithm. Then, we create the possible combination from each word of title. We extract the combinations presented in the main text among the longest word combinations which include the same words. If the number of extracted combinations is more than the standard number, we used that combination as keyword. Otherwise, we refer the main text and extract combination as much as standard in order of high Term-Frequency. Whenever collected research papers by topic are selected, a renewal of UserProfile increases the frequency of each Domain, Topic and keyword. Each ratio of occurrence is recalculated and reflected on UserProfile. PRPRS calculates the similarity between given topic and collected papers by using Cosine Similarity which is used to recommend initial paper for each topic in Information retrieval. We measured satisfaction and accuracy for each system-recommended paper to test and evaluated performances of the suggested system. Finally PRPRS represents high level of satisfaction and accuracy.",
"title": ""
},
{
"docid": "bbe3551f2ed95dc2ca08dcff67186fba",
"text": "A high-dimensional shape transformation posed in a mass-preserving framework is used as a morphological signature of a brain image. Population differences with complex spatial patterns are then determined by applying a nonlinear support vector machine (SVM) pattern classification method to the morphological signatures. Significant reduction of the dimensionality of the morphological signatures is achieved via wavelet decomposition and feature reduction methods. Applying the method to MR images with simulated atrophy shows that the method can correctly detect subtle and spatially complex atrophy, even when the simulated atrophy represents only a 5% variation from the original image. Applying this method to actual MR images shows that brains can be correctly determined to be male or female with a successful classification rate of 97%, using the leave-one-out method. This proposed method also shows a high classification rate for old adults' age classification, even under difficult test scenarios. The main characteristic of the proposed methodology is that, by applying multivariate pattern classification methods, it can detect subtle and spatially complex patterns of morphological group differences which are often not detectable by voxel-based morphometric methods, because these methods analyze morphological measurements voxel-by-voxel and do not consider the entirety of the data simultaneously.",
"title": ""
},
{
"docid": "86357a666bc949b0c6d314563634ddbd",
"text": "We propose a novel method for template matching in unconstrained environments. Its essence is the Best-Buddies Similarity (BBS), a useful, robust, and parameter-free similarity measure between two sets of points. BBS is based on counting the number of Best-Buddies Pairs (BBPs)-pairs of points in source and target sets, where each point is the nearest neighbor of the other. BBS has several key features that make it robust against complex geometric deformations and high levels of outliers, such as those arising from background clutter and occlusions. We study these properties, provide a statistical analysis that justifies them, and demonstrate the consistent success of BBS on a challenging real-world dataset.",
"title": ""
},
{
"docid": "1489207c35a613d38a4f9c06816604f0",
"text": "Switching common-mode voltage (CMV) generated by the pulse width modulation (PWM) of the inverter causes common-mode currents, which lead to motor bearing failures and electromagnetic interference problems in multiphase drives. Such switching CMV can be reduced by taking advantage of the switching states of multilevel multiphase inverters that produce zero CMV. Specific space-vector PWM (SVPWM) techniques with CMV elimination, which only use zero CMV states, have been proposed for three-level five-phase drives, and for open-end winding five-, six-, and seven-phase drives, but such methods cannot be extended to a higher number of levels or phases. This paper presents a general (for any number of levels and phases) SVPMW with CMV elimination. The proposed technique can be applied to most multilevel topologies, has low computational complexity and is suitable for low-cost hardware implementations. The new algorithm is implemented in a low-cost field-programmable gate array and it is successfully tested in the laboratory using a five-level five-phase motor drive.",
"title": ""
},
{
"docid": "60d807b2bbd3106a0e359c66805b403a",
"text": "The existing word representation methods mostly limit their information source to word co-occurrence statistics. In this paper, we introduce ngrams into four representation methods: SGNS, GloVe, PPMI matrix, and its SVD factorization. Comprehensive experiments are conducted on word analogy and similarity tasks. The results show that improved word representations are learned from ngram cooccurrence statistics. We also demonstrate that the trained ngram representations are useful in many aspects such as finding antonyms and collocations. Besides, a novel approach of building co-occurrence matrix is proposed to alleviate the hardware burdens brought by ngrams.",
"title": ""
},
{
"docid": "8c9f82b50cd541ed0efe1089b098e426",
"text": "This paper explores the intersection of emerging surface technologies, capable of sensing multiple contacts and of-ten shape information, and advanced games physics engines. We define a technique for modeling the data sensed from such surfaces as input within a physics simulation. This affords the user the ability to interact with digital objects in ways analogous to manipulation of real objects. Our technique is capable of modeling both multiple contact points and more sophisticated shape information, such as the entire hand or other physical objects, and of mapping this user input to contact forces due to friction and collisions within the physics simulation. This enables a variety of fine-grained and casual interactions, supporting finger-based, whole-hand, and tangible input. We demonstrate how our technique can be used to add real-world dynamics to interactive surfaces such as a vision-based tabletop, creating a fluid and natural experience. Our approach hides from application developers many of the complexities inherent in using physics engines, allowing the creation of applications without preprogrammed interaction behavior or gesture recognition.",
"title": ""
},
{
"docid": "e6567825361e13418a101919cdccce96",
"text": "In this paper, we propose a novel explanation module to explain the predictions made by a deep network. The explanation module works by embedding a high-dimensional deep network layer nonlinearly into a low-dimensional explanation space while retaining faithfulness, so that the original deep learning predictions can be constructed from the few concepts extracted by the explanation module. We then visualize such concepts for human to learn about the high-level concepts that deep learning is using to make decisions. We propose an algorithm called Sparse Reconstruction Autoencoder (SRAE) for learning the embedding to the explanation space. SRAE aims to reconstruct part of the original feature space while retaining faithfulness. A pull-away term is applied to SRAE to make the explanation space more orthogonal. A visualization system is then introduced for human understanding of the features in the explanation space. The proposed method is applied to explain CNN models in image classification tasks, and several novel metrics are introduced to evaluate the performance of explanations quantitatively without human involvement. Experiments show that the proposed approach generates interesting explanations of the mechanisms CNN use for making predictions.",
"title": ""
},
{
"docid": "7e17c1842a70e416f0a90bdcade31a8e",
"text": "A novel feeding system using substrate integrated waveguide (SIW) technique for antipodal linearly tapered slot array antenna (ALTSA) is presented in this paper. After making studies by simulations for a SIW fed ALTSA cell, a 1/spl times/8 ALTSA array fed by SIW feeding system at X-band is fabricated and measured, and the measured results show that this array antenna has a wide bandwidth and good performances.",
"title": ""
},
{
"docid": "ebb941fe8b0807a4dcfe02ff898cf99f",
"text": "Using “Analyze Results” at the Web of Science, one can directly generate overlays onto global journal maps of science. The maps are based on the 10,000+ journals contained in the Journal Citation Reports (JCR) of the Science and Social Science Citation Indices (2011). The disciplinary diversity of the retrieval is measured in terms of Rao-Stirling’s “quadratic entropy.” Since this indicator of interdisciplinarity is normalized between zero and one, the interdisciplinarity can be compared among document sets and across years, cited or citing. The colors used for the overlays are based on Blondel et al.’s (2008) community-finding algorithms operating on the relations journals included in JCRs. The results can be exported from VOSViewer with different options such as proportional labels, heat maps, or cluster density maps. The maps can also be web-started and/or animated (e.g., using PowerPoint). The “citing” dimension of the aggregated journal-journal citation matrix was found to provide a more comprehensive description than the matrix based on the cited archive. The relations between local and global maps and their different functions in studying the sciences in terms of journal literatures are further discussed: local and global maps are based on different assumptions and can be expected to serve different purposes for the explanation.",
"title": ""
},
{
"docid": "ca93e2d0af218e5c8a286ff5f3e0e02b",
"text": "Educational justice is a major global challenge. In most underdeveloped countries, many students do not have access to education and in most advanced democracies, school attainment and success are still, to a large extent, dependent on a student’s social background. However, it has often been argued that social justice is an essential part of teachers’ work in a democracy. This article raises an important overriding question: how can we realize the goal of educational justice in the field of teaching? In this essay, I examine culturally responsive teaching as an educational practice and conclude that it is possible to realize educational justice in the field of teaching because in its true implementation, culturally responsive teaching conceptualizes the connection between education and social justice and creates the space needed for discussing social change in society.",
"title": ""
},
{
"docid": "e72404a8c759d19567d5508fd3047167",
"text": "The idea of Linked Data is to aggregate, harmonize, integrate, enrich, and publish data for re-use on the Web in a cost-efficient way using Semantic Web technologies. We concern two major hindrances for re-using Linked Data: It is often difficult for a re-user to 1) understand the characteristics of the dataset and 2) evaluate the quality the data for the intended purpose. This paper introduces the “Linked Data Finland” platform LDF.fi addressing these issues. We extend the famous 5-star model of Tim Berners-Lee, with the sixth star for providing the dataset with a schema that explains the dataset, and the seventh star for validating the data against the schema. LDF.fi also automates data publishing and provides data curation tools. The first prototype of the platform is available on the web as a service, hosting tens of datasets and supporting several applications. 1 Publishing Linked Data Lots of Linked Data (LD) platforms have emerged on the Web since the publication of the four Linked Data publication principles and the 5-star model1. For example, in Life Sciences alone there are LinkedLifeData2, NeuroCommons3, Chem2Bio2RDF4, HCLSIG/LODD5, BioLOD6, and Bio2RDF7. LDF.fi8 contributes to the current state-of-the-art of Linked Data publishing [2] as follows: 1) We propose extending the 5-star model9 into a 7-star model, with the goal of encouraging data publishers to provide their data with explicit metadata schemas and to validate their data for better quality. 2) LDF.fi automates the data publishing process so that not only a SPARQL endpoint but also a rich set of additional data services are generated automatically based on the metadata about the dataset and its graphs. 
3) LDF.fi 1 http://www.w3.org/DesignIssues/LinkedData.html 2 http://linkedlifedata.com/ 3 http://neurocommons.org/ 4 http://chem2bio2rdf.wikispaces.com/ 5 http://www.w3.org/wiki/HCLSIG/LODD 6 http://biolod.org/ 7 http://bio2rdf.org/ 8 Our work is funded by Tekes and a consortium of 20 public organizations and companies. 9 http://5stardata.info/ provides end users with additional tools and documentation for publishing, curating, and re-using the datasets. This paper first explains these ideas, and then presents the actual service available online10. 2 7-star Linked Data A major hindrance of re-using a dataset is the difficulty to evaluate how suitable the data is for the application purpose at hand. Datasets often use schemas (vocabularies) for which definitions or descriptions are not available, but are embedded in the data itself. This makes it difficult to figure out the characteristics of the data. Furthermore, given the data and its schema it may be difficult to say how well the data actually matches the schema; there are lots of data quality problems on the Semantic Web11. To address these issues, we encourage data publishers by two extra stars: – The 6th star is given if the schemas (vocabularies) used in the dataset are explicitly described and published alongside the dataset, unless the schemas are already available somewhere on the Web. – For the 7th star, the quality of the dataset against the schemas used in it must be explicated, so that the user can evaluate whether the data quality matches her needs. LDF.fi provides supporting tools related to these issues: First, schemas are documented automatically for the human reader by using a schema documentation generator. In our case, the LODE12 online service is employed. (Other possible tools for schema documentation include SpecGen, Neologism13, dowl14, Parrot15, OWLDoc16, and OntologyBrowser17.) 
Second, in order to find out how schemas are actually used in a dataset, we created a new service http://vocab.at [1]. It analyses a dataset, creates an HTML report that explains vocabulary usage in the data, and reports issues of undefined properties or unresolvable namespaces. The input for vocab.at is either an RDF file, a SPARQL endpoint, or an HTML page with embedded RDFa markup. 3 Automatic Service Generation LDF.fi tries to automate the process of publishing datasets as far as possible in the following way: The publisher is expected to create an RDF dataset with minimal metadata about it and its schemas. Here an extended version of the new W3C Service Description recommendation18 and the VoID vocabulary19 can be used, and the data is stored 10 http://www.ldf.fi/ 11 http://pedantic-web.org/ 12 http://www.essepuntato.it/lode 13 http://neologism.deri.ie/ 14 https://github.com/ldodds/dowl 15 http://ontorule-project.eu/parrot/parrot 16 http://code.google.com/p/co-ode-owl-plugins/wiki/OWLDoc 17 http://code.google.com/p/ontology-browser/ 18 http://www.w3.org/TR/sparql11-service-description/ 19 http://rdfs.org/ns/void into the SPARQL endpoint. Alternatively, a simple JSON object listing the dataset and graph names, human readable labels, and a description of the data can be provided. In the metadata, it is also possible to give an example URI pointing into the dataset, a SPARQL query example for querying the data, and optionally a link to possible visualizations of the dataset. Based on such metadata, LDF.fi generates for each dataset a home page on which the following functionalities are available for re-users: 1. Links for downloading datasets and graphs are provided (if licensing permits it). 2. Schemas can be downloaded if provided with the data, and links to their documentation are provided (when available). 3. 
Following forms are created for inspecting the dataset in more detail: 1) Given a URI the corresponding RDF description can be read in various formats (Turtle, RDF/XML, RDF/JSON, N3, N-triples) for human consumption in a browser. The example URI is used as a first choice to try out. 2) Given a URI, Linked Data browsing can be started from it, with the example URI as a starting point. 4. There is a SPARQL query form for querying the service with the given query used as a first example. 5. Links providing Vocab.at analysis reports of the graphs in the dataset are provided. They tell the end-user what schemas (vocabularies) are used in the data, and how they have been used. Issues on data quality are pointed out. 6. SPARQL Service Descriptions of the datasets are provided, if available. LDF uses W3C SPARQL Service Description recommendation for this. 7. Links to visualizations of the data that may give the re-user more insight on how the dataset can be used in applications. 8. Licensing conditions of the dataset are provided as well as a label of 1–7 stars. 4 Data Curation Tools Data curation refers to activities and processes done to create, manage, maintain, and validate data. In LDF.fi several data curation services are available for analyzing textual data and for creating semantic annotations (semi-)automatically from them: 1. SeCo Lexical Analysis Services20 can be used for language recognition, lemmatization, morphological analysis, inflected form generation, and hyphenation. 2. ARPA Automatic Text Annotation System21 can be used for extracting Linked Data from unstructured texts. 3. SAHA22 tool can be used for investigating and editing LDF.fi datasets interactively in real time. In LDF.fi we modified and extended SAHA to work on top of any standard SPARQL endpoint. SAHA is now used as a Linked Data Browser in LDF.fi in the same vein as, e.g., URIBurner23. Using SAHA as an editor service for a dataset requires permission from the LDF.fi team. 
20 http://demo.seco.tkk.fi/las/ 21 http://www.seco.tkk.fi/services/arpa/ 22 http://www.seco.tkk.fi/tools/saha 23 http://linkeddata.uriburner.com/ In our work, we are also using some external tools, such as the SILK Framework24 for linking data.",
"title": ""
}
] |
scidocsrr
|
1dcd404d90c9634853b230ee6ba098a3
|
Advanced encryption standard (AES) security enhancement using hybrid approach
|
[
{
"docid": "fe944f1845eca3b0c252ada2c0306d61",
"text": "Now a days sharing the information over internet is becoming a critical issue due to security problems. Hence more techniques are needed to protect the shared data in an unsecured channel. The present work focus on combination of cryptography and steganography to secure the data while transmitting in the network. Firstly the data which is to be transmitted from sender to receiver in the network must be encrypted using the encrypted algorithm in cryptography .Secondly the encrypted data must be hidden in an image or video or an audio file with help of steganographic algorithm. Thirdly by using decryption technique the receiver can view the original data from the hidden image or video or audio file. Transmitting data or document can be done through these ways will be secured. In this paper we implemented three encrypt techniques like DES, AES and RSA algorithm along with steganographic algorithm like LSB substitution technique and compared their performance of encrypt techniques based on the analysis of its stimulated time at the time of encryption and decryption process and also its buffer size experimentally. The entire process has done in C#.",
"title": ""
}
] |
[
{
"docid": "57991cdfd00786c929d1a909ba22cbee",
"text": "This system description explains how to use several bilingual dictionaries and aligned corpora in order to create translation candidates for novel language pairs. It proposes (1) a graph-based approach which does not depend on cyclical translations and (2) a combination of this method with a collocation-based model using the multilingually aligned Europarl corpus.",
"title": ""
},
{
"docid": "aac17c2c975afaa3f55e42e698d398b3",
"text": "Many state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) Systems are hybrids of neural networks and Hidden Markov Models (HMMs). Recently, more direct end-to-end methods have been investigated, in which neural architectures were trained to model sequences of characters [1,2]. To our knowledge, all these approaches relied on Connectionist Temporal Classification [3] modules. We investigate an alternative method for sequence modelling based on an attention mechanism that allows a Recurrent Neural Network (RNN) to learn alignments between sequences of input frames and output labels. We show how this setup can be applied to LVCSR by integrating the decoding RNN with an n-gram language model and by speeding up its operation by constraining selections made by the attention mechanism and by reducing the source sequence lengths by pooling information over time. Recognition accuracies similar to other HMM-free RNN-based approaches are reported for the Wall Street Journal corpus.",
"title": ""
},
{
"docid": "fed5178c0641d5c0f8e10856544e18b4",
"text": "Data mining is gaining popularity in disparate research fields due to its boundless applications and approaches to mine the data in an appropriate manner. Owing to the changes, the current world acquiring, it is one of the optimal approach for approximating the nearby future consequences. Along with advanced researches in healthcare monstrous of data are available, but the main difficulty is how to cultivate the existing information into a useful practices. To unfold this hurdle the concept of data mining is the best suited. Data mining have a great potential to enable healthcare systems to use data more efficiently and effectively. Hence, it improves care and reduces costs. This paper reviews various Data Mining techniques such as classification, clustering, association, regression in health domain. It also highlights applications, challenges and future work of Data Mining in healthcare.",
"title": ""
},
{
"docid": "374b3e207a868c388f0b814c457f6871",
"text": "BACKGROUND\nQuadriceps strengthening exercises are part of the treatment of patellofemoral pain (PFP), but the heavy resistance exercises may aggravate knee pain. Blood flow restriction (BFR) training may provide a low-load quadriceps strengthening method to treat PFP.\n\n\nMETHODS\nSeventy-nine participants were randomly allocated to a standardised quadriceps strengthening (standard) or low-load BFR. Both groups performed 8 weeks of leg press and leg extension, the standard group at 70% of 1 repetition maximum (1RM) and the BFR group at 30% of 1RM. Interventions were compared using repeated-measures analysis of variance for Kujala Patellofemoral Score, Visual Analogue Scale for 'worst pain' and 'pain with daily activity', isometric knee extensor torque (Newton metre) and quadriceps muscle thickness (cm). Subgroup analyses were performed on those participants with painful resisted knee extension at 60°.\n\n\nRESULTS\nSixty-nine participants (87%) completed the study (standard, n=34; BFR, n=35). The BFR group had a 93% greater reduction in pain with activities of daily living (p=0.02) than the standard group. Participants with painful resisted knee extension (n=39) had greater increases in knee extensor torque with BFR than standard (p<0.01). No between-group differences were found for change in Kujala Patellofemoral Score (p=0.31), worst pain (p=0.24), knee extensor torque (p=0.07) or quadriceps thickness (p=0.2). No difference was found between interventions at 6 months.\n\n\nCONCLUSION\nCompared with standard quadriceps strengthening, low load with BFR produced greater reduction in pain with daily living at 8 weeks in people with PFP. Improvements were similar between groups in worst pain and Kujala score. The subgroup with painful resisted knee extension had larger improvements in quadriceps strength from BFR.\n\n\nTRIAL REGISTRATION NUMBER\n12614001164684.",
"title": ""
},
{
"docid": "dee2a7984eba3d82d878a862a5fb3b85",
"text": "Traditional approaches to semantic parsing (SP) work by training individual models for each available parallel dataset of text-meaning pairs. In this paper, we explore the idea of polyglot semantic translation, or learning semantic parsing models that are trained on multiple datasets and natural languages. In particular, we focus on translating text to code signature representations using the software component datasets of Richardson and Kuhn (2017a,b). The advantage of such models is that they can be used for parsing a wide variety of input natural languages and output programming languages, or mixed input languages, using a single unified model. To facilitate modeling of this type, we develop a novel graph-based decoding framework that achieves state-of-the-art performance on the above datasets, and apply this method to two other benchmark SP tasks.",
"title": ""
},
{
"docid": "98af4946349aabb95d98dde19344cd4f",
"text": "Automatic speaker recognition is a field of study attributed in identifying a person from a spoken phrase. The technique makes it possible to use the speaker’s voice to verify their identity and control access to the services such as biometric security system, voice dialing, telephone banking, telephone shopping, database access services, information services, voice mail, and security control for confidential information areas and remote access to the computers. This thesis represents a development of a Matlab based text dependent speaker recognition system. Mel Frequency Cepstrum Coefficient (MFCC) Method is used to extract a speaker’s discriminative features from the mathematical representation of the speech signal. After that Vector Quantization with VQ-LBG Algorithm is used to match the feature. Key-Words: Speaker Recognition, Human Speech Signal Processing, Vector Quantization",
"title": ""
},
{
"docid": "bf62f0bcbc39e98baa39a4a661a3767f",
"text": "Inertia-visual sensor fusion has become popular due to the complementary characteristics of cameras and IMUs. Once the spatial and temporal alignment between the sensors is known, the fusion of measurements of these devices is straightforward. Determining the alignment, however, is a challenging problem. Especially the spatial translation estimation has turned out to be difficult, mainly due to limitations of camera dynamics and noisy accelerometer measurements. Up to now, filtering-based approaches for this calibration problem are largely prevalent. However, we are not convinced that calibration, as an offline step, is necessarily a filtering issue, and we explore the benefits of interpreting it as a batch-optimization problem. To this end, we show how to model the IMU-camera calibration problem in a nonlinear optimization framework by modeling the sensors' trajectory, and we present experiments comparing this approach to filtering and system identification techniques. The results are based both on simulated and real data, showing that our approach compares favorably to conventional methods.",
"title": ""
},
{
"docid": "6f13d2d8e511f13f6979859a32e68fdd",
"text": "As an innovative measurement technique, the so-called Fiber Bragg Grating (FBG) sensors are used to measure local and global strains in a growing number of application scenarios. FBGs facilitate a reliable method to sense strain over large distances and in explosive atmospheres. Currently, there is only little knowledge available concerning mechanical properties of FGBs, e.g. under quasi-static, cyclic and thermal loads. To address this issue, this work quantifies typical loads on FGB sensors in operating state and moreover aims to determine their mechanical response resulting from certain load cases. Copyright © 2013 IFSA.",
"title": ""
},
{
"docid": "4306562027d20e3bfcbf48fd493114e3",
"text": "Our aim is to develop the service robot based on a systematic software engineering method, particularly for real-time, embedded and distributed systems with UML. To do so, we applied the COMET method, which is a UML-based method for the development of concurrent applications, specifically distributed and real-time applications. We describe our experience of applying the COMET/UML method to developing the service robot for the elderly, T-Rot, which is under development at CIR. Here, our emphasis was on an autonomous navigation system for the service robot, which is one of the most challenging issues and is essential in developing service robots, especially mobile service robots to help elderly people. It includes hardware integration for various sensors and actuators as well as software development and integration of modules like a path planner and a localizer.",
"title": ""
},
{
"docid": "9df09e27a1570c8d0a2fb42b8db2aa78",
"text": "Self-driving cars offer a bright future, but only if the public can overcome the psychological challenges that stand in the way of widespread adoption. We discuss three: ethical dilemmas, overreactions to accidents, and the opacity of the cars’ decision-making algorithms — and propose steps towards addressing them.",
"title": ""
},
{
"docid": "5a427e80f2b94067dda9a689012bcff0",
"text": "In this letter, a modified broadband 90◦ phase shifter is proposed. By using a dentate microstrip and a patterned ground plane, an extremely tight coupling can be obtained, and consequently a constant phase shift over a wide bandwidth can be achieved. To verify the proposed idea, a topology is implemented, the measured results of a phase difference of 90 ± 5◦ in 79.5% bandwidth, better than 10 dB return loss across the whole operating band, are also given. The measurement results agree well with the full-wave electromagnetic simulated responses.",
"title": ""
},
{
"docid": "c19396e701c117d6bae2f35ce8138f7c",
"text": "This paper presents the design results of the multi-band, multi-mode software-defined radar (SDR) system. The SDR platform consists of a multi-band RF modules of S, X, K-bands, and a multi-mode digital modules with a waveform generator for CW, Pulse, FMCW, and LFM Chirp as well as reconfigurable SDR-GUI software module for user interface. This platform can be used for various applications such as security monitoring, collision avoidance, traffic monitoring, and a radar imaging.",
"title": ""
},
{
"docid": "0dcfd748b2ea70de8b84b9056eb79fc4",
"text": "The number of resource-limited wireless devices utilized in many areas of Internet of Things is growing rapidly; there is a concern about privacy and security. Various lightweight block ciphers are proposed; this work presents a modified lightweight block cipher algorithm. A Linear Feedback Shift Register is used to replace the key generation function in the XTEA1 Algorithm. Using the same evaluation conditions, we analyzed the software implementation of the modified XTEA using FELICS (Fair Evaluation of Lightweight Cryptographic Systems) a benchmarking framework which calculates RAM footprint, ROM occupation and execution time on three largely used embedded devices: 8-bit AVR microcontroller, 16-bit MSP microcontroller and 32-bit ARM microcontroller. Implementation results show that it provides less software requirements compared to original XTEA. We enhanced the security level and the software performance.",
"title": ""
},
{
"docid": "5ba5f2abc8097bfb3d8465ccfd0418cd",
"text": "In regional water resources management and disaster preparedness, the analysis of extreme rainfall events is essential. The need to investigate drought and flood conditions is now heightened within the context of climate change and variability. The Standardised Precipitation Index (SPI) was employed to assess the extreme rainfall event on Tordzie watershed using precipitation data from 1984-2014. The SPI on the time scale of 3, 6, 9 and 12 months were determined using “DrinC” software. The drought was characterised into magnitude, duration, intensity, frequency, commencement and termination at the time scales of SPI-3, SPI-6, SPI-9 and SPI-12. Results indicated that the middle reaches (Kpetoe) of the watershed experienced less severe drought condition compared to the lower reaches (Tordzinu). Mann-Kendall (MK) test and Sen’s slope (SS) revealed general increasing drought trend but insignificant at 95% confidence interval. The SS indicated change in magnitude of 0.016 mm/year, 0.012 mm/year, 0.026 mm/year and 0.016 mm/year respectively at the mentioned time scales at 95% confidence interval at the Tordzinu and that of Kpetoe were 0.006 mm/year, 0.009 mm/year, 0.014 mm/year and 0.003 mm/year. These changes could have implication for agriculture and water resources management and engender food insecurity among smallholder farmers.",
"title": ""
},
{
"docid": "fd224b566e19290e98f4d8b81c47dfa7",
"text": "HTTP adaptive streaming is an attractive solution to the explosion of multimedia content consumption over the Internet, which has recently been introduced to information-centric networking in the form of DASH over CCN. In this paper, we enhance the performance of such design by taking advantage of congestion feedback available in ICN networks. By means of utility fairness optimization framework, we improve the adaptation logic in terms of fairness and stability of the multimedia bitrate delivered to content consumers. Interestingly, we find that such fairness and stability have a very positive impact on caching, making streaming adaptation highly friendly to the ubiquitous in-network caches of the ICN architectures.",
"title": ""
},
{
"docid": "4a8fa0edc026c1c0d44293ee3840b6dc",
"text": "We introduce an extended representation of time series that allows fast, accurate classification and clustering in addition to the ability to explore time series data in a relevance feedback framework. The representation consists of piecewise linear segments to represent shape and a weight vector that contains the relative importance of each individual linear segment. In the classification context, the weights are learned automatically as part of the training cycle. In the relevance feedback context, the weights are determined by an interactive and iterative process in which users rate various choices presented to them. Our representation allows a user to define a variety of similarity measures that can be tailored to specific domains. We demonstrate our approach on space telemetry, medical and synthetic data.",
"title": ""
},
{
"docid": "de52cd857eaef29801809d079bb3baf3",
"text": "Local structures of shadow boundaries as well as complex interactions of image regions remain largely unexploited by previous shadow detection approaches. In this paper, we present a novel learning-based framework for shadow region recovery from a single image. We exploit local structures of shadow edges by using a structured CNN learning framework. We show that using structured label information in classification can improve local consistency over pixel labels and avoid spurious labelling. We further propose and formulate shadow/bright measure to model complex interactions among image regions. The shadow and bright measures of each patch are computed from the shadow edges detected by the proposed CNN. Using the global interaction constraints on patches, we formulate a least-square optimization problem for shadow recovery that can be solved efficiently. Our shadow recovery method achieves state-of-the-art results on major shadow benchmark databases collected under various conditions.",
"title": ""
},
{
"docid": "0e758ff82eae43d705b6fde249b29998",
"text": "The continued growth of the World Wide Web makes the retrieval of relevant information for a user’s query increasingly difficult. Current search engines provide the user with many web pages, but varying levels of relevancy. In response, the Semantic Web has been proposed to retrieve and use more semantic information from the web. Our prior research has developed a Semantic Retrieval System to automate the processing of a user’s query while taking into account the query’s context. The system uses WordNet and the DARPA Agent Markup Language (DAML) ontologies to act as surrogates for understanding the context of terms in a user’s query. Like other applications that use ontologies, our system relies on using ‘good’ ontologies. This research draws upon semiotic theory to develop a suite of metrics that assess the syntactic, semantic, pragmatic, and social aspects of ontology quality. We operationalize the metrics and implement them in a prototype tool called the “Ontology Auditor.” An initial validation of the Ontology Auditor on the DAML library of domain ontologies indicates that the metrics are feasible and highlight the wide variations in quality among ontologies in the library. Acknowledgments The authors wish to thank Xinlin Tang and Sunyoung Cho for comments on a previous draft. This research was supported by Oakland University and by Georgia State University.",
"title": ""
},
{
"docid": "0eff90e073f09e5bc0f298fba512abd4",
"text": "The issue of handwritten character recognition is still a big challenge to the scientific community. Several approaches to address this challenge have been attempted in the last years, mostly focusing on the English pre-printed or handwritten characters space. Thus, the need to attempt a research related to Arabic handwritten text recognition. Algorithms based on neural networks have proved to give better results than conventional methods when applied to problems where the decision rules of the classification problem are not clearly defined. Two neural networks were built to classify already segmented characters of handwritten Arabic text. The two neural networks correctly recognized 73% of the characters. However, one hurdle was encountered in the above scenario, which can be summarized as follows: there are a lot of handwritten characters that can be segmented and classified into two or more different classes depending on whether they are looked at separately, or in a word, or even in a sentence. In other words, character classification, especially handwritten Arabic characters, depends largely on contextual information, not only on topographic features extracted from these characters.",
"title": ""
},
{
"docid": "70c09a5331ed9a279f2f68cf5cae98b4",
"text": "A detailed analysis of electromagnetic noise in external rotor permanent-magnet synchronous motors is presented in this paper. First, the spatial distribution and frequency characteristics of the electromagnetic force acting on the surface of the permanent magnet are discussed. Then, calculation models for electromagnetic force, structural vibration, and acoustic radiation are developed to predict noise by taking an external rotor in-wheel motor as example. The uneven distribution of electromagnetic force on the surface of the permanent magnet is taken into account by means of loading nodal force into the structural model which is verified by modal test. Through the mode superposition method, the vibration on the surface of the outer rotor is calculated, and the acoustic boundary element method is used to predict the acoustic radiation. Noise test is conducted to validate the simulated noise. It is shown in both simulation results and noise test that the electromagnetic force due to slotting effects contributes the most remarkable component to the overall noise. Finally, slot opening width is optimized to reduce the amplitude of magnetic force close to resonance frequencies, and the overall sound pressure level decreases by 6 dB(A) after optimization.",
"title": ""
}
] |
scidocsrr
|
e9a1e4f363b04a9e5f8ed2f242f29e51
|
The security of RFID readers with IDS/IPS solution using Raspberry Pi
|
[
{
"docid": "9409922d01a00695745939b47e6446a0",
"text": "The Suricata intrusion-detection system for computer-network monitoring has been advanced as an open-source improvement on the popular Snort system that has been available for over a decade. Suricata includes multi-threading to improve processing speed beyond Snort. Previous work comparing the two products has not used a real-world setting. We did this and evaluated the speed, memory requirements, and accuracy of the detection engines in three kinds of experiments: (1) on the full traffic of our school as observed on its \" backbone\" in real time, (2) on a supercomputer with packets recorded from the backbone, and (3) in response to malicious packets sent by a red-teaming product. We used the same set of rules for both products with a few small exceptions where capabilities were missing. We conclude that Suricata can handle larger volumes of traffic than Snort with similar accuracy, and that its performance scaled roughly linearly with the number of processors up to 48. We observed no significant speed or accuracy advantage of Suricata over Snort in its current state, but it is still being developed. Our methodology should be useful for comparing other intrusion-detection products.",
"title": ""
}
] |
[
{
"docid": "fac476744429cacfe1c07ec19ee295eb",
"text": "One effort to protect the network from the threats of hackers, crackers and security experts is to build the Intrusion Detection System (IDS) on the network. The problem arises when new attacks emerge in a relatively fast, so a network administrator must create their own signature and keep updated on new types of attacks that appear. In this paper, it will be made an Intelligence Intrusion Detection System (IIDS) where the Hierarchical Clustering algorithm as an artificial intelligence is used as pattern recognition and implemented on the Snort IDS. Hierarchical clustering applied to the training data to determine the number of desired clusters. Labeling cluster is then performed; there are three labels of cluster, namely Normal, High Risk and Critical. Centroid Linkage Method used for the test data of new attacks. Output system is used to update the Snort rule database. This research is expected to help the Network Administrator to monitor and learn some new types of attacks. From the result, this system is already quite good to recognize certain types of attacks like exploit, buffer overflow, DoS and IP Spoofing. Accuracy performance of this system for the mentioned above type of attacks above is 90%.",
"title": ""
},
{
"docid": "b1c6d95b297409a7b47d8fa7e6da6831",
"text": "~I \"e have modified the original model of selective attention, which was previmtsly proposed by Fukushima, and e~tended its ability to recognize attd segment connected characters in cmwive handwriting. Although the or~¢inal model q/'sdective attention ah'ead)' /tad the abilio' to recognize and segment patterns, it did not alwa)w work well when too many patterns were presented simuhaneousl): In order to restrict the nttmher q/patterns to be processed simultaneousO; a search controller has been added to the original model. Tlw new mode/mainly processes the patterns contained in a small \"search area, \" which is mo~vd b)' the search controller A ptvliminao' ev~eriment with compltter simttlatiott has shown that this approach is promisittg. The recogttition arid segmentation q[k'haracters can be sttcces~[itl even thottgh each character itt a handwritten word changes its .shape h)\" the e[]'ect o./the charactetw",
"title": ""
},
{
"docid": "bf8beafe5adb9426c7cf011f37990b44",
"text": "Band selection is a common approach to reduce the data dimensionality of hyperspectral imagery. It extracts several bands of importance in some sense by taking advantage of high spectral correlation. Driven by detection or classification accuracy, one would expect that, using a subset of original bands, the accuracy is unchanged or tolerably degraded, whereas computational burden is significantly relaxed. When the desired object information is known, this task can be achieved by finding the bands that contain the most information about these objects. When the desired object information is unknown, i.e., unsupervised band selection, the objective is to select the most distinctive and informative bands. It is expected that these bands can provide an overall satisfactory detection and classification performance. In this letter, we propose unsupervised band selection algorithms based on band similarity measurement. The experimental result shows that our approach can yield a better result in terms of information conservation and class separability than other widely used techniques.",
"title": ""
},
{
"docid": "1768d453edc06f95cebb869096552b74",
"text": "Although there have been tremendous advances in the understanding of human dysfunctions in the brain circuitry for self-reflection, emotion, and cognitive control, a brain-based taxonomy for mental disease is still lacking. As a result, these advances have not been translated into actionable clinical tools, and the language of brain circuits has not been incorporated into training programmes. To address this gap, I present this synthesis of published work, with a focus on functional imaging of circuit dysfunctions across the spectrum of mood and anxiety disorders. This synthesis provides the foundation for a taxonomy of putative types of dysfunction, which cuts across traditional diagnostic boundaries for depression and anxiety and includes instead distinct types of neural circuit dysfunction that together reflect the heterogeneity of depression and anxiety. This taxonomy is suited to specifying symptoms in terms of underlying neural dysfunction at the individual level and is intended as the foundation for building mechanistic research and ultimately guiding clinical practice.",
"title": ""
},
{
"docid": "1c72c4edd063a91e098da7cf2143d267",
"text": "/ n this chapter, we consider modesty and its importance. We begin by defining modesty, proceed to argue that being modest is hard work, and then lay out some reasons why this is so. Next, we make the case that modesty correlates with, and may even cause, several desirable outcomes—intrapersonal, interpersonal, and group. We conclude by attempting to reconcile the discrepancies between two empirical literatures, one suggesting that modesty entails social and mental health benefits, the other suggesting that self-enhancement does.",
"title": ""
},
{
"docid": "2601ff3b4af85883017d8fb7e28e5faa",
"text": "The heterogeneous nature of the applications, technologies and equipment that today's networks have to support has made the management of such infrastructures a complex task. The Software-Defined Networking (SDN) paradigm has emerged as a promising solution to reduce this complexity through the creation of a unified control plane independent of specific vendor equipment. However, designing a SDN-based solution for network resource management raises several challenges as it should exhibit flexibility, scalability and adaptability. In this paper, we present a new SDN-based management and control framework for fixed backbone networks, which provides support for both static and dynamic resource management applications. The framework consists of three layers which interact with each other through a set of interfaces. We develop a placement algorithm to determine the allocation of managers and controllers in the proposed distributed management and control layer. We then show how this layer can satisfy the requirements of two specific applications for adaptive load-balancing and energy management purposes.",
"title": ""
},
{
"docid": "9c20a64fad54b5416b4716090a2e7c51",
"text": "Location-Based Social Networks (LBSNs) enable their users to share with their friends the places they go to and whom they go with. Additionally, they provide users with recommendations for Points of Interest (POI) they have not visited before. This functionality is of great importance for users of LBSNs, as it allows them to discover interesting places in populous cities that are not easy to explore. For this reason, previous research has focused on providing recommendations to LBSN users. Nevertheless, while most existing work focuses on recommendations for individual users, techniques to provide recommendations to groups of users are scarce.\n In this paper, we consider the problem of recommending a list of POIs to a group of users in the areas that the group frequents. Our data consist of activity on Swarm, a social networking app by Foursquare, and our results demonstrate that our proposed Geo-Group-Recommender (GGR), a class of hybrid recommender systems that combine the group geographical preferences using Kernel Density Estimation, category and location features and group check-ins outperform a large number of other recommender systems. Moreover, we find evidence that user preferences differ both in venue category and in location between individual and group activities. We also show that combining individual recommendations using group aggregation strategies is not as good as building a profile for a group. Our experiments show that (GGR) outperforms the baselines in terms of precision and recall at different cutoffs.",
"title": ""
},
{
"docid": "a4037343fa0df586946d8034b0bf8a5b",
"text": "Security researchers are applying software reliability models to vulnerability data, in an attempt to model the vulnerability discovery process. I show that most current work on these vulnerability discovery models (VDMs) is theoretically unsound. I propose a standard set of definitions relevant to measuring characteristics of vulnerabilities and their discovery process. I then describe the theoretical requirements of VDMs and highlight the shortcomings of existing work, particularly the assumption that vulnerability discovery is an independent process.",
"title": ""
},
{
"docid": "30cd626772ad8c8ced85e8312d579252",
"text": "An off-state leakage current unique for short-channel SOI MOSFETs is reported. This off-state leakage is the amplification of gate-induced-drain-leakage current by the lateral bipolar transistor in an SOI device due to the floating body. The leakage current can be enhanced by as much as 100 times for 1/4 mu m SOI devices. This can pose severe constraints in future 0.1 mu m SOI device design. A novel technique was developed based on this mechanism to measure the lateral bipolar transistor current gain beta of SOI devices without using a body contact.<<ETX>>",
"title": ""
},
{
"docid": "d5ea5a0b9484f6b728be4a4a6092c419",
"text": "In response to the rise of Big Data, modern enterprise architecture has become significantly more complex. Model driven engineering (MDE) has been proposed as a methodology for developing software to deal with complex integration and interoperability. Domain specific languages (DSLs) play a crucial role in MDE and represent languages for a specific purpose that are highly abstract and easy to use. In this paper we propose a new language VizDSL for creating interactive visualisations that facilitate the understanding of complex data and information structures for enterprise systems interoperability. In comparison to existing visualisation languages VizDSL provides the benefits of visualising the semantics of data using a graphical notation. VizDSL is based on the Interaction Flow Modelling Language (IFML) and Agile Visualisation and has been implemented in a prototype. The prototype has been applied on an open data set and results show that interactive visualisation can be implemented quickly using the VizDSL language without writing code which makes it easier to design for non- programmers.",
"title": ""
},
{
"docid": "072351b995d3f3ae76ecc666e84b3323",
"text": "An internal planar tablet computer antenna having a small size of 12 × 35 mm2 printed on a 0.8-mm thick FR4 substrate for the WWAN operation in the 824-960 and 1710-2170 MHz bands is presented. The antenna comprises a driven strip, a parasitic shorted strip and a ground pad, all printed on the small-size FR4 substrate. For bandwidth enhancement of the antenna's lower band, the antenna applies a parallel-resonant spiral slit embedded in the ground pad, which generates a parallel resonance at about 1.2 GHz and in turn results in a new resonance occurred nearby the quarter-wavelength mode of the parasitic shorted strip. This feature leads to a dual-resonance characteristic obtained for the antenna's lower band, making it capable of wideband operation to cover the desired 824-960 MHz with a small antenna size. The antenna's upper band is formed by the higher-order resonant mode contributed by the parasitic shorted strip and the quarter-wavelength resonant mode of the driven strip and can cover the desired 1710-2170 MHz band. Details of the proposed antenna and the operating principle of the parallel-resonant spiral slit are presented.",
"title": ""
},
{
"docid": "5e105c819b88d1fdfe34c4fa8bf480ba",
"text": "In this paper, we propose a real-time image superpixel segmentation method with 50 frames/s by using the density-based spatial clustering of applications with noise (DBSCAN) algorithm. In order to decrease the computational costs of superpixel algorithms, we adopt a fast two-step framework. In the first clustering stage, the DBSCAN algorithm with color-similarity and geometric restrictions is used to rapidly cluster the pixels, and then, small clusters are merged into superpixels by their neighborhood through a distance measurement defined by color and spatial features in the second merging stage. A robust and simple distance function is defined for obtaining better superpixels in these two steps. The experimental results demonstrate that our real-time superpixel algorithm (50 frames/s) by the DBSCAN clustering outperforms the state-of-the-art superpixel segmentation methods in terms of both accuracy and efficiency.",
"title": ""
},
{
"docid": "e219c7e4078a1577f0a515494cadb45f",
"text": "Deep Convolutional Neuronal Networks (DCNNs) are showing remarkable performance on many computer vision tasks. Due to their large parameter space, they require many labeled samples when trained in a supervised setting. The costs of annotating data manually can render the use of DCNNs infeasible. We present a novel framework called RenderGAN that can generate large amounts of realistic, labeled images by combining a 3D model and the Generative Adversarial Network framework. In our approach, image augmentations (e.g., lighting, background, and detail) are learned from unlabeled data such that the generated images are strikingly realistic while preserving the labels known from the 3D model. We apply the RenderGAN framework to generate images of barcode-like markers that are attached to honeybees. Training a DCNN on data generated by the RenderGAN yields considerably better performance than training it on various baselines.",
"title": ""
},
{
"docid": "539d6afe431018b0ac62858ff59caa09",
"text": "Cloud computing is a highly discussed topic in the technical and economic world, and many of the big players of the software industry have entered the development of cloud services. Several companies what to explore the possibilities and benefits of incorporating such cloud computing services in their business, as well as the possibilities to offer own cloud services. However, with the amount of cloud computing services increasing quickly, the need for a taxonomy framework rises. This paper examines the available cloud computing services and identifies and explains their main characteristics. Next, this paper organizes these characteristics and proposes a tree-structured taxonomy. This taxonomy allows quick classifications of the different cloud computing services and makes it easier to compare them. Based on existing taxonomies, this taxonomy provides more detailed characteristics and hierarchies. Additionally, the taxonomy offers a common terminology and baseline information for easy communication. Finally, the taxonomy is explained and verified using existing cloud services as examples.",
"title": ""
},
{
"docid": "fbeb296bbe9862b3679956cacc3cf2f2",
"text": "Events are central in human history and thus also in Web queries, in particular if they relate to history or news. However, ambiguity issues arise as queries may refer to ambiguous events differing in time, geography, or participating entities. Thus, users would greatly benefit if search results were presented along different events. In this paper, we present EventMiner, an algorithm that mines events from top-k pseudo-relevant documents for a given query. It is a probabilistic framework that leverages semantic annotations in the form of temporal expressions, geographic locations, and named entities to analyze natural language text and determine important events. Using a large news corpus, we show that using semantic annotations, EventMiner detects important events and presents documents covering the identified events in the order of their importance.",
"title": ""
},
{
"docid": "16932e01fdea801f28ec6c4194f70352",
"text": "Plum pox virus (PPV) causes the most economically-devastating viral disease in Prunus species. Unfortunately, few natural resistance genes are available for the control of PPV. Recessive resistance to some potyviruses is associated with mutations of eukaryotic translation initiation factor 4E (eIF4E) or its isoform eIF(iso)4E. In this study, we used an RNA silencing approach to manipulate the expression of eIF4E and eIF(iso)4E towards the development of PPV resistance in Prunus species. The eIF4E and eIF(iso)4E genes were cloned from plum (Prunus domestica L.). The sequence identity between plum eIF4E and eIF(iso)4E coding sequences is 60.4% at the nucleotide level and 52.1% at the amino acid level. Quantitative real-time RT-PCR analysis showed that these two genes have a similar expression pattern in different tissues. Transgenes allowing the production of hairpin RNAs of plum eIF4E or eIF(iso)4E were introduced into plum via Agrobacterium-mediated transformation. Gene expression analysis confirmed specific reduced expression of eIF4E or eIF(iso)4E in the transgenic lines and this was associated with the accumulation of siRNAs. Transgenic plants were challenged with PPV-D strain and resistance was evaluated by measuring the concentration of viral RNA. Eighty-two percent of the eIF(iso)4E silenced transgenic plants were resistant to PPV, while eIF4E silenced transgenic plants did not show PPV resistance. Physical interaction between PPV-VPg and plum eIF(iso)4E was confirmed. In contrast, no PPV-VPg/eIF4E interaction was observed. These results indicate that eIF(iso)4E is involved in PPV infection in plum, and that silencing of eIF(iso)4E expression can lead to PPV resistance in Prunus species.",
"title": ""
},
{
"docid": "ef0625150b0eb6ae68a214256e3db50d",
"text": "Undergraduate engineering students require a practical application of theoretical concepts learned in classrooms in order to fully master them. Our aim is to assist students to learn control systems theory in an engineering context, through the design and implementation of a simple and low-cost ball and plate plant. Students are able to apply mathematical and computational modelling tools, control systems design, and real-time software-hardware implementation while solving a position regulation problem. The whole project development is presented and can serve as a guide for replicating the results or as a basis for a new design approach. In both cases, the result is a tool available to implement and assess control strategies experimentally.",
"title": ""
},
{
"docid": "209842e00957d1d1786008d943895dc9",
"text": "The impact that urban green spaces have on sustainability and quality of life is phenomenal. This is also true for the local South African environment. However, in reality green spaces in urban environments are decreasing due to growing populations, increasing urbanization and development pressure. This further impacts on the provision of child-friendly spaces, a concept that is already limited in the local context. Child-friendly spaces are described as environments in which people (children) feel intimately connected, influencing the physical, social, emotional, and ecological health of individuals and communities. The benefits of providing such spaces for the youth are well documented in the literature. This research therefore aimed to investigate the concept of child-friendly spaces and its applicability to the South African planning context, in order to guide the planning of such spaces for future communities and use. Child-friendly spaces in the urban environment of the city of Durban were used as a local case study, along with two international case studies, namely the Mullerpier public playground in Rotterdam, the Netherlands, and Kadidjiny Park in Melville, Australia. The aim was to determine how these spaces were planned and developed and to identify tools that were used to accomplish the goal of providing successful child-friendly green spaces within urban areas. The need for and significance of planning such spaces was demonstrated by the international case studies. It is confirmed that minimal provision is made for green space planning within the South African context when measured against the international examples. As a result, international examples and disciplines of providing child-friendly green spaces should direct planning guidelines within the local context. The research concluded that child-friendly green spaces have a positive impact on the urban environment and assist in a child's development and interaction with the natural environment. Regrettably, the planning of these child-friendly spaces is not given priority within current spatial plans, despite the proven benefits of such spaces.",
"title": ""
},
{
"docid": "bb0ac3d88646bf94710a4452ddf50e51",
"text": "Everyday knowledge about living things, physical objects and the beliefs and desires of other people appears to be organized into sophisticated systems that are often called intuitive theories. Two long term goals for psychological research are to understand how these theories are mentally represented and how they are acquired. We argue that the language of thought hypothesis can help to address both questions. First, compositional languages can capture the content of intuitive theories. Second, any compositional language will generate an account of theory learning which predicts that theories with short descriptions tend to be preferred. We describe a computational framework that captures both ideas, and compare its predictions to behavioral data from a simple theory learning task. Any comprehensive account of human knowledge must acknowledge two principles. First, everyday knowledge is more than a list of isolated facts, and much of it appears to be organized into richly structured systems that are sometimes called intuitive theories. Even young children, for instance, have systematic beliefs about domains including folk physics, folk biology, and folk psychology [10]. Second, some aspects of these theories appear to be learned. Developmental psychologists have explored how intuitive theories emerge over the first decade of life, and at least some of these changes appear to result from learning. Although theory learning raises some challenging problems, two computational principles that may support this ability have been known for many years. First, a theory-learning system must be able to represent the content of any theory that it acquires. A learner that cannot represent a given system of concepts is clearly unable to learn this system from data. 
Second, there will always be many systems of concepts that are compatible with any given data set, and a learner must rely on some a priori ordering of the set of possible theories to decide which candidate is best [5, 9]. Loosely speaking, this ordering can be identified with a simplicity measure, or a prior distribution over the space of possible theories. There is at least one natural way to connect these two computational principles. Suppose that intuitive theories are represented in a “language of thought:” a language that allows complex concepts to be represented as combinations of simpler concepts [5]. A compositional language provides a straightforward way to construct sophisticated theories, but also provides a natural ordering over the resulting space of theories: the a priori probability of a theory can be identified with its length in this representation language [3, 7]. Combining this prior distribution with an engine for Bayesian inference leads immediately to a computational account of theory learning. There may be other ways to explain how people represent and acquire complex systems of knowledge, but it is striking that the “language of thought” hypothesis can address both questions. This paper describes a computational framework that helps to explain how theories are acquired, and that can be used to evaluate different proposals about the language of thought. Our approach builds on previous discussions of concept learning that have explored the link between compositional representations and inductive inference. Two recent approaches propose that concepts are represented in a form of propositional logic, and that the a priori plausibility of an inductive hypothesis is related to the length of its representation in this language [4, 6]. Our approach is similar in spirit, but is motivated in part by the need for languages richer than propositional logic. 
The framework we present is extremely general, and is compatible with virtually any representation language, including various forms of predicate logic. Methods for learning theories expressed in predicate logic have previously been explored in the field of Inductive Logic Programming, and we recently proposed a theory-learning model that is inspired by this tradition [7]. Our current approach is motivated by similar goals, but is better able to account for the discovery of abstract theoretical laws. The next section describes our computational framework and introduces the specific logical language that we will consider throughout. Our framework allows relatively sophisticated theories to be represented and learned, but we evaluate it here by applying it to a simple learning problem and comparing its predictions with human inductive inferences. A Bayesian approach to theory discovery Suppose that a learner observes some of the relationships that hold among a fixed, finite set of entities, and wishes to discover a theory that accounts for these data. Suppose, for instance, that the entities are thirteen adults from a remote tribe (a through m), and that the data specify that the spouse relation (S(·, ·)) is true of some pairs (Figure 1). One candidate theory states that S(·, ·) is a symmetric relation, that some of the individuals are male (M(·)), that marriages are permitted only between males and non-males, and that males may take multiple spouses but non-males may have only one spouse (Figure 1b). Other theories are possible, including the theory which states only that S(·, ·) is symmetric. Accounts of theory learning should distinguish between at least three kinds of entities: theories, models, and data. A theory is a set of statements that captures constraints on possible configurations of the world. For instance, the theory in Figure 1b rules out configurations where the spouse relation is asymmetric. A model of a theory specifies the extension",
"title": ""
}
] |
scidocsrr
|
4d76df1e8d45517f3dde6b7f86c81a0c
|
Efficient Ranking from Pairwise Comparisons
|
[
{
"docid": "3e879b66bd7ea46ce642d6ffb30ec63d",
"text": "The question of aggregating pairwise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking of online gamers (e.g. MSR’s TrueSkill system) and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to obtaining ranking, finding ‘scores’ for each object (e.g. player’s rating) is of interest to understanding the intensity of the preferences. In this paper, we propose a novel iterative rank aggregation algorithm for discovering scores for objects from pairwise comparisons. The algorithm has a natural random walk interpretation over the graph of objects with edges present between two objects if they are compared; the scores turn out to be the stationary probability of this random walk. The algorithm is model independent. To establish the efficacy of our method, however, we consider the popular Bradley-Terry-Luce (BTL) model in which each object has an associated score which determines the probabilistic outcomes of pairwise comparisons between objects. We bound the finite sample error rates between the scores assumed by the BTL model and those estimated by our algorithm. This, in essence, leads to order-optimal dependence on the number of samples required to learn the scores well by our algorithm. Indeed, the experimental evaluation shows that our (model independent) algorithm performs as well as the Maximum Likelihood Estimator of the BTL model and outperforms a recently proposed algorithm by Ammar and Shah [1].",
"title": ""
}
] |
[
{
"docid": "fecf95aa956e9dde6e7a0743d58673b9",
"text": "Use of transactional multicore main-memory databases is growing due to dramatic increases in memory size and CPU cores available for a single machine. To leverage these resources, recent concurrency control protocols have been proposed for main-memory databases, but are largely optimized for specific workloads. Due to shifting and unknown access patterns, workloads may change and one specific algorithm cannot dynamically fit all varied workloads. Thus, it is desirable to choose the right concurrency control protocol for a given workload. To address this issue we present adaptive concurrency control (ACC), that dynamically clusters data and chooses the optimal concurrency control protocol for each cluster. ACC addresses three key challenges: i) how to cluster data to minimize cross-cluster access and maintain load-balancing, ii) how to model workloads and perform protocol selection accordingly, and iii) how to support mixed concurrency control protocols running simultaneously. In this paper, we outline these challenges and present preliminary results.",
"title": ""
},
{
"docid": "ee865e3291eff95b5977b54c22b59f19",
"text": "Fuzzing is a process where random, almost valid, input streams are automatically generated and fed into computer systems in order to test the robustness of user-exposed interfaces. We fuzz the Linux kernel system call interface; unlike previous work that attempts to generically fuzz all of an operating system's system calls, we explore the effectiveness of using specific domain knowledge and focus on finding bugs and security issues related to a single Linux system call. The perf_event_open() system call was introduced in 2009 and has grown to be a complex interface with over 40 arguments that interact in subtle ways. By using detailed knowledge of typical perf_event usage patterns we develop a custom tool, perf_fuzzer, that has found bugs that more generic, system-wide, fuzzers have missed. Numerous crashing bugs have been found, including a local root exploit. Fixes for these bugs have been merged into the main Linux source tree. Testing continues to find new bugs, although they are increasingly hard to isolate, requiring development of new isolation techniques and helper utilities. We describe the development of perf_fuzzer, examine the bugs found, and discuss ways that this work can be extended to find more bugs and cover other system calls.",
"title": ""
},
{
"docid": "54eea56f03b9b9f5983857550b83a5da",
"text": "This paper summarizes opportunities for silicon process technologies at mm-wave and terahertz frequencies and demonstrates key building blocks for 94-GHz and 600-GHz imaging arrays. It reviews potential applications and summarizes state-of-the-art terahertz technologies. Terahertz focal-plane arrays (FPAs) for video-rate imaging applications have been fabricated in commercially available CMOS and SiGe process technologies respectively. The 3×5 arrays achieve a responsivity of up to 50 kV/W with a minimum NEP of 400 pW/√Hz at 600 GHz. Images of postal envelopes are presented which demonstrate the potential of silicon-integrated 600-GHz terahertz FPAs for future low-cost terahertz camera systems.",
"title": ""
},
{
"docid": "2edb80a15e94bba579cbcedaf014d545",
"text": "Robots are becoming more ubiquitous in our society and taking over many tasks that were previously considered as human hallmarks. Many of these tasks, e.g., autonomously driving a car, collaborating with humans in dynamic and changing working conditions and performing household chores, require human-level intelligence to perceive the world and to act appropriately. In this thesis, we pursue a different approach compared to classical methods that often construct a robot controller based on the perception-then-action paradigm. We devise robotic action-selection policies by considering actionselection and perception processes as being intertwined, emphasizing that perception comes prior to action and action is key to perception. The main hypothesis is that complex robotic behaviors come as the result of mastering sensorimotor contingencies (SMCs), i.e., regularities between motor actions and associated changes in sensory observations, where SMCs can be seen as building blocks to skillful behaviors. We elaborate and investigate this hypothesis by deliberate design of frameworks which enable policy training merely based on data experienced by a robot, without intervention of human experts for analytical modelings or calibrations. In such circumstances, action policies can be obtained by reinforcement learning (RL) paradigm by making exploratory action decisions and reinforcing patterns of SMCs that lead to reward events for a given task. However, the dimensionality of sensorimotor spaces, complex dynamics of physical tasks, sparseness of reward events, limited amount of data from real-robot experiments, ambiguities of crediting past decisions and safety issues, which arise from exploratory actions of a physical robot, pose challenges to obtain a policy based on data-driven methods alone. 
In this thesis, we introduce our contributions to deal with the aforementioned issues by devising learning frameworks which endow a robot with the ability to integrate sensorimotor data to obtain action-selection policies. The effectiveness of the proposed frameworks is demonstrated by evaluating the methods on a number of real robotic tasks and illustrating the suitability of the methods to acquire different skills, to make sequential action-decisions in high-dimensional sensorimotor spaces, with limited data and sparse rewards.",
"title": ""
},
{
"docid": "448be7422a2c4fe5ba4858311a52a51a",
"text": "Every organization is associated with a huge amount of information which is most valuable. Data is important and so it should be consistent, accurate and correct. Today many approaches are used to protect data as well as networks from attackers (attacks like SQLIA and brute-force attacks). One way to make data more secure is using an Intrusion Detection System (IDS). Much research has been done in the intrusion detection field, but it has mainly concentrated on networks and operating systems. This approach is for databases, so that it will prevent data loss and maintain consistency and accuracy. Database security research is concerned with the protection of databases from unauthorized access. The unauthorized access may be in the form of execution of malicious transactions, and this may break the integrity of the system. Banking is one of the sectors suffering million-dollar losses only because of these unauthorized activities and malicious transactions. So, it is today's demand to detect malicious transactions and also to provide some recommendations. In this paper, we provide a detection system for the real-world problem of intrusion detection in the banking system, and we give some preventive measures to avoid or reduce future attacks. In order to detect malicious transactions, we used a data mining algorithm for framing a data dependency miner for our banking database IDS. Our approach extracts the read-write dependency rules, and then these rules are used to check whether an incoming transaction is malicious or not.",
"title": ""
},
{
"docid": "a774567d957ed0ea209b470b8eced563",
"text": "The vulnerability of the nervous system to advancing age is all too often manifest in neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. In this review article we describe evidence suggesting that two dietary interventions, caloric restriction (CR) and intermittent fasting (IF), can prolong the health-span of the nervous system by impinging upon fundamental metabolic and cellular signaling pathways that regulate life-span. CR and IF affect energy and oxygen radical metabolism, and cellular stress response systems, in ways that protect neurons against genetic and environmental factors to which they would otherwise succumb during aging. There are multiple interactive pathways and molecular mechanisms by which CR and IF benefit neurons including those involving insulin-like signaling, FoxO transcription factors, sirtuins and peroxisome proliferator-activated receptors. These pathways stimulate the production of protein chaperones, neurotrophic factors and antioxidant enzymes, all of which help cells cope with stress and resist disease. A better understanding of the impact of CR and IF on the aging nervous system will likely lead to novel approaches for preventing and treating neurodegenerative disorders.",
"title": ""
},
{
"docid": "351faf9d58bd2a2010766acff44dadbc",
"text": "Although speakers of Arabic number more than two hundred million, the efforts devoted to producing computational Arabic linguistic resources are very limited, especially in the field of computational Arabic lexicons. Most of the existing efforts were not originally designed for Arabic but for foreign languages, and they are therefore insufficient to meet the needs of the Arab community. This research aims to present a proposed model for a computational lexicon built on ontology, a modern technique underlying the Semantic Web that is concerned with the semantic representation of concepts and the relations among them in a given domain. The model was constructed on the basis of the theory of semantic fields, well known in linguistics, and the data on which it was built were drawn from the temporal expressions of the Holy Quran, which represents the most refined attainment of the Arabic language. The availability of such a model will be useful for computational applications of the Arabic language. This paper presents in detail the methodology for building the model and the results that were obtained.",
"title": ""
},
{
"docid": "6b1a1c36fa583391eb8b142368837bc3",
"text": "In this paper, we present the design and simulation of a compact grid array microstrip patch antenna. In the design of the antenna, an RT/duroid 5880 substrate having relative permittivity, thickness and loss tangent of 2.2, 1.57 mm and 0.0009 respectively has been used. The simulated antenna performance was obtained with Computer Simulation Technology Microwave Studio (CST MWS). The antenna performance was investigated by analyzing its return loss (S11), radiation pattern, and voltage standing wave ratio (VSWR) parameters. The simulated S11 parameter has shown that the antenna operates in the Industrial, Scientific and Medical (ISM) band and for Wireless Body Area Network (WBAN) applications at four resonance frequencies, 2.45 GHz (ISM), 6.25 GHz, 8.25 GHz and 10.45 GHz ultra-wideband (UWB), with bandwidth > 500 MHz (S11 < −10 dB). The antenna directivity increases towards higher frequencies. A VSWR of less than 2 is also successfully achieved in the resonance frequency bands. It has been observed that the simulated values of the antenna are suitable for WBAN applications.",
"title": ""
},
{
"docid": "84a187b1e5331c4e7eb349c8b1358f14",
"text": "We describe the maximum-likelihood parameter estimation problem and how the ExpectationMaximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.",
"title": ""
},
{
"docid": "510a43227819728a77ff0c7fa06fa2d0",
"text": "The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases.",
"title": ""
},
{
"docid": "da5339bb74d6af2bfa7c8f46b4f50bb3",
"text": "Conversational agents are exploding in popularity. However, much work remains in the area of non goal-oriented conversations, despite significant growth in research interest over recent years. To advance the state of the art in conversational AI, Amazon launched the Alexa Prize, a 2.5-million dollar university competition where sixteen selected university teams built conversational agents to deliver the best social conversational experience. Alexa Prize provided the academic community with the unique opportunity to perform research with a live system used by millions of users. The subjectivity associated with evaluating conversations is key element underlying the challenge of building non-goal oriented dialogue systems. In this paper, we propose a comprehensive evaluation strategy with multiple metrics designed to reduce subjectivity by selecting metrics which correlate well with human judgement. The proposed metrics provide granular analysis of the conversational agents, which is not captured in human ratings. We show that these metrics can be used as a reasonable proxy for human judgment. We provide a mechanism to unify the metrics for selecting the top performing agents, which has also been applied throughout the Alexa Prize competition. To our knowledge, to date it is the largest setting for evaluating agents with millions of conversations and hundreds of thousands of ratings from users. We believe that this work is a step towards an automatic evaluation process for conversational AIs.",
"title": ""
},
{
"docid": "114ec493a4b0b26c643a49bc0cc3c9c7",
"text": "Automatic emotion recognition has attracted great interest and numerous solutions have been proposed, most of which focus either individually on facial expression or acoustic information. While more recent research has considered multimodal approaches, individual modalities are often combined only by simple fusion at the feature and/or decision-level. In this paper, we introduce a novel approach using 3-dimensional convolutional neural networks (C3Ds) to model the spatio-temporal information, cascaded with multimodal deep-belief networks (DBNs) that can represent the audio and video streams. Experiments conducted on the eNTERFACE multimodal emotion database demonstrate that this approach leads to improved multimodal emotion recognition performance and significantly outperforms recent state-of-the-art proposals.",
"title": ""
},
{
"docid": "69fb72937745829046379800649b4f6f",
"text": "For a plane wave incident on either a Luneburg lens or a modified Luneburg lens, the magnitude and phase of the transmitted electric field are calculated as a function of the scattering angle in the context of ray theory. It is found that the ray trajectory and the scattered intensity are not uniformly convergent in the vicinity of edge ray incidence on a Luneburg lens, which corresponds to the semiclassical phenomenon of orbiting. In addition, it is found that rays transmitted through a large-focal-length modified Luneburg lens participate in a far-zone rainbow, the details of which are exactly analytically soluble in ray theory. Using these results, the Airy theory of the modified Luneburg lens is derived and compared with the Airy theory of the rainbows of a homogeneous sphere.",
"title": ""
},
{
"docid": "43741bb21c47889b7b0d8de372a4dacd",
"text": "Indoor localization or zonification in disaster-affected settings is a challenging research problem. Existing studies encompass localization and tracking of first-responders or fire fighters using wireless sensor networks. In addition to that, fast evacuation, routing, and planning have also been proposed. However, the problem of locating survivors or victims is yet to be explored to its full potential. State-of-the-art literature often employs infrastructure-dependent solutions, for example, WiFi localization using WiFi access points exploiting fingerprinting techniques, Pedestrian Dead Reckoning (PDR) starting from known locations, etc. Owing to the unpredictable and dynamic nature of disaster-affected environments, infrastructure-dependent solutions are seldom useful. Therefore, in this study, we propose an ad hoc WiFi zonification technique (named AWZone) that is independent of any infrastructural settings. AWZone attempts to perform localization by exploiting commodity smartphones as beaconing devices and successively searching and narrowing down the search space. We perform two testbed experiments. The results reveal that, for a single survivor or victim, AWZone can identify the search space and estimate a location with an approximate 1.5 m localization error by eliminating incorrect zones from a set of possible results.",
"title": ""
},
{
"docid": "ba457819a7375c5dfee9ab870c56cc55",
"text": "A biometric system is vulnerable to a variety of attacks aimed at undermining the integrity of the authentication process. These attacks are intended to either circumvent the security afforded by the system or to deter the normal functioning of the system. We describe the various threats that can be encountered by a biometric system. We specifically focus on attacks designed to elicit information about the original biometric data of an individual from the stored template. A few algorithms presented in the literature are discussed in this regard. We also examine techniques that can be used to deter or detect these attacks. Furthermore, we provide experimental results pertaining to a hybrid system combining biometrics with cryptography, that converts traditional fingerprint templates into novel cryptographic structures.",
"title": ""
},
{
"docid": "54ef3b0ba6c2ac7830c78b828e58299f",
"text": "Deep Learning has been applied successfully to speech processing. In this paper we propose an architecture for speech synthesis using multiple speakers. Some hidden layers are shared by all the speakers, while there is a specific output layer for each speaker. Objective and perceptual experiments prove that this scheme produces much better results in comparison with single speaker model. Moreover, we also tackle the problem of speaker adaptation by adding a new output branch to the model and successfully training it without the need of modifying the base optimized model. This fine tuning method achieves better results than training the new speaker from scratch with its own model.",
"title": ""
},
{
"docid": "a4463088646e47825aa4b7cb05b51460",
"text": "Double-gate (DG) transistors have emerged as promising devices for nano-scale circuits due to their better scalability compared to bulk CMOS. Among the various types of DG devices, quasi-planar SOI FinFETs are easier to manufacture compared to planar double-gate devices. DG devices with independent gates (separate contacts to back and front gates) have recently been developed. DG devices with symmetric and asymmetric gates have also been demonstrated. Such device options have direct implications at the circuit level. Independent control of front and back gate in DG devices can be effectively used to improve performance and reduce power in sub-50nm circuits. Independent gate control can be used to merge parallel transistors in noncritical paths. This results in reduction in the effective switching capacitance and hence power dissipation. We show a variety of circuits in logic and memory that can benefit from independent gate operation of DG devices. As examples, we show the benefit of independent gate operation in circuits such as dynamic logic circuits, Schmitt triggers, sense amplifiers, and SRAM cells. In addition to independent gate option, we also investigate the usefulness of asymmetric devices and the impact of width quantization and process variations on circuit design.",
"title": ""
},
{
"docid": "69bfc5edab903692887371464d6eecb0",
"text": "In recent days text summarization had tremendous growth in all languages, especially in India regional languages. Yet the performance of such system needs improvement. This paper proposes an extractive Malayalam summarizer which reduces redundancy in summarized content and meaning of sentences are considered for summary generation. A semantic graph is created for entire document and summary generated by reducing graph using minimal spanning tree algorithm.",
"title": ""
},
{
"docid": "b756b71200a3d6be92526de18007aa2e",
"text": "This paper describes the result of a thorough analysis and evaluation of the so-called FIWARE platform from a smart application development point of view. FIWARE is the result of a series of wellfunded EU projects that is currently intensively promoted throughout public agencies in Europe and world-wide. The goal was to figure out how services provided by FIWARE facilitate the development of smart applications. It was conducted first by an analysis of the central components that make up the service stack, followed by the implementation of a pilot project that aimed on using as many of these services as possible.",
"title": ""
},
{
"docid": "14f539b7c27aeb96025045a660416e39",
"text": "This paper describes a method for the automatic self-calibration of a 3D Laser sensor. We wish to acquire crisp point clouds and so we adopt a measure of crispness to capture point cloud quality. We then pose the calibration problem as the task of maximising point cloud quality. Concretely, we use Rényi Quadratic Entropy to measure the degree of organisation of a point cloud. By expressing this quantity as a function of key unknown system parameters, we are able to deduce a full calibration of the sensor via an online optimisation. Beyond details on the sensor design itself, we fully describe the end-to-end intrinsic parameter calibration process and the estimation of the clock skews between the constituent microprocessors. We analyse performance using real and simulated data and demonstrate robust performance over thirty test sites.",
"title": ""
}
] |
scidocsrr
|
a4183290852eeff610385e7ca06ba566
|
Action Permissibility in Deep Reinforcement Learning and Application to Autonomous Driving
|
[
{
"docid": "2502fc02f09be72d138275a7ac41d8bc",
"text": "This manual describes the competition software for the Simulated Car Racing Championship, an international competition held at major conferences in the field of Evolutionary Computation and in the field of Computational Intelligence and Games. It provides an overview of the architecture, the instructions to install the software and to run the simple drivers provided in the package, the description of the sensors and the actuators.",
"title": ""
},
{
"docid": "be35c342291d4805d2a5333e31ee26d6",
"text": "References • We study efficient exploration in reinforcement learning. • Most provably-efficient learning algorithms introduce optimism about poorly understood states and actions. • Motivated by potential advantages relative to optimistic algorithms, we study an alternative approach: posterior sampling for reinforcement learning (PSRL). • This is the extension of the Thompson sampling algorithm for multi-armed bandit problems to reinforcement learning. • We establish the first regret bounds for this algorithm. Conceptually simple, separates algorithm from analysis: • PSRL selects policies according to the probability they are optimal without need for explicit construction of confidence sets. • UCRL2 bounds error in each s, a separately, which allows for worst-case mis-estimation to occur simultaneously in every s, a . • We believe this will make PSRL more statistically efficient.",
"title": ""
}
] |
[
{
"docid": "4d1ea9da68cc3498b413371f12c90433",
"text": "Transfer Learning (TL) plays a crucial role when a given dataset has insufficient labeled examples to train an accurate model. In such scenarios, the knowledge accumulated within a model pre-trained on a source dataset can be transferred to a target dataset, resulting in the improvement of the target model. Though TL is found to be successful in the realm of imagebased applications, its impact and practical use in Natural Language Processing (NLP) applications is still a subject of research. Due to their hierarchical architecture, Deep Neural Networks (DNN) provide flexibility and customization in adjusting their parameters and depth of layers, thereby forming an apt area for exploiting the use of TL. In this paper, we report the results and conclusions obtained from extensive empirical experiments using a Convolutional Neural Network (CNN) and try to uncover thumb rules to ensure a successful positive transfer. In addition, we also highlight the flawed means that could lead to a negative transfer. We explore the transferability of various layers and describe the effect of varying hyper-parameters on the transfer performance. Also, we present a comparison of accuracy value and model size against state-of-the-art methods. Finally, we derive inferences from the empirical results and provide best practices to achieve a successful positive transfer.",
"title": ""
},
{
"docid": "bc2568e7b4bfaa3aebf424ecaad48c10",
"text": "With the increasing connection density of ICs, the bump pitch is growing smaller and smaller. The limitations of the conventional solder bumps are becoming more and more obvious due to the spherical geometry of the solder bumps. A novel interconnect structure - copper pillar bump with the structure of a non-reflowable copper pillar and a reflowable solder cap is one of the solutions to the problem. The scope of this paper covers flip chip assembly of the copper pillar bump soldered to lead free flip chip solder on the SAC substrate with bump pitch of 150mum. Reliability study result including high temperature storage (HTS) and temperature cycling (TC) would be detailed discussed in this paper.",
"title": ""
},
{
"docid": "1b22c3d5bb44340fcb66a1b44b391d71",
"text": "The contrast in real world scenes is often beyond what consumer cameras can capture. For these situations, High Dynamic Range (HDR) images can be generated by taking multiple exposures of the same scene. When fusing information from different images, however, the slightest change in the scene can generate artifacts which dramatically limit the potential of this solution. We present a technique capable of dealing with a large amount of movement in the scene: we find, in all the available exposures, patches consistent with a reference image previously selected from the stack. We generate the HDR image by averaging the radiance estimates of all such regions and we compensate for camera calibration errors by removing potential seams. We show that our method works even in cases when many moving objects cover large regions of the scene.",
"title": ""
},
{
"docid": "c06c13af6d89c66e2fa065534bfc2975",
"text": "Complex foldings of the vaginal wall are unique to some cetaceans and artiodactyls and are of unknown function(s). The patterns of vaginal length and cumulative vaginal fold length were assessed in relation to body length and to each other in a phylogenetic context to derive insights into functionality. The reproductive tracts of 59 female cetaceans (20 species, 6 families) were dissected. Phylogenetically-controlled reduced major axis regressions were used to establish a scaling trend for the female genitalia of cetaceans. An unparalleled level of vaginal diversity within a mammalian order was found. Vaginal folds varied in number and size across species, and vaginal fold length was positively allometric with body length. Vaginal length was not a significant predictor of vaginal fold length. Functional hypotheses regarding the role of vaginal folds and the potential selection pressures that could lead to evolution of these structures are discussed. Vaginal folds may present physical barriers, which obscure the pathway of seawater and/or sperm travelling through the vagina. This study contributes broad insights to the evolution of reproductive morphology and aquatic adaptations and lays the foundation for future functional morphology analyses.",
"title": ""
},
{
"docid": "089e1d2d96ae4ba94ac558b6cdccd510",
"text": "HTTP Streaming is a recent topic in multimedia communications with on-going standardization activities, especially with the MPEG DASH standard which covers on demand and live services. One of the main issues in live services deployment is the reduction of the overall latency. Low or very low latency streaming is still a challenge. In this paper, we push the use of DASH to its limits with regards to latency, down to fragments being only one frame, and evaluate the overhead introduced by that approach and the combination of: low latency video coding techniques, in particular Gradual Decoding Refresh; low latency HTTP streaming, in particular using chunked-transfer encoding; and associated ISOBMF packaging. We experiment DASH streaming using these techniques in local networks to measure the actual end-to-end latency, as low as 240 milliseconds, for an encoding and packaging overhead in the order of 13% for HD sequences and thus validate the feasibility of very low latency DASH live streaming in local networks.",
"title": ""
},
{
"docid": "c91ce9eb908d5a0fccc980f306ec0931",
"text": "Text Mining has become an important research area. Text Mining is the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources. In this paper, a Survey of Text Mining techniques and applications have been s presented.",
"title": ""
},
{
"docid": "9cbf4d0843196b1dcada6f60c0d0c2e8",
"text": "In this paper we describe a novel method to integrate interactive visual analysis and machine learning to support the insight generation of the user. The suggested approach combines the vast search and processing power of the computer with the superior reasoning and pattern recognition capabilities of the human user. An evolutionary search algorithm has been adapted to assist in the fuzzy logic formalization of hypotheses that aim at explaining features inside multivariate, volumetric data. Up to now, users solely rely on their knowledge and expertise when looking for explanatory theories. However, it often remains unclear whether the selected attribute ranges represent the real explanation for the feature of interest. Other selections hidden in the large number of data variables could potentially lead to similar features. Moreover, as simulation complexity grows, users are confronted with huge multidimensional data sets making it almost impossible to find meaningful hypotheses at all. We propose an interactive cycle of knowledge-based analysis and automatic hypothesis generation. Starting from initial hypotheses, created with linking and brushing, the user steers a heuristic search algorithm to look for alternative or related hypotheses. The results are analyzed in information visualization views that are linked to the volume rendering. Individual properties as well as global aggregates are visually presented to provide insight into the most relevant aspects of the generated hypotheses. This novel approach becomes computationally feasible due to a GPU implementation of the time-critical parts in the algorithm. A thorough evaluation of search times and noise sensitivity as well as a case study on data from the automotive domain substantiate the usefulness of the suggested approach.",
"title": ""
},
{
"docid": "339c367d71b4b51ad24aa59799b13416",
"text": "One of the biggest challenges of the current big data landscape is our inability to process vast amounts of information in a reasonable time. In this work, we explore and compare two distributed computing frameworks implemented on commodity cluster architectures: MPI/OpenMP on Beowulf that is high-performance oriented and exploits multi-machine/multicore infrastructures, and Apache Spark on Hadoop which targets iterative algorithms through in-memory computing. We use the Google Cloud Platform service to create virtual machine clusters, run the frameworks, and evaluate two supervised machine learning algorithms: KNN and Pegasos SVM. Results obtained from experiments with a particle physics data set show MPI/OpenMP outperforms Spark by more than one order of magnitude in terms of processing speed and provides more consistent performance. However, Spark shows better data management infrastructure and the possibility of dealing with other aspects such as node failure and data replication.",
"title": ""
},
{
"docid": "eb639439559f3e4e3540e3e98de7a741",
"text": "This paper presents a deformable model for automatically segmenting brain structures from volumetric magnetic resonance (MR) images and obtaining point correspondences, using geometric and statistical information in a hierarchical scheme. Geometric information is embedded into the model via a set of affine-invariant attribute vectors, each of which characterizes the geometric structure around a point of the model from a local to a global scale. The attribute vectors, in conjunction with the deformation mechanism of the model, warrant that the model not only deforms to nearby edges, as is customary in most deformable surface models, but also that it determines point correspondences based on geometric similarity at different scales. The proposed model is adaptive in that it initially focuses on the most reliable structures of interest, and gradually shifts focus to other structures as those become closer to their respective targets and, therefore, more reliable. The proposed techniques have been used to segment boundaries of the ventricles, the caudate nucleus, and the lenticular nucleus from volumetric MR images.",
"title": ""
},
{
"docid": "5b0d5ebe7666334b09a1136c1cb2d8e4",
"text": "In this paper, lesion areas affected by anthracnose are segmented using segmentation techniques, graded based on percentage of affected area and neural network classifier is used to classify normal and anthracnose affected on fruits. We have considered three types of fruit namely mango, grape and pomegranate for our work. The developed processing scheme consists of two phases. In the first phase, segmentation techniques namely thresholding, region growing, K-means clustering and watershed are employed for separating anthracnose affected lesion areas from normal area. Then these affected areas are graded by calculating the percentage of affected area. In the second phase texture features are extracted using Runlength Matrix. These features are then used for classification purpose using ANN classifier. We have conducted experimentation on a dataset of 600 fruits’ image samples. The classification accuracies for normal and affected anthracnose fruit types are 84.65% and 76.6% respectively. The work finds application in developing a machine vision system in horticulture field.",
"title": ""
},
{
"docid": "fdc18ccdccefc1fd9c3f79daf549f015",
"text": "An overview of the current design practices in the field of Renewable Energy (RE) is presented; also paper delineates the background to the development of unique and novel techniques for power generation using the kinetic energy of tidal streams and other marine currents. Also this study focuses only on vertical axis tidal turbine. Tidal stream devices have been developed as an alternative method of extracting the energy from the tides. This form of tidal power technology poses less threat to the environment and does not face the same limiting factors associated with tidal barrage schemes, therefore making it a more feasible method of electricity production. Large companies are taking interest in this new source of power. There is a rush to research and work with this new energy source. Marine scientists are looking into how much these will affect the environment, while engineers are developing turbines that are harmless for the environment. In addition, the progression of technological advancements tracing several decades of R & D efforts on vertical axis turbines is highlighted.",
"title": ""
},
{
"docid": "66cd10e39a91fb421d1145b2ebe7246c",
"text": "Previous research suggests that heterosexual women's sexual arousal patterns are nonspecific; heterosexual women demonstrate genital arousal to both preferred and nonpreferred sexual stimuli. These patterns may, however, be related to the intense and impersonal nature of the audiovisual stimuli used. The current study investigated the gender specificity of heterosexual women's sexual arousal in response to less intense sexual stimuli, and also examined the role of relationship context on both women's and men's genital and subjective sexual responses. Assessments were made of 43 heterosexual women's and 9 heterosexual men's genital and subjective sexual arousal to audio narratives describing sexual or neutral encounters with female and male strangers, friends, or long-term relationship partners. Consistent with research employing audiovisual sexual stimuli, men demonstrated a category-specific pattern of genital and subjective arousal with respect to gender, while women showed a nonspecific pattern of genital arousal, yet reported a category-specific pattern of subjective arousal. Heterosexual women's nonspecific genital response to gender cues is not a function of stimulus intensity or relationship context. Relationship context did significantly affect women's genital sexual arousal--arousal to both female and male friends was significantly lower than to the stranger and long-term relationship contexts--but not men's. These results suggest that relationship context may be a more important factor in heterosexual women's physiological sexual response than gender cues.",
"title": ""
},
{
"docid": "f0a82f428ac508351ffa7b97bb909b60",
"text": "Automated Teller Machines (ATMs) can be considered among one of the most important service facilities in the banking industry. The investment in ATMs and the impact on the banking industry is growing steadily in every part of the world. The banks take into consideration many factors like safety, convenience, visibility, and cost in order to determine the optimum locations of ATMs. Today, ATMs are not only available in bank branches but also at retail locations. Another important factor is the cash management in ATMs. A cash demand model for every ATM is needed in order to have an efficient cash management system. This forecasting model is based on historical cash demand data which is highly related to the ATMs location. So, the location and the cash management problem should be considered together. This paper provides a general review on studies, efforts and development in ATMs location and cash management problem. Keywords—ATM location problem, cash management problem, ATM cash replenishment problem, literature review in ATMs.",
"title": ""
},
{
"docid": "dda8427a6630411fc11e6d95dbff08b9",
"text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.",
"title": ""
},
{
"docid": "3ef8c2f3b2c18a91c23dad4f4cdd0e43",
"text": "Skeleton-based human action recognition has attracted a lot of research attention during the past few years. Recent works attempted to utilize recurrent neural networks to model the temporal dependencies between the 3D positional configurations of human body joints for better analysis of human activities in the skeletal data. The proposed work extends this idea to spatial domain as well as temporal domain to better analyze the hidden sources of action-related information within the human skeleton sequences in both of these domains simultaneously. Based on the pictorial structure of Kinect's skeletal data, an effective tree-structure based traversal framework is also proposed. In order to deal with the noise in the skeletal data, a new gating mechanism within LSTM module is introduced, with which the network can learn the reliability of the sequential data and accordingly adjust the effect of the input data on the updating procedure of the long-term context representation stored in the unit's memory cell. Moreover, we introduce a novel multi-modal feature fusion strategy within the LSTM unit in this paper. The comprehensive experimental results on seven challenging benchmark datasets for human action recognition demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "ac65c09468cd88765009abe49d9114cf",
"text": "It is known that head gesture and brain activity can reflect some human behaviors related to a risk of accident when using machine-tools. The research presented in this paper aims at reducing the risk of injury and thus increase worker safety. Instead of using camera, this paper presents a Smart Safety Helmet (SSH) in order to track the head gestures and the brain activity of the worker to recognize anomalous behavior. Information extracted from SSH is used for computing risk of an accident (a safety level) for preventing and reducing injuries or accidents. The SSH system is an inexpensive, non-intrusive, non-invasive, and non-vision-based system, which consists of an Inertial Measurement Unit (IMU) and dry EEG electrodes. A haptic device, such as vibrotactile motor, is integrated to the helmet in order to alert the operator when computed risk level (fatigue, high stress or error) reaches a threshold. Once the risk level of accident breaks the threshold, a signal will be sent wirelessly to stop the relevant machine tool or process.",
"title": ""
},
{
"docid": "8e9c65aea02ec48c96f74ae0407582e6",
"text": "With the wide penetration of mobile internet, social networking (SN) systems are becoming increasingly popular in the developing world. However, most SN sites are text heavy, and are therefore unusable by low-literate populations. Here we ask what would an SN application for low-literate users look like and how would it be used? We designed and deployed KrishiPustak, an audio-visual SN mobile application for low-literate farming populations in rural India. Over a four month deployment, 306 farmers registered through the phones of eight agricultural mediators making 514 posts and 180 replies. We conducted interviews with farmers and mediators and analyzed the content to understand system usage and to drive iterative design. The context of mediated use and agricultural framing had a powerful impact on system understanding (what it was for) and usage. Overall, KrishiPustak was useful and usable, but none-the-less we identify a number of design recommendations for similar SN systems.",
"title": ""
},
{
"docid": "78e631aceb9598767289c86ace415e2b",
"text": "We present the Balloon family of password hashing functions. These are the first cryptographic hash functions with proven space-hardness properties that: (i) use a password-independent access pattern, (ii) build exclusively upon standard cryptographic primitives, and (iii) are fast enough for real-world use. Space-hard functions require a large amount of working space to evaluate efficiently and, when used for password hashing, they dramatically increase the cost of offline dictionary attacks. The central technical challenge of this work was to devise the graph-theoretic and linear-algebraic techniques necessary to prove the space-hardness properties of the Balloon functions (in the random-oracle model). To motivate our interest in security proofs, we demonstrate that it is possible to compute Argon2i, a recently proposed space-hard function that lacks a formal analysis, in less than the claimed required space with no increase in the computation time.",
"title": ""
},
{
"docid": "127b8dfb562792d02a4c09091e09da90",
"text": "Current approaches to conservation and natural-resource management often focus on single objectives, resulting in many unintended consequences. These outcomes often affect society through unaccounted-for ecosystem services. A major challenge in moving to a more ecosystem-based approach to management that would avoid such societal damages is the creation of practical tools that bring a scientifically sound, production function-based approach to natural-resource decision making. A new set of computer-based models is presented, the Integrated Valuation of Ecosystem Services and Tradeoffs tool (InVEST) that has been designed to inform such decisions. Several of the key features of these models are discussed, including the ability to visualize relationships among multiple ecosystem services and biodiversity, the ability to focus on ecosystem services rather than biophysical processes, the ability to project service levels and values in space, sensitivity to manager-designed scenarios, and flexibility to deal with data and knowledge limitations. Sample outputs of InVEST are shown for two case applications; the Willamette Basin in Oregon and the Amazon Basin. Future challenges relating to the incorporation of social data, the projection of social distributional effects, and the design of effective policy mechanisms are discussed.",
"title": ""
},
{
"docid": "993cc233ad132a71c2fe093e267e4876",
"text": "-Deep learning has been applied to camera relocalization, in particular, PoseNet and its extended work are the convolutional neural networks which regress the camera pose from a single image. However there are many problems, one of them is expensive parameter selection. In this paper, we directly explore the three Euler angles as the orientation representation in the camera pose regressor. There is no need to select the parameter, which is not tolerant in the previous works. Experimental results on the 7 Scenes datasets and the King’s College dataset demonstrate that it has competitive performances.",
"title": ""
}
] |
scidocsrr
|
e6d610337df86ea3b88bb6468b94f6ff
|
The Depression Anxiety Stress Scales (DASS): normative data and latent structure in a large non-clinical sample.
|
[
{
"docid": "f84f279b6ef3b112a0411f5cba82e1b0",
"text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed",
"title": ""
}
] |
[
{
"docid": "ecad85e4f9dbefd8d51313eeefeb8246",
"text": "T his report is a concise review of current knowledge of the structure and function of the intima of the aorta and the major distributing arteries. The main purpose of the review is to delineate normal arterial intima from atherosclerotic lesions and, in particular, to distinguish physiological adaptations from atherosclerotic increases in intimal thickness. To characterize normal intima, including the adaptive intimal thickenings, some of which represent locations in which atherosclerotic lesions are prone to develop, the structure, composition, and functions of the arterial intima in young people as well as in laboratory animals not subjected to known atherogenic stimuli are reviewed. This report on arterial intima is the first in a series of four. The second report will review and define initial, fatty streak, and intermediate types of atherosclerotic lesions, and the third report will review all types of advanced (i.e., potentially clinical and clinical) lesions. The overall objective is to define arterial intima and all types of atherosclerotic lesions, and then to postulate, in a fourth and final report, a valid and up-to-date pathobiological nomenclature and classification of atherosclerotic lesions.",
"title": ""
},
{
"docid": "c824b5274ce6afb54c58fae2dd68ff8f",
"text": "User modeling plays an important role in delivering customized web services to the users and improving their engagement. However, most user models in the literature do not explicitly consider the temporal behavior of users. More recently, continuous-time user modeling has gained considerable attention and many user behavior models have been proposed based on temporal point processes. However, typical point process-based models often considered the impact of peer influence and content on the user participation and neglected other factors. Gamification elements are among those factors that are neglected, while they have a strong impact on user participation in online services. In this article, we propose interdependent multi-dimensional temporal point processes that capture the impact of badges on user participation besides the peer influence and content factors. We extend the proposed processes to model user actions over the community-based question and answering websites, and propose an inference algorithm based on Variational-Expectation Maximization that can efficiently learn the model parameters. Extensive experiments on both synthetic and real data gathered from Stack Overflow show that our inference algorithm learns the parameters efficiently and the proposed method can better predict the user behavior compared to the alternatives.",
"title": ""
},
{
"docid": "3db1505c98ecb39ad11374d1a7a13ca3",
"text": "Distributed Denial-of-Service (DDoS) attacks are usually launched through the botnet, an “army” of compromised nodes hidden in the network. Inferential tools for DDoS mitigation should accordingly enable an early and reliable discrimination of the normal users from the compromised ones. Unfortunately, the recent emergence of attacks performed at the application layer has multiplied the number of possibilities that a botnet can exploit to conceal its malicious activities. New challenges arise, which cannot be addressed by simply borrowing the tools that have been successfully applied so far to earlier DDoS paradigms. In this paper, we offer basically three contributions: 1) we introduce an abstract model for the aforementioned class of attacks, where the botnet emulates normal traffic by continually learning admissible patterns from the environment; 2) we devise an inference algorithm that is shown to provide a consistent (i.e., converging to the true solution as time elapses) estimate of the botnet possibly hidden in the network; and 3) we verify the validity of the proposed inferential strategy on a test-bed environment. Our tests show that, for several scenarios of implementation, the proposed botnet identification algorithm needs an observation time in the order of (or even less than) 1 min to identify correctly almost all bots, without affecting the normal users’ activity.",
"title": ""
},
{
"docid": "919d1554ac7d18d5cb765c0ee808d3a6",
"text": "Pythium species were isolated from seedlings of strawberry with root and crown rot. The isolates were identified as P. helicoides on the basis of morphological characteristics and sequences of the ribosomal DNA internal transcribed spacer regions. In pathogenicity tests, the isolates caused root and crown rot similar to the original disease symptoms. Multiplex PCR was used to survey pathogen occurrence in strawberry production areas of Japan. Pythium helicoides was detected in 11 of 82 fields. The pathogen is distributed over six prefectures.",
"title": ""
},
{
"docid": "f92a7d9451f9d1213e9b1e479a4df006",
"text": "Cet article passe en revue les vingt dernieÁ res anne es de recherche sur la culture et la ne gociation et pre sente les progreÁ s qui ont e te faits, les pieÁ ges dont il faut se de fier et les perspectives pour de futurs travaux. On a remarque que beaucoup de recherches avaient tendance aÁ suivre ces deux modeÁ les implicites: (1) l'influence de la culture sur les strate gies et l'aboutissement de la ne gociation et/ou (2) l'interaction de la culture et d'autres aspects de la situation imme diate sur les re sultats de la ne gociation. Cette recherche a porte sur un grand nombre de cultures et a mis en e vidence plus d'un modeÁ le inte ressant. Nous signalons cependant trois pieÁ ge caracte ristiques de cette litte rature, pieÁ ges qui nous ont handicape s. Tout d'abord, la plupart des travaux se satisfont de de nominations ge ographiques pour de signer les cultures et il est par suite souvent impossible de de terminer les dimensions culturelles qui rendent compte des diffe rences observe es. Ensuite, beaucoup de recherches ignorent les processus psychologiques (c'est-aÁ -dire les motivations et les cognitions) qui sont en jeu dans les ne gociations prenant place dans des cultures diffe rentes si bien que nous apprenons peu de choses aÁ propos de la psychologie de la ne gociation dans des contextes culturels diversifie s. On se heurte ainsi aÁ une « boõà te noire » que les travaux sur la culture et la ne gociation se gardent ge ne ralement d'ouvrir. Enfin, notre travail n'a recense qu'un nombre restreint de variables situationnelles imme diates intervenant dans des ne gociations prenant place dans des cultures diffe rentes; notre compre hension des effets mode rateurs de la culture sur la ne gociation est donc limite e. Nous proposons un troisieÁ me modeÁ le, plus complet, de la culture et de la ne gociation, pre sentons quelques donne es re centes en sa faveur et esquissons quelques perspectives pour l'avenir.",
"title": ""
},
{
"docid": "9d1046d960724c193a29b7f387622c49",
"text": "Optimal cache content placement in a wireless small cell base station (sBS) with limited backhaul capacity is studied. The sBS has a large cache memory and provides content-level selective offloading by delivering high data rate contents to users in its coverage area. The goal of the sBS content controller (CC) is to store the most popular contents in the sBS cache memory such that the maximum amount of data can be fetched directly form the sBS, not relying on the limited backhaul resources during peak traffic periods. If the popularity profile is known in advance, the problem reduces to a knapsack problem. However, it is assumed in this work that, the popularity profile of the files is not known by the CC, and it can only observe the instantaneous demand for the cached content. Hence, the cache content placement is optimised based on the demand history. By refreshing the cache content at regular time intervals, the CC tries to learn the popularity profile, while exploiting the limited cache capacity in the best way possible. Three algorithms are studied for this cache content placement problem, leading to different exploitation-exploration trade-offs. We provide extensive numerical simulations in order to study the time-evolution of these algorithms, and the impact of the system parameters, such as the number of files, the number of users, the cache size, and the skewness of the popularity profile, on the performance. It is shown that the proposed algorithms quickly learn the popularity profile for a wide range of system parameters.",
"title": ""
},
{
"docid": "c70abd8598ef360dc6e9a10f46622003",
"text": "Removal of baseline wander is a crucial step in the signal conditioning stage of photoplethysmography signals. Hence, a method for removing the baseline wander from photoplethysmography based on two-stages of median filtering is proposed in this paper. Recordings from Physionet database are used to validate the proposed method. In this paper, the two-stage moving average filtering is also applied to remove baseline wander in photoplethysmography signals for comparison with our novel two-stage median filtering method. Our experiment results show that the performance of two-stage median filtering method is more effective in removing baseline wander from photoplethysmography signals. This median filtering method effectively improves the cross correlation with minimal distortion of the signal of interest. Although the method is proposed for baseline wander in photoplethysmography signals, it can be applied to other biomedical signals as well.",
"title": ""
},
{
"docid": "7e0815abae3af4d7bd5737bb004b5010",
"text": "The Neonatal Intensive Care Unit (NICU) represents a complex and multi-in/output context aimed at monitoring and controlling biological signals and parameters in premature newborns. This paper details some methodological and design options for developing technologies that allow end-user composition and control through new approaches that integrate wearable monitoring, pervasive and unobtrusive computing research that already are introducing new perspectives in a wide range of applications. These options enhance biosignals monitoring capabilities and provide consistent user experiences in environments where different devices, services and processes typically co-exist. In particular we describe the notion of assemblies of monitoring devices, interpreted as the combination of sensors, tools and services in a distributed monitoring environment where they interact. We report on the importance of flexibility and user control in the use of such technological assemblies in a NICU, describing a prototype and preliminary results of such monitoring system.",
"title": ""
},
{
"docid": "5591247b2e28f436da302757d3f82122",
"text": "This paper proposes LPRNet end-to-end method for Automatic License Plate Recognition without preliminary character segmentation. Our approach is inspired by recent breakthroughs in Deep Neural Networks, and works in real-time with recognition accuracy up to 95% for Chinese license plates: 3 ms/plate on nVIDIA R © GeForceTMGTX 1080 and 1.3 ms/plate on Intel R © CoreTMi7-6700K CPU. LPRNet consists of the lightweight Convolutional Neural Network, so it can be trained in end-to-end way. To the best of our knowledge, LPRNet is the first real-time License Plate Recognition system that does not use RNNs. As a result, the LPRNet algorithm may be used to create embedded solutions for LPR that feature high level accuracy even on challenging Chinese license plates.",
"title": ""
},
{
"docid": "9c1591e811b5983167606728cac2331d",
"text": "Persuasive games and gamified systems are effective tools for motivating behavior change using various persuasive strategies. Research has shown that tailoring these systems can increase their efficacy. However, there is little knowledge on how game-based persuasive systems can be tailored to individuals of various personality traits. To advance research in this area, we conducted a large-scale study of 660 participants to investigate how different personalities respond to various persuasive strategies that are used in persuasive health games and gamified systems. Our results reveal that people's personality traits play a significant role in the perceived persuasiveness of different strategies. Conscientious people tend to be motivated by goal setting, simulation, self-monitoring and feedback; people who are more open to experience are more likely to be demotivated by rewards, competition, comparison, and cooperation. We contribute to the CHI community by offering design guidelines for tailoring persuasive games and gamified designs to a particular group of personalities.",
"title": ""
},
{
"docid": "09baf9c55e7ae35bdcf88742ecdc01d5",
"text": "This paper presents the experimental evaluation of a Bluetooth-based positioning system. The method has been implemented in a Bluetooth-capable handheld device. Empirical tests of the developed considered positioning system have been realized in different indoor scenarios. The range estimation of the positioning system is based on an approximation of the relation between the RSSI (Radio Signal Strength Indicator) and the associated distance between sender and receiver. The actual location estimation is carried out by using the triangulation method. The implementation of the positioning system in a PDA (Personal Digital Assistant) has been realized by using the Software Microsoft eMbedded Visual C++ Version 3.0.",
"title": ""
},
{
"docid": "ea5032f4e56361a7568fd3456676f04b",
"text": "Deep learning has recently seen rapid development and received significant attention due to its state-of-the-art performance on previously-thought hard problems. However, because of the internal complexity and nonlinear structure of deep neural networks, the underlying decision making processes for why these models are achieving such performance are challenging and sometimes mystifying to interpret. As deep learning spreads across domains, it is of paramount importance that we equip users of deep learning with tools for understanding when a model works correctly, when it fails, and ultimately how to improve its performance. Standardized toolkits for building neural networks have helped democratize deep learning; visual analytics systems have now been developed to support model explanation, interpretation, debugging, and improvement. We present a survey of the role of visual analytics in deep learning research, which highlights its short yet impactful history and thoroughly summarizes the state-of-the-art using a human-centered interrogative framework, focusing on the Five W's and How (Why, Who, What, How, When, and Where). We conclude by highlighting research directions and open research problems. This survey helps researchers and practitioners in both visual analytics and deep learning to quickly learn key aspects of this young and rapidly growing body of research, whose impact spans a diverse range of domains.",
"title": ""
},
{
"docid": "e0695a671b80b39f51c9f151677433a5",
"text": "Various powerful polyhedral techniques exist to optimize computation intensive programs effectively. Applying these techniques on any non-trivial program is still surprisingly difficult and often not as effective as expected. Most polyhedral tools are limited to a specific programming language. Even for this language, relevant code needs to match specific syntax that rarely appears in existing code. It is therefore hard or even impossible to process existing programs automatically. In addition, most tools target C or OpenCL code, which prevents effective communication with compiler internal optimizers. As a result target architecture specific optimizations are either little effective or not approached at all. In this paper we present Polly, a project to enable polyhedral optimizations in LLVM. Polly automatically detects and transforms relevant program parts in a language-independent and syntactically transparent way. Therefore, it supports programs written in most common programming languages and constructs like C++ iterators, goto based loops and pointer arithmetic. Internally it provides a state-of-the-art polyhedral library with full support for Z-polyhedra, advanced data dependency analysis and support for external optimizers. Polly includes integrated SIMD and OpenMP code generation. Through LLVM, machine code for CPUs and GPU accelerators, C source code and even hardware descriptions can be targeted.",
"title": ""
},
{
"docid": "dcada3c12fb14b454964b97b8541b69d",
"text": "nce ch; n ple iray r. In hue 003 Abstract. We present a comparison between two color equalization algorithms: Retinex, the famous model due to Land and McCann, and Automatic Color Equalization (ACE), a new algorithm recently presented by the authors. These two algorithms share a common approach to color equalization, but different computational models. We introduce the two models focusing on differences and common points. An analysis of their computational characteristics illustrates the way the Retinex approach has influenced ACE structure, and which aspects of the first algorithm have been modified in the second one and how. Their interesting equalization properties, like lightness and color constancy, image dynamic stretching, global and local filtering, and data driven dequantization, are qualitatively and quantitatively presented and compared, together with their ability to mimic the human visual system. © 2004 SPIE and IS&T. [DOI: 10.1117/1.1635366]",
"title": ""
},
{
"docid": "03fcf9cd39c516332be9f10ee948a07f",
"text": "Cloud application performance is heavily reliant on the hit rate of datacenter key-value caches. Key-value caches typically use least recently used (LRU) as their eviction policy, but LRU’s hit rate is far from optimal under real workloads. Prior research has proposed many eviction policies that improve on LRU, but these policies make restrictive assumptions that hurt their hit rate, and they can be difficult to implement efficiently. We introduce least hit density (LHD), a novel eviction policy for key-value caches. LHD predicts each object’s expected hits-per-space-consumed (hit density), filtering objects that contribute little to the cache’s hit rate. Unlike prior eviction policies, LHD does not rely on heuristics, but rather rigorously models objects’ behavior using conditional probability to adapt its behavior in real time. To make LHD practical, we design and implement RankCache, an efficient key-value cache based on memcached. We evaluate RankCache and LHD on commercial memcached and enterprise storage traces, where LHD consistently achieves better hit rates than prior policies. LHD requires much less space than prior policies to match their hit rate, on average 8× less than LRU and 2–3× less than recently proposed policies. Moreover, RankCache requires no synchronization in the common case, improving request throughput at 16 threads by 8× over LRU and by 2× over CLOCK.",
"title": ""
},
{
"docid": "ab101c577fcdefb7ed09b02c563ccdf4",
"text": "Can online trackers and network adversaries de-anonymize web browsing data readily available to them? We show— theoretically, via simulation, and through experiments on real user data—that de-identified web browsing histories can be linked to social media profiles using only publicly available data. Our approach is based on a simple observation: each person has a distinctive social network, and thus the set of links appearing in one’s feed is unique. Assuming users visit links in their feed with higher probability than a random user, browsing histories contain tell-tale marks of identity. We formalize this intuition by specifying a model of web browsing behavior and then deriving the maximum likelihood estimate of a user’s social profile. We evaluate this strategy on simulated browsing histories, and show that given a history with 30 links originating from Twitter, we can deduce the corresponding Twitter profile more than 50% of the time. To gauge the real-world e↵ectiveness of this approach, we recruited nearly 400 people to donate their web browsing histories, and we were able to correctly identify more than 70% of them. We further show that several online trackers are embedded on su ciently many websites to carry out this attack with high accuracy. Our theoretical contribution applies to any type of transactional data and is robust to noisy observations, generalizing a wide range of previous de-anonymization attacks. Finally, since our attack attempts to find the correct Twitter profile out of over 300 million candidates, it is—to our knowledge—the largestscale demonstrated de-anonymization to date. CCS Concepts •Security and privacy ! Pseudonymity, anonymity and untraceability; •Information systems ! 
Online advertising; Social networks; Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. c 2017 ACM. ISBN TDB. DOI: TBD",
"title": ""
},
{
"docid": "de394e291cac1a56cb19d858014bff19",
"text": "The design of antennas for metal-mountable radio-frequency identification tags is driven by a unique set of challenges: cheap, small, low-profile, and conformal structures need to provide reliable operation when tags are mounted on conductive platforms of various shapes and sizes. During the past decade, a tremendous amount of research has been dedicated to meeting these stringent requirements. Currently, the tag-reading ranges of several meters are achieved with flexible-label types of tags. Moreover, a whole spectrum of tag-size performance ratios has been demonstrated through a variety of innovative antenna-design approaches. This article reviews and summarizes the progress made in antennas for metal-mountable tags, and presents future prospects.",
"title": ""
},
{
"docid": "4faa5fd523361d472fc0bea8508c58f8",
"text": "This paper reviews the current state of laser scanning from airborne and terrestrial platforms for geometric reconstruction of object shape and size. The current performance figures of sensor systems are presented in an overview. Next, their calibration and the orientation of the acquired point clouds is discussed. For airborne deployment this is usually one step, whereas in the terrestrial case laboratory calibration and registration of point clouds are (still) two distinct, independent steps. As laser scanning is an active measurement technology, the interaction of the emitted energy with the object surface has influences on the range measurement. This has to be considered in order to explain geometric phenomena in the data. While the problems, e.g. multiple scattering, are understood well, there is currently a lack of remedies. Then, in analogy to the processing chain, segmentation approaches for laser scanning data are reviewed. Segmentation is a task relevant for almost all applications. Likewise, DTM (digital terrain model) reconstruction is relevant for many applications of airborne laser scanning, and is therefore discussed, too. This paper reviews the main processing steps necessary for many applications of laser scanning.",
"title": ""
},
{
"docid": "53b32cdb6c3d511180d8cb194c286ef5",
"text": "Silymarin, a C25 containing flavonoid from the plant Silybum marianum, has been the gold standard drug to treat liver disorders associated with alcohol consumption, acute and chronic viral hepatitis, and toxin-induced hepatic failures since its discovery in 1960. Apart from the hepatoprotective nature, which is mainly due to its antioxidant and tissue regenerative properties, Silymarin has recently been reported to be a putative neuroprotective agent against many neurologic diseases including Alzheimer's and Parkinson's diseases, and cerebral ischemia. Although the underlying neuroprotective mechanism of Silymarin is believed to be due to its capacity to inhibit oxidative stress in the brain, it also confers additional advantages by influencing pathways such as β-amyloid aggregation, inflammatory mechanisms, cellular apoptotic machinery, and estrogenic receptor mediation. In this review, we have elucidated the possible neuroprotective effects of Silymarin and the underlying molecular events, and suggested future courses of action for its acceptance as a CNS drug for the treatment of neurodegenerative diseases.",
"title": ""
}
] |
scidocsrr
|
e775df361e78aab2c410d37c3eccf4c1
|
Just DIAL: DomaIn Alignment Layers for Unsupervised Domain Adaptation
|
[
{
"docid": "a457545baa59e39e6ef6d7e0d2f9c02e",
"text": "The domain adaptation problem in machine learning occurs when the test data generating distribution differs from the one that generates the training data. It is clear that the success of learning under such circumstances depends on similarities between the two data distributions. We study assumptions about the relationship between the two distributions that one needed for domain adaptation learning to succeed. We analyze the assumptions in an agnostic PAC-style learning model for a the setting in which the learner can access a labeled training data sample and an unlabeled sample generated by the test data distribution. We focus on three assumptions: (i) similarity between the unlabeled distributions, (ii) existence of a classifier in the hypothesis class with low error on both training and testing distributions, and (iii) the covariate shift assumption. I.e., the assumption that the conditioned label distribution (for each data point) is the same for both the training and test distributions. We show that without either assumption (i) or (ii), the combination of the remaining assumptions is not sufficient to guarantee successful learning. Our negative results hold with respect to any domain adaptation learning algorithm, as long as it does not have access to target labeled examples. In particular, we provide formal proofs that the popular covariate shift assumption is rather weak and does not relieve the necessity of the other assumptions. We also discuss the intuitively appealing Appearing in Proceedings of the 13 International Conference on Artificial Intelligence and Statistics (AISTATS) 2010, Chia Laguna Resort, Sardinia, Italy. Volume 9 of JMLR: W&CP 9. Copyright 2010 by the authors. paradigm of re-weighting the labeled training sample according to the target unlabeled distribution and show that, somewhat counter intuitively, we show that paradigm cannot be trusted in the following sense. 
There are DA tasks that are indistinguishable as far as the training data goes but in which re-weighting leads to significant improvement in one task while causing dramatic deterioration of the learning success in the other.",
"title": ""
},
{
"docid": "957e103d533b3013e24aebd3617edd87",
"text": "The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks.",
"title": ""
}
] |
[
{
"docid": "14508a81494077406b90632d38e09d44",
"text": "During realistic, continuous perception, humans automatically segment experiences into discrete events. Using a novel model of cortical event dynamics, we investigate how cortical structures generate event representations during narrative perception and how these events are stored to and retrieved from memory. Our data-driven approach allows us to detect event boundaries as shifts between stable patterns of brain activity without relying on stimulus annotations and reveals a nested hierarchy from short events in sensory regions to long events in high-order areas (including angular gyrus and posterior medial cortex), which represent abstract, multimodal situation models. High-order event boundaries are coupled to increases in hippocampal activity, which predict pattern reinstatement during later free recall. These areas also show evidence of anticipatory reinstatement as subjects listen to a familiar narrative. Based on these results, we propose that brain activity is naturally structured into nested events, which form the basis of long-term memory representations.",
"title": ""
},
{
"docid": "f365152720379f8c94398b3d7284f828",
"text": "In this work, we present view-point invariant person re-identification (Re-ID) by multi-modal feature fusion of 3D soft biometric cues. We exploit the MS KinectTM sensor v.2, to collect the skeleton points from the walking subjects and leverage both the anthropometric features and the gait features associated with the person. The key proposals of the paper are two fold: First, we conduct an extensive study of the influence of various features both individually and jointly (by fusion technique), on the person Re-ID. Second, we present an actual demonstration of the view-point invariant Re-ID paradigm, by analysing the subject data collected in different walking directions. Focusing the latter, we further analyse three different categories which we term as pseudo, quasi and full view-point invariant scenarios, and evaluate our system performance under these various scenarios. Initial pilot studies were conducted on a new set of 20 people, collected at the host laboratory. We illustrate, for the first time, gait-based person re-identification with truly view-point invariant behaviour, i.e. the walking direction of the probe sample being not represented in the gallery samples.",
"title": ""
},
{
"docid": "059aed9f2250d422d76f3e24fd62bed8",
"text": "Single case studies led to the discovery and phenomenological description of Gelotophobia and its definition as the pathological fear of appearing to social partners as a ridiculous object (Titze 1995, 1996, 1997). The aim of the present study is to empirically examine the core assumptions about the fear of being laughed at in a sample comprising a total of 863 clinical and non-clinical participants. Discriminant function analysis yielded that gelotophobes can be separated from other shame-based neurotics, non-shamebased neurotics, and controls. Separation was best for statements specifically describing the gelotophobic symptomatology and less potent for more general questions describing socially avoidant behaviors. Factor analysis demonstrates that while Gelotophobia is composed of a set of correlated elements in homogenous samples, overall the concept is best conceptualized as unidimensional. Predicted and actual group membership converged well in a cross-classification (approximately 69% of correctly classified cases). Overall, it can be concluded that the fear of being laughed at varies tremendously among adults and might hold a key to understanding certain forms",
"title": ""
},
{
"docid": "4c0869847079b11ec8e0a6b9714b2d09",
"text": "This paper provides a tutorial overview of the latest generation of passive optical network (PON) technology standards nearing completion in ITU-T. The system is termed NG-PON2 and offers a fiber capacity of 40 Gbit/s by exploiting multiple wavelengths at dense wavelength division multiplexing channel spacing and tunable transceiver technology in the subscriber terminals (ONUs). Here, the focus is on the requirements from network operators that are driving the standards developments and the technology selection prior to standardization. A prestandard view of the main physical layer optical specifications is also given, ahead of final ITU-T approval.",
"title": ""
},
{
"docid": "21916d34fb470601fb6376c4bcd0839a",
"text": "BACKGROUND\nCutibacterium (Propionibacterium) acnes is assumed to play an important role in the pathogenesis of acne.\n\n\nOBJECTIVES\nTo examine if clones with distinct virulence properties are associated with acne.\n\n\nMETHODS\nMultiple C. acnes isolates from follicles and surface skin of patients with moderate to severe acne and healthy controls were characterized by multilocus sequence typing. To determine if CC18 isolates from acne patients differ from those of controls in the possession of virulence genes or lack of genes conducive to a harmonious coexistence the full genomes of dominating CC18 follicular clones from six patients and five controls were sequenced.\n\n\nRESULTS\nIndividuals carried one to ten clones simultaneously. The dominating C. acnes clones in follicles from acne patients were exclusively from the phylogenetic clade I-1a and all belonged to clonal complex CC18 with the exception of one patient dominated by the worldwide-disseminated and often antibiotic resistant clone ST3. The clonal composition of healthy follicles showed a more heterogeneous pattern with follicles dominated by clones representing the phylogenetic clades I-1a, I-1b, I-2 and II. Comparison of follicular CC18 gene contents, allelic versions of putative virulence genes and their promoter regions, and 54 variable-length intragenic and inter-genic homopolymeric tracts showed extensive conservation and no difference associated with the clinical origin of isolates.\n\n\nCONCLUSIONS\nThe study supports that C. acnes strains from clonal complex CC18 and the often antibiotic resistant clone ST3 are associated with acne and suggests that susceptibility of the host rather than differences within these clones may determine the clinical outcome of colonization.",
"title": ""
},
{
"docid": "f5d700e2e53b402ddf036df7ff546db9",
"text": "The present report accomplishes three goals. First, to provide an empirical rationale for placing parental monitoring of children's adaptations as a key construct in development and prevention research. Second, to stimulate more research on parental monitoring and provide an integrative framework for various research traditions as well as developmental periods of interest. Third, to discuss current methodological issues that are developmentally and culturally sensitive and based on sound measurement. Possible intervention and prevention strategies that specifically target parental monitoring are discussed.",
"title": ""
},
{
"docid": "c9d801183e3629e6231f48b180c5ee4e",
"text": "This paper presents a robust watermarking algorithm with informed detection for 3D polygonal meshes. The algorithm is based on our previous algorithm [22] that employs mesh-spectral analysis to modify mesh shapes in their transformed domain. This paper presents extensions to our previous algorithm so that (1) much larger meshes can be watermarked within a reasonable time, and that (2) the watermark is robust against connectivity alteration (e.g., mesh simplification), and that (3) the watermark is robust against attacks that combine similarity transformation with such other attacks as cropping, mesh simplification, and smoothing. Experiment showed that our new watermarks are resistant against mesh simplification and remeshing combined with resection, similarity transformation, and other operations..",
"title": ""
},
{
"docid": "42c560f2f0e5756f608c4d73b224d055",
"text": "Recommendation systems support users in finding items of interest. In this chapter, we introduce the basic approaches of collaborative filtering, contentbased filtering, and knowledge-based recommendation. We first discuss principles of the underlying algorithms based on a running example. Thereafter, we provide an overview of hybrid recommendation approaches which combine basic variants. We conclude this chapter with a discussion of newer algorithmic trends, especially critiquing-based and group recommendation.",
"title": ""
},
{
"docid": "736ee2bed70510d77b1f9bb13b3bee68",
"text": "Yes, they do. This work investigates a perspective for deep learning: whether different normalization layers in a ConvNet require different normalizers. This is the first step towards understanding this phenomenon. We allow each convolutional layer to be stacked before a switchable normalization (SN) that learns to choose a normalizer from a pool of normalization methods. Through systematic experiments in ImageNet, COCO, Cityscapes, and ADE20K, we answer three questions: (a) Is it useful to allow each normalization layer to select its own normalizer? (b) What impacts the choices of normalizers? (c) Do different tasks and datasets prefer different normalizers? Our results suggest that (1) using distinct normalizers improves both learning and generalization of a ConvNet; (2) the choices of normalizers are more related to depth and batch size, but less relevant to parameter initialization, learning rate decay, and solver; (3) different tasks and datasets have different behaviors when learning to select normalizers.",
"title": ""
},
{
"docid": "d63d849c5323cb1c97c15080247982d5",
"text": "Tampereen ammattikorkeakoulu Tampere University of Applied Sciences Degree Programme in International Business JÄRVENSIVU, VEERA: Social Media Marketing Plan for a SME Bachelor's thesis 53 pages, of which appendices 31 pages October 2017 The aim of this bachelor’s thesis was to create an efficient, low-cost social media marketing plan for a small clothing company called Nikitrade. The data gathered for establishing the marketing plan were mainly secondary data consisting of multiple books and articles related to the topic. For qualitative data gathering, interviews and discussions with the company owners were used. Because of the competitive sensitiveness of the subject, the social marketing plan itself is not published. The thesis report includes the important factors for establishing a marketing plan for a small or medium sized enterprise. The marketing plan explains the importance of a thorough analysis of the current situation, both internal and external. It also introduces strategies that should be established in order to create an efficient marketing plan. Lastly, it explains the importance of metrics and measuring the success of reaching the objectives. For discussion, the author of the thesis has gathered the key points of the social media marketing plan she has created. The most important issues of social media marketing are staying consistent in activity, quality and visuals.",
"title": ""
},
{
"docid": "844e63276bb4f160ae30baec9cace21c",
"text": "This paper reviews the development of advanced System-on-Package (SOP) architectures for the compact and low cost wireless RF wireless systems. A compact stacked patch antenna adopting soft-and-hard surface structures and cavity resonator filters using inter-resonance coupling mechanism for V-band applications are presented. A novel ultra-compact 3D integration technology is proposed and utilized for the implementation of a Ku-band VCO module. The high Q-factor inductors fabricated on the Liquid Crystal Polymer based multilayer substrate demonstrate superior performance than conventional organic packages.",
"title": ""
},
{
"docid": "06ff54cb5c44fdc49000f6c1b5a2bf01",
"text": "Ego-disturbances have been a topic in schizophrenia research since the earliest clinical descriptions of the disorder. Manifesting as a feeling that one's \"self,\" \"ego,\" or \"I\" is disintegrating or that the border between one's self and the external world is dissolving, \"ego-disintegration\" or \"dissolution\" is also an important feature of the psychedelic experience, such as is produced by psilocybin (a compound found in \"magic mushrooms\"). Fifteen healthy subjects took part in this placebo-controlled study. Twelve-minute functional MRI scans were acquired on two occasions: subjects received an intravenous infusion of saline on one occasion (placebo) and 2 mg psilocybin on the other. Twenty-two visual analogue scale ratings were completed soon after scanning and the first principal component of these, dominated by items referring to \"ego-dissolution\", was used as a primary measure of interest in subsequent analyses. Employing methods of connectivity analysis and graph theory, an association was found between psilocybin-induced ego-dissolution and decreased functional connectivity between the medial temporal lobe and high-level cortical regions. Ego-dissolution was also associated with a \"disintegration\" of the salience network and reduced interhemispheric communication. Addressing baseline brain dynamics as a predictor of drug-response, individuals with lower diversity of executive network nodes were more likely to experience ego-dissolution under psilocybin. These results implicate MTL-cortical decoupling, decreased salience network integrity, and reduced inter-hemispheric communication in psilocybin-induced ego disturbance and suggest that the maintenance of \"self\"or \"ego,\" as a perceptual phenomenon, may rest on the normal functioning of these systems.",
"title": ""
},
{
"docid": "c46d7018ecca531dad19013496ef95a1",
"text": "A new method of logo detection in document images is proposed in this paper. It is based on the boundary extension of feature rectangles of which the definition is also given in this paper. This novel method takes advantage of a layout assumption that logos have background (white spaces) surrounding it in a document. Compared with other logo detection methods, this new method has the advantage that it is independent on logo shapes and very fast. After the logo candidates are detected, a simple decision tree is used to reduce the false positive from the logo candidate pool. We have tested our method on a public image database involving logos. Experiments show that our method is more precise and robust than the previous methods and is well qualified as an effective assistance in document retrieval.",
"title": ""
},
{
"docid": "e53a8e3e7664f66cce0593ea6f8a2443",
"text": "In real world social networks, there are multiple cascades which are rarely independent. They usually compete or cooperate with each other. Motivated by the reinforcement theory in sociology we leverage the fact that adoption of a user to any behavior is modeled by the aggregation of behaviors of its neighbors. We use a multidimensional marked Hawkes process to model users product adoption and consequently spread of cascades in social networks. The resulting inference problem is proved to be convex and is solved in parallel by using the barrier method. The advantage of the proposed model is twofold; it models correlated cascades and also learns the latent diffusion network. Experimental results on synthetic and two real datasets gathered from Twitter, URL shortening and music streaming services, illustrate the superior performance of the proposed model over the alternatives. Introduction Social networks and virtual communities play a key role in today’s life. People share their thoughts, beliefs, opinions, news, and even their locations in social networks and engage in social interactions by commenting, liking, mentioning and following each other. This virtual world is an ideal place for studying social behaviors and spread of cultural norms (Vespignani 2012), contagion of diseases (Barabasi 2015), advertising and marketing (Valera and Rodriguez 2015) and estimating the culprit in malicious diffusions (Farajtabar et al. 2015a). Among them, the study of information diffusion or more generally dynamics on the network is of crucial importance and can be used in many applications. The trace of information diffusion, virus or infection spread, rumor propagation, and product adoption is usually called cascades. In conventional studies of diffusion networks, individual cascades are mostly considered in isolation, i.e., independent of each other (Rodriguez et al. 2015). 
However in realistic situations, they are rarely independent and can be competitive, when a URL shortening service become popular the others receive less attention; or cooperative, when usage of Google Play Music correlates with that of Youtube due to, for example, simultaneous arrival of new albums (Fig. 1). Modeling multiple cascades which are correlated to each other is a challenging problem. Considerable work have Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 0 200 400 600 Time (hr) 0 50",
"title": ""
},
{
"docid": "3692954147d1a60fb683001bd379047f",
"text": "OBJECTIVE\nThe current study aimed to compare the Philadelphia collar and an open-design cervical collar with regard to user satisfaction and cervical range of motion in asymptomatic adults.\n\n\nDESIGN\nSeventy-two healthy subjects (36 women, 36 men) aged 18 to 29 yrs were recruited for this study. Neck movements, including active flexion, extension, right/left lateral flexion, and right/left axial rotation, were assessed in each subject under three conditions--without wearing a collar and while wearing two different cervical collars--using a dual digital inclinometer. Subject satisfaction was assessed using a five-item self-administered questionnaire.\n\n\nRESULTS\nBoth Philadelphia and open-design collars significantly reduced cervical motions (P < 0.05). Compared with the Philadelphia collar, the open-design collar more greatly reduced cervical motions in three planes and the differences were statistically significant except for limiting flexion. Satisfaction scores for Philadelphia and open-design collars were 15.89 (3.87) and 19.94 (3.11), respectively.\n\n\nCONCLUSION\nBased on the data of the 72 subjects presented in this study, the open-design collar adequately immobilized the cervical spine as a semirigid collar and was considered cosmetically acceptable, at least for subjects aged younger than 30 yrs.",
"title": ""
},
{
"docid": "a7a51eb9cb434a581eac782da559094b",
"text": "An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only “entry point” to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration. We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries.",
"title": ""
},
{
"docid": "49b6bfaa3f681329522b5d8dd1277e97",
"text": "Pipeline-based applications have become an integral part of life. However, knowing that the pipeline systems can be largely deployed in an inaccessible and hazardous environment, active monitoring and frequent inspection of the pipeline systems are highly expensive using the traditional maintenance systems. Robot agents have been considered as an attractive alternative. Although many different types of pipeline exploration robots have been proposed, they were suffered from various limitations. In this paper, we present the design and implementation of a single-moduled fully autonomous mobile pipeline exploration robot, called FAMPER, that can be used for the inspection of 150mm pipelines. This robot consists of four wall-press caterpillars operated by two DC motors each. The speed of each caterpillar is controlled independently to provide steering capability to go through 45 degree elbows, 90 degree elbows, T-branches, and Y-branches. The uniqueness of this paper is to show the opportunity of using 4 caterpillar configuration for superior performance in all types of complex networks of pipelines. The robot system has been developed and experimented in different pipeline layouts.",
"title": ""
},
{
"docid": "372ce38b93c2b3234281e2806aa3bc76",
"text": "Sorting a list of input numbers is one of the most fundamental problems in the field of computer science in general and high-throughput database applications in particular. Although literature abounds with various flavors of sorting algorithms, different architectures call for customized implementations to achieve faster sorting times. This paper presents an efficient implementation and detailed analysis of MergeSort on current CPU architectures. Our SIMD implementation with 128-bit SSE is 3.3X faster than the scalar version. In addition, our algorithm performs an efficient multiway merge, and is not constrained by the memory bandwidth. Our multi-threaded, SIMD implementation sorts 64 million floating point numbers in less than 0.5 seconds on a commodity 4-core Intel processor. This measured performance compares favorably with all previously published results. Additionally, the paper demonstrates performance scalability of the proposed sorting algorithm with respect to certain salient architectural features of modern chip multiprocessor (CMP) architectures, including SIMD width and core-count. Based on our analytical models of various architectural configurations, we see excellent scalability of our implementation with SIMD width scaling up to 16X wider than current SSE width of 128-bits, and CMP core-count scaling well beyond 32 cores. Cycle-accurate simulation of Intel’s upcoming x86 many-core Larrabee architecture confirms scalability of our proposed algorithm.",
"title": ""
},
{
"docid": "536e45f7130aa40625e3119523d2e1de",
"text": "We consider the problem of Simultaneous Localization and Mapping (SLAM) from a Bayesian point of view using the Rao-Blackwellised Particle Filter (RBPF). We focus on the class of indoor mobile robots equipped with only a stereo vision sensor. Our goal is to construct dense metric maps of natural 3D point landmarks for large cyclic environments in the absence of accurate landmark position measurements and reliable motion estimates. Landmark estimates are derived from stereo vision and motion estimates are based on visual odometry. We distinguish between landmarks using the Scale Invariant Feature Transform (SIFT). Our work defers from current popular approaches that rely on reliable motion models derived from odometric hardware and accurate landmark measurements obtained with laser sensors. We present results that show that our model is a successful approach for vision-based SLAM, even in large environments. We validate our approach experimentally, producing the largest and most accurate vision-based map to date, while we identify the areas where future research should focus in order to further increase its accuracy and scalability to significantly larger",
"title": ""
},
{
"docid": "44e0cd40b9a06abd5a4e54524b214dce",
"text": "A large majority of road accidents are relative to driver fatigue, distraction and drowsiness which are widely believed to be the largest contributors to fatalities and severe injuries, either as a direct cause of falling asleep at the wheel or as a contributing factor in lowering the attention and reaction time of a driver in critical situations. Thus to prevent road accidents, a countermeasure device has to be used. This paper illuminates and highlights the various measures that have been studied to detect drowsiness such as vehicle based, physiological based, and behavioural based measures. The main objective is to develop a real time non-contact system which will be able to identify driver’s drowsiness beforehand. The system uses an IR sensitive monochrome camera that detects the position and state of the eyes to calculate the drowsiness of a driver. Once the driver is detected as drowsy, the system will generate warning signals to alert the driver. In case the signal is not re-established the system will shut off the engine to prevent any mishap. Keywords— Drowsiness, Road Accidents, Eye Detection, Face Detection, Blink Pattern, PERCLOS, MATLAB, Arduino Nano",
"title": ""
}
] |
scidocsrr
|
87b8728b5d1ed72862e670538f5b5d11
|
Identifying social roles in reddit using network structure
|
[
{
"docid": "d5142a032ebff4b256beb566273cc41a",
"text": "To understand the structural dynamics of a large-scale social, biological or technological network, it may be useful to discover behavioral roles representing the main connectivity patterns present over time. In this paper, we propose a scalable non-parametric approach to automatically learn the structural dynamics of the network and individual nodes. Roles may represent structural or behavioral patterns such as the center of a star, peripheral nodes, or bridge nodes that connect different communities. Our novel approach learns the appropriate structural role dynamics for any arbitrary network and tracks the changes over time. In particular, we uncover the specific global network dynamics and the local node dynamics of a technological, communication, and social network. We identify interesting node and network patterns such as stationary and non-stationary roles, spikes/steps in role-memberships (perhaps indicating anomalies), increasing/decreasing role trends, among many others. Our results indicate that the nodes in each of these networks have distinct connectivity patterns that are non-stationary and evolve considerably over time. Overall, the experiments demonstrate the effectiveness of our approach for fast mining and tracking of the dynamics in large networks. Furthermore, the dynamic structural representation provides a basis for building more sophisticated models and tools that are fast for exploring large dynamic networks.",
"title": ""
},
{
"docid": "ddb2ba1118e28acf687208bff99ce53a",
"text": "We show that information about social relationships can be used to improve user-level sentiment analysis. The main motivation behind our approach is that users that are somehow \"connected\" may be more likely to hold similar opinions; therefore, relationship information can complement what we can extract about a user's viewpoints from their utterances. Employing Twitter as a source for our experimental data, and working within a semi-supervised framework, we propose models that are induced either from the Twitter follower/followee network or from the network in Twitter formed by users referring to each other using \"@\" mentions. Our transductive learning results reveal that incorporating social-network information can indeed lead to statistically significant sentiment classification improvements over the performance of an approach based on Support Vector Machines having access only to textual features.",
"title": ""
}
] |
[
{
"docid": "ae83a2258907f00500792178dc65340d",
"text": "In this paper, a novel method for lung nodule detection, segmentation and recognition using computed tomography (CT) images is presented. Our contribution consists of several steps. First, the lung area is segmented by active contour modeling followed by some masking techniques to transfer non-isolated nodules into isolated ones. Then, nodules are detected by the support vector machine (SVM) classifier using efficient 2D stochastic and 3D anatomical features. Contours of detected nodules are then extracted by active contour modeling. In this step all solid and cavitary nodules are accurately segmented. Finally, lung tissues are classified into four classes: namely lung wall, parenchyma, bronchioles and nodules. This classification helps us to distinguish a nodule connected to the lung wall and/or bronchioles (attached nodule) from the one covered by parenchyma (solitary nodule). At the end, performance of our proposed method is examined and compared with other efficient methods through experiments using clinical CT images and two groups of public datasets from Lung Image Database Consortium (LIDC) and ANODE09. Solid, non-solid and cavitary nodules are detected with an overall detection rate of 89%; the number of false positive is 7.3/scan and the location of all detected nodules are recognized correctly.",
"title": ""
},
{
"docid": "4f68e4859a717833d214a431b8d796ad",
"text": "Time domain synchronous OFDM (TDS-OFDM) has higher spectral efficiency than cyclic prefix OFDM (CP-OFDM), but suffers from severe performance loss over fast fading channels. In this paper, a novel transmission scheme called time-frequency training OFDM (TFT-OFDM) is proposed. The time-frequency joint channel estimation for TFT-OFDM utilizes the time-domain training sequence without interference cancellation to merely acquire the time delay profile of the channel, while the path coefficients are estimated by using the frequency-domain group pilots. The redundant group pilots only occupy about 1% of the useful subcarriers, thus TFT-OFDM still has much higher spectral efficiency than CP-OFDM by about 10%. Simulation results also demonstrate that TFT-OFDM outperforms CP-OFDM and TDS-OFDM over time-varying channels.",
"title": ""
},
{
"docid": "8ad57ca3fa0063033fae25e4bad0a90e",
"text": "The neural network, using an unsupervised generalized Hebbian algorithm (GHA), is adopted to find the principal eigenvectors of a covariance matrix in different kinds of seismograms. We have shown that the extensive computer results of the principal components analysis (PCA) using the neural net of GHA can extract the information of seismic reflection layers and uniform neighboring traces. The analyzed seismic data are the seismic traces with 20-, 25-, and 30-Hz Ricker wavelets, the fault, the reflection and diffraction patterns after normal moveout (NMO) correction, the bright spot pattern, and the real seismogram at Mississippi Canyon. The properties of high amplitude, low frequency, and polarity reversal can be shown from the projections on the principal eigenvectors. For PCA, a theorem is proposed, which states that adding an extra point along the direction of the existing eigenvector can enhance that eigenvector. The theorem is applied to the interpretation of a fault seismogram and the uniform property of other seismograms. The PCA also provides a significant seismic data compression.",
"title": ""
},
{
"docid": "f334f49a1e21e3278c25ca0d63b2ef8a",
"text": "We show that if (J,,} is a sequence of uniformly LI-bounded functions on a measure space, and if.f, -fpointwise a.e., then lim,,_(I{lf,, 1 -IIf,, fII) If I,' for all 0 < p < oc. This result is also generalized in Theorem 2 to some functionals other than the L P norm, namely I. /( J,, -(f, f) f ) -1 0 for suitablej: C -C and a suitable sequence (fJ}. A brief discussion is given of the usefulness of this result in variational problems.",
"title": ""
},
{
"docid": "a3e8dd1f3fbca95857a96c0635eb60c6",
"text": "Many maximum power point tracking techniques for photovoltaic systems have been developed to maximize the produced energy and a lot of these are well established in the literature. These techniques vary in many aspects as: simplicity, convergence speed, digital or analogical implementation, sensors required, cost, range of effectiveness, and in other aspects. This paper presents a comparative study of ten widely-adopted MPPT algorithms; their performance is evaluated on the energy point of view, by using the simulation tool Simulink®, considering different solar irradiance variations. Key-Words: Maximum power point (MPP), maximum power point tracking (MPPT), photovoltaic (PV), comparative study, PV Converter.",
"title": ""
},
{
"docid": "8e7d3462f93178f6c2901a429df22948",
"text": "This article analyzes China's pension arrangement and notes that China has recently established a universal non-contributory pension plan covering urban non-employed workers and all rural residents, combined with the pension plan covering urban employees already in place. Further, in the latest reform, China has discontinued the special pension plan for civil servants and integrated this privileged welfare class into the urban old-age pension insurance program. With these steps, China has achieved a degree of universalism and integration of its pension arrangement unprecedented in the non-Western world. Despite this radical pension transformation strategy, we argue that the current Chinese pension arrangement represents a case of \"incomplete\" universalism. First, its benefit level is low. Moreover, the benefit level varies from region to region. Finally, universalism in rural China has been undermined due to the existence of the \"policy bundle.\" Additionally, we argue that the 2015 pension reform has created a situation in which the stratification of Chinese pension arrangements has been \"flattened,\" even though it remains stratified to some extent.",
"title": ""
},
{
"docid": "7525b24d3e0c6332cdc3eb58c7677b63",
"text": "OBJECTIVE\nTo compare the efficacy of 2 intensified insulin regimens, continuous subcutaneous insulin infusion (CSII) and multiple daily injections (MDI), by using the short-acting insulin analog lispro in type 1 diabetic patients.\n\n\nRESEARCH DESIGN AND METHODS\nA total of 41 C-peptide-negative type 1 diabetic patients (age 43.5+/-10.3 years; 21 men and 20 women, BMI 24.0+/-2.4 kg/m2, diabetes duration 20.0+/-11.3 years) on intensified insulin therapy (MDI with regular insulin or lispro, n = 9, CSII with regular insulin, n = 32) were included in an open-label randomized crossover study comparing two 4-month periods of intensified insulin therapy with lispro: one period by MDI and the other by CSII. Blood glucose (BG) was monitored before and after each of the 3 meals each day.\n\n\nRESULTS\nThe basal insulin regimen had to be optimized in 75% of the patients during the MDI period (mean number of NPH injections per day = 2.65). HbA1c values were lower when lispro was used in CSII than in MDI (7.89+/-0.77 vs. 8.24+/-0.77%, P<0.001). BG levels were lower with CSII (165+/-27 vs. 175+/-33 mg/dl, P<0.05). The SD of all the BG values (73+/-15 vs. 82+/-18 mg/dl, P<0.01) was lower with CSII. The frequency of hypoglycemic events, defined as BG levels <60 mg/dl, did not differ significantly between the 2 modalities (CSII 3.9+/-4.2 per 14 days vs. MDI 4.3+/-3.9 per 14 days). Mean insulin doses were significantly lower with CSII than with MDI (38.5+/-9.8 vs. 47.3+/-14.9 U/day. respectively, P< 0.0001).\n\n\nCONCLUSIONS\nWhen used with external pumps versus MDI, lispro provides better glycemic control and stability with much lower doses of insulin and does not increase the frequency of hypoglycemic episodes.",
"title": ""
},
{
"docid": "63675958e32335662aac39b6b2a1adec",
"text": "This paper presents a simple, low-cost, hours-long fabrication method for microwave waveguide components of high RF performance. The technique combines 3-D-printed configurations with liquid metal waveguide structures. As a demonstration, a fused deposition modeling multimaterial 3-D consumer-grade printer and liquid gallium were used. A conductive polylactic acid (PLA) waveguide flange was 3-D printed along, in-one-go, with standard PLA for the rectangular waveguide liquid metal enclosures. Microwave WR62 waveguides, resonators, and filters operating in Ku-band were designed, fabricated, and tested. The RF performance of the fabricated waveguide devices is in agreement with the simulations demonstrating better than 1.29 dB/m attenuation in the waveguide and better than 1000 $Q$ -factor for the resonator and the filter at 13 GHz. The fabricated devices demonstrate a new option of an economical fabrication technology for high RF performance microwave waveguide-based devices that can be delivered in hours-time, anywhere, anytime with minimal equipment deployment and investment.",
"title": ""
},
{
"docid": "d9c7549c2fe3541c49d59d7dc6395050",
"text": "In this chapter, we will review the underlying mechanisms for the evolution of wireless communication networks. We will first discuss macro-cellular technologies used in traditional telecommunication systems, and then introduce some micro-cellular technologies as a recent advance in the telecommunications industry. Finally, we will describe existing interworking techniques available in literature and in standardization, including loosely and tightly coupled, I-WLAN and IEEE 802.21. The term macro-cell is used to describe cells with larger sizes. A macro-cell is a cell in mobile phone networks that provide radio coverage served by a high power cellular base station. The antennas for macro-cells are mounted on ground-based masts and other existing structures, at a height that provides a clear view over the surrounding buildings and terrain. Macro-cell base stations have power outputs of typically tens of watts [18]. Most wireless communication systems maintained by traditional mobile network operators are powered by macro-cellular technologies. In the 1980s, the 1G wireless communication system came to the mobile communication environment, which provided a data speed of 2.4 Kbps to support data communication with mobile phones. An example is Nordic Mobile Telephone (NMT). However, this generation still worked in analog system and there were tight limitations in terms of the system capacity and data rate.",
"title": ""
},
{
"docid": "c9171bf5a2638b35ff7dc9c8e6104d30",
"text": "Dimensionality reduction is an important aspect in the pattern classification literature, and linear discriminant analysis (LDA) is one of the most widely studied dimensionality reduction technique. The application of variants of LDA technique for solving small sample size (SSS) problem can be found in many research areas e.g. face recognition, bioinformatics, text recognition, etc. The improvement of the performance of variants of LDA technique has great potential in various fields of research. In this paper, we present an overview of these methods. We covered the type, characteristics and taxonomy of these methods which can overcome SSS problem. We have also highlighted some important datasets and software/ packages.",
"title": ""
},
{
"docid": "aa6a22096c633072b1e362f20e18a4e4",
"text": "In this paper, we propose a new deep framework which predicts facial attributes and leverage it as a soft modality to improve face identification performance. Our model is an end to end framework which consists of a convolutional neural network (CNN) whose output is fanned out into two separate branches; the first branch predicts facial attributes while the second branch identifies face images. Contrary to the existing multi-task methods which only use a shared CNN feature space to train these two tasks jointly, we fuse the predicted attributes with the features from the face modality in order to improve the face identification performance. Experimental results show that our model brings benefits to both face identification as well as facial attribute prediction performance, especially in the case of identity facial attributes such as gender prediction. We tested our model on two standard datasets annotated by identities and face attributes. Experimental results indicate that the proposed model outperforms most of the current existing face identification and attribute prediction methods.",
"title": ""
},
{
"docid": "692174cc5dd763333cebbea576c8930b",
"text": "The Histograms of Oriented Gradients (HOG) descriptor represents shape information by storing the local gradients in an image. The Haar wavelet transform is a simple yet powerful technique that can separately enhance the horizontal and vertical local features in an image. In this paper, we enhance the HOG descriptor by subjecting the image to the Haar wavelet transform and then computing HOG from the result in a manner that enriches the shape information encoded in the descriptor. First, we define the novel HaarHOG descriptor for grayscale images and extend this idea for color images. Second, we compare the image recognition performance of the HaarHOG descriptor with the traditional HOG descriptor in four different color spaces and grayscale. Finally, we compare the image classification performance of the HaarHOG descriptor with some popular descriptors used by other researchers on four grand challenge datasets.",
"title": ""
},
{
"docid": "27cb4869713ddbd3100fd4ca89002cfb",
"text": "Simulations of Very-low-frequency (VLF) transmitter signals are conducted using three models: the long-wave propagation capability, a finite-difference (FD) time-domain model, and an FD frequency-domain model. The FD models are corrected using Richardson extrapolation to minimize the numerical dispersion inherent in these models. Using identical ionosphere and ground parameters, the three models are shown to agree very well in their simulated VLF signal amplitude and phase, to within 1 dB of amplitude and a few degrees of phase, for a number of different simulation paths and transmitter frequencies. Furthermore, the three models are shown to produce comparable phase changes for the same ionosphere perturbations, again to within a few degrees. Finally, we show that the models reproduce the phase data of existing VLF transmitter–receiver pairs reasonably well, although the nighttime variation in the measured phase data is not captured by the simplified characterization of the ionosphere.",
"title": ""
},
{
"docid": "2f88356c3a1ab60e3dd084f7d9630c70",
"text": "Recently, some E-commerce sites have launched a new interaction box called Tips on their mobile apps. Users can express their experience and feelings or provide suggestions using short texts, typically several words or one sentence. In essence, writing some tips and giving a numerical rating are two facets of a user's product assessment action, expressing the user experience and feelings. Jointly modeling these two facets is helpful for designing a better recommendation system. While some existing models integrate text information such as item specifications or user reviews into user and item latent factors for improving the rating prediction, no existing works consider tips for improving recommendation quality. We propose a deep learning based framework named NRT which can simultaneously predict precise ratings and generate abstractive tips with good linguistic quality simulating user experience and feelings. For abstractive tips generation, gated recurrent neural networks are employed to \"translate\" user and item latent representations into a concise sentence. Extensive experiments on benchmark datasets from different domains show that NRT achieves significant improvements over the state-of-the-art methods. Moreover, the generated tips can vividly predict the user experience and feelings.",
"title": ""
},
{
"docid": "f7a1eaa86a81b104a9ae62dc87c495aa",
"text": "In the Internet of Things, the extreme heterogeneity of sensors, actuators and user devices calls for new tools and design models able to translate the user's needs in machine-understandable scenarios. The scientific community has proposed different solution for such issue, e.g., the MQTT (MQ Telemetry Transport) protocol introduced the topic concept as “the key that identifies the information channel to which payload data is published”. This study extends the topic approach by proposing the Web of Topics (WoX), a conceptual model for the IoT. A WoX Topic is identified by two coordinates: (i) a discrete semantic feature of interest (e.g. temperature, humidity), and (ii) a URI-based location. An IoT entity defines its role within a Topic by specifying its technological and collaborative dimensions. By this approach, it is easier to define an IoT entity as a set of couples Topic-Role. In order to prove the effectiveness of the WoX approach, we developed the WoX APIs on top of an EPCglobal implementation. Then, 10 developers were asked to build a WoX-based application supporting a physics lab scenario at school. They also filled out an ex-ante and an ex-post questionnaire. A set of qualitative and quantitative metrics allowed measuring the model's outcome.",
"title": ""
},
{
"docid": "6952a28e63c231c1bfb43391a21e80fd",
"text": "Deep learning has attracted tremendous attention from researchers in various fields of information engineering such as AI, computer vision, and language processing [Kalchbrenner and Blunsom, 2013; Krizhevsky et al., 2012; Mnih et al., 2013], but also from more traditional sciences such as physics, biology, and manufacturing [Anjos et al., 2015; Baldi et al., 2014; Bergmann et al., 2014]. Neural networks, image processing tools such as convolutional neural networks, sequence processing models such as recurrent neural networks, and regularisation tools such as dropout, are used extensively. However, fields such as physics, biology, and manufacturing are ones in which representing model uncertainty is of crucial importance [Ghahramani, 2015; Krzywinski and Altman, 2013]. With the recent shift in many of these fields towards the use of Bayesian uncertainty [Herzog and Ostwald, 2013; Nuzzo, 2014; Trafimow and Marks, 2015], new needs arise from deep learning. In this work we develop tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation. In the first part of this thesis we develop the theory for such tools, providing applications and illustrative examples. We tie approximate inference in Bayesian models to dropout and other stochastic regularisation techniques, and assess the approximations empirically. We give example applications arising from this connection between modern deep learning and Bayesian modelling such as active learning of image data and data efficient deep reinforcement learning. We further demonstrate the method’s practicality through a survey of recent applications making use of the suggested tools in language applications, medical diagnostics, bioinformatics, image processing, and autonomous driving. 
In the second part of the thesis we explore its theoretical implications, and the insights stemming from the link between Bayesian modelling and deep learning. We discuss what determines model uncertainty properties, analyse the approximate inference analytically in the linear case, and theoretically examine various priors such as spike and slab priors.",
"title": ""
},
{
"docid": "db9c295bc9689012eca7c73d23fb6ed3",
"text": "OBJECTIVE\nTo describe and analyze a middle ear condition in which the steady state of the middle ear pressure is elevated above the atmospheric pressure. SETTING AND STUDY DESIGN: This is a long-term survey of 59 patients from a private clinic who were observed on routine examination to have a ballooned out (hyperinflated) tympanic membrane.\n\n\nINTERVENTION\nAll patients underwent hearing tests, tympanometry, and Shullers (lateral) mastoid radiography.\n\n\nMAIN OUTCOME MEASURES\nA hyperinflated tympanic membrane indicates a middle ear pressure that is higher than atmospheric pressure. The ballooned tympanic membrane returns to its physiological level after being punctured. This pressure situation is the reverse or opposite of atelectasis and is therefore termed hyperectasis. Hyperectasis, like atelectasis, is associated with a poorly pneumatized mastoid.\n\n\nRESULTS\nFifty-nine hyperectatic ears persisted in their hyperinflated state for weeks, months, or even years. The hyperectasis was preceded by atelectasis, and both conditions occasionally changed one into the other. The ballooned part of the tympanic membrane is usually thin and \"scarred.\" Hyperectasis is not a rare situation and, once recognized, can be readily encountered in an otologic clinic.\n\n\nCONCLUSIONS\nLike most biologic systems (e.g., blood pressure, temperature), the middle ear's central feature, i.e., pressure, also has a dynamic character vacillating up and down. It is conceivable that middle ear pressure is also actively regulated and controlled with the aid of a feedback mechanism. Passage of gas through the eustachian tube or absorption by diffusion-perfusion is also at least partly an active process. The up and down middle ear pressure vacillations are usually clinically benign and do not lead to any pathologic features as long as they are buffered by an accompanying normal mastoid pneumatization. 
It is the ear with a nonpneumatized mastoid that has a limited ability to buffer pressure changes and that will present as an atelectasis, a retraction pocket (or eventually a cholesteatoma), or their reverse, a hyperectatic tympanic membrane.",
"title": ""
},
{
"docid": "ef7f9c381e9d801ca97757e7dbadf439",
"text": "An isolated three-port bidirectional dc-dc converter composed of three full-bridge cells and a high-frequency transformer is proposed in this paper. Besides the phase shift control managing the power flow between the ports, utilization of the duty cycle control for optimizing the system behavior is discussed and the control laws ensuring the minimum overall system losses are studied. Furthermore, the dynamic analysis and associated control design are presented. A control-oriented converter model is developed and the Bode plots of the control-output transfer functions are given. A control strategy with the decoupled power flow management is implemented to obtain fast dynamic response. Finally, a 1.5 kW prototype has been built to verify all theoretical considerations. The proposed topology and control is particularly relevant to multiple voltage electrical systems in hybrid electric vehicles and renewable energy generation systems.",
"title": ""
},
{
"docid": "c221568e2ed4d6192ab04119046c4884",
"text": "An efficient Ultra-Wideband (UWB) Frequency Selective Surface (FSS) is presented to mitigate the potential harmful effects of Electromagnetic Interference (EMI) caused by the radiation emitted by radio devices. The proposed design consists of circular and square elements printed on the opposite surfaces of an FR4 substrate of 3.2 mm thickness. It ensures angular stability for angles of incidence up to 60°, and the bandwidth has been significantly enhanced to 16.21 GHz to provide effective shielding against the X-, Ka- and K-bands. Signal attenuation has also been improved remarkably in the desired band compared to the results presented in the latest research. Theoretical results are presented for TE and TM polarization at normal and oblique angles of incidence.",
"title": ""
},
{
"docid": "9f2519984f011fc2445f3c394573d8d8",
"text": "Previously published results showed that both in vitro and in vivo coconut oil (CNO) treatments prevented combing damage of various hair types. Using the same methodology, an attempt was made to study the properties of mineral oil and sunflower oil on hair. Mineral oil (MO) was selected because it is extensively used in hair oil formulations in India, because it is non-greasy in nature, and because it is cheaper than vegetable oils like coconut and sunflower oils. The study was extended to sunflower oil (SFO) because it is the second most utilized base oil in the hair oil industry on account of its non-freezing property and its odorlessness at ambient temperature. As the aim was to cover different treatments, and the effects of these treatments on various hair types using the above oils, the number of experiments to be conducted was very high, and a technique termed the Taguchi Design of Experimentation was used. The findings clearly indicate the strong impact that coconut oil application has on hair as compared to application of both sunflower and mineral oils. Among the three oils, coconut oil was the only oil found to reduce the protein loss remarkably for both undamaged and damaged hair when used as a pre-wash and post-wash grooming product. Neither sunflower oil nor mineral oil helps in reducing the protein loss from hair. This difference in results could arise from the composition of each of these oils. Coconut oil, being a triglyceride of lauric acid (its principal fatty acid), has a high affinity for hair proteins and, because of its low molecular weight and straight linear chain, is able to penetrate inside the hair shaft. Mineral oil, being a hydrocarbon, has no affinity for proteins and therefore is not able to penetrate and yield better results. 
In the case of sunflower oil, although it is a triglyceride of linoleic acid, because of its bulky structure due to the presence of double bonds, it does not penetrate the fiber, consequently resulting in no favorable impact on protein loss.",
"title": ""
}
] |
scidocsrr
|
8d0adaf8a7dbc0c6df1cf178dbd2ef79
|
CMUcam3: An Open Programmable Embedded Vision Sensor
|
[
{
"docid": "f267030a7ff5a8b4b87b9b5418ec3c28",
"text": "Vision systems employing region segmentation by color are crucial in real-time mobile robot applications, such as RoboCup[1], or other domains where interaction with humans or a dynamic world is required. Traditionally, systems employing real-time color-based segmentation are either implemented in hardware, or as very specific software systems that take advantage of domain knowledge to attain the necessary efficiency. However, we have found that with careful attention to algorithm efficiency, fast color image segmentation can be accomplished using commodity image capture and CPU hardware. Our paper describes a system capable of tracking several hundred regions of up to 32 colors at 30 Hertz on general purpose commodity hardware. The software system is composed of four main parts; a novel implementation of a threshold classifier, a merging system to form regions through connected components, a separation and sorting system that gathers various region features, and a top down merging heuristic to approximate perceptual grouping. A key to the efficiency of our approach is a new method for accomplishing color space thresholding that enables a pixel to be classified into one or more of up to 32 colors using only two logical AND operations. A naive approach could require up to 192 comparisons for the same classification. The algorithms and representations are described, as well as descriptions of three applications in which it has been used.",
"title": ""
}
] |
[
{
"docid": "0c5f30cd0e072309b13cc6c43bb12647",
"text": "In this paper, we compare the performance of different approaches to predicting delays in air traffic networks. We consider three classes of models: A recently-developed aggregate model of the delay network dynamics, which we will refer to as the Markov Jump Linear System (MJLS), classical machine learning techniques like Classification and Regression Trees (CART), and three candidate Artificial Neural Network (ANN) architectures. We show that prediction performance can vary significantly depending on the choice of model/algorithm, and the type of prediction (for example, classification vs. regression). We also discuss the importance of selecting the right predictor variables, or features, in order to improve the performance of these algorithms. The models are evaluated using operational data from the National Airspace System (NAS) of the United States. The ANN is shown to be a good algorithm for the classification problem, where it attains an average accuracy of nearly 94% in predicting whether or not delays on the 100 most-delayed links will exceed 60 min, looking two hours into the future. The MJLS model, however, is better at predicting the actual delay levels on different links, and has a mean prediction error of 4.7 min for the regression problem, for a 2 hr horizon. MJLS is also better at predicting outbound delays at the 30 major airports, with a mean error of 6.8 min, for a 2 hr prediction horizon. The effect of temporal factors, and the spatial distribution of current delays, in predicting future delays are also compared. The MJLS model, which is specifically designed to capture aggregate air traffic dynamics, leverages these factors and outperforms the ANN in predicting the future spatial distribution of delays. In this manner, a tradeoff between model simplicity and prediction accuracy is revealed. Keywords: delay prediction; network delays; machine learning; artificial neural networks; data mining",
"title": ""
},
{
"docid": "99b1c2f0b3e3deb86ce25d2368a8dd86",
"text": "We provide concrete evidence that floating-point computations in C programs can be verified in a homogeneous verification setting based on Coq only, by evaluating the practicality of the combination of the formal semantics of CompCert Clight and the Flocq formal specification of IEEE 754 floating-point arithmetic for the verification of properties of floating-point computations in C programs. To this end, we develop a framework to automatically compute real-number expressions of C floating-point computations with rounding error terms along with their correctness proofs. We apply our framework to the complete analysis of an energy-efficient C implementation of a radar image processing algorithm, for which we provide a certified bound on the total noise introduced by floating-point rounding errors and energy-efficient approximations of square root and sine.",
"title": ""
},
{
"docid": "7c98ac06ea8cb9b83673a9c300fb6f4c",
"text": "Heart rate monitoring from wrist-type photoplethysmographic (PPG) signals during subjects' intensive exercise is a difficult problem, since the PPG signals are contaminated by extremely strong motion artifacts caused by subjects' hand movements. In this work, we formulate the heart rate estimation problem as a sparse signal recovery problem, and use a sparse signal recovery algorithm to calculate high-resolution power spectra of PPG signals, from which heart rates are estimated by selecting corresponding spectrum peaks. To facilitate the use of sparse signal recovery, we propose using bandpass filtering, singular spectrum analysis, and temporal difference operation to partially remove motion artifacts and sparsify PPG spectra. The proposed method was tested on PPG recordings from 10 subjects who were fast running at the peak speed of 15 km/hour. The results showed that the averaged absolute estimation error was only 2.56 beats/minute, or 1.94% error compared to ground-truth heart rates from simultaneously recorded ECG.",
"title": ""
},
{
"docid": "259c17740acd554463731d3e1e2912eb",
"text": "In recent years, radio frequency identification technology has moved from obscurity into mainstream applications that help speed the handling of manufactured goods and materials. RFID enables identification from a distance, and unlike earlier bar-code technology, it does so without requiring a line of sight. In this paper, the author introduces the principles of RFID, discusses its primary technologies and applications, and reviews the challenges organizations will face in deploying this technology.",
"title": ""
},
{
"docid": "6c47ae47e95641f10bd3b1a0a9b0dbb6",
"text": "Type 2 diabetes mellitus and impaired glucose tolerance are associated with antipsychotic treatment. Risk factors for type 2 diabetes and impaired glucose tolerance include abdominal adiposity, age, ethnic status, and certain neuropsychiatric conditions. While impaired glucose metabolism was first described in psychotic patients prior to the introduction of antipsychotic medications, treatment with antipsychotic medications is associated with impaired glucose metabolism, exacerbation of existing type 1 and 2 diabetes, new-onset type 2 diabetes mellitus, and diabetic ketoacidosis, a severe and potentially fatal metabolic complication. The strength of the association between antipsychotics and diabetes varies across individual medications, with the largest number of reports for chlorpromazine, clozapine, and olanzapine. Recent controlled studies suggest that antipsychotics can impair glucose regulation by decreasing insulin action, although effects on insulin secretion are not ruled out. Antipsychotic medications induce weight gain, and the potential for weight gain varies across individual agents with larger effects observed again for agents like chlorpromazine, clozapine, and olanzapine. Increased abdominal adiposity may explain some treatment-related changes in glucose metabolism. However, case reports and recent controlled studies suggest that clozapine and olanzapine treatment may also be associated with adverse effects on glucose metabolism independent of adiposity. Dyslipidemia is a feature of type 2 diabetes, and antipsychotics such as clozapine and olanzapine have also been associated with hypertriglyceridemia, with agents such as haloperidol, risperidone, and ziprasidone associated with reductions in plasma triglycerides. Diabetes mellitus is associated with increased morbidity and mortality due to both acute (e.g., diabetic ketoacidosis) and long-term (e.g., cardiovascular disease) complications. 
A progressive relationship between plasma glucose levels and cardiovascular risk (e.g., myocardial infarction, stroke) begins at glucose levels that are well below diabetic or \"impaired\" thresholds. Increased adiposity and dyslipidemia are additional, independent risk factors for cardiovascular morbidity and mortality. Patients with schizophrenia suffer increased mortality due to cardiovascular disease, with presumed contributions from a number of modifiable risk factors (e.g., smoking, sedentary lifestyle, poor diet, obesity, hyperglycemia, and dyslipidemia). Patients taking antipsychotic medications should undergo regular monitoring of weight and plasma glucose and lipid levels, so that clinicians can individualize treatment decisions and reduce iatrogenic contributions to morbidity and mortality.",
"title": ""
},
{
"docid": "543099ac1bb00e14f4fc757a25d9487c",
"text": "With the development of personalized services, collaborative filtering techniques have been successfully applied to network recommendation systems. But sparse data seriously affect the performance of collaborative filtering algorithms. To alleviate the impact of data sparseness, an improved user-based clustering Collaborative Filtering (CF) algorithm that exploits user interest information is proposed in this paper, which improves the algorithm in two ways: the user similarity calculation method and the extension of the user-item rating matrix. The experimental results show that the algorithm can describe user similarity more accurately and alleviate the impact of data sparseness in the collaborative filtering algorithm. The results also show that it can improve the accuracy of the collaborative recommendation algorithm.",
"title": ""
},
{
"docid": "5c898e311680199f1f369d3c264b2b14",
"text": "Behaviour Driven Development (BDD) has gained increasing attention as an agile development approach in recent years. However, the characteristics that constitute the BDD approach are not clearly defined. In this paper, we present a set of main BDD characteristics identified through an analysis of relevant literature and current BDD toolkits. Our study can provide a basis for understanding BDD, as well as for extending the existing BDD toolkits or developing new ones.",
"title": ""
},
{
"docid": "20db149230db9df2a30f5cd788db1d89",
"text": "IP flows have heavy-tailed packet and byte size distributions. This makes them poor candidates for uniform sampling---i.e. selecting 1 in N flows---since omission or inclusion of a large flow can have a large effect on estimated total traffic. Flows selected in this manner are thus unsuitable for use in usage-sensitive billing. We propose instead using a size-dependent sampling scheme which gives priority to the larger contributions to customer usage. This turns the heavy tails to our advantage; we can obtain accurate estimates of customer usage from a relatively small number of important samples. The sampling scheme allows us to control error when charging is sensitive to estimated usage only above a given base level. A refinement allows us to strictly limit the chance that a customer's estimated usage will exceed their actual usage. Furthermore, we show that a secondary goal, that of controlling the rate at which samples are produced, can be fulfilled provided the billing cycle is sufficiently long. All these claims are supported by experiments on flow traces gathered from a commercial network.",
"title": ""
},
{
"docid": "4243f0bafe669ab862aaad2b184c6a0e",
"text": "Generating adversarial examples is an intriguing problem and an important way of understanding the working mechanism of deep neural networks. Most existing approaches generated perturbations in the image space, i.e., each pixel can be modified independently. However, in this paper we pay special attention to the subset of adversarial examples that are physically authentic – those corresponding to actual changes in 3D physical properties (like surface normals, illumination condition, etc.). These adversaries arguably pose a more serious concern, as they demonstrate the possibility of causing neural network failure by small perturbations of real-world 3D objects and scenes. In the contexts of object classification and visual question answering, we augment state-of-the-art deep neural networks that receive 2D input images with a rendering module (either differentiable or not) in front, so that a 3D scene (in the physical space) is rendered into a 2D image (in the image space), and then mapped to a prediction (in the output space). The adversarial perturbations can now go beyond the image space, and have clear meanings in the 3D physical world. Through extensive experiments, we found that a vast majority of image-space adversaries cannot be explained by adjusting parameters in the physical space, i.e., they are usually physically inauthentic. But it is still possible to successfully attack beyond the image space on the physical space (such that authenticity is enforced), though this is more difficult than image-space attacks, reflected in lower success rates and heavier perturbations required.",
"title": ""
},
{
"docid": "ee351931c35e5dd1ebe7d528568df394",
"text": "We present an automatic method for fitting multiple B-spline curves to unorganized planar points. The method works on point clouds that have complicated topological structures, for which a single curve is insufficient to fit the shape. A divide-and-merge algorithm is developed for dividing the unorganized data points into several groups such that each group represents a smooth curve. Each point group is then fitted with a B-spline curve by the SDM method. Our algorithm also sets up the control polygons of the initial B-spline curves automatically. Experiments demonstrate the capability of the presented algorithm in accurate reconstruction of the topological structures of point clouds.",
"title": ""
},
{
"docid": "9cf8a2f73a906f7dc22c2d4fbcf8fa6b",
"text": "In this paper, the effect of spoilers on the aerodynamic characteristics of an airfoil was observed by CFD. NACA 2415 was chosen as the experimental airfoil, and the spoiler was extended from five different positions based on the chord length C. The airfoil section is designed with a spoiler extended at an angle of 7 degrees to the horizontal. The spoiler extends to 0.15C. The geometry of the 2-D airfoil without and with the spoiler was designed in GAMBIT. The numerical simulation was performed with ANSYS Fluent to observe the effect of spoiler position on the aerodynamic characteristics of this particular airfoil. The results obtained from the computational process were plotted on graphs, and the conceptual assumptions were verified: the lift is reduced and the drag is increased, which matches the basic function of a spoiler. I. INTRODUCTION: An airplane wing has a special shape called an airfoil. As a wing moves through air, the air is split and passes above and below the wing. The wing's upper surface is shaped so the air rushing over the top speeds up and stretches out. This decreases the air pressure above the wing. The air flowing below the wing moves in a straighter line, so its speed and air pressure remain the same. Since high air pressure always moves toward low air pressure, the air below the wing pushes upward toward the air above the wing. The wing is in the middle, and the whole wing is \"lifted\". The faster an airplane moves, the more lift there is, and when the force of lift is greater than the force of gravity, the airplane is able to fly. [1] A spoiler, sometimes called a lift dumper, is a device intended to reduce lift in an aircraft. Spoilers are plates on the top surface of a wing that can be extended upward into the airflow to spoil it. By doing so, the spoiler creates a carefully controlled stall over the portion of the wing behind it, greatly reducing the lift of that wing section. 
Spoilers are designed to reduce lift while also making a considerable increase in drag. Spoilers increase drag and reduce lift on the wing. If raised on only one wing, they aid roll control, causing that wing to drop. If the spoilers rise symmetrically in flight, the aircraft can either be slowed in level flight or can descend rapidly without an increase in airspeed. When the …",
"title": ""
},
{
"docid": "dd867c3f55696bebea3d9049a3d43163",
"text": "This paper examines the task of detecting intensity of emotion from text. We create the first datasets of tweets annotated for anger, fear, joy, and sadness intensities. We use a technique called best–worst scaling (BWS) that improves annotation consistency and obtains reliable fine-grained scores. We show that emotion-word hashtags often impact emotion intensity, usually conveying a more intense emotion. Finally, we create a benchmark regression system and conduct experiments to determine: which features are useful for detecting emotion intensity; and, the extent to which two emotions are similar in terms of how they manifest in language.",
"title": ""
},
{
"docid": "79ea2c1566b3bb1e27fe715b1a1a385b",
"text": "The number of research papers available is growing at a staggering rate. Researchers need tools to help them find the papers they should read among all the papers published each year. In this paper, we present and experiment with hybrid recommender algorithms that combine Collaborative Filtering and Content-based Filtering to recommend research papers to users. Our hybrid algorithms combine the strengths of each filtering approach to address their individual weaknesses. We evaluated our algorithms through offline experiments on a database of 102,000 research papers, and through an online experiment with 110 users. For both experiments we used a dataset created from the CiteSeer repository of computer science research papers. We developed separate English and Portuguese versions of the interface and specifically recruited American and Brazilian users to test for cross-cultural effects. Our results show that users value paper recommendations, that the hybrid algorithms can be successfully combined, that different algorithms are more suitable for recommending different kinds of papers, and that users with different levels of experience perceive recommendations differently. These results can be applied to develop recommender systems for other types of digital libraries.",
"title": ""
},
{
"docid": "b3bda9c0a0ec22c5d244f8c538ab6056",
"text": "Knowledge assets represent a special set of resources for a firm and as such, their management is of great importance to academics and managers. The purpose of this paper is to review the literature as it pertains to knowledge assets and provide a suggested model for intellectual capital management that can be of benefit to both academics and practitioners. In doing so, a set of research propositions are suggested to provide guidance for future research.",
"title": ""
},
{
"docid": "da9751e8db176942da1c582908942ce3",
"text": "This paper introduces new types of square-piece jigsaw puzzles: those for which the orientation of each jigsaw piece is unknown. We propose a tree-based reassembly that greedily merges components while respecting the geometric constraints of the puzzle problem. The algorithm has state-of-the-art performance for puzzle assembly, whether or not the orientation of the pieces is known. Our algorithm makes fewer assumptions than past work, and success is shown even when pieces from multiple puzzles are mixed together. For solving puzzles where jigsaw piece location is known but orientation is unknown, we propose a pairwise MRF where each node represents a jigsaw piece's orientation. Other contributions of the paper include an improved measure (MGC) for quantifying the compatibility of potential jigsaw piece matches based on expecting smoothness in gradient distributions across boundaries.",
"title": ""
},
{
"docid": "2a45cf0fcf67ca51db59317663d874b9",
"text": "Anoctamin 1 (ANO1), a calcium-activated chloride channel, is highly amplified in prostate cancer, the most common form of cancer and leading causes of cancer death in men, and downregulation of ANO1 expression or its functional activity is known to inhibit cell proliferation, migration and invasion in prostate cancer cells. Here, we performed a cell-based screening for the identification of ANO1 inhibitors as potential anticancer therapeutic agents for prostate cancer. Screening of ~300 selected bioactive natural products revealed that luteolin is a novel potent inhibitor of ANO1. Electrophysiological studies indicated that luteolin potently inhibited ANO1 chloride channel activity in a dose-dependent manner with an IC50 value of 9.8 μM and luteolin did not alter intracellular calcium signaling in PC-3 prostate cancer cells. Luteolin inhibited cell proliferation and migration of PC-3 cells expressing high levels of ANO1 more potently than that of ANO1-deficient PC-3 cells. Notably, luteolin not only inhibited ANO1 channel activity, but also strongly decreased protein expression levels of ANO1. Our results suggest that downregulation of ANO1 by luteolin is a potential mechanism for the anticancer effect of luteolin.",
"title": ""
},
{
"docid": "fd531eeed23d5cdde6d6751b37569474",
"text": "Paraphrases play an important role in the variety and complexity of natural language documents. However they adds to the difficulty of natural language processing. Here we describe a procedure for obtaining paraphrases from news article. A set of paraphrases can be useful for various kinds of applications. Articles derived from different newspapers can contain paraphrases if they report the same event of the same day. We exploit this feature by using Named Entity recognition. Our basic approach is based on the assumption that Named Entities are preserved across paraphrases. We applied our method to articles of two domains and obtained notable examples. Although this is our initial attempt to automatically extracting paraphrases from a corpus, the results are promising.",
"title": ""
},
{
"docid": "b40b97410d0cd086118f0980d0f52867",
"text": "In smart cities, commuters have the opportunities for smart routing that may enable selecting a route with less car accidents, or one that is more scenic, or perhaps a straight and flat route. Such smart personalization requires a data management framework that goes beyond a static road network graph. This paper introduces PreGo, a novel system developed to provide real time personalized routing. The recommended routes by PreGo are smart and personalized in the sense of being (1) adjustable to individual users preferences, (2) subjective to the trip start time, and (3) sensitive to changes of the road conditions. Extensive experimental evaluation using real and synthetic data demonstrates the efficiency of the PreGo system.",
"title": ""
},
{
"docid": "cb8dbf14b79edd2a3ee045ad08230a30",
"text": "Observational data suggest a link between menaquinone (MK, vitamin K2) intake and cardiovascular (CV) health. However, MK intervention trials with vascular endpoints are lacking. We investigated long-term effects of MK-7 (180 µg MenaQ7/day) supplementation on arterial stiffness in a double-blind, placebo-controlled trial. Healthy postmenopausal women (n=244) received either placebo (n=124) or MK-7 (n=120) for three years. Indices of local carotid stiffness (intima-media thickness IMT, Diameter end-diastole and Distension) were measured by echotracking. Regional aortic stiffness (carotid-femoral and carotid-radial Pulse Wave Velocity, cfPWV and crPWV, respectively) was measured using mechanotransducers. Circulating desphospho-uncarboxylated matrix Gla-protein (dp-ucMGP) as well as acute phase markers Interleukin-6 (IL-6), high-sensitive C-reactive protein (hsCRP), tumour necrosis factor-α (TNF-α) and markers for endothelial dysfunction Vascular Cell Adhesion Molecule (VCAM), E-selectin, and Advanced Glycation Endproducts (AGEs) were measured. At baseline dp-ucMGP was associated with IMT, Diameter, cfPWV and with the mean z-scores of acute phase markers (APMscore) and of markers for endothelial dysfunction (EDFscore). After three year MK-7 supplementation cfPWV and the Stiffness Index βsignificantly decreased in the total group, whereas distension, compliance, distensibility, Young's Modulus, and the local carotid PWV (cPWV) improved in women having a baseline Stiffness Index β above the median of 10.8. MK-7 decreased dp-ucMGP by 50 % compared to placebo, but did not influence the markers for acute phase and endothelial dysfunction. In conclusion, long-term use of MK-7 supplements improves arterial stiffness in healthy postmenopausal women, especially in women having a high arterial stiffness.",
"title": ""
}
] |
scidocsrr
|
e8aa3724c0874026b8a2e1e6b929e8e0
|
The Structure and Performance of Efficient Interpreters
|
[
{
"docid": "e43814f288e1c5a84fb9d26b46fc7e37",
"text": "Achieving good performance in bytecoded language interpreters is difficult without sacrificing both simplicity and portability. This is due to the complexity of dynamic translation (\"just-in-time compilation\") of bytecodes into native code, which is the mechanism employed universally by high-performance interpreters.We demonstrate that a few simple techniques make it possible to create highly-portable dynamic translators that can attain as much as 70% the performance of optimized C for certain numerical computations. Translators based on such techniques can offer respectable performance without sacrificing either the simplicity or portability of much slower \"pure\" bytecode interpreters.",
"title": ""
}
] |
[
{
"docid": "d35515299b37b5eb936986d33aca66e1",
"text": "This paper describes an Ada framework called Cheddar which provides tools to check if a real time application meets its temporal constraints. The framework is based on the real time scheduling theory and is mostly written for educational purposes. With Cheddar, an application is defined by a set of processors, tasks, buffers, shared resources and messages. Cheddar provides feasibility tests in the cases of monoprocessor, multiprocessor and distributed systems. It also provides a flexible simulation engine which allows the designer to describe and run simulations of specific systems. The framework is open and has been designed to be easily connected to CASE tools such as editors, design tools, simulators, ...",
"title": ""
},
{
"docid": "65c9ce95eb92ad4be2caf4b4a6a0bdd8",
"text": "The electricity industry is now at the verge of a new era-an era that promises, through the evolution of the existing electrical grids to smart grids, more efficient and effective power management, better reliability, reduced production costs, and more environmentally friendly energy generation. Numerous initiatives across the globe, led by both industry and academia, reflect the mounting interest around not only the enormous benefits but also the great risks introduced by this evolution. This paper focuses on issues related to the security of the smart grid and the smart home, which we present as an integral part of the smart grid. Based on several scenarios, we aim to present some of the most representative threats to the smart home/smart grid environment. The threats detected are categorized according to specific security goals set for the smart home/smart grid environment, and their impact on the overall system security is evaluated. A review of contemporary literature is then conducted with the aim of presenting promising security countermeasures with respect to the identified specific security goals for each presented scenario. An effort to shed light on open issues and future research directions concludes this paper.",
"title": ""
},
{
"docid": "beea84b0d96da0f4b29eabf3b242a55c",
"text": "Recent years have seen a growing interest in creating virtual agents to populate the cast of characters for interactive narrative. A key challenge posed by interactive characters for narrative environments is devising expressive dialogue generators. To be effective, character dialogue generators must be able to simultaneously take into account multiple sources of information that bear on dialogue, including character attributes, plot development, and communicative goals. Building on the narrative theory of character archetypes, we propose an archetype-driven character dialogue generator that uses a probabilistic unification framework to generate dialogue motivated by character personality and narrative history to achieve communicative goals. The generator’s behavior is illustrated with character dialogue generation in a narrative-centered learning environment, CRYSTAL ISLAND.",
"title": ""
},
{
"docid": "5611107338100a2d202f7dbde5fd39ac",
"text": "This experiment investigated the ability of rats with dorsal striatal or fornix damage to learn the location of a visible platform in a water maze. We also assessed the animals' ability to find the platform when it was hidden (submerged). Rats with neurotoxic damage to the dorsal striatum acquired both the visible and hidden platform versions of the task, but when required to choose between the spatial location they had learned and the visible platform in a new location they swam first to the old spatial location. Rats with radio-frequency damage to the fornix acquired the visible platform version of the water maze task but failed to learn about the platform's location in space. When the visible platform was moved to a new location they swam directly to it. Normal rats acquired both the visible and hidden platform versions of the task. These findings suggest that in the absence of a functional neural system that includes dorsal striatum, spatial information predominantly controlled behavior even in the presence of a cue that the animals had previously been reinforced for approaching. In the absence of a functional hippocampal system behavior was not affected by spatial information and responding to local reinforced cues was enhanced. The results support the idea that different neural substrates in the mammalian nervous system acquire different types of information simultaneously and in parallel.",
"title": ""
},
{
"docid": "39bf990d140eb98fa7597de1b6165d49",
"text": "The Internet of Things (IoT) is expected to substantially support sustainable development of future smart cities. This article identifies the main issues that may prevent IoT from playing this crucial role, such as the heterogeneity among connected objects and the unreliable nature of associated services. To solve these issues, a cognitive management framework for IoT is proposed, in which dynamically changing real-world objects are represented in a virtualized environment, and where cognition and proximity are used to select the most relevant objects for the purpose of an application in an intelligent and autonomic way. Part of the framework is instantiated in terms of building blocks and demonstrated through a smart city scenario that horizontally spans several application domains. This preliminary proof of concept reveals the high potential that self-reconfigurable IoT can achieve in the context of smart cities.",
"title": ""
},
{
"docid": "fb6494dcf01a927597ff784a3323e8c2",
"text": "Detection of defects in induction machine rotor bars for unassembled motors is required to evaluate machines considered for repair as well as fulfilling incremental quality assurance checks in the manufacture of new machines. Detection of rotor bar defects prior to motor assembly are critical in increasing repair efficiency and assuring the quality of newly manufactured machines. Many methods of detecting rotor bar defects in unassembled motors lack the sensitivity to find both major and minor defects in both cast and fabricated rotors along with additional deficiencies in quantifiable test results and arc-flash safety hazards. A process of direct magnetic field analysis can examine measurements from induced currents in a rotor separated from its stator yielding a high-resolution fingerprint of a rotor's magnetic field. This process identifies both major and minor rotor bar defects in a repeatable and quantifiable manner appropriate for numerical evaluation without arc-flash safety hazards.",
"title": ""
},
{
"docid": "aa5d8162801abcc81ac542f7f2a423e5",
"text": "Prediction of popularity has profound impact for social media, since it offers opportunities to reveal individual preference and public attention from evolutionary social systems. Previous research, although achieves promising results, neglects one distinctive characteristic of social data, i.e., sequentiality. For example, the popularity of online content is generated over time with sequential post streams of social media. To investigate the sequential prediction of popularity, we propose a novel prediction framework called Deep Temporal Context Networks (DTCN) by incorporating both temporal context and temporal attention into account. Our DTCN contains three main components, from embedding, learning to predicting. With a joint embedding network, we obtain a unified deep representation of multi-modal user-post data in a common embedding space. Then, based on the embedded data sequence over time, temporal context learning attempts to recurrently learn two adaptive temporal contexts for sequential popularity. Finally, a novel temporal attention is designed to predict new popularity (the popularity of a new userpost pair) with temporal coherence across multiple time-scales. Experiments on our released image dataset with about 600K Flickr photos demonstrate that DTCN outperforms state-of-the-art deep prediction algorithms, with an average of 21.51% relative performance improvement in the popularity prediction (Spearman Ranking Correlation).",
"title": ""
},
{
"docid": "0d11074054a2921c90d028c54010193b",
"text": "Aggressively scaling the supply voltage of SRAMs greatly minimizes their active and leakage power, a dominating portion of the total power in modern ICs. Hence, energy constrained applications, where performance requirements are secondary, benefit significantly from an SRAM that offers read and write functionality at the lowest possible voltage. However, bit-cells and architectures achieving very high density conventionally fail to operate at low voltages. This paper describes a high density SRAM in 65 nm CMOS that uses an 8T bit-cell to achieve a minimum operating voltage of 350 mV. Buffered read is used to ensure read stability, and peripheral control of both the bit-cell supply voltage and the read-buffer's foot voltage enable sub-T4 write and read without degrading the bit-cell's density. The plaguing area-offset tradeoff in modern sense-amplifiers is alleviated using redundancy, which reduces read errors by a factor of five compared to device up-sizing. At its lowest operating voltage, the entire 256 kb SRAM consumes 2.2 muW in leakage power.",
"title": ""
},
{
"docid": "300028d1aa1eda913737c1e7ba6b61f7",
"text": "We consider the task of detecting regulatory elements in the human genome directly from raw DNA. Past work has focused on small snippets of DNA, making it difficult to model long-distance dependencies that arise from DNA’s 3-dimensional conformation. In order to study long-distance dependencies, we develop and release a novel dataset for a larger-context modeling task. Using this new data set we model long-distance interactions using dilated convolutional neural networks, and compare them to standard convolutions and recurrent neural networks. We show that dilated convolutions are effective at modeling the locations of regulatory markers in the human genome, such as transcription factor binding sites, histone modifications, and DNAse hypersensitivity sites.",
"title": ""
},
{
"docid": "cacf4a2d7004bccecb0e8965de695e69",
"text": "The WebNLG challenge consists in mapping sets of RDF triples to text. It provides a common benchmark on which to train, evaluate and compare “microplanners”, i.e. generation systems that verbalise a given content by making a range of complex interacting choices including referring expression generation, aggregation, lexicalisation, surface realisation and sentence segmentation. In this paper, we introduce the microplanning task, describe data preparation, introduce our evaluation methodology, analyse participant results and provide a brief description of the participating systems.",
"title": ""
},
{
"docid": "861f76c061b9eb52ed5033bdeb9a3ce5",
"text": "2007S. Robson Walton Chair in Accounting, University of Arkansas 2007-2014; 2015-2016 Accounting Department Chair, University of Arkansas 2014Distinguished Professor, University of Arkansas 2005-2014 Professor, University of Arkansas 2005-2008 Ralph L. McQueen Chair in Accounting, University of Arkansas 2002-2005 Associate Professor, University of Kansas 1997-2002 Assistant Professor, University of Kansas",
"title": ""
},
{
"docid": "403dc89a0b74e68dda095dde756d44f0",
"text": "The prefrontal cortex subserves executive control--that is, the ability to select actions or thoughts in relation to internal goals. Here, we propose a theory that draws upon concepts from information theory to describe the architecture of executive control in the lateral prefrontal cortex. Supported by evidence from brain imaging in human subjects, the model proposes that action selection is guided by hierarchically ordered control signals, processed in a network of brain regions organized along the anterior-posterior axis of the lateral prefrontal cortex. The theory clarifies how executive control can operate as a unitary function, despite the requirement that information be integrated across multiple distinct, functionally specialized prefrontal regions.",
"title": ""
},
{
"docid": "b44d6d71650fc31c643ac00bd45772cd",
"text": "We give in this paper a complete description of the Knuth-Bendix completion algorithm. We prove its correctness in full, isolating carefully the essential abstract notions, so that the proof may be extended to other versions and extensions of the basic algorithm. We show that it defines a semidecision algorithm for the validity problem in the equational theories for which it applies, yielding a decision procedure whenever the algorithm terminates.",
"title": ""
},
{
"docid": "b15bb888a11444f614b4e45317550830",
"text": "Transactional Memory (TM) is emerging as a promising technology to simplify parallel programming. While several TM systems have been proposed in the research literature, we are still missing the tools and workloads necessary to analyze and compare the proposals. Most TM systems have been evaluated using microbenchmarks, which may not be representative of any real-world behavior, or individual applications, which do not stress a wide range of execution scenarios. We introduce the Stanford Transactional Application for Multi-Processing (STAMP), a comprehensive benchmark suite for evaluating TM systems. STAMP includes eight applications and thirty variants of input parameters and data sets in order to represent several application domains and cover a wide range of transactional execution cases (frequent or rare use of transactions, large or small transactions, high or low contention, etc.). Moreover, STAMP is portable across many types of TM systems, including hardware, software, and hybrid systems. In this paper, we provide descriptions and a detailed characterization of the applications in STAMP. We also use the suite to evaluate six different TM systems, identify their shortcomings, and motivate further research on their performance characteristics.",
"title": ""
},
{
"docid": "74fcade8e5f5f93f3ffa27c4d9130b9f",
"text": "Resampling is an important signature of manipulated images. In this paper, we propose two methods to detect and localize image manipulations based on a combination of resampling features and deep learning. In the first method, the Radon transform of resampling features are computed on overlapping image patches. Deep learning classifiers and a Gaussian conditional random field model are then used to create a heatmap. Tampered regions are located using a Random Walker segmentation method. In the second method, resampling features computed on overlapping image patches are passed through a Long short-term memory (LSTM) based network for classification and localization. We compare the performance of detection/localization of both these methods. Our experimental results show that both techniques are effective in detecting and localizing digital image forgeries.",
"title": ""
},
{
"docid": "1b812ef6c607790a0dbcf5e050871fc2",
"text": "This paper introduces Adaptive Music for Affect Improvement (AMAI), a music generation and playback system whose goal is to steer the listener towards a state of more positive affect. AMAI utilizes techniques from game music in order to adjust elements of the music being heard; such adjustments are made adaptively in response to the valence levels of the listener as measured via facial expression and emotion detection. A user study involving AMAI was conducted, with N=19 participants across three groups, one for each strategy of Discharge, Diversion, and Discharge→ Diversion. Significant differences in valence levels between music-related stages of the study were found between the three groups, with Discharge → Diversion exhibiting the greatest increase in valence, followed by Diversion and finally Discharge. Significant differences in positive affect between groups were also found in one before-music and after-music pair of self-reported affect surveys, with Discharge→ Diversion exhibiting the greatest decrease in positive affect, followed by Diversion and finally Discharge; the resulting differences in facial expression valence and self-reported affect offer contrasting con-",
"title": ""
},
{
"docid": "9292f1925de5d6df9eb89b2157842e5c",
"text": "According to Breast Cancer Institute (BCI), Breast Cancer is one of the most dangerous type of diseases that is very effective for women in the world. As per clinical expert detecting this cancer in its first stage helps in saving lives. As per cancer.net offers individualized guides for more than 120 types of cancer and related hereditary syndromes. For detecting breast cancer mostly machine learning techniques are used. In this paper we proposed adaptive ensemble voting method for diagnosed breast cancer using Wisconsin Breast Cancer database. The aim of this work is to compare and explain how ANN and logistic algorithm provide better solution when its work with ensemble machine learning algorithms for diagnosing breast cancer even the variables are reduced. In this paper we used the Wisconsin Diagnosis Breast Cancer dataset. When compared to related work from the literature. It is shown that the ANN approach with logistic algorithm is achieved 98.50% accuracy from another machine learning algorithm.",
"title": ""
},
{
"docid": "5fc8afbe7d55af3274d849d1576d3b13",
"text": "It is a difficult task to classify images with multiple class labels using only a small number of labeled examples, especially when the label (class) distribution is imbalanced. Emotion classification is such an example of imbalanced label distribution, because some classes of emotions like disgusted are relatively rare comparing to other labels like happy or sad. In this paper, we propose a data augmentation method using generative adversarial networks (GAN). It can complement and complete the data manifold and find better margins between neighboring classes. Specifically, we design a framework using a CNN model as the classifier and a cycle-consistent adversarial networks (CycleGAN) as the generator. In order to avoid gradient vanishing problem, we employ the least-squared loss as adversarial loss. We also propose several evaluation methods on three benchmark datasets to validate GAN’s performance. Empirical results show that we can obtain 5%∼10% increase in the classification accuracy after employing the GAN-based data augmentation techniques.",
"title": ""
},
{
"docid": "0d20f5ae084c6ca4e7a834e1eee1e84c",
"text": "Gantry-tilted helical multi-slice computed tomography (CT) refers to the helical scanning CT system equipped with multi-row detector operating at some gantry tilting angle. Its purpose is to avoid the area which is vulnerable to the X-ray radiation. The local tomography is to reduce the total radiation dose by only scanning the region of interest for image reconstruction. In this paper we consider the scanning scheme, and incorporate the local tomography technique with the gantry-tilted helical multi-slice CT. The image degradation problem caused by gantry tilting is studied, and a new error correction method is proposed to deal with this problem in the local CT. Computer simulation shows that the proposed method can enhance the local imaging performance in terms of image sharpness and artifacts reduction",
"title": ""
},
{
"docid": "a2eee3cd0e8ee3e97af54f11b8a29fc9",
"text": "Internet Service Providers (ISPs) are responsible for transmitting and delivering their customers’ data requests, ranging from requests for data from websites, to that from filesharing applications, to that from participants in Voice over Internet Protocol (VoIP) chat sessions. Using contemporary packet inspection and capture technologies, ISPs can investigate and record the content of unencrypted digital communications data packets. This paper explains the structure of these packets, and then proceeds to describe the packet inspection technologies that monitor their movement and extract information from the packets as they flow across ISP networks. After discussing the potency of contemporary deep packet inspection devices, in relation to their earlier packet inspection predecessors, and their potential uses in improving network operators’ network management systems, I argue that they should be identified as surveillance technologies that can potentially be incredibly invasive. Drawing on Canadian examples, I argue that Canadian ISPs are using DPI technologies to implicitly ‘teach’ their customers norms about what are ‘inappropriate’ data transfer programs, and the appropriate levels of ISP manipulation of customer data traffic. Version 1.2 :: January 10, 2008. * Doctoral student in the University of Victoria’s Political Science department. Thanks to Colin Bennett, Andrew Clement, Fenwick Mckelvey and Joyce Parsons for comments.",
"title": ""
}
] |
scidocsrr
|
aa85b1638f6a254dc347eb93235d03a1
|
A General Method for Amortizing Variational Filtering
|
[
{
"docid": "66e1fadc4811a0bf9e75e21d014fbe5a",
"text": "Filtering and smoothing methods are used to produce an accurate estimate of the state of a time-varying system based on multiple observational inputs (data). Interest in these methods has exploded in recent years, with numerous applications emerging in fields such as navigation, aerospace engineering, telecommunications, and medicine. This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework. Readers learn what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages. They also discover how state-of-the-art Bayesian parameter estimation methods can be combined with state-of-the-art filtering and smoothing algorithms. The book’s practical and algorithmic approach assumes only modest mathematical prerequisites. Examples include MATLAB computations, and the numerous end-of-chapter exercises include computational assignments. MATLAB/GNU Octave source code is available for download at www.cambridge.org/sarkka, promoting hands-on work with the methods.",
"title": ""
},
{
"docid": "1a65a6e22d57bb9cd15ba01943eeaa25",
"text": "+ optimal local factor – expensive for general obs. + exploit conj. graph structure + arbitrary inference queries + natural gradients – suboptimal local factor + fast for general obs. – does all local inference – limited inference queries – no natural gradients ± optimal given conj. evidence + fast for general obs. + exploit conj. graph structure + arbitrary inference queries + some natural gradients",
"title": ""
}
] |
[
{
"docid": "c7ed28199d7a8ea4f35ccb26ea9530c1",
"text": "In this paper, we study the problem of author identification in big scholarly data, which is to effectively rank potential authors for each anonymous paper by using historical data. Most of the existing deanonymization approaches predict relevance score of paper-author pair via feature engineering, which is not only time and storage consuming, but also introduces irrelevant and redundant features or miss important attributes. Representation learning can automate the feature generation process by learning node embeddings in academic network to infer the correlation of paper-author pair. However, the learned embeddings are often for general purpose (independent of the specific task), or based on network structure only (without considering the node content). To address these issues and make a further progress in solving the author identification problem, we propose Camel, a content-aware and meta-path augmented metric learning model. Specifically, first, the directly correlated paper-author pairs are modeled based on distance metric learning by introducing a push loss function. Next, the paper content embedding encoded by the gated recurrent neural network is integrated into the distance loss. Moreover, the historical bibliographic data of papers is utilized to construct an academic heterogeneous network, wherein a meta-path guided walk integrative learning module based on the task-dependent and content-aware Skipgram model is designed to formulate the correlations between each paper and its indirect author neighbors, and further augments the model. Extensive experiments demonstrate that Camel outperforms the state-of-the-art baselines. It achieves an average improvement of 6.3% over the best baseline method.",
"title": ""
},
{
"docid": "9891cd761ca163395972d10624ddf6e4",
"text": "In this work, we introduce a Hierarchical Generative Model (HGM) to enable realistic forward eye image synthesis, as well as effective backward eye gaze estimation. The proposed HGM consists of a hierarchical generative shape model (HGSM), and a conditional bidirectional generative adversarial network (c-BiGAN). The HGSM encodes eye geometry knowledge and relates eye gaze with eye shape, while c-BiGAN leverages on big data and captures the dependency between eye shape and eye appearance. As an intermediate component, eye shape connects knowledge-based model (HGSM) with data-driven model (c-BiGAN) and enables bidirectional inference. Through a top-down inference, the HGM can synthesize eye images consistent with the given eye gaze. Through a bottom-up inference, HGM can infer eye gaze effectively from a given eye image. Qualitative and quantitative evaluations on benchmark datasets demonstrate our model's effectiveness on both eye image synthesis and eye gaze estimation. In addition, the proposed model is not restricted to eye images only. It can be adapted to face images and any shape-appearance related fields.",
"title": ""
},
{
"docid": "c9f4af65710813850c7c5438368fc07c",
"text": "Due to the complex system context of embedded-software applications, defects can cause life-threatening situations, delays can create huge costs, and insufficient productivity can impact entire economies. Providing better estimates, setting objectives, and identifying critical hot spots in embedded-software engineering requires adequate benchmarking data.",
"title": ""
},
{
"docid": "dee2b99fd5ae1d48c8e8b29047aa97ce",
"text": "Nonlinear time series analysis techniques have been proposed to detect changes in the electroencephalography dynamics prior to epileptic seizures. Their applicability in practice to predict seizure onsets is hampered by the present lack of generally accepted standards to assess their performance. We propose an analytic approach to judge the prediction performance of multivariate seizure prediction methods. Statistical tests are introduced to assess patient individual results, taking into account that prediction methods are applied to multiple time series and several seizures. Their performance is illustrated utilizing a bivariate seizure prediction method based on synchronization theory.",
"title": ""
},
{
"docid": "d16ec1f4c32267a07b1453d45bc8a6f2",
"text": "Knowledge representation learning (KRL), exploited by various applications such as question answering and information retrieval, aims to embed the entities and relations contained by the knowledge graph into points of a vector space such that the semantic and structure information of the graph is well preserved in the representing space. However, the previous works mainly learned the embedding representations by treating each entity and relation equally which tends to ignore the inherent imbalance and heterogeneous properties existing in knowledge graph. By visualizing the representation results obtained from classic algorithm TransE in detail, we reveal the disadvantages caused by this homogeneous learning strategy and gain insight of designing policy for the homogeneous representation learning. In this paper, we propose a novel margin-based pairwise representation learning framework to be incorporated into many KRL approaches, with the method of introducing adaptivity according to the degree of knowledge heterogeneity. More specially, an adaptive margin appropriate to separate the real samples from fake samples in the embedding space is first proposed based on the sample’s distribution density, and then an adaptive weight is suggested to explicitly address the trade-off between the different contributions coming from the real and fake samples respectively. The experiments show that our Adaptive Weighted Margin Learning (AWML) framework can help the previous work achieve a better performance on real-world Knowledge Graphs Freebase and WordNet in the tasks of both link prediction and triplet classification.",
"title": ""
},
{
"docid": "ce858818f7575684a6f3479c3124fffd",
"text": "Most object detection systems consist of three stages. First, a set of individual hypotheses for object locations is generated using a proposal generating algorithm. Second, a classifier scores every generated hypothesis independently to obtain a multi-class prediction. Finally, all scored hypotheses are filtered via a non-differentiable and decoupled non-maximum suppression (NMS) post-processing step. In this paper, we propose a filtering network (FNet), a method which replaces NMS with a differentiable neural network that allows joint reasoning and rescoring of the generated set of hypotheses per image. This formulation enables end-to-end training of the full object detection pipeline. First, we demonstrate that FNet, a feed-forward network architecture, is able to mimic NMS decisions, despite the sequential nature of NMS. We further analyze NMS failures and propose a loss formulation that is better aligned with the mean average precision (mAP) evaluation metric. We evaluate FNet on several standard detection datasets. Results surpass standard NMS on highly occluded settings of a synthetic overlapping MNIST dataset and show competitive behavior on PascalVOC2007 and KITTI detection benchmarks.",
"title": ""
},
{
"docid": "23d2349831a364e6b77e3c263a8321c8",
"text": "Almost a decade has passed since we started advocating a process of usability design [20-22]. This article is a status report about the value of this process and, mainly, a description of new ideas for enhancing the use of the process. We first note that, when followed, the process leads to usable, useful, likeable computer systems and applications. Nevertheless, experience and observational evidence show that (because of the way development work is organized and carried out) the process is often not followed, despite designers' enthusiasm and motivation to do so. To get around these organizational and technical obstacles, we propose a) greater reliance on existing methodologies for establishing testable usability and productivity-enhancing goals; b) a new method for identifying and focusing attention on long-term trends about the effects that computer applications have on end-user productivity; and c) a new approach, now under way, to application development, particularly the development of user interfaces. The process consists of four activities [18, 20-22]. Early Focus On Users. Designers should have direct contact with intended or actual users, via interviews, observations, surveys, and participatory design. The aim is to understand users' cognitive, behavioral, attitudinal, and anthropometric characteristics, and the characteristics of the jobs they will be doing. Integrated Design. All aspects of usability (e.g., user interface, help system, training plan, documentation) should evolve in parallel, rather than be defined sequentially, and should be under one management. Early And Continual User Testing. The only presently feasible approach to successful design is an empirical one, requiring observation and measurement of user behavior, careful evaluation of feedback, insightful solutions to existing problems, and strong motivation to make design changes. Iterative Design. A system under development must be modified based upon the results of behavioral tests of functions, user interface, help system, documentation, and training approach. This process of implementation, testing, feedback, evaluation, and change must be repeated to iteratively improve the system. We, and others proposing similar ideas (see below), have worked hard at spreading this process of usability design. We have used numerous channels to accomplish this: frequent talks, workshops, seminars, publications, consulting, addressing arguments used against it [22], conducting a direct case study of the process [20], and identifying methods for people not fully trained as human factors professionals to use in carrying out this process [18]. The Process Works. Several lines of evidence indicate that this usability design process leads to systems, applications, and products …",
"title": ""
},
{
"docid": "ad49595bd04c3285be2939e4ced77551",
"text": "Embedded systems have found a very strong foothold in global Information Technology (IT) market since they can provide very specialized and intricate functionality to a wide range of products. On the other hand, the migration of IT functionality to a plethora of new smart devices (like mobile phones, cars, aviation, game or households machines) has enabled the collection of a considerable number of data that can be characterized sensitive. Therefore, there is a need for protecting that data through IT security means. However, eare usually dployed in hostile environments where they can be easily subject of physical attacks. In this paper, we provide an overview from ES hardware perspective of methods and mechanisms for providing strong security and trust. The various categories of physical attacks on security related embedded systems are presented along with countermeasures to thwart them and the importance of reconfigurable logic flexibility, adaptability and scalability along with trust protection mechanisms is highlighted. We adopt those mechanisms in order to propose a FPGA based embedded system hardware architecture capable of providing security and trust along with physical attack protection using trust zone separation. The benefits of such approach are discussed and a subsystem of the proposed architecture is implemented in FPGA technology as a proof of concept case study. From the performed analysis and implementation, it is concluded that flexibility, security and trust are fully realistic options for embedded system security enhancement. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1acbb63a43218d216a2e850d9b3d3fa1",
"text": "In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes-a candidate architecture for meeting future capacity, quality-of-service, and energy efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage to a BS in one plane has to be compensated by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of both data and control planes. The COD algorithm for control cells leverages the relatively larger number of UEs in the control cell to gather large-scale minimization-of-drive-test report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly-detecting algorithms, i.e., k-nearest-neighbor- and local-outlier-factor-based anomaly detectors, within the control COD. On the other hand, for data cell COD, we propose a heuristic Grey-prediction-based approach, which can work with the small number of UE in the data cell, by exploiting the fact that the control BS manages UE-data BS connectivity and by receiving a periodic update of the received signal reference power statistic between the UEs and data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes. 
Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane, by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. The simulation results show that the proposed framework can detect both data and control cell outage and compensate for the detected outage in a reliable manner.",
"title": ""
},
{
"docid": "10e88f0d1a339c424f7e0b8fa5b43c1e",
"text": "Hash functions play an important role in modern cryptography. This paper investigates optimisation techniques that have recently been proposed in the literature. A new VLSI architecture for the SHA-256 and SHA-512 hash functions is presented, which combines two popular hardware optimisation techniques, namely pipelining and unrolling. The SHA processors are developed for implementation on FPGAs, thereby allowing rapid prototyping of several designs. Speed/area results from these processors are analysed and are shown to compare favourably with other FPGA-based implementations, achieving the fastest data throughputs in the literature to date",
"title": ""
},
{
"docid": "73c2874b381e49f9c36ae0b43d7e73fb",
"text": "Automatic abnormality detection in video sequences has recently gained increasing attention within the research community. Although progress has been seen, there are still some limitations in current research. While most systems are designed to detect a specific abnormality, those capable of detecting more than two types of abnormalities rely on heavy computation. Therefore, we provide a framework for detecting abnormalities in video surveillance by using multiple features and cascade classifiers, yet achieve above real-time processing speed. Experimental results on two datasets show that the proposed framework can reliably detect abnormalities in the video sequence, outperforming the current state-of-the-art methods.",
"title": ""
},
{
"docid": "8d4bdc3e5e84a63a76e6a226a9f0e558",
"text": "HTTP cookies are the de facto mechanism for session authentication in Web applications. However, their inherent security weaknesses allow attacks against the integrity of Web sessions. HTTPS is often recommended to protect cookies, but deploying full HTTPS support can be challenging due to performance and financial concerns, especially for highly distributed applications. Moreover, cookies can be exposed in a variety of ways even when HTTPS is enabled. In this article, we propose one-time cookies (OTC), a more robust alternative for session authentication. OTC prevents attacks such as session hijacking by signing each user request with a session secret securely stored in the browser. Unlike other proposed solutions, OTC does not require expensive state synchronization in the Web application, making it easily deployable in highly distributed systems. We implemented OTC as a plug-in for the popular WordPress platform and as an extension for Firefox and Firefox for mobile browsers. Our extensive experimental analysis shows that OTC introduces a latency of less than 6 ms when compared to cookies—a negligible overhead for most Web applications. Moreover, we show that OTC can be combined with HTTPS to effectively add another layer of security to Web applications. In so doing, we demonstrate that one-time cookies can significantly improve the security of Web applications with minimal impact on performance and scalability.",
"title": ""
},
{
"docid": "0528bc602b9a48e30fbce70da345c0ee",
"text": "The power system is a dynamic system and it is constantly being subjected to disturbances. It is important that these disturbances do not drive the system to unstable conditions. For this purpose, additional signals derived from deviation, excitation deviation and accelerating power are injected into voltage regulators. The device that provides these signals is referred to as a power system stabilizer. The use of power system stabilizers has become very common in the operation of large electric power systems. The conventional PSS, which uses lead-lag compensation and whose gain settings are designed for specific operating conditions, gives poor performance under different loading conditions. Therefore, it is very difficult to design a stabilizer that could present good performance at all operating points of electric power systems. In an attempt to cover a wide range of operating conditions, fuzzy logic control has been suggested as a possible solution to overcome this problem, using linguistic information and avoiding a complex system mathematical model, while giving good performance under different operating conditions.",
"title": ""
},
{
"docid": "3f5e8ac89e893d3166f5e3c50f91b8cc",
"text": "Biosequences typically have a small alphabet, a long length, and patterns containing gaps (i.e., \"don't care\") of arbitrary size. Mining frequent patterns in such sequences faces a different type of explosion than in transaction sequences primarily motivated in market-basket analysis. In this paper, we study how this explosion affects the classic sequential pattern mining, and present a scalable two-phase algorithm to deal with this new explosion. The <i>Segment Phase</i> first searches for short patterns containing no gaps, called <i>segments</i>. This phase is efficient. The <i>Pattern Phase</i> searches for long patterns containing multiple segments separated by variable length gaps. This phase is time consuming. The purpose of two phases is to exploit the information obtained from the first phase to speed up the pattern growth and matching and to prune the search space in the second phase. We evaluate this approach on synthetic and real life data sets.",
"title": ""
},
{
"docid": "7d0fb12fce0ef052684a8664a3f5c543",
"text": "In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose datadriven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the “risky region” as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage.",
"title": ""
},
{
"docid": "a8b7ee0a870843eabe0c21478fa2cc7b",
"text": "Calcium channel blockers (CCBs) were developed as vasodilators, and their use in cardiovascular disease treatment remains largely based on that mechanism of action. More recently, with the evolution of second- and third-generation CCBs, pleiotropic effects have been observed, and at least some of CCBs' benefit is attributable to these mechanisms. Understanding these effects has contributed greatly to elucidating disease mechanisms and the rationale for CCB use. Furthermore, this knowledge might clarify why drugs are useful in some disease states, such as atherosclerosis, but not in others, such as heart failure. Although numerous drugs used in the treatment of vascular disease, including statins and angiotensin-converting-enzyme inhibitors, have well-described pleiotropic effects universally accepted to contribute to their benefit, little attention has been paid to CCBs' potentially similar effects. Accumulating evidence that at least 1 CCB, amlodipine, has pharmacologic actions distinct from L-type calcium channel blockade prompted us to investigate the pleiotropic actions of amlodipine and CCBs in general. There are several areas of research; foci here are (1) the physicochemical properties of amlodipine and its interaction with cholesterol and oxidants; (2) the mechanism by which amlodipine regulates NO production and implications; and (3) amlodipine's role in controlling smooth muscle cell proliferation and matrix formation.",
"title": ""
},
{
"docid": "ed08e93061f2d248f6b70fde6e17b431",
"text": "With the rapid growth of e-commerce, the B2C of e-commerce has been a significant issue. The purpose of this study aims to predict consumers’ purchase intentions by integrating trust and perceived risk into the model to empirically examine the impact of key variables. 705 samples were obtained from online users purchasing from e-vendor of Yahoo! Kimo. This study applied the Structural Equation Model to examine consumers’ online shopping based on the Technology Acceptance Model (TAM). The results indicate that perceived ease of use (PEOU), perceived usefulness (PU), trust, and perceived risk significantly impact purchase intentions both directly and indirectly. Moreover, trust significantly reduced online consumer perceived risk during online shopping. This study provides evidence of the relationship between consumers’ purchase intention, perceived trust and perceived risk to websites of specific e-vendors. Such knowledge may help to inform promotion, designing, and advertising website strategies employed by practitioners.",
"title": ""
},
{
"docid": "84a22b5539293887781db072a10d4a64",
"text": "Multimodal sentiment analysis is the analysis of emotions, attitudes, and opinions from audiovisual formats. A company can improve the quality of its products and services by analyzing the reviews about the product [5]. Sentiment analysis is widely used in managing customer relations. There are many textual reviews from which we cannot extract emotions by traditional sentiment analysis techniques. Some sentences in textual reviews may convey deep emotions but do not contain any keyword to detect those emotions, so we used audiovisual reviews in order to detect emotions from the facial expressions of the customer. In this paper we take audiovisual input, extract emotions from video and audio in parallel, and finally classify the overall review as positive, negative or neutral based on the emotions detected.",
"title": ""
},
{
"docid": "6f8e441738a0c045a83f0e1efd4e0bbd",
"text": "Irony and humour are just two of many forms of figurative language. Approaches to identify humorous or ironic statements in vast volumes of data such as the internet are important not only from a theoretical viewpoint but also for their potential applicability in social networks or human-computer interactive systems. In this study we investigate the automatic detection of irony and humour in social networks such as Twitter, casting it as a classification problem. We propose a rich set of features for text interpretation and representation to train classification procedures. In cross-domain classification experiments our model achieves and improves state-of-the-art results.",
"title": ""
}
] |
scidocsrr
|
df5e7d89192b964d3e7905f2b59aac31
|
CCM and DCM Operation of the Interleaved Two-Phase Boost Converter With Discrete and Coupled Inductors
|
[
{
"docid": "78b07bce8817c60dce98ad434d1fc3e0",
"text": "Boost converters are widely used as power-factor-corrected preregulators. In high-power applications, interleaved operation of two or more boost converters has been proposed to increase the output power and to reduce the output ripple. A major design criterion then is to ensure equal current sharing among the parallel converters. In this paper, a converter consisting of two interleaved and intercoupled boost converter cells is proposed and investigated. The boost converter cells have very good current sharing characteristics even in the presence of relatively large duty cycle mismatch. In addition, it can be designed to have small input current ripple and zero boost-rectifier reverse-recovery loss. The operating principle, steady-state analysis, and comparison with the conventional boost converter are presented. Simulation and experimental results are also given.",
"title": ""
},
{
"docid": "6420f394cb02e9415b574720a9c64e7f",
"text": "Interleaved power converter topologies have received increasing attention in recent years for high power and high performance applications. The advantages of interleaved boost converters include increased efficiency, reduced size, reduced electromagnetic emission, faster transient response, and improved reliability. The front end inductors in an interleaved boost converter are magnetically coupled to improve electrical performance and reduce size and weight. Compared to a direct coupled configuration, inverse coupling provides the advantages of lower inductor ripple current and negligible dc flux levels in the core. In this paper, we explore the possible advantages of core geometry on core losses and converter efficiency. Analysis of FEA simulation and empirical characterization data indicates a potential superiority of a square core, with symmetric 45deg energy storage corner gaps, for providing both ac flux balance and maximum dc flux cancellation when wound in an inverse coupled configuration.",
"title": ""
}
] |
[
{
"docid": "0c67628fb24c8cbd4a8e49fb30ba625e",
"text": "Modeling the evolution of topics with time is of great value in automatic summarization and analysis of large document collections. In this work, we propose a new probabilistic graphical model to address this issue. The new model, which we call the Multiscale Topic Tomography Model (MTTM), employs non-homogeneous Poisson processes to model generation of word-counts. The evolution of topics is modeled through a multi-scale analysis using Haar wavelets. One of the new features of the model is its modeling the evolution of topics at various time-scales of resolution, allowing the user to zoom in and out of the time-scales. Our experiments on Science data using the new model uncovers some interesting patterns in topics. The new model is also comparable to LDA in predicting unseen data as demonstrated by our perplexity experiments.",
"title": ""
},
{
"docid": "ea4a1405e1c6444726d1854c7c56a30d",
"text": "This paper presents a novel integrated approach for efficient optimization based online trajectory planning of topologically distinctive mobile robot trajectories. Online trajectory optimization deforms an initial coarse path generated by a global planner by minimizing objectives such as path length, transition time or control effort. Kinodynamic motion properties of mobile robots and clearance from obstacles impose additional equality and inequality constraints on the trajectory optimization. Local planners account for efficiency by restricting the search space to locally optimal solutions only. However, the objective function is usually non-convex as the presence of obstacles generates multiple distinctive local optima. The proposed method maintains and simultaneously optimizes a subset of admissible candidate trajectories of distinctive topologies and thus seeking the overall best candidate among the set of alternative local solutions. Time-optimal trajectories for differential-drive and carlike robots are obtained efficiently by adopting the Timed-Elastic-Band approach for the underlying trajectory optimization problem. The investigation of various example scenarios and a comparative analysis with conventional local planners confirm the advantages of integrated exploration, maintenance and optimization of topologically distinctive trajectories. ∗Corresponding author Email address: christoph.roesmann@tu-dortmund.de (Christoph Rösmann) Preprint submitted to Robotics and Autonomous Systems November 12, 2016",
"title": ""
},
{
"docid": "7535a7351849c5a6dd65611037d06678",
"text": "In this paper, we present an optimistic concurrency control solution. The proposed solution makes a novel contribution to the concurrency control field. It deals with concurrency control anomalies and, simultaneously, assures the reliability of the data before read-write transactions and after they are successfully committed. It can be used within a distributed database to track data logs and roll back processes to overcome distributed database anomalies. The method is based on commit timestamps for validation and an integer flag that is incremented each time a successful update on the record is committed.",
"title": ""
},
{
"docid": "75f916790044fab6e267c5c5ec5846b7",
"text": "Detecting circles from a digital image is very important in shape recognition. In this paper, an efficient randomized algorithm (RCD) for detecting circles is presented, which is not based on the Hough transform (HT). Instead of using an accumulator for saving the information of the related parameters in the HT-based methods, the proposed RCD does not need an accumulator. The main concept used in the proposed RCD is that we first randomly select four edge pixels in the image and define a distance criterion to determine whether there is a possible circle in the image; after finding a possible circle, we apply an evidence-collecting process to further determine whether the possible circle is a true circle or not. Some synthetic images with different levels of noise and some realistic images containing circular objects with some occluded circles and missing edges have been taken to test the performance. Experimental results demonstrate that the proposed RCD is faster than other HT-based methods for noise levels between the light level and the modest level. For a heavy noise level, the randomized HT could be faster than the proposed RCD, but at the expense of massive memory requirements. © 2001 Academic Press",
"title": ""
},
{
"docid": "7ed1fe4218a708a3ee4baf67b5f8bea2",
"text": "A business process is the combination of a set of activities within an enterprise with a structure describing their logical order and dependence whose objective is to produce a desired result. Business process modelling enables a common understanding and analysis of a business process. A process model can provide a comprehensive understanding of a process. An enterprise can be analysed and integrated through its business processes. Hence the importance of correctly modelling its business processes. Using the right model involves taking into account the purpose of the analysis and, knowledge of the available process modelling techniques and tools. The number of references on business modelling is huge, thus making it very time consuming to get an overview and understand many of the concepts and vocabulary involved. The primary concern of this paper is to make that job easier, i.e. review business process modelling literature and describe the main process modelling techniques. Also a framework for classifying business process-modelling techniques according to their purpose is proposed and discussed. r 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "cdcd2a627b1d7d94adc1bfa831667cf7",
"text": "Solving mazes is not just a fun pastime: They are prototype models in several areas of science and technology. However, when maze complexity increases, their solution becomes cumbersome and very time consuming. Here, we show that a network of memristors--resistors with memory--can solve such a nontrivial problem quite easily. In particular, maze solving by the network of memristors occurs in a massively parallel fashion since all memristors in the network participate simultaneously in the calculation. The result of the calculation is then recorded into the memristors' states and can be used and/or recovered at a later time. Furthermore, the network of memristors finds all possible solutions in multiple-solution mazes and sorts out the solution paths according to their length. Our results demonstrate not only the application of memristive networks to the field of massively parallel computing, but also an algorithm to solve mazes, which could find applications in different fields.",
"title": ""
},
{
"docid": "c1ee9109435a6535e1512669b632e490",
"text": "The theory of structural holes suggests that individuals would benefit from filling the \"holes\" (called as structural hole spanners) between people or groups that are otherwise disconnected. A few empirical studies have verified that structural hole spanners play a key role in the information diffusion. However, there is still lack of a principled methodology to detect structural hole spanners from a given social network.\n In this work, we precisely define the problem of mining top-k structural hole spanners in large-scale social networks and provide an objective (quality) function to formalize the problem. Two instantiation models have been developed to implement the objective function. For the first model, we present an exact algorithm to solve it and prove its convergence. As for the second model, the optimization is proved to be NP-hard, and we design an efficient algorithm with provable approximation guarantees.\n We test the proposed models on three different networks: Coauthor, Twitter, and Inventor. Our study provides evidence for the theory of structural holes, e.g., 1% of Twitter users who span structural holes control 25% of the information diffusion on Twitter. We compare the proposed models with several alternative methods and the results show that our models clearly outperform the comparison methods. Our experiments also demonstrate that the detected structural hole spanners can help other social network applications, such as community kernel detection and link prediction. To the best of our knowledge, this is the first attempt to address the problem of mining structural hole spanners in large social networks.",
"title": ""
},
{
"docid": "3dd238bc2b51b3aaf9b8b6900fc82d12",
"text": "Nowadays many applications generate streaming data, for example real-time surveillance, internet traffic, sensor data, health monitoring systems, communication networks, online transactions in the financial market and so on. Data streams are temporally ordered, fast changing, massive, and potentially infinite sequences of data. Data stream mining is a very challenging problem. This is due to the fact that data streams are of tremendous volume and flow at very high speed, which makes it impossible to store and scan streaming data multiple times. Concept evolution in streaming data further magnifies the challenge of working with streaming data. Clustering is a data stream mining task which is very useful to gain insight into data and data characteristics. Clustering is also used as a pre-processing step in the overall mining process; for example, clustering is used for outlier detection and for building classification models. In this paper we will focus on the challenges and necessary features of data stream clustering techniques, review and compare the literature for data stream clustering by example and by variable, describe some real-world applications of data stream clustering, and present tools for data stream clustering.",
"title": ""
},
{
"docid": "08f3e3a76808c546ed761a24fb10561c",
"text": "We propose a pre-training technique for recurrent neural networks based on linear autoencoder networks for sequences, i.e. linear dynamical systems modelling the target sequences. We start by giving a closed form solution for the definition of the optimal weights of a linear autoencoder given a training set of sequences. This solution, however, is computationally very demanding, so we suggest a procedure to get an approximate solution for a given number of hidden units. The weights obtained for the linear autoencoder are then used as initial weights for the input-to-hidden connections of a recurrent neural network, which is then trained on the desired task. Using four well known datasets of sequences of polyphonic music, we show that the proposed pre-training approach is highly effective, since it allows us to largely improve the state of the art results on all the considered datasets.",
"title": ""
},
{
"docid": "2ead8dda09a272942657787371dbd768",
"text": "Some billiard tables in R2 contain crucial references to dynamical systems but can be analyzed with Euclidean geometry. In this expository paper, we will analyze billiard trajectories in circles, circular rings, and ellipses as well as relate their characteristics to ergodic theory and dynamical systems.",
"title": ""
},
{
"docid": "f02b44ff478952f1958ba33d8a488b8e",
"text": "Plagiarism is an illicit act of using other’s work wholly or partially as one’s own in any field such as art, poetry literature, cinema, research and other creative forms of study. It has become a serious crime in academia and research fields and access to wide range of resources on the internet has made the situation even worse. Therefore, there is a need for automatic detection of plagiarism in text. This paper presents a survey of various plagiarism detection techniques used for different languages.",
"title": ""
},
{
"docid": "15a32d88604b2894b9a6f323907fac1d",
"text": "We examined closely the cerebellar circuit model that we have proposed previously. The model granular layer generates a finite but very long sequence of active neuron populations without recurrence, which is able to represent the passage of time. For all the possible binary patterns fed into mossy fibres, the circuit generates the same number of different sequences of active neuron populations. Model Purkinje cells that receive parallel fiber inputs from neurons in the granular layer learn to stop eliciting spikes at the timing instructed by the arrival of signals from the inferior olive. These functional roles of the granular layer and Purkinje cells are regarded as a liquid state generator and readout neurons, respectively. Thus, the cerebellum that has been considered to date as a biological counterpart of a perceptron is reinterpreted to be a liquid state machine that possesses more powerful information processing capability than a perceptron.",
"title": ""
},
{
"docid": "fb053dbc02ea256743696ea5546d7729",
"text": "PURPOSE\nThis study introduces an updated Three-Column Concept for the classification and treatment of complex tibial plateau fractures. A combined preoperative assessment of fracture morphology and injury mechanism is utilized to determine surgical approach, implant placement and fixation sequence. The effectiveness of this updated concept is demonstrated through evaluation of both clinical and radiographic outcome measures.\n\n\nPATIENTS AND METHODS\nFrom 2008 to 2012, 355 tibial plateau fractures were treated using the updated Three-Column Concept. Standard radiographic and computed tomography imaging are used to systematically assess and classify fracture patterns as follows: (1) identify column(s) injured and locate associated articular depression or comminution, (2) determine injury mechanism including varus/valgus and flexion/extension forces, and (3) determine surgical approach(es) as well as the location and function of applied fixation. Quality and maintenance of reduction and alignment, fracture healing, complications, and functional outcomes were assessed.\n\n\nRESULTS\n287 treated fractures were followed up for a mean period of 44.5 months (range: 22-96). The mean time to radiographic bony union and full weight-bearing was 13.5 weeks (range: 10-28) and 14.8 weeks (range: 10-26) respectively. The average functional Knee Society Score was 93.0 (range: 80-95). The average range of motion of the affected knees was 1.5-121.5°. No significant difference was found in knee alignment between immediate and 18-month post-operative measurements. Additionally, no significant difference was found in functional scores and range of motion between one, two and three-column fracture groups. Twelve patients suffered superficial infection, one had limited skin necrosis and two had wound dehiscence, which healed with nonoperative management. Intraoperative vascular injury occurred in two patients. Failure of fixation was not observed in any of the fractures treated.\n\n\nCONCLUSION\nAn updated Three-Column Concept assessing fracture morphology and injury mechanism in tandem can be used to guide surgical treatment of tibial plateau fractures. Limited results demonstrate successful application of biologically friendly fixation constructs while avoiding fixation failure and associated complications of both simple and complex tibial plateau fractures.\n\n\nLEVEL OF EVIDENCE\nLevel II, prospective cohort study.",
"title": ""
},
{
"docid": "583b8cda1ef421011f7801bc35b82b8b",
"text": "This paper presents a natural language processing based automated system for modeling user requirements from NL text into OO models and generating code in multiple languages. A new rule-based model is presented for analyzing natural languages (NL) and extracting the relevant and required information from the software requirement notes given by the user. The user writes the requirements in simple English in a few paragraphs, and the designed system incorporates NLP methods to analyze the given script. First the NL text is semantically analyzed to extract classes, objects and their respective attributes, methods and associations. Then UML diagrams are generated on the basis of the previously extracted information. The designed system also automatically provides the corresponding code for the already generated diagrams. The designed system provides a quick and reliable way to generate UML diagrams to save the time and budget of both the user and the system analyst.",
"title": ""
},
{
"docid": "94488dafad4441028a91d5802ec6e121",
"text": "Vulvovaginal atrophy is a common condition associated with decreased estrogenization of the vaginal tissue. Symptoms include vaginal dryness, irritation, itching, soreness, burning, dyspareunia, discharge, urinary frequency, and urgency. It can occur at any time in a woman's life cycle, although more commonly in the postmenopausal phase, during which the prevalence is approximately 50%. Despite the high prevalence and the substantial effect on quality of life, vulvovaginal atrophy often remains underreported and undertreated. This article aims to review the physiology, clinical presentation, assessment, and current recommendations for treatment, including aspects of effectiveness and safety of local vaginal estrogen therapies.",
"title": ""
},
{
"docid": "e6d5f3c9a58afcceae99ff522d6dfa81",
"text": "Strategic information systems planning (SISP) is a key concern facing top business and information systems executives. Observers have suggested that both too little and too much SISP can prove ineffective. Hypotheses examine the expected relationship between comprehensiveness and effectiveness in five SISP planning phases. They predict a nonlinear, inverted-U relationship thus suggesting the existence of an optimal level of comprehensiveness. A survey collected data from 161 US information systems executives. After an extensive validation of the constructs, the statistical analysis supported the hypothesis in a Strategy Implementation Planning phase, but not in terms of the other four SISP phases. Managers may benefit from the knowledge that both too much and too little implementation planning may hinder SISP success. Future researchers should investigate why the hypothesis was supported for that phase, but not the others. q 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e349ca11637dfad2d68a5082e27f11ff",
"text": "As the capabilities of artificial intelligence (AI) systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. We show that a Multiobjective Maximum Expected Utility paradigm based on the combination of vector utilities and non-linear action–selection can overcome many of the issues which limit MEU’s effectiveness in implementing aligned AI. We examine existing approaches to multiobjective AI, and identify how these can contribute to the development of human-aligned intelligent agents.",
"title": ""
},
{
"docid": "87878562478c3188b3f0e3e1b99e08b8",
"text": "This paper introduces a simple method to improve the radiation pattern of the low profile magneto-electric (ME) dipole antenna by adding a substrate integrated waveguide (SIW) side-wall structure around it. Compared with the original ME dipole antenna, a gain enhancement of about 3dB on average is achieved without deteriorating the impedance bandwidth. The antenna operates at 15GHz with 63.3% -10dB impedance bandwidth from 10.8GHz to 18.4GHz and the gain is 12.3dBi at 17GHz on a substrate with a fixed thickness of 3mm (0.15λ0) and an aperture of 35mm×35mm (1.75λ0). This antenna is a good choice for wireless communication applications for its advantages of low profile, wide bandwidth, high gain and low cost fabrication.",
"title": ""
},
{
"docid": "50d397416652309e2c371aaeb53dc1da",
"text": "In conventional energy storage systems using series-connected energy storage cells such as lithium-ion battery cells and supercapacitors (SCs), an interface bidirectional converter and cell voltage equalizer are separately required to manage charging/discharging and ensure years of safe operation. In this paper, a bidirectional PWM converter integrating cell voltage equalizer is proposed. This proposed integrated converter can be derived by combining a traditional bidirectional PWM converter and series-resonant voltage multiplier (SRVM) that functionally operates as an equalizer and is driven by asymmetric square wave voltage generated at the switching node of the converter. The converter and equalizer can be integrated into a single unit without increasing the switch count, achieving not only system-level but also circuit-level simplifications. Open-loop control is feasible for the SRVM when operated in discontinuous conduction mode, meaning the proposed integrated converter can operate similarly to conventional bidirectional converters. An experimental charge-discharge cycling test for six SCs connected in series was performed using the proposed integrated converter. The cell voltage imbalance was gradually eliminated by the SRVM while series-connected SCs were cycled by the bidirectional converter. All the cell voltages were eventually unified, demonstrating the integrated functions of the proposed converter.",
"title": ""
}
] |
scidocsrr
|
b748fe59c5c5bf6b49e0b8cb9adf2f0b
|
Enforcing Least Privilege Memory Views for Multithreaded Applications
|
[
{
"docid": "ca8aba51ab75cb86a32b6913ed9690cc",
"text": "Capsicum is a lightweight operating system capability and sandbox framework planned for inclusion in FreeBSD 9. Capsicum extends, rather than replaces, UNIX APIs, providing new kernel primitives (sandboxed capability mode and capabilities) and a userspace sandbox API. These tools support the compartmentalization of monolithic UNIX applications into logical applications. We demonstrate our approach by adapting core FreeBSD utilities and Google’s Chromium web browser to use Capsicum primitives, and compare the complexity and robustness of Capsicum with other sandboxing techniques.",
"title": ""
}
] |
[
{
"docid": "3a80168bda1d5d92a5d767117581806a",
"text": "During the last years a wide range of algorithms and devices have been made available to easily acquire range images. The increasing abundance of depth data boosts the need for reliable and unsupervised analysis techniques, spanning from part registration to automated segmentation. In this context, we focus on the recognition of known objects in cluttered and incomplete 3D scans. Locating and fitting a model to a scene are very important tasks in many scenarios such as industrial inspection, scene understanding, medical imaging and even gaming. For this reason, these problems have been addressed extensively in the literature. Several of the proposed methods adopt local descriptor-based approaches, while a number of hurdles still hinder the use of global techniques. In this paper we offer a different perspective on the topic: We adopt an evolutionary selection algorithm that seeks global agreement among surface points, while operating at a local level. The approach effectively extends the scope of local descriptors by actively selecting correspondences that satisfy global consistency constraints, allowing us to attack a more challenging scenario where model and scene have different, unknown scales. This leads to a novel and very effective pipeline for 3D object recognition, which is validated with an extensive set of experiments and comparisons with recent techniques at the state of the art.",
"title": ""
},
{
"docid": "bf33724b6be926dd7c46c929b635d31d",
"text": "Biogenic amines are compounds commonly present in living organisms in which they are responsible for many essential functions. They can be naturally present in many foods such as fruits and vegetables, meat, fish, chocolate and milk, but they can also be produced in high amounts by microorganisms through the activity of amino acid decarboxylases. Excessive consumption of these amines can be of health concern because their unbalanced intake in the human organism can generate different degrees of disease through their action on the nervous, gastric and intestinal systems and on blood pressure. High microbial counts, which characterise fermented foods, often unavoidably lead to considerable accumulation of biogenic amines, especially tyramine, 2-phenylethylamine, tryptamine, cadaverine, putrescine and histamine. However, great fluctuations of amine content are reported in the same type of product. These differences depend on many variables: the quali-quantitative composition of the microbial microflora, the chemico-physical variables, the hygienic procedures adopted during production, and the availability of precursors. Dry fermented sausages are fermented meat products diffused worldwide that can be a source of biogenic amines. Even in the absence of specific rules and regulations regarding the presence of these compounds in sausages and other fermented products, increasing attention is given to biogenic amines, especially in relation to the higher number of consumers with enhanced sensitivity to biogenic amines determined by the inhibition of the action of amino oxidases, the enzymes involved in the detoxification of these substances. The aim of this paper is to give an overview of the presence of these compounds in dry fermented sausages and to discuss the most important factors influencing their accumulation. These include process and implicit factors as well as the role of starter and nonstarter microflora growing in the different steps of sausage production. Moreover, the role of microorganisms with amino oxidase activity as starter cultures to control or reduce the accumulation of biogenic amines during ripening and storage of sausages is discussed.",
"title": ""
},
{
"docid": "26e60be4012b20575f3ddee16f046daa",
"text": "Natural scene character recognition is challenging due to the cluttered background, which is hard to separate from text. In this paper, we propose a novel method for robust scene character recognition. Specifically, we first use robust principal component analysis (PCA) to denoise character image by recovering the missing low-rank component and filtering out the sparse noise term, and then use a simple Histogram of oriented Gradient (HOG) to perform image feature extraction, and finally, use a sparse representation based classifier for recognition. In experiments on four public datasets, namely the Char74K dataset, ICADAR 2003 robust reading dataset, Street View Text (SVT) dataset and IIIT5K-word dataset, our method was demonstrated to be competitive with the state-of-the-art methods.",
"title": ""
},
{
"docid": "8d4007b4d769c2d90ae07b5fdaee8688",
"text": "In this project, we implement the semi-supervised Recursive Autoencoder (RAE) and achieve a result comparable with that reported in [1] on the Movie Review Polarity dataset1. We achieve 76.08% accuracy, which is slightly lower than [1]'s result of 76.8%, with a shorter vector length. Experiments show that the model can learn sentiment and build reasonable structures from sentences. We find that longer word vectors and adjustment of the words' meaning vectors are beneficial, while normalization of the transfer function brings some improvement. We also find that normalization of the input word vectors may be beneficial for training.",
"title": ""
},
{
"docid": "1403e5ee76253ebf7e58300bf9f4dc8a",
"text": "PURPOSE\nTo evaluate the marginal fit of CAD/CAM copings milled from hybrid ceramic (Vita Enamic) blocks and lithium disilicate (IPS e.max CAD) blocks, and to evaluate the effect of crystallization firing on the marginal fit of lithium disilicate copings.\n\n\nMATERIALS AND METHODS\nA standardized metal die with a 1-mm-wide shoulder finish line was imaged using the CEREC AC Bluecam. The coping was designed using CEREC 3 software. The design was used to fabricate 15 lithium disilicate and 15 hybrid ceramic copings. Design and milling were accomplished by one operator. The copings were seated on the metal die using a pressure clamp with a uniform pressure of 5.5 lbs. A Macroview Microscope (14×) was used for direct viewing of the marginal gap. Four areas were imaged on each coping (buccal, distal, lingual, mesial). Image analysis software was used to measure the marginal gaps in μm at 15 randomly selected points on each of the four surfaces. A total of 60 measurements were made per specimen. For lithium disilicate copings the measurements for marginal gap were made before and after crystallization firing. Data were analyzed using paired t-test and Kruskal-Wallis test.\n\n\nRESULTS\nThe overall mean difference in marginal gap between the hybrid ceramic and crystallized lithium disilicate copings was statistically significant (p < 0.01). Greater mean marginal gaps were measured for crystallized lithium disilicate copings. The overall mean difference in marginal gap before and after firing (precrystallized and crystallized lithium disilicate copings) showed an average of 62 μm increase in marginal gap after firing. This difference was also significant (p < 0.01).\n\n\nCONCLUSIONS\nA significant difference exists in the marginal gap discrepancy when comparing hybrid ceramic and lithium disilicate CAD/CAM crowns. Also crystallization firing can result in a significant increase in the marginal gap of lithium disilicate CAD/CAM crowns.",
"title": ""
},
{
"docid": "88f75662fb1eaa04f23f4647d772caeb",
"text": "Recently, a new Web development technique for creating interactive Web applications, dubbed Ajax, has emerged. In this new model, the single-page Web interface is composed of individual components which can be updated/replaced independently. If until a year ago, the concern revolved around migrating legacy systems to Web-based settings, today we have a new challenge of migrating Web applications to single-page Ajax applications. Gaining an understanding of the navigational model and user interface structure of the source application is the first step in the migration process. In this paper, we explore how reverse engineering techniques can help analyze classic Web applications for this purpose. Our approach, using a schema-based clustering technique, extracts a navigational model of Web applications, and identifies candidate user interface components to be migrated to a single-page Ajax interface. Additionally, results of a case study, conducted to evaluate our tool, are presented",
"title": ""
},
{
"docid": "9b917dde9a9f9dcf8ed74fd0bb3a07cf",
"text": "We describe an ELECTRONIC SPEAKING GLOVE, designed to facilitate an easy communication through synthesized speech for the benefit of speechless patients. Generally, a speechless person communicates through sign language which is not understood by the majority of people. This final year project is designed to solve this problem. Gestures of fingers of a user of this glove will be converted into synthesized speech to convey an audible message to others, for example in a critical communication with doctors. The glove is internally equipped with multiple flex sensors that are made up of “bend-sensitive resistance elements”. For each specific gesture, internal flex sensors produce a proportional change in resistance of various elements. The processing of this information sends a unique set of signals to the AVR (Advance Virtual RISC) microcontroller which is preprogrammed to speak desired sentences.",
"title": ""
},
{
"docid": "93ec9adabca7fac208a68d277040c254",
"text": "UNLABELLED\nWe developed cyNeo4j, a Cytoscape App to link Cytoscape and Neo4j databases to utilize the performance and storage capacities Neo4j offers. We implemented a Neo4j NetworkAnalyzer, ForceAtlas2 layout and Cypher component to demonstrate the possibilities a distributed setup of Cytoscape and Neo4j have.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe app is available from the Cytoscape App Store at http://apps.cytoscape.org/apps/cyneo4j, the Neo4j plugins at www.github.com/gsummer/cyneo4j-parent and the community and commercial editions of Neo4j can be found at http://www.neo4j.com.\n\n\nCONTACT\ngeorg.summer@gmail.com.",
"title": ""
},
{
"docid": "badaac9fe5a7ea032a6aba5696036274",
"text": "As a continuation of our previous research on a dancing humanoid robot with 33 degrees of freedom, in this paper we describe how to build dance pattern planning for Indonesian traditional dance movements using the complete set of degrees of freedom. We discuss the primitive poses that commonly occur in traditional dance and build the transition patterns among the poses. The motion pattern method between the poses is based on the ability of the robot to reach the zero moment point position, and a system to synchronize timing for the dance motion is also built. In this research the zero moment point is our main concern because the worst outcome in humanoid research is when the robot cannot maintain its balance. The computation ensures that the zero moment point stays within the support polygon area.",
"title": ""
},
{
"docid": "d0985c38f3441ca0d69af8afaf67c998",
"text": "In this paper we discuss the importance of ambiguity, uncertainty and limited information on individuals’ decision making in situations that have an impact on their privacy. We present experimental evidence from a survey study that demonstrates the impact of framing a marketing offer on participants’ willingness to accept when the consequences of the offer are uncertain and highly ambiguous.",
"title": ""
},
{
"docid": "b06679e91a8d68b8535054e36c333a82",
"text": "With its design concept of cross-platform portability, OpenCL can be used not only on GPUs (for which it is quite popular), but also on CPUs. Whether porting GPU programs to CPUs, or simply writing new code for CPUs, using OpenCL brings up the performance issue, usually raised in one of two forms: \"OpenCL is not performance portable!\" or \"Why using OpenCL for CPUs after all?!\". We argue that both issues can be addressed by a thorough study of the factors that impact the performance of OpenCL on CPUs. This analysis is the focus of this paper. Specifically, starting from the two main architectural mismatches between many-core CPUs and the OpenCL platform-parallelism granularity and the memory model-we identify eight such performance \"traps\" that lead to performance degradation in OpenCL for CPUs. Using multiple code examples, from both synthetic and real-life benchmarks, we quantify the impact of these traps, showing how avoiding them can give up to 10 times better performance. Furthermore, we point out that the solutions we provide for avoiding these traps are simple and generic code transformations, which can be easily adopted by either programmers or automated tools. Therefore, we conclude that a certain degree of OpenCL inter-platform performance portability, while indeed not a given, can be achieved by simple and generic code transformations.",
"title": ""
},
{
"docid": "ecbd9201a7f8094a02fcec2c4f78240d",
"text": "Neural network compression has recently received much attention due to the computational requirements of modern deep models. In this work, our objective is to transfer knowledge from a deep and accurate model to a smaller one. Our contributions are threefold: (i) we propose an adversarial network compression approach to train the small student network to mimic the large teacher, without the need for labels during training; (ii) we introduce a regularization scheme to prevent a trivially-strong discriminator without reducing the network capacity and (iii) our approach generalizes on different teacher-student models. In an extensive evaluation on five standard datasets, we show that our student has small accuracy drop, achieves better performance than other knowledge transfer approaches and it surpasses the performance of the same network trained with labels. In addition, we demonstrate state-of-the-art results compared to other compression strategies.",
"title": ""
},
{
"docid": "822b3d69fd4c55f45a30ff866c78c2b1",
"text": "Orthogonal frequency-division multiplexing (OFDM) modulation is a promising technique for achieving the high bit rates required for a wireless multimedia service. Without channel estimation and tracking, OFDM systems have to use differential phase-shift keying (DPSK), which has a 3-dB signal-to-noise ratio (SNR) loss compared with coherent phase-shift keying (PSK). To improve the performance of OFDM systems by using coherent PSK, we investigate robust channel estimation for OFDM systems. We derive a minimum mean-square-error (MMSE) channel estimator, which makes full use of the time- and frequency-domain correlations of the frequency response of time-varying dispersive fading channels. Since the channel statistics are usually unknown, we also analyze the mismatch of the estimator-to-channel statistics and propose a robust channel estimator that is insensitive to the channel statistics. The robust channel estimator can significantly improve the performance of OFDM systems in a rapid dispersive fading channel.",
"title": ""
},
{
"docid": "aa362363d6e4b48f7d0b50b02f35a8a2",
"text": "In this paper, we mainly adopt the voting combination method to implement incremental learning for SVM. The incremental learning algorithm proposed in this paper contains two parts in order to tackle different types of incremental learning cases: the first part deals with on-line incremental learning, and the second part deals with batch incremental learning. Finally, we conduct experiments to verify the validity and efficiency of the algorithm.",
"title": ""
},
{
"docid": "4816f5155450af9c95ac6910aad7379c",
"text": "In this paper, a novel high step-up converter is proposed for fuel-cell system applications. As an illustration, a two-phase version configuration is given for demonstration. First, an interleaved structure is adapted for reducing input and output ripples. Then, a Ćuk-type converter is integrated to the first phase to achieve a much higher voltage conversion ratio and avoid operating at extreme duty ratio. In addition, additional capacitors are added as voltage dividers for the two phases for reducing the voltage stress of active switches and diodes, which enables one to adopt lower voltage rating devices to further reduce both switching and conduction losses. Furthermore, the corresponding model is also derived, and analysis of the steady-state characteristic is made to show the merits of the proposed converter. Finally, a 200-W rating prototype system is also constructed to verify the effectiveness of the proposed converter. It is seen that an efficiency of 93.3% can be achieved when the output power is 150-W and the output voltage is 200-V with 0.56 duty ratio.",
"title": ""
},
{
"docid": "83ad3f9cce21b2f4c4f8993a3d418a44",
"text": "Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms which exhibit great performance under a variety of image transformations, with SURF in particular considered as the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK1, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.",
"title": ""
},
{
"docid": "cb130a706cf66f92a1918c58a87847ed",
"text": "Single component organic photodetectors capable of broadband light sensing represent a paradigm shift for designing flexible and inexpensive optoelectronic devices. The present study demonstrates the application of a new quadrupolar 1,4-dihydropyrrolo[3,2-b]pyrrole derivative with spectral sensitivity across 350-830 nm as a potential broadband organic photodetector (OPD) material. The amphoteric redox characteristics evinced from the electrochemical studies are exploited to conceptualize a single component OPD with ITO and Al as active electrodes. The photodiode showed impressive broadband photoresponse to monochromatic light sources of 365, 470, 525, 589, 623, and 830 nm. Current density-voltage (J-V) and transient photoresponse studies showed stable and reproducible performance under continuous on/off modulations. The devices operating in reverse bias at 6 V displayed broad spectral responsivity (R) and very good detectivity (D*) peaking a maximum 0.9 mA W-1 and 1.9 × 1010 Jones (at 623 nm and 500 μW cm-2) with a fast rise and decay times of 75 and 140 ms, respectively. Low dark current densities ranging from 1.8 × 10-10 Acm-2 at 1 V to 7.2 × 10-9 A cm-2 at 6 V renders an operating range to amplify the photocurrent signal, spectral responsivity, and detectivity. Interestingly, the fabricated OPDs display a self-operational mode which is rarely reported for single component organic systems.",
"title": ""
},
{
"docid": "36d0776ad44592db640bd205acee8e39",
"text": "1. A review of the literature shows that in nearly all cases tropical rain forest fragmentation has led to a local loss of species. Isolated fragments suffer reductions in species richness with time after excision from continuous forest, and small fragments often have fewer species recorded for the same effort of observation than large fragments or areas of continuous forest. 2. Birds have been the most frequently studied taxonomic group with respect to the effects of tropical forest fragmentation. 3. The mechanisms of fragmentation-related extinction include the deleterious effects of human disturbance during and after deforestation, the reduction of population sizes, the reduction of immigration rates, forest edge effects, changes in community structure (second- and higher-order effects) and the immigration of exotic species. 4. The relative importance of these mechanisms remains obscure. 5. Animals that are large, sparsely or patchily distributed, or very specialized and intolerant of the vegetation surrounding fragments, are particularly prone to local extinction. 6. The large number of indigenous species that are very sparsely distributed and intolerant of conditions outside the forest make evergreen tropical rain forest particularly susceptible to species loss through fragmentation. 7. Much more research is needed to study what is probably the major threat to global biodiversity.",
"title": ""
},
{
"docid": "3db6fc042a82319935bf5dd0d1491e89",
"text": "We present a piezoelectric-on-silicon Lorentz force magnetometer (LFM) based on a mechanically coupled array of clamped–clamped beam resonators for the detection of lateral ( $xy$ plane) magnetic fields with an extended operating bandwidth of 1.36 kHz. The proposed device exploits piezoelectric transduction to greatly enhance the electromechanical coupling efficiency, which benefits the device sensitivity. Coupling multiple clamped–clamped beams increases the area for piezoelectric transduction, which further increases the sensitivity. The reported device has the widest operating bandwidth among LFMs reported to date with comparable normalized sensitivity despite the quality factor being limited to 30 when operating at ambient pressure instead of vacuum as in most cases of existing LFMs.",
"title": ""
},
{
"docid": "8fc8764c505e3e2d0707256247600bd2",
"text": "The task of cross-modal retrieval, i.e., using a text query to search for images or vice versa, has received considerable attention with the rapid growth of multi-modal web data. Modeling the correlations between different modalities is the key to tackle this problem. In this paper, we propose a correspondence restricted Boltzmann machine (Corr-RBM) to map the original features of bimodal data, such as image and text in our setting, into a low-dimensional common space, in which the heterogeneous data are comparable. In our Corr-RBM, two RBMs built for image and text, respectively, are connected at their individual hidden representation layers by a correlation loss function. A single objective function is constructed to trade off the correlation loss and likelihoods of both modalities. Through the optimization of this objective function, our Corr-RBM is able to capture the correlations between two modalities and learn the representation of each modality simultaneously. Furthermore, we construct two deep neural structures using Corr-RBM as the main building block for the task of cross-modal retrieval. A number of comparison experiments are performed on three public real-world data sets. All of our models show significantly better results than state-of-the-art models in both searching images via text query and vice versa.",
"title": ""
}
] |
scidocsrr
|
3f16f3f3039fe786d7a84d333a3c22cf
|
Automated ontology construction for unstructured text documents
|
[
{
"docid": "69f3a41f7250377b2d99aa61249db37e",
"text": "In this paper, a fuzzy ontology and its application to news summarization are presented. The fuzzy ontology with fuzzy concepts is an extension of the domain ontology with crisp concepts. It is more suitable to describe the domain knowledge than domain ontology for solving the uncertainty reasoning problems. First, the domain ontology with various events of news is predefined by domain experts. The document preprocessing mechanism will generate the meaningful terms based on the news corpus and the Chinese news dictionary defined by the domain expert. Then, the meaningful terms will be classified according to the events of the news by the term classifier. The fuzzy inference mechanism will generate the membership degrees for each fuzzy concept of the fuzzy ontology. Every fuzzy concept has a set of membership degrees associated with various events of the domain ontology. In addition, a news agent based on the fuzzy ontology is also developed for news summarization. The news agent contains five modules, including a retrieval agent, a document preprocessing mechanism, a sentence path extractor, a sentence generator, and a sentence filter to perform news summarization. Furthermore, we construct an experimental website to test the proposed approach. The experimental results show that the news agent based on the fuzzy ontology can effectively operate for news summarization.",
"title": ""
},
{
"docid": "dc3c81411cecd9f3d9cca0d88f52a04d",
"text": "Technology in the field of digital media generates huge amounts of textual information. The potential for exchange and retrieval of information is vast and daunting. The key problem in achieving efficient and user-friendly retrieval is the development of a search mechanism to guarantee delivery of minimal irrelevant information (high precision) while ensuring relevant information is not overlooked (high recall). The traditional solution employs keyword-based search. The only documents retrieved are those containing user specified keywords. But many documents convey desired semantic information without containing these keywords. One can overcome this problem by indexing documents according to meanings rather than words, although this will entail a way of converting words to meanings and the creation of ontologies. We have solved the problem of an index structure through the design and implementation of a concept-based model using domain-dependent ontologies. An ontology is a collection of concepts and their interrelationships, which provide an abstract view of an application domain. We propose a new mechanism that can generate ontologies automatically in order to make our approach scalable. For this we modify the existing self-organizing tree algorithm (SOTA) that constructs a hierarchy. Furthermore, in order to find an appropriate concept for each node in the hierarchy we propose an automatic concept selection algorithm from WordNet, a linguistic ontology. To illustrate the effectiveness of our automatic ontology construction method, we have explored our ontology construction in text documents. The Reuters21578 text document corpus has been used. We have observed that our modified SOTA outperforms hierarchical agglomerative clustering (HAC).",
"title": ""
}
] |
[
{
"docid": "36e531c34dd8f714f481c6ab9ed1a375",
"text": "Generating informative responses in end-to-end neural dialogue systems has attracted a lot of attention in recent years. Various previous work leverages external knowledge and the dialogue contexts to generate such responses. Nevertheless, few have demonstrated their capability of incorporating the appropriate knowledge in response generation. Motivated by this, we propose a novel open-domain conversation generation model in this paper, which employs the posterior knowledge distribution to guide knowledge selection, therefore generating more appropriate and informative responses in conversations. To the best of our knowledge, we are the first to utilize the posterior knowledge distribution to facilitate conversation generation. Our experiments on both automatic and human evaluation clearly verify the superior performance of our model over the state-of-the-art baselines.",
"title": ""
},
{
"docid": "9813df16b1852cf6d843ff3e1c67fa88",
"text": "Traumatic neuromas are tumors resulting from hyperplasia of axons and nerve sheath cells after section or injury to the nervous tissue. We present a case of this tumor, confirmed by anatomopathological examination, in a male patient with history of circumcision. Knowledge of this entity is very important in achieving the differential diagnosis with other lesions that affect the genital area such as condyloma acuminata, bowenoid papulosis, lichen nitidus, sebaceous gland hyperplasia, achrochordon and pearly penile papules.",
"title": ""
},
{
"docid": "2701f46ac9a473cb809773df5ae1d612",
"text": "Testing and measuring the security of software system architectures is a difficult task. An attempt is made in this paper to analyze the architecture security issues of object-oriented software using common security concepts to evaluate the security of a system under design. Object-oriented systems are based on various architectures like COM, DCOM, CORBA, MVC and Broker. In object-oriented technology the basic system component is an object. Each individual system component poses its own risk in the system. Security policies and the associated risk in these software architectures can be calculated for each individual component. Overall risk can be calculated based on the context and risk factors in the architecture. Small risk factors accumulate and form a major risk that can damage the system.",
"title": ""
},
{
"docid": "03f913234dc6d41aada7ce3fe8de1203",
"text": "Epicanthoplasty is commonly performed on Asian eyelids. Consequently, overcorrection may appear. The aim of this study was to introduce a method of reconstructing the epicanthal fold and to apply this method to the patients. A V flap with an extension (eagle beak shaped) was designed on the medial canthal area. The upper incision line started near the medial end of the double-fold line, and it followed its curvature inferomedially. For the lower incision, starting at the tip (medial end) of the flap, a curvilinear incision was designed first diagonally and then horizontally along the lower blepharoplasty line. The V flap was elevated as thin as possible. Then, the upper flap was deeply undermined to make it thick. The lower flap was made a little thinner than the upper flap. Then, the upper and lower flaps were approximated to form the anteromedial surface of the epicanthal fold in a fashion sufficient to cover the red caruncle. The V flap was rotated inferolaterally over the caruncle. The tip of the V flap was sutured to the medial one-third point of the lower margin. The inferior border of the V flap and the residual lower margin were approximated. Thereafter, the posterolateral surface of the epicanthal fold was made. From 1999 to 2011, 246 patients were operated on using this method. Among them, 62 patients were followed up. The mean intercanthal distance was increased from 31.7 to 33.8 mm postoperatively. Among the 246 patients operated on, reoperation was performed for 6 patients. Among the 6 patients reoperated on, 3 cases were due to epicanthus inversus, 1 case was due to insufficient reconstruction, 1 case was due to making an infold, and 1 case was due to reopening the epicanthal fold.This V-Y and rotation flap can be a useful method for reconstruction of the epicanthal fold.",
"title": ""
},
{
"docid": "c87112a95e41fccd9fc33bedf45e2bb5",
"text": "Smart grid introduces a wealth of promising applications for upcoming fifth-generation mobile networks (5G), enabling households and utility companies to establish a two-way digital communications dialogue, which can benefit both of them. The utility can monitor real-time consumption of end users and take proper measures (e.g., real-time pricing) to shape their consumption profile or to plan enough supply to meet the foreseen demand. On the other hand, a smart home can receive real-time electricity prices and adjust its consumption to minimize its daily electricity expenditure, while meeting the energy need and the satisfaction level of the dwellers. Smart Home applications for smart phones are also a promising use case, where users can remotely control their appliances, while they are away at work or on their way home. Although these emerging services can evidently boost the efficiency of the market and the satisfaction of the consumers, they may also introduce new attack surfaces making the grid vulnerable to financial losses or even physical damages. In this paper, we propose an architecture to secure smart grid communications incorporating an intrusion detection system, composed of distributed components collaborating with each other to detect price integrity or load alteration attacks in different segments of an advanced metering infrastructure.",
"title": ""
},
{
"docid": "7023226f1e77729ec38eeb5158e8811d",
"text": "Combinatory Categorial Grammar (CCG) is a grammar formalism used for natural language parsing. CCG assigns structured lexical categories to words and uses a small set of combinatory rules to combine these categories in order to parse sentences. In this work we describe and implement a new approach to CCG parsing that relies on Answer Set Programming (ASP) — a declarative programming paradigm. Different from previous work, we present an encoding that is inspired by the algorithm due to Cocke, Younger, and Kasami (CYK). We also show encoding extensions for parse tree normalization and best-effort parsing and outline possible future extensions which are possible due to the usage of ASP as computational mechanism. We analyze performance of our approach on a part of the Brown corpus and discuss lessons learned during experiments with the ASP tools dlv, gringo, and clasp. The new approach is available in the open source CCG parsing toolkit AspCcgTk which uses the C&C supertagger as a preprocessor to achieve wide-coverage natural language parsing.",
"title": ""
},
{
"docid": "16832bc2740773facde956fc1b524d28",
"text": "Diffractive Optically Variable Image Devices (DOVIDs) are popular security features used to protect security documents such as banknotes, ID cards, passports, etc. Checking authenticity of these security features on both user as well as forensic level remains a challenging task, requiring sophisticated hardware tools and expert knowledge. Recently, we proposed a technique exploiting a large-scale photometric behavior of DOVIDs in order to discriminate denominations and detect counterfeits. Here we investigate invariance properties of the proposed method and demonstrate its robustness against various common perturbations, which may have negative impact on the acquisition quality in practice. Presented results show a great potential of this approach primarily for security and forensic purposes, but also for other applications, where automated inspection of DOVIDs is of interest.",
"title": ""
},
{
"docid": "a60df3040ff1e2d7ac0ef898c3d3671e",
"text": "Recommender Systems have been around for more than a decade now. Choosing what book to read next has always been a question for many. Even for students, deciding which textbook or reference book to read on a topic unknown to them is a big question. In this paper, we try to present a model for a web-based personalized hybrid book recommender system which exploits varied aspects of giving recommendations apart from the regular collaborative and content-based filtering approaches. Temporal aspects for the recommendations are incorporated. Also for users of different age, gender and country, personalized recommendations can be made on these demographic parameters. Scraping information from the web and using the information obtained from this process can be equally useful in making recommendations.",
"title": ""
},
{
"docid": "ba533a610f95d44bf5416e17b07348dd",
"text": "It is argued that, hidden within the flow of signals from typical cameras, through image processing, to display media, is a homomorphic filter. While homomorphic filtering is often desirable, there are some occasions where it is not. Thus, cancellation of this implicit homomorphic filter is proposed, through the introduction of an antihomomorphic filter. This concept gives rise to the principle of quantigraphic image processing, wherein it is argued that most cameras can be modeled as an array of idealized light meters each linearly responsive to a semi-monotonic function of the quantity of light received, integrated over a fixed spectral response profile. This quantity depends only on the spectral response of the sensor elements in the camera. A particular class of functional equations, called comparametric equations, is introduced as a basis for quantigraphic image processing. These are fundamental to the analysis and processing of multiple images differing only in exposure. The \"gamma correction\" of an image is presented as a simple example of a comparametric equation, for which it is shown that the underlying quantigraphic function does not pass through the origin. Thus, it is argued that exposure adjustment by gamma correction is inherently flawed, and alternatives are provided. These alternatives, when applied to a plurality of images that differ only in exposure, give rise to a new kind of processing in the \"amplitude domain\". The theoretical framework presented in this paper is applicable to the processing of images from nearly all types of modern cameras. This paper is a much revised draft of a 1992 peer-reviewed but unpublished report by the author, entitled \"Lightspace and the Wyckoff principle.\"",
"title": ""
},
{
"docid": "9b628f47102a0eee67e469e223ece837",
"text": "We present a method for automatically extracting from a running system an indexable signature that distills the essential characteristic from a system state and that can be subjected to automated clustering and similarity-based retrieval to identify when an observed system state is similar to a previously-observed state. This allows operators to identify and quantify the frequency of recurrent problems, to leverage previous diagnostic efforts, and to establish whether problems seen at different installations of the same site are similar or distinct. We show that the naive approach to constructing these signatures based on simply recording the actual ``raw'' values of collected measurements is ineffective, leading us to a more sophisticated approach based on statistical modeling and inference. Our method requires only that the system's metric of merit (such as average transaction response time) as well as a collection of lower-level operational metrics be collected, as is done by existing commercial monitoring tools. Even if the traces have no annotations of prior diagnoses of observed incidents (as is typical), our technique successfully clusters system states corresponding to similar problems, allowing diagnosticians to identify recurring problems and to characterize the ``syndrome'' of a group of problems. We validate our approach on both synthetic traces and several weeks of production traces from a customer-facing geoplexed 24 x 7 system; in the latter case, our approach identified a recurring problem that had required extensive manual diagnosis, and also aided the operators in correcting a previous misdiagnosis of a different problem.",
"title": ""
},
{
"docid": "779d75beb7ea4967f9503d6c4d087a5d",
"text": "BACKGROUND\nTeaching is considered a highly stressful occupation. Burnout is a negative affective response occurring as a result of chronic work stress. While the early theories of burnout focused exclusively on work-related stressors, recent research adopts a more integrative approach where both environmental and individual factors are studied. Nevertheless, such studies are scarce with teacher samples.\n\n\nAIMS\nThe present cross-sectional study sought to investigate the association between burnout, personality characteristics and job stressors in primary school teachers from Cyprus. The study also investigates the relative contribution of these variables on the three facets of burnout - emotional exhaustion, depersonalization and reduced personal accomplishment.\n\n\nSAMPLE\nA representative sample of 447 primary school teachers participated in the study.\n\n\nMETHOD\nTeachers completed measures of burnout, personality and job stressors along with demographic and professional data. Surveys were delivered by courier to schools, and were distributed at faculty meetings.\n\n\nRESULTS\nResults showed that both personality and work-related stressors were associated with burnout dimensions. Neuroticism was a common predictor of all dimensions of burnout, although for personal accomplishment it had a different direction. Managing student misbehaviour and time constraints were found to systematically predict dimensions of burnout.\n\n\nCONCLUSIONS\nTeachers' individual characteristics as well as job-related stressors should be taken into consideration when studying the burnout phenomenon. The fact that each dimension of the syndrome is predicted by different variables should not remain unnoticed, especially when designing and implementing intervention programmes to reduce burnout in teachers.",
"title": ""
},
{
"docid": "289b67247b109ee0de851c0cd4e76ec3",
"text": "User engagement is a key concept in designing user-centred web applications. It refers to the quality of the user experience that emphasises the positive aspects of the interaction, and in particular the phenomena associated with being captivated by technology. This definition is motivated by the observation that successful technologies are not just used, but they are engaged with. Numerous methods have been proposed in the literature to measure engagement, however, little has been done to validate and relate these measures and so provide a firm basis for assessing the quality of the user experience. Engagement is heavily influenced, for example, by the user interface and its associated process flow, the user’s context, value system and incentives. In this paper we propose an approach to relating and developing unified measures of user engagement. Our ultimate aim is to define a framework in which user engagement can be studied, measured, and explained, leading to recommendations and guidelines for user interface and interaction design for front-end web technology. Towards this aim, in this paper, we consider how existing user engagement metrics, web analytics, information retrieval metrics, and measures from immersion in gaming can bring new perspective to defining, measuring and explaining user engagement.",
"title": ""
},
{
"docid": "4282e931ced3f8776f6c4cffb5027f61",
"text": "OBJECTIVES\nTo provide an overview and tutorial of natural language processing (NLP) and modern NLP-system design.\n\n\nTARGET AUDIENCE\nThis tutorial targets the medical informatics generalist who has limited acquaintance with the principles behind NLP and/or limited knowledge of the current state of the art.\n\n\nSCOPE\nWe describe the historical evolution of NLP, and summarize common NLP sub-problems in this extensive field. We then provide a synopsis of selected highlights of medical NLP efforts. After providing a brief description of common machine-learning approaches that are being used for diverse NLP sub-problems, we discuss how modern NLP architectures are designed, with a summary of the Apache Foundation's Unstructured Information Management Architecture. We finally consider possible future directions for NLP, and reflect on the possible impact of IBM Watson on the medical field.",
"title": ""
},
{
"docid": "f4e67e19f5938f475a2757282082b695",
"text": "Classrooms are complex social systems, and student-teacher relationships and interactions are also complex, multicomponent systems. We posit that the nature and quality of relationship interactions between teachers and students are fundamental to understanding student engagement, can be assessed through standardized observation methods, and can be changed by providing teachers knowledge about developmental processes relevant for classroom interactions and personalized feedback/support about their interactive behaviors and cues. When these supports are provided to teachers' interactions, student engagement increases. In this chapter, we focus on the theoretical and empirical links between interactions and engagement and present an approach to intervention designed to increase the quality of such interactions and, in turn, increase student engagement and, ultimately, learning and development. Recognizing general principles of development in complex systems, a theory of the classroom as a setting for development, and a theory of change specific to this social setting are the ultimate goals of this work. Engagement, in this context, is both an outcome in its own right. Teacher-Student Relationships and Engagement: Conceptualizing, Measuring, and Improving the Capacity of Classroom Interactions. Robert C. Pianta, Bridget K. Hamre, and Joseph P. Allen.",
"title": ""
},
{
"docid": "ff32e960fb5ff7b7e0910e6e69421860",
"text": "Semantic mapping aims to create maps that include meaningful features, both to robots and humans. We present an extension to our feature based mapping technique that includes information about the locations of horizontal surfaces such as tables, shelves, or counters in the map. The surfaces are detected in 3D point clouds, the locations of which are optimized by our SLAM algorithm. The resulting scans of surfaces are then analyzed to segment them into distinct surfaces, which may include measurements of a single surface across multiple scans. Preliminary results are presented in the form of a feature based map augmented with a set of 3D point clouds in a consistent global map frame that represent all detected surfaces within the mapped area.",
"title": ""
},
{
"docid": "de9ed927d395f78459e84b1c27f9c746",
"text": "JuMP is an open-source modeling language that allows users to express a wide range of optimization problems (linear, mixed-integer, quadratic, conic-quadratic, semidefinite, and nonlinear) in a high-level, algebraic syntax. JuMP takes advantage of advanced features of the Julia programming language to offer unique functionality while achieving performance on par with commercial modeling tools for standard tasks. In this work we will provide benchmarks, present the novel aspects of the implementation, and discuss how JuMP can be extended to new problem classes and composed with state-of-the-art tools for visualization and interactivity.",
"title": ""
},
{
"docid": "b94d146408340ce2a89b95f1b47e91f6",
"text": "In order to improve the life quality of amputees, providing approximate manipulation ability of a human hand to a prosthetic hand is considered by many researchers. In this study, a biomechanical model of the index finger of the human hand is developed based on the human anatomy. Since the activation of finger bones is carried out by tendons, a tendon configuration of the index finger is introduced and used in the model to imitate the human hand characteristics and functionality. Then, fuzzy sliding mode control, where the slope of the sliding surface is tuned by a fuzzy logic unit, is proposed and applied to make the finger model follow a certain trajectory. The trajectory of the finger model, which mimics the motion characteristics of the human hand, is pre-determined from the camera images of a real hand during closing and opening motion. Also, in order to check the robust behaviour of the controller, an unexpected joint friction is induced on the prosthetic finger on its way. Finally, the resultant prosthetic finger motion and the tendon forces produced are given and the results are discussed.",
"title": ""
},
{
"docid": "1256f0799ed585092e60b50fb41055be",
"text": "So far, plant identification has posed challenges for several researchers. Various methods and features have been proposed. However, there are still many approaches that could be investigated to develop robust plant identification systems. This paper reports several experiments in using Zernike moments to build foliage plant identification systems. In this case, Zernike moments were combined with other features: geometric features, color moments and gray-level co-occurrence matrix (GLCM). To implement the identification systems, two approaches have been investigated. The first approach used a distance measure and the second used Probabilistic Neural Networks (PNN). The results show that Zernike moments have promise as features in leaf identification systems when they are combined with other features.",
"title": ""
},
{
"docid": "0ff483e916f4f7eda4671ba31b60d160",
"text": "Nowadays, the rapid proliferation of data makes it possible to build complex models for many real applications. Such models, however, usually require large amount of labeled data, and the labeling process can be both expensive and tedious for domain experts. To address this problem, researchers have resorted to crowdsourcing to collect labels from non-experts with much less cost. The key challenge here is how to infer the true labels from the large number of noisy labels provided by non-experts. Different from most existing work on crowdsourcing, which ignore the structure information in the labeling data provided by non-experts, in this paper, we propose a novel structured approach based on tensor augmentation and completion. It uses tensor representation for the labeled data, augments it with a ground truth layer, and explores two methods to estimate the ground truth layer via low rank tensor completion. Experimental results on 6 real data sets demonstrate the superior performance of the proposed approach over state-of-the-art techniques.",
"title": ""
},
{
"docid": "565c949a2bf8b6f6c3d246c7c195419d",
"text": "Extracorporeal photochemotherapy (ECP) is an effective treatment modality for patients with erythrodermic mycosis fungoides (MF) and Sezary syndrome (SS). During ECP, a fraction of peripheral blood mononuclear cells is collected, incubated ex-vivo with methoxypsoralen, UVA irradiated, and finally reinfused to the patient. Although the mechanism of action of ECP is not well established, clinical and laboratory observations support the hypothesis of a vaccination-like effect. ECP induces apoptosis of normal and neoplastic lymphocytes, while enhancing differentiation of monocytes towards immature dendritic cells (imDCs), followed by engulfment of apoptotic bodies. After reinfusion, imDCs undergo maturation and antigenic peptides from the neoplastic cells are expressed on the surface of DCs. Mature DCs travel to lymph nodes and activate cytotoxic T-cell clones with specificity against tumor antigens. Disease control is mediated through cytotoxic T-lymphocytes with tumor specificity. The efficacy and excellent safety profile of ECP has been shown in a large number of retrospective trials. Previous studies showed that monotherapy with ECP produces an overall response rate of approximately 60%, while clinical data support that ECP is much more effective when combined with other immune modulating agents such as interferons or retinoids, or when used as consolidation treatment after total skin electron beam irradiation. However, only a proportion of patients actually respond to ECP and parameters predictive of response need to be discovered. A patient with a high probability of response to ECP must fulfill all of the following criteria: (1) SS or erythrodermic MF, (2) presence of neoplastic cells in peripheral blood, and (3) early disease onset. Despite the fact that ECP has been established as a standard treatment modality, no prospective randomized study has been conducted so far, to the authors' knowledge. Considering the high cost of the procedure, the role of ECP in the treatment of SS/MF needs to be clarified via well designed multicenter prospective randomized trials.",
"title": ""
}
] |
scidocsrr
|
4bcb07f51e8a659243a9f7d756ebed19
|
Logic-Based Approach to Semantic Query Optimization
|
[
{
"docid": "0ab60f1192919f636325b1341528ce78",
"text": "Efficient methods of processing unanticipated queries are a crucial prerequisite for the success of generalized database management systems. A wide variety of approaches to improve the performance of query evaluation algorithms have been proposed: logic-based and semantic transformations, fast implementations of basic operations, and combinatorial or heuristic algorithms for generating alternative access plans and choosing among them. These methods are presented in the framework of a general query evaluation procedure using the relational calculus representation of queries. In addition, nonstandard query optimization issues such as higher level query evaluation, query optimization in distributed databases, and use of database machines are addressed. The focus, however, is on query optimization in centralized database systems.",
"title": ""
}
] |
[
{
"docid": "9d234aed717e068e3ea2edc963084f0d",
"text": "The majority of financial services companies in Germany and Switzerland have, with varying objectives and success, conducted customer relationship management (CRM) implementation projects. In this paper we present a framework for the analysis of CRM approaches in financial services companies. Building on previous research and using comprehensive literature research, we develop a CRM reference architecture that focuses on the process and system level for the description and classification of CRM approaches in companies. Moreover, we analyze three CRM case studies in Swiss and German financial services companies and derive different types of CRM approaches in the financial services industry: Customer Satisfaction Management, Customer Contact Management, and Customer Profitability Management. We describe each type in accordance with the CRM architecture and a case example.",
"title": ""
},
{
"docid": "180672be0e49be493d9af3ef7b558804",
"text": "Causality is a very intuitive notion that is difficult to make precise without lapsing into tautology. Two ingredients are central to any definition: (1) a set of possible outcomes (counterfactuals) generated by a function of a set of ‘‘factors’’ or ‘‘determinants’’ and (2) a manipulation where one (or more) of the ‘‘factors’’ or ‘‘determinants’’ is changed. An effect is realized as a change in the argument of a stable function that produces the same change in the outcome for a class of interventions that change the ‘‘factors’’ by the same amount. The outcomes are compared at different levels of the factors or generating variables. Holding all factors save one at a constant level, the change in the outcome associated with manipulation of the varied factor is called a causal effect of the manipulated factor. This definition, or some version of it, goes back to Mill (1848) and Marshall (1890). Haavelmo (1943) made it more precise within the context of linear equations models. The phrase 'ceteris paribus' (everything else held constant) is a mainstay of economic analysis",
"title": ""
},
{
"docid": "05f3d966a7085333169f7ee5bce30d84",
"text": "Rogelio Oliva is an assistant professor in the Technology and Operations Management Unit at the Harvard Business School. He holds a BS in industrial and systems engineering from ITESM (Mexico), an MA in systems in management from Lancaster University (UK), and a PhD in operations management and system dynamics from MIT. His current research interests include service operations, and the transition that product manufacturers are making to become service providers.",
"title": ""
},
{
"docid": "7f43ad2fd344aa7260e3af33d3f69e32",
"text": "Charge pump circuits are used for obtaining higher voltages than normal power supply voltage in flash memories, DRAMs and low voltage designs. In this paper, we present a charge pump circuit in standard CMOS technology that is suited for low voltage operation. Our proposed charge pump uses a cross-connected NMOS cell as the basic element and PMOS switches are employed to connect one stage to the next. The simulated output voltages of the proposed 4 stage charge pump for input voltage of 0.9 V, 1.2 V, 1.5 V, 1.8 V and 2.1 V are 3.9 V, 5.1 V, 6.35 V, 7.51 V and 8.4 V respectively. This proposed charge pump is suitable for low power CMOS mixed-mode designs.",
"title": ""
},
{
"docid": "b137e24f41def95c5bb4776de48804ef",
"text": "Adequate sleep is essential for general healthy functioning. This paper reviews recent research on the effects of chronic sleep restriction on neurobehavioral and physiological functioning and discusses implications for health and lifestyle. Restricting sleep below an individual's optimal time in bed (TIB) can cause a range of neurobehavioral deficits, including lapses of attention, slowed working memory, reduced cognitive throughput, depressed mood, and perseveration of thought. Neurobehavioral deficits accumulate across days of partial sleep loss to levels equivalent to those found after 1 to 3 nights of total sleep loss. Recent experiments reveal that following days of chronic restriction of sleep duration below 7 hours per night, significant daytime cognitive dysfunction accumulates to levels comparable to that found after severe acute total sleep deprivation. Additionally, individual variability in neurobehavioral responses to sleep restriction appears to be stable, suggesting a trait-like (possibly genetic) differential vulnerability or compensatory changes in the neurobiological systems involved in cognition. A causal role for reduced sleep duration in adverse health outcomes remains unclear, but laboratory studies of healthy adults subjected to sleep restriction have found adverse effects on endocrine functions, metabolic and inflammatory responses, suggesting that sleep restriction produces physiological consequences that may be unhealthy.",
"title": ""
},
{
"docid": "191b5477cd8ba0cc26a0f4a51604dc85",
"text": "In recent years, a number of studies have introduced methods for identifying papers with delayed recognition (so called \" sleeping beauties \" , SBs) or have presented single publications as cases of SBs. Most recently, Ke et al. (2015) proposed the so called \" beauty coefficient \" (denoted as B) to quantify how much a given paper can be considered as a paper with delayed recognition. In this study, the new term \" smart girl \" (SG) is suggested to differentiate instant credit or \" flashes in the pan \" from SBs. While SG and SB are qualitatively defined, the dynamic citation angle β is introduced in this study as a simple way for identifying SGs and SBs quantitatively – complementing the beauty coefficient B. The citation angles for all articles from 1980 (n=166870) in natural sciences are calculated for identifying SGs and SBs and their extent. We reveal that about 3% of the articles are typical SGs and about 0.1% typical SBs. The potential advantages of the citation angle approach are explained.",
"title": ""
},
{
"docid": "018ec23f619094e664fa08ce1c29849e",
"text": "Process bus networks are the next stage in the evolution of substation design, bringing digital technology to the high-voltage switchyard. Benefits of process buses include facilitating the use of nonconventional instrument transformers, improved disturbance recording and phasor measurement, and the removal of costly, and potentially hazardous, copper cabling from substation switchyards and control rooms. This paper examines the role a process bus plays in an IEC 61850-based substation automation system. Measurements taken from a process bus substation are used to develop an understanding of the network characteristics of “whole of substation” process buses. The concept of “coherent transmission” is presented, and the impact of this on Ethernet switches is examined. Experiments based on substation observations are used to investigate in detail the behavior of Ethernet switches with sampled value traffic. Test methods that can be used to assess the adequacy of a network are proposed, and examples of the application and interpretation of these tests are provided. Once sampled value frames are queued by an Ethernet switch, the additional delay incurred by subsequent switches is minimal, and this allows their use in switchyards to further reduce communications cabling, without significantly impacting operation. The performance and reliability of a process bus network operating close to the theoretical maximum number of digital sampling units (merging units or electronic instrument transformers) was investigated with networking equipment from several vendors and has been demonstrated to be acceptable.",
"title": ""
},
{
"docid": "93064713fe271a9e173d790de09f2da6",
"text": "Network science is an interdisciplinary endeavor, with methods and applications drawn from across the natural, social, and information sciences. A prominent problem in network science is the algorithmic detection of tightly connected groups of nodes known as communities. We developed a generalized framework of network quality functions that allowed us to study the community structure of arbitrary multislice networks, which are combinations of individual networks coupled through links that connect each node in one network slice to itself in other slices. This framework allows studies of community structure in a general setting encompassing networks that evolve over time, have multiple types of links (multiplexity), and have multiple scales.",
"title": ""
},
{
"docid": "04cacc8015e3975875c11b2aa82bc5b8",
"text": "Autonomous mobile robots will soon become ubiquitous in human-populated environments. Besides their typical applications in fetching, delivery, or escorting, such robots present the opportunity to assist human users in their daily tasks by gathering and reporting up-to-date knowledge about the environment. In this paper, we explore this use case and present an end-to-end framework that enables a mobile robot to answer natural language questions about the state of a large-scale, dynamic environment asked by the inhabitants of that environment. The system parses the question and estimates an initial viewpoint that is likely to contain information for answering the question based on prior environment knowledge. Then, it autonomously navigates towards the viewpoint while dynamically adapting to changes and new information. The output of the system is an image of the most relevant part of the environment that allows the user to obtain an answer to their question. We additionally demonstrate the benefits of a continuously operating information gathering robot by showing how the system can answer retrospective questions about the past state of the world using incidentally recorded sensory data. We evaluate our approach with a custom mobile robot deployed in a university building, with questions collected from occupants of the building. We demonstrate our system's ability to respond to these questions in different environmental conditions.",
"title": ""
},
{
"docid": "2b1a9bc5ae7e9e6c2d2d008e2a2384b5",
"text": "Network information distribution is a fundamental service for any anonymization network. Even though anonymization and information distribution about the network are two orthogonal issues, the design of the distribution service has a direct impact on the anonymization. Requiring each node to know about all other nodes in the network (as in Tor and AN.ON -- the most popular anonymization networks) limits scalability and offers a playground for intersection attacks. The distributed designs existing so far fail to meet security requirements and have therefore not been accepted in real networks.\n In this paper, we combine probabilistic analysis and simulation to explore DHT-based approaches for distributing network information in anonymization networks. Based on our findings we introduce NISAN, a novel approach that tries to scalably overcome known security problems. It allows for selecting nodes uniformly at random from the full set of all available peers, while each of the nodes has only limited knowledge about the network. We show that our scheme has properties similar to a centralized directory in terms of preventing malicious nodes from biasing the path selection. This is done, however, without requiring to trust any third party. At the same time our approach provides high scalability and adequate performance. Additionally, we analyze different design choices and come up with diverse proposals depending on the attacker model. The proposed combination of security, scalability, and simplicity, to the best of our knowledge, is not available in any other existing network information distribution system.",
"title": ""
},
{
"docid": "895d5b01e984ef072b834976e0dfe378",
"text": "Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning. Recently, purely unsupervised methods operating on monolingual embeddings have become effective alignment tools. Current state-of-the-art methods, however, involve multiple steps, including heuristic post-hoc refinement strategies. In this paper, we cast the correspondence problem directly as an optimal transport (OT) problem, building on the idea that word embeddings arise from metric recovery algorithms. Indeed, we exploit the Gromov-Wasserstein distance that measures how similarities between pairs of words relate across languages. We show that our OT objective can be estimated efficiently, requires little or no tuning, and results in performance comparable with the state-of-the-art in various unsupervised word translation tasks.",
"title": ""
},
{
"docid": "02c2c8df7a4343d10c482025d07c4995",
"text": "taking data about a user’s likes and dislikes and generating a general profile of the user. These profiles can be used to retrieve documents matching user interests; recommend music, movies, or other similar products; or carry out other tasks in a specialized fashion. This article presents a fundamentally new method for generating user profiles that takes advantage of a large-scale database of demographic data. These data are used to generalize user-specified data along the patterns common across the population, including areas not represented in the user’s original data. I describe the method in detail and present its implementation in the LIFESTYLE FINDER agent, an internet-based experiment testing our approach on more than 20,000 users worldwide.",
"title": ""
},
{
"docid": "28e538dcdcfed7693f0c1e4fe4d29c94",
"text": "The data used in the test consisted of 500 pages selected at random from a collection of approximately 2,500 documents containing 100,000 pages. The documents in this collection were chosen by the U.S. Department of Energy (DOE) to represent the kinds of documents from which the DOE plans to build large, full-text retrieval databases using OCR for document conversion. The documents are mostly scientific and technical papers [Nartker 92].",
"title": ""
},
{
"docid": "f9ba5eccc4eafec9baee4ecd923f3764",
"text": "Automated guided vehicles (AGVs) are used as a material handling device in flexible manufacturing systems. Traditionally, AGVs were mostly used at manufacturing systems, but currently other applications of AGVs are extensively developed in other areas, such as warehouses, container terminals and transportation systems. This paper discusses literature related to different methodologies to optimize AGV systems for the two significant problems of scheduling and routing at manufacturing, distribution, transshipment and transportation systems. We categorized the methodologies into mathematical methods (exact and heuristics), simulation studies, metaheuristic techniques and artificial intelligent based approaches.",
"title": ""
},
{
"docid": "0e5a03047b07f2ef69bc16fa71e34680",
"text": "The importance of biomass as a source of chemicals, biofuels, and energy is widely accepted. Currently, the attention is mainly focused on the valorisation of by-products from lignocellulosic materials. Chemical compounds derived from plants and microorganisms that provide good protection for crops against weeds, pests, and diseases (biopesticide active substances) have been used to formulate pesticides. Their use is increasingly encouraged by new pesticide regulations that discourage the use of harmful active substances. This article reviews the current and future situation of biopesticides, especially natural chemical products, and focuses on their potential within the European pesticide legislative framework. Moreover, this article highlights the importance of the different modes/mechanisms of action of the active substances obtained from natural sources, the role of chemistry in biopesticide development, and how the adoption of integrated pest management practices contributes to a greater trend towards biopesticides.",
"title": ""
},
{
"docid": "c01bb81c729f900ee468dae62738ab09",
"text": "The success of convolutional networks in learning problems involving planar signals such as images is due to their ability to exploit the translation symmetry of the data distribution through weight sharing. Many areas of science and engineering deal with signals with other symmetries, such as rotation invariant data on the sphere. Examples include climate and weather science, astrophysics, and chemistry. In this paper we present spherical convolutional networks. These networks use convolutions on the sphere and rotation group, which results in rotational weight sharing and rotation equivariance. Using a synthetic spherical MNIST dataset, we show that spherical convolutional networks are very effective at dealing with rotationally invariant classification problems.",
"title": ""
},
{
"docid": "07f9b0c1d6a5ffae7b04dd7a5acd291d",
"text": "Cryptographic techniques have applications far beyond the obvious uses of encoding and decoding information. For Internet developers who need to know about capabilities, such as digital signatures, that depend on cryptographic techniques, there's no better overview than Applied Cryptography, the definitive book on the subject. Bruce Schneier covers general classes of cryptographic protocols and then specific techniques, detailing the inner workings of real-world cryptographic algorithms including the Data Encryption Standard and RSA public-key cryptosystems. The book includes source-code listings and extensive advice on the practical aspects of cryptography implementation, such as the importance of generating truly random numbers and of keeping keys secure.",
"title": ""
},
{
"docid": "360a2da8e6dcc35e3c68773f4278c084",
"text": "Though dialectal language is increasingly abundant on social media, few resources exist for developing NLP tools to handle such language. We conduct a case study of dialectal language in online conversational text by investigating African-American English (AAE) on Twitter. We propose a distantly supervised model to identify AAE-like language from demographics associated with geo-located messages, and we verify that this language follows well-known AAE linguistic phenomena. In addition, we analyze the quality of existing language identification and dependency parsing tools on AAE-like text, demonstrating that they perform poorly on such text compared to text associated with white speakers. We also provide an ensemble classifier for language identification which eliminates this disparity and release a new corpus of tweets containing AAE-like language. Data and software resources are available at: http://slanglab.cs.umass.edu/TwitterAAE (This is an expanded version of our EMNLP 2016 paper, including the appendix at end.)",
"title": ""
},
{
"docid": "f82a49434548e1aa09792877d84b296c",
"text": "Rats and mice have a tendency to interact more with a novel object than with a familiar object. This tendency has been used by behavioral pharmacologists and neuroscientists to study learning and memory. A popular protocol for such research is the object-recognition task. Animals are first placed in an apparatus and allowed to explore an object. After a prescribed interval, the animal is returned to the apparatus, which now contains the familiar object and a novel object. Object recognition is distinguished by more time spent interacting with the novel object. Although the exact processes that underlie this 'recognition memory' requires further elucidation, this method has been used to study mutant mice, aging deficits, early developmental influences, nootropic manipulations, teratological drug exposure and novelty seeking.",
"title": ""
},
{
"docid": "3e80dc7319f1241e96db42033c16f6b4",
"text": "Automatic expert assignment is a common problem encountered in both industry and academia. For example, for conference program chairs and journal editors, in order to collect \"good\" judgments for a paper, it is necessary for them to assign the paper to the most appropriate reviewers. Choosing appropriate reviewers of course includes a number of considerations such as expertise and authority, but also diversity and avoiding conflicts. In this paper, we explore the expert retrieval problem and implement an automatic paper-reviewer recommendation system that considers aspects of expertise, authority, and diversity. In particular, a graph is first constructed on the possible reviewers and the query paper, incorporating expertise and authority information. Then a Random Walk with Restart (RWR) [1] model is employed on the graph with a sparsity constraint, incorporating diversity information. Extensive experiments on two reviewer recommendation benchmark datasets show that the proposed method obtains performance gains over state-of-the-art reviewer recommendation systems in terms of expertise, authority, diversity, and, most importantly, relevance as judged by human experts.",
"title": ""
}
] |
scidocsrr
|
4f5de31779b2804332918701ee19113d
|
Representation Learning of Temporal Dynamics for Skeleton-Based Action Recognition
|
[
{
"docid": "8c70f1af7d3132ca31b0cf603b7c5939",
"text": "Much of the existing work on action recognition combines simple features (e.g., joint angle trajectories, optical flow, spatio-temporal video features) with somewhat complex classifiers or dynamical models (e.g., kernel SVMs, HMMs, LDSs, deep belief networks). Although successful, these approaches represent an action with a set of parameters that usually do not have any physical meaning. As a consequence, such approaches do not provide any qualitative insight that relates an action to the actual motion of the body or its parts. For example, it is not necessarily the case that clapping can be correlated to hand motion or that walking can be correlated to a specific combination of motions from the feet, arms and body. In this paper, we propose a new representation of human actions called Sequence of the Most Informative Joints (SMIJ), which is extremely easy to interpret. At each time instant, we automatically select a few skeletal joints that are deemed to be the most informative for performing the current action. The selection of joints is based on highly interpretable measures such as the mean or variance of joint angles, maximum angular velocity of joints, etc. We then represent an action as a sequence of these most informative joints. Our experiments on multiple databases show that the proposed representation is very discriminative for the task of human action recognition and performs better than several state-of-the-art algorithms.",
"title": ""
},
{
"docid": "695af0109c538ca04acff8600d6604d4",
"text": "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.",
"title": ""
},
{
"docid": "2b6c016395d92ef20c4e316a35a7ecb8",
"text": "Recently, the low-cost Microsoft Kinect sensor, which can capture real-time high-resolution RGB and depth visual information, has attracted increasing attentions for a wide range of applications in computer vision. Existing techniques extract hand-tuned features from the RGB and the depth data separately and heuristically fuse them, which would not fully exploit the complementarity of both data sources. In this paper, we introduce an adaptive learning methodology to automatically extract (holistic) spatio-temporal features, simultaneously fusing the RGB and depth information, from RGBD video data for visual recognition tasks. We address this as an optimization problem using our proposed restricted graph-based genetic programming (RGGP) approach, in which a group of primitive 3D operators are first randomly assembled as graph-based combinations and then evolved generation by generation by evaluating on a set of RGBD video samples. Finally the best-performed combination is selected as the (near-)optimal representation for a pre-defined task. The proposed method is systematically evaluated on a new hand gesture dataset, SKIG, that we collected ourselves and the public MSRDailyActivity3D dataset, respectively. Extensive experimental results show that our approach leads to significant advantages compared with state-of-the-art handcrafted and machine-learned features.",
"title": ""
},
{
"docid": "4b33d61fce948b8c7942ca6180765a59",
"text": "We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works.",
"title": ""
}
] |
[
{
"docid": "64ca99b23c0f901237e7f03aa214bed5",
"text": "and high computational costs are being tackled. Researchers in academic settings as well as in startup companies such as Deep Genomics, launched July 22, 2015, by some of the authors of DeepBind, will increasingly apply deep learning to genome analysis and precision medicine. The goal is to predict the effect of genetic variants— both naturally occurring and introduced by genome editing—on a cell’s regulatory landscape and how this in turn affects disease development. Nicole Rusk",
"title": ""
},
{
"docid": "577e5f82a0a195b092d7a15df110bd96",
"text": "We propose a powerful new tool for conducting research on computational intelligence and games. `PyVGDL' is a simple, high-level description language for 2D video games, and the accompanying software library permits parsing and instantly playing those games. The streamlined design of the language is based on defining locations and dynamics for simple building blocks, and the interaction effects when such objects collide, all of which are provided in a rich ontology. It can be used to quickly design games, without needing to deal with control structures, and the concise language is also accessible to generative approaches. We show how the dynamics of many classical games can be generated from a few lines of PyVGDL. The main objective of these generated games is to serve as diverse benchmark problems for learning and planning algorithms; so we provide a collection of interfaces for different types of learning agents, with visual or abstract observations, from a global or first-person viewpoint. To demonstrate the library's usefulness in a broad range of learning scenarios, we show how to learn competent behaviors when a model of the game dynamics is available or when it is not, when full state information is given to the agent or just subjective observations, when learning is interactive or in batch-mode, and for a number of different learning algorithms, including reinforcement learning and evolutionary search.",
"title": ""
},
{
"docid": "82d06d6f16ef4958676ada7847e7f0de",
"text": "Recent neuroimaging research has linked the task of forming a \"person impression\" to a distinct pattern of neural activation that includes dorsal regions of the medial prefrontal cortex (mPFC). Although this result suggests the distinctiveness of social cognition - the processes that support inferences about the psychological aspects of other people - it remains unclear whether mPFC contributions to this impression formation task were person specific or if they would extend to other stimulus targets. To address this unresolved issue, participants in the current study underwent fMRI scanning while performing impression formation or a control task for two types of target: other people and inanimate objects. Specifically, participants were asked to use experimentally-provided information either to form an impression of a person or an object or to intentionally encode the sequence in which the information was presented. Results demonstrated that activation in an extensive region of dorsal mPFC was greater for impression formation of other people than for all other trial types, suggesting that this region specifically indexes the social-cognitive aspects of impression formation (i.e., understanding the psychological characteristics of another mental agent). These findings underscore the extent to which social cognition relies on distinct neural mechanisms.",
"title": ""
},
{
"docid": "d698ce3df2f1216b7b78237dcecb0df1",
"text": "A high-efficiency CMOS rectifier circuit for UHF RFIDs was developed. The rectifier has a cross-coupled bridge configuration and is driven by a differential RF input. A differential-drive active gate bias mechanism simultaneously enables both low ON-resistance and small reverse leakage of diode-connected MOS transistors, resulting in large power conversion efficiency (PCE), especially under small RF input power conditions. A test circuit of the proposed differential-drive rectifier was fabricated with 0.18 μm CMOS technology, and the measured performance was compared with those of other types of rectifiers. Dependence of the PCE on the input RF signal frequency, output loading conditions and transistor sizing was also evaluated. At the single-stage configuration, 67.5% of PCE was achieved under conditions of 953 MHz, -12.5 dBm RF input and 10 kΩ output load. This is twice as large as that of the state-of-the-art rectifier circuit. The peak PCE increases with a decrease in operation frequency and with an increase in output load resistance. In addition, experimental results show the existence of an optimum transistor size in accordance with the output loading conditions. The multi-stage configuration for larger output DC voltage is also presented.",
"title": ""
},
{
"docid": "45071a33abbf7b33ed69d610936a6af7",
"text": "Graphene is a wonder material with many superlatives to its name. It is the thinnest known material in the universe and the strongest ever measured. Its charge carriers exhibit giant intrinsic mobility, have zero effective mass, and can travel for micrometers without scattering at room temperature. Graphene can sustain current densities six orders of magnitude higher than that of copper, shows record thermal conductivity and stiffness, is impermeable to gases, and reconciles such conflicting qualities as brittleness and ductility. Electron transport in graphene is described by a Dirac-like equation, which allows the investigation of relativistic quantum phenomena in a benchtop experiment. This review analyzes recent trends in graphene research and applications, and attempts to identify future directions in which the field is likely to develop.",
"title": ""
},
{
"docid": "b70566b0a6d11faf556b2a29a9144ef8",
"text": "In this work, we use foursquare check-ins to cluster users via topic modeling, a technique commonly used to classify text documents according to latent \"themes\". Here, however, the latent variables which group users can be thought of not as themes but rather as factors which drive check in behaviors, allowing for a qualitative understanding of influences on user check ins. Our model is agnostic of geo-spatial location, time, users' friends on social networking sites and the venue categories-we treat the existence of and intricate interactions between these factors as being latent, allowing them to emerge entirely from the data. We instantiate our model on data from New York and the San Francisco Bay Area and find evidence that the model is able to identify groups of people which are of different types (e.g. tourists), communities (e.g. users tightly clustered in space) and interests (e.g. people who enjoy athletics).",
"title": ""
},
{
"docid": "6c893b6c72f932978a996b6d6283bc02",
"text": "Deep metric learning aims to learn an embedding function, modeled as deep neural network. This embedding function usually puts semantically similar images close while dissimilar images far from each other in the learned embedding space. Recently, ensemble has been applied to deep metric learning to yield state-of-the-art results. As one important aspect of ensemble, the learners should be diverse in their feature embeddings. To this end, we propose an attention-based ensemble, which uses multiple attention masks, so that each learner can attend to different parts of the object. We also propose a divergence loss, which encourages diversity among the learners. The proposed method is applied to the standard benchmarks of deep metric learning and experimental results show that it outperforms the state-of-the-art methods by a significant margin on image retrieval tasks.",
"title": ""
},
{
"docid": "4b999743e8032c8ef1b0a5f63888e832",
"text": "In this paper we describe a novel approach to autonomous dirt road following. The algorithm is able to recognize highly curved roads in cluttered color images quite often appearing in offroad scenarios. To cope with large curvatures we apply gaze control and model the road using two different clothoid segments. A Particle Filter incorporating edge and color intensity information is used to simultaneously detect and track the road farther away from the ego vehicle. In addition the particles are used to generate static road segment estimations in a given look ahead distance. These estimations are predicted with respect to ego motion and fused utilizing Kalman filter techniques to generate a smooth local clothoid segment for lateral control of the vehicle.",
"title": ""
},
{
"docid": "291a1927343797d72f50134b97f73d88",
"text": "This paper proposes a half-rate single-loop reference-less binary CDR that operates from 8.5 Gb/s to 12.1 Gb/s (36% capture range). The high capture range is made possible by adding a novel frequency detection mechanism which limits the magnitude of the phase error between the input data and the VCO clock. The proposed frequency detector produces three phases of the data, and feeds into the phase detector the data phase that minimizes the CDR phase error. This frequency detector, implemented within a 10 Gb/s CDR in Fujitsu's 65 nm CMOS, consumes 11 mW and improves the capture range by up to 6 × when it is activated.",
"title": ""
},
{
"docid": "5bdbf3fa515da2c49c99740f3f6b420e",
"text": "Bearing failure is one of the foremost causes of breakdowns in rotating machinery and such failure can be catastrophic, resulting in costly downtime. One of the key issues in bearing prognostics is to detect the defect at its incipient stage and alert the operator before it develops into a catastrophic failure. Signal de-noising and extraction of the weak signature are crucial to bearing prognostics since the inherent deficiency of the measuring mechanism often introduces a great amount of noise to the signal. In addition, the signature of a defective bearing is spread across a wide frequency band and hence can easily become masked by noise and low frequency effects. As a result, robust methods are needed to provide more evident information for bearing performance assessment and prognostics. This paper introduces enhanced and robust prognostic methods for rolling element bearing including a wavelet filter based method for weak signature enhancement for fault identification and Self Organizing Map (SOM) based method for performance degradation assessment. The experimental results demonstrate that the bearing defects can be detected at an early stage of development when both optimal wavelet filter and SOM method are used. © 2004 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "1a13a0d13e0925e327c9b151b3e5b32d",
"text": "The topic of this thesis is fraud detection in mobile communications networks by means of user profiling and classification techniques. The goal is to first identify relevant user groups based on call data and then to assign a user to a relevant group. Fraud may be defined as a dishonest or illegal use of services, with the intention to avoid service charges. Fraud detection is an important application, since network operators lose a relevant portion of their revenue to fraud. Whereas the intentions of the mobile phone users cannot be observed, it is assumed that the intentions are reflected in the call data. The call data is subsequently used in describing behavioral patterns of users. Neural networks and probabilistic models are employed in learning these usage patterns from call data. These models are used either to detect abrupt changes in established usage patterns or to recognize typical usage patterns of fraud. The methods are shown to be effective in detecting fraudulent behavior by empirically testing the methods with data from real mobile communications networks. © All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the author.",
"title": ""
},
{
"docid": "b50c0f5bd7ee7b0fbcc77934a600f7d4",
"text": "Local feature descriptors underpin many diverse applications, supporting object recognition, image registration, database search, 3D reconstruction, and more. The recent phenomenal growth in mobile devices and mobile computing in general has created demand for descriptors that are not only discriminative, but also compact in size and fast to extract and match. In response, a large number of binary descriptors have been proposed, each claiming to overcome some limitations of the predecessors. This paper provides a comprehensive evaluation of several promising binary designs. We show that existing evaluation methodologies are not sufficient to fully characterize descriptors’ performance and propose a new evaluation protocol and a challenging dataset. In contrast to the previous reviews, we investigate the effects of the matching criteria, operating points, and compaction methods, showing that they all have a major impact on the systems’ design and performance. Finally, we provide descriptor extraction times for both general-purpose systems and mobile devices, in order to better understand the real complexity of the extraction task. The objective is to provide a comprehensive reference and a guide that will help in selection and design of the future descriptors.",
"title": ""
},
{
"docid": "79f4951b91c222585abe7452c2a61625",
"text": "This article presents a Hoare-style calculus for a substantial subset of Java Card, which we call Java . In particular, the language includes side-effecting expressions, mutual recursion, dynamic method binding, full exception handling, and static class initialization. The Hoare logic of partial correctness is proved not only sound (w.r.t. our operational semantics of Java, described in detail elsewhere) but even complete. It is the first logic for an object-oriented language that is provably complete. The completeness proof uses a refinement of the Most General Formula approach. The proof of soundness gives new insights into the role of type safety. Further by-products of this work are a new general methodology for handling side-effecting expressions and their results, the discovery of the strongest possible rule of consequence, and a flexible Call rule for mutual recursion. We also give a small but non-trivial application example. All definitions and proofs have been done formally with the interactive theorem prover Isabelle/HOL. This guarantees not only rigorous definitions, but also gives maximal confidence in the results obtained.",
"title": ""
},
{
"docid": "5bde29ce109714f623ae9d69184a8708",
"text": "Adaptive beamforming methods are known to degrade if some of underlying assumptions on the environment, sources, or sensor array become violated. In particular, if the desired signal is present in training snapshots, the adaptive array performance may be quite sensitive even to slight mismatches between the presumed and actual signal steering vectors (spatial signatures). Such mismatches can occur as a result of environmental nonstationarities, look direction errors, imperfect array calibration, distorted antenna shape, as well as distortions caused by medium inhomogeneities, near–far mismatch, source spreading, and local scattering. The similar type of performance degradation can occur when the signal steering vector is known exactly but the training sample size is small. In this paper, we develop a new approach to robust adaptive beamforming in the presence of an arbitrary unknown signal steering vector mismatch. Our approach is based on the optimization of worst-case performance. It turns out that the natural formulation of this adaptive beamforming problem involves minimization of a quadratic function subject to infinitely many nonconvex quadratic constraints. We show that this (originally intractable) problem can be reformulated in a convex form as the so-called second-order cone (SOC) program and solved efficiently (in polynomial time) using the well-established interior point method. It is also shown that the proposed technique can be interpreted in terms of diagonal loading where the optimal value of the diagonal loading factor is computed based on the known level of uncertainty of the signal steering vector. Computer simulations with several frequently encountered types of signal steering vector mismatches show better performance of our robust beamformer as compared with existing adaptive beamforming algorithms.",
"title": ""
},
{
"docid": "06fca2fd3cdaab1029d447f0e0823184",
"text": "The purpose of the present study was to experimentally assess the effect of cognitive strategies of association and dissociation while running on central nervous activation. A total of 30 long distance runners volunteered for the study. The study protocol consisted on three sessions (scheduled in three different days): (1) maximal incremental treadmill test, (2) associative task session, and (3) dissociative task session. The order of sessions 2 and 3 was counterbalanced. During sessions 2 and 3, participants performed a 55 min treadmill run at moderate intensity. Both, associative and dissociative tasks responses were monitoring and recording in real time through dynamic measure tools. Consequently, was possible to have an objective control of the attentional. Results showed a positive session (exercise+attentional task) effect for central nervous activation. The benefits of aerobic exercise at moderate intensity for the performance of self-regulation cognitive tasks are highlighted. The used methodology is proposed as a valid and dynamic option to study cognitions while running in order to overcome the retrospective approach. Research Article",
"title": ""
},
{
"docid": "a8bd9e8470ad414c38f5616fb14d433d",
"text": "Detecting hidden communities from observed interactions is a classical problem. Theoretical analysis of community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in Airoldi et al. (2008). This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning communities in these models via a tensor spectral decomposition approach. Our estimator uses low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is based on simple linear algebraic operations such as singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements for the special case of the (homogeneous) stochastic block model.",
"title": ""
},
{
"docid": "b32218abeff9a34c3e89eac76b8c6a45",
"text": "The reliability and availability of distributed services can be ensured using replication. We present an architecture and an algorithm for Byzantine fault-tolerant state machine replication. We explore the benefits of virtualization to reliably detect and tolerate faulty replicas, allowing the transformation of Byzantine faults into omission faults. Our approach reduces the total number of physical replicas from 3f+1 to 2f+1. It is based on the concept of twin virtual machines, which involves having two virtual machines in each physical host, each one acting as failure detector of the other.",
"title": ""
},
{
"docid": "cf222e0f90538d150cc45ae30edf696c",
"text": "Workflows are a widely used abstraction for representing large scientific applications and executing them on distributed systems such as clusters, clouds, and grids. However, workflow systems have been largely silent on the question of precisely what environment each task in the workflow is expected to run in. As a result, a workflow may run correctly in the environment in which it was designed, but when moved to another machine, is highly likely to fail due to differences in the operating system, installed applications, available data, and so forth. Lightweight container technology has recently arisen as a potential solution to this problem, by providing a well-defined execution environments at the operating system level. In this paper, we consider how to best integrate container technology into an existing workflow system, using Makeflow, Work Queue, and Docker as examples of current technology. A brief performance study of Docker shows very little overhead in CPU and I/O performance, but significant costs in creating and deleting containers. Taking this into account, we describe four different methods of connecting containers to different points of the infrastructure, and explain several methods of managing the container images that must be distributed to executing tasks. We explore the performance of a large bioinformatics workload on a Docker-enabled cluster, and observe the best configuration to be locally-managed containers that are shared between multiple tasks.",
"title": ""
},
{
"docid": "f0c1a47a3398287bd5910e94f79b3d3b",
"text": "We study the problem of allocating multiple resources to agents with heterogeneous demands. Technological advances such as cloud computing and data centers provide a new impetus for investigating this problem under the assumption that agents demand the resources in fixed proportions, known in economics as Leontief preferences. In a recent paper, Ghodsi et al. [2011] introduced the dominant resource fairness (DRF) mechanism, which was shown to possess highly desirable theoretical properties under Leontief preferences. We extend their results in three directions. First, we show that DRF generalizes to more expressive settings, and leverage a new technical framework to formally extend its guarantees. Second, we study the relation between social welfare and properties such as truthfulness; DRF performs poorly in terms of social welfare, but we show that this is an unavoidable shortcoming that is shared by every mechanism that satisfies one of three basic properties. Third, and most importantly, we study a realistic setting that involves indivisibilities. We chart the boundaries of the possible in this setting, contributing a new relaxed notion of fairness and providing both possibility and impossibility results.",
"title": ""
},
{
"docid": "a15c94c0ec40cb8633d7174b82b70a16",
"text": "Koenigs, Young and colleagues [1] recently tested patients with emotion-related damage in the ventromedial prefrontal cortex (VMPFC) usingmoral dilemmas used in previous neuroimaging studies [2,3]. These patients made unusually utilitarian judgments (endorsing harmful actions that promote the greater good). My collaborators and I have proposed a dual-process theory of moral judgment [2,3] that we claim predicts this result. In a Research Focus article published in this issue of Trends in Cognitive Sciences, Moll and de Oliveira-Souza [4] challenge this interpretation. Our theory aims to explain some puzzling patterns in commonsense moral thought. For example, people usually approve of diverting a runaway trolley thatmortally threatens five people onto a side-track, where it will kill only one person. And yet people usually disapprove of pushing someone in front of a runaway trolley, where this will kill the person pushed, but save five others [5]. Our theory, in a nutshell, is this: the thought of pushing someone in front of a trolley elicits a prepotent, negative emotional response (supported in part by the medial prefrontal cortex) that drives moral disapproval [2,3]. People also engage in utilitarian moral reasoning (aggregate cost–benefit analysis), which is likely subserved by the dorsolateral prefrontal cortex (DLPFC) [2,3]. When there is no prepotent emotional response, utilitarian reasoning prevails (as in the first case), but sometimes prepotent emotions and utilitarian reasoning conflict (as in the second case). This conflict is detected by the anterior cingulate cortex, which signals the need for cognitive control, to be implemented in this case by the anterior DLPFC [Brodmann’s Areas (BA) 10/46]. Overriding prepotent emotional responses requires additional cognitive control and, thus, we find increased activity in the anterior DLPFC when people make difficult utilitarian moral judgments [3]. 
More recent studies support this theory: if negative emotions make people disapprove of pushing the man to his death, then inducing positive emotion might lead to more utilitarian approval, and this is indeed what happens [6]. Likewise, patients with frontotemporal dementia (known for their ‘emotional blunting’) should more readily approve of pushing the man in front of the trolley, and they do [7]. This finding directly foreshadows the hypoemotional VMPFC patients’ utilitarian responses to this and other cases [1]. Finally, we’ve found that cognitive load selectively interferes with utilitarian moral judgment,",
"title": ""
}
] |
scidocsrr
|
27b24b8c2c1511dae500cbc3e986e3cb
|
DATA MINING IN FINANCE AND ACCOUNTING : A REVIEW OF CURRENT RESEARCH TRENDS
|
[
{
"docid": "113373d6a9936e192e5c3ad016146777",
"text": "This paper examines published data to develop a model for detecting factors associated with false financial statements (FFS). Most false financial statements in Greece can be identified on the basis of the quantity and content of the qualifications in the reports filed by the auditors on the accounts. A sample of a total of 76 firms includes 38 with FFS and 38 non-FFS. Ten financial variables are selected for examination as potential predictors of FFS. Univariate and multivariate statistical techniques such as logistic regression are used to develop a model to identify factors associated with FFS. The model is accurate in classifying the total sample correctly with accuracy rates exceeding 84 per cent. The results therefore demonstrate that the models function effectively in detecting FFS and could be of assistance to auditors, both internal and external, to taxation and other state authorities and to the banking system. the empirical results and discussion obtained using univariate tests and multivariate logistic regression analysis. Finally, in the fifth section come the concluding remarks.",
"title": ""
}
] |
[
{
"docid": "8fd145a23093ec058e4bef628d7fb083",
"text": "This paper presents an Aerosol Jet printed end-fire antenna operating at 24 GHz on a 3-D printed substrate for the first time. A compact quasi-Yagi-Uda antenna is chosen to achieve end-fire radiation pattern, and is optimized to be directly fed by a 50 Ω microstrip transmission line (MTL) with a balun. The partially metalized substrate serves as the ground plane for the microstrip line as well as the reflector for the quasi-Yagi-Uda antenna. Additionally, a cavity is integrated in the back of the substrate to provide low loss, high gain and good efficiency. The simulated performance shows 26.4 dB return loss at 24 GHz, 3.32 dBi of peak gain and 57.7% radiation efficiency. The measurements are carried out over a frequency range of 20 – 30 GHz with the return loss of 14.6 dB at 25.8 GHz. The combination of Aerosol Jet printing and 3-D Polyjet printing processes in this paper demonstrates a good advantage of additive manufacturing technology, which allows for highly efficient fabrication of low-profile antennas and other novel RF circuits.",
"title": ""
},
{
"docid": "e5643580e07810f0aaaa29cb7b262d76",
"text": "Modern computer vision algorithms often rely on very large training datasets. However, it is conceivable that a carefully selected subsample of the dataset is sufficient for training. In this paper, we propose a gradient-based importance measure that we use to empirically analyze relative importance of training images in four datasets of varying complexity. We find that in some cases, a small subsample is indeed sufficient for training. For other datasets, however, the relative differences in importance are negligible. These results have important implications for active learning on deep networks. Additionally, our analysis method can be used as a general tool to better understand diversity of training examples in datasets.",
"title": ""
},
{
"docid": "f98d224546769672b12e54d363eba131",
"text": "We present a novel means of describing local image appearances using binary strings. Binary descriptors have drawn increasing interest in recent years due to their speed and low memory footprint. A known shortcoming of these representations is their inferior performance compared to larger, histogram based descriptors such as the SIFT. Our goal is to close this performance gap while maintaining the benefits attributed to binary representations. To this end we propose the Learned Arrangements of Three Patch Codes descriptors, or LATCH. Our key observation is that existing binary descriptors are at an increased risk from noise and local appearance variations. This, as they compare the values of pixel pairs: changes to either of the pixels can easily lead to changes in descriptor values and compromise their performance. In order to provide more robustness, we instead propose a novel means of comparing pixel patches. This ostensibly small change, requires a substantial redesign of the descriptors themselves and how they are produced. Our resulting LATCH representation is rigorously compared to state-of-the-art binary descriptors and shown to provide far better performance for similar computation and space requirements.",
"title": ""
},
{
"docid": "d55aae728991060ed4ba1f9a6b59e2fe",
"text": "Evolutionary algorithms have become robust tool in data processing and modeling of dynamic, complex and non-linear processes due to their flexible mathematical structure to yield optimal results even with imprecise, ambiguity and noise at its input. The study investigates evolutionary algorithms for solving Sudoku task. Various hybrids are presented here as veritable algorithm for computing dynamic and discrete states in multipoint search in CSPs optimization with application areas to include image and video analysis, communication and network design/reconstruction, control, OS resource allocation and scheduling, multiprocessor load balancing, parallel processing, medicine, finance, security and military, fault diagnosis/recovery, cloud and clustering computing to mention a few. Solution space representation and fitness functions (as common to all algorithms) were discussed. For support and confidence model adopted π1=0.2 and π2=0.8 respectively yields better convergence rates – as other suggested value combinations led to either a slower or non-convergence. CGA found an optimal solution in 32 seconds after 188 iterations in 25 runs; while GSAGA found its optimal solution in 18 seconds after 402 iterations with a fitness progression achieved in 25 runs and consequently, GASA found an optimal solution 2.112 seconds after 391 iterations with fitness progression after 25 runs respectively.",
"title": ""
},
{
"docid": "80ddc34ac75a9d2f6b6fd59446a62243",
"text": "Yuze Niu 1,2, Yacong Zhang 1,2,*, Zhuo Zhang 1,2, Miaomiao Fan 1, Wengao Lu 1,2 and Zhongjian Chen 1,2 1 Key Laboratory of Microelectronic Devices and Circuits, Department of Microelectronics, Peking University, Beijing 100871, China; yzniu@pku.edu.cn (Y.N.); zhangzhuo1658@163.com (Z.Z.); mayunfmm@163.com (M.F.); wglu@pku.edu.cn (W.L.); chenzj@pku.edu.cn (Z.C.) 2 Peking University Information Technology Institute (Tianjin Binhai), Tianjin 300452, China * Correspondence: zhangyc@pku.edu.cn",
"title": ""
},
{
"docid": "4ac26e974e2d3861659323ae2aa7323c",
"text": "Episacral lipoma is a small, tender subcutaneous nodule primarily occurring over the posterior iliac crest. Episacral lipoma is a significant and treatable cause of acute and chronic low back pain. Episacral lipoma occurs as a result of tears in the thoracodorsal fascia and subsequent herniation of a portion of the underlying dorsal fat pad through the tear. This clinical entity is common, and recognition is simple. The presence of a painful nodule with disappearance of pain after injection with anaesthetic, is diagnostic. Medication and physical therapy may not be effective. Local injection of the nodule with a solution of anaesthetic and steroid is effective in treating the episacral lipoma. Here we describe 2 patients with painful nodules over the posterior iliac crest. One patient complained of severe lower back pain radiating to the left lower extremity and this patient subsequently underwent disc operation. The other patient had been treated for greater trochanteric pain syndrome. In both patients, symptoms appeared to be relieved by local injection of anaesthetic and steroid. Episacral lipoma should be considered during diagnostic workup and in differential diagnosis of acute and chronic low back pain.",
"title": ""
},
{
"docid": "4ad09f27848c5f47de5bb58a522c28a3",
"text": "The rapid development of deep learning are enabling a plenty of novel applications such as image and speech recognition for embedded systems, robotics or smart wearable devices. However, typical deep learning models like deep convolutional neural networks (CNNs) consume so much on-chip storage and high-throughput compute resources that they cannot be easily handled by mobile or embedded devices with thrifty silicon and power budget. In order to enable large CNN models in mobile or more cutting-edge devices for IoT or cyberphysics applications, we proposed an efficient on-chip memory architecture for CNN inference acceleration, and showed its application to our in-house general-purpose deep learning accelerator. The redesigned on-chip memory subsystem, Memsqueezer, includes an active weight buffer set and data buffer set that embrace specialized compression methods to reduce the footprint of CNN weight and data set respectively. The Memsqueezer buffer can compress the data and weight set according to their distinct features, and it also includes a built-in redundancy detection mechanism that actively scans through the work-set of CNNs to boost their inference performance by eliminating the data redundancy. In our experiment, it is shown that the CNN accelerators with Memsqueezer buffers achieves more than 2x performance improvement and reduces 80% energy consumption on average over the conventional buffer design with the same area budget.",
"title": ""
},
{
"docid": "7cd091555dd870cc1a71a4318bb5ff8d",
"text": "This paper presents the design and simulation of a wideband, medium gain, light weight, wide bandwidth pyramidal horn antenna feed for microwave applications. The horn was designed using an approximation method to calculate the gain in MATLAB and simulated using CST Microwave Studio. The proposed antenna operates within 1-2 GHz (L-band). The horn is supported by a rectangular wave guide. It is linearly polarized and shows wide bandwidth with a gain of 15.3 dB. The horn is excited with the monopole which is loaded with various top hat loading such as rectangular disc, circular disc, annular disc, L-type, T-type, Cone shape, U-shaped plates etc. and checked their performances for return loss as well as bandwidth. The circular disc and annular ring gives the low return loss and wide bandwidth as well as low VSWR. The annular ring gave good VSWR and return loss compared to the circular disc. The far field radiation pattern is obtained as well as E-field & H-field analysis for L-band pyramidal horn has been observed, simulated and optimized using CST Microwave Studio. The simulation results show that the pyramidal horn structure exhibits low VSWR as well as good radiation pattern over L-band.",
"title": ""
},
{
"docid": "a04e2df0d6ca5eae1db6569b43b897bd",
"text": "Workflow technologies have become a major vehicle for easy and efficient development of scientific applications. In the meantime, state-of-the-art resource provisioning technologies such as cloud computing enable users to acquire computing resources dynamically and elastically. A critical challenge in integrating workflow technologies with resource provisioning technologies is to determine the right amount of resources required for the execution of workflows in order to minimize the financial cost from the perspective of users and to maximize the resource utilization from the perspective of resource providers. This paper suggests an architecture for the automatic execution of large-scale workflow-based applications on dynamically and elastically provisioned computing resources. Especially, we focus on its core algorithm named PBTS (Partitioned Balanced Time Scheduling), which estimates the minimum number of computing hosts required to execute a workflow within a user-specified finish time. The PBTS algorithm is designed to fit both elastic resource provisioning models such as Amazon EC2 and malleable parallel application models such as MapReduce. The experimental results with a number of synthetic workflows and several real science workflows demonstrate that PBTS estimates the resource capacity close to the theoretical low bound. © 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8f597b84bf40474b852083c9abb78620",
"text": "The aim of this study was to re-examine individuals with gender identity disorder after as long a period of time as possible. To meet the inclusion criterion, the legal recognition of participants' gender change via a legal name change had to date back at least 10 years. The sample comprised 71 participants (35 MtF and 36 FtM). The follow-up period was 10-24 years with a mean of 13.8 years (SD = 2.78). Instruments included a combination of qualitative and quantitative methods: Clinical interviews were conducted with the participants, and they completed a follow-up questionnaire as well as several standardized questionnaires they had already filled in when they first made contact with the clinic. Positive and desired changes were determined by all of the instruments: Participants reported high degrees of well-being and a good social integration. Very few participants were unemployed, most of them had a steady relationship, and they were also satisfied with their relationships with family and friends. Their overall evaluation of the treatment process for sex reassignment and its effectiveness in reducing gender dysphoria was positive. Regarding the results of the standardized questionnaires, participants showed significantly fewer psychological problems and interpersonal difficulties as well as a strongly increased life satisfaction at follow-up than at the time of the initial consultation. Despite these positive results, the treatment of transsexualism is far from being perfect.",
"title": ""
},
{
"docid": "44faf0dd15da256cdbf5bf58e1b5a775",
"text": "We describe a practical path-planning algorithm that generates smooth paths for an autonomous vehicle operating in an unknown environment, where obstacles are detected online by the robot’s sensors. This work was motivated by and experimentally validated in the 2007 DARPA Urban Challenge, where robotic vehicles had to autonomously navigate parking lots. Our approach has two main steps. The first step uses a variant of the well-known A* search algorithm, applied to the 3D kinematic state space of the vehicle, but with a modified state-update rule that captures the continuous state of the vehicle in the discrete nodes of A* (thus guaranteeing kinematic feasibility of the path). The second step then improves the quality of the solution via numeric non-linear optimization, leading to a local (and frequently global) optimum. The path-planning algorithm described in this paper was used by the Stanford Racing Teams robot, Junior, in the Urban Challenge. Junior demonstrated flawless performance in complex general path-planning tasks such as navigating parking lots and executing U-turns on blocked roads, with typical fullcycle replaning times of 50–300ms. Introduction and Related Work We address the problem of path planning for an autonomous vehicle operating in an unknown environment. We assume the robot has adequate sensing and localization capability and must replan online while incrementally building an obstacle map. This scenario was motivated, in part, by the DARPA Urban Challenge, in which vehicles had to freely navigate parking lots. The path-planning algorithm described below was used by the Stanford Racing Team’s robot, Junior in the Urban Challenge (DARPA 2007). Junior (Figure 1) demonstrated flawless performance in complex general path-planning tasks—many involving driving in reverse—such as navigating parking lots, executing Uturns, and dealing with blocked roads and intersections with typical full-cycle replanning times of 50–300ms on a modern PC. 
One of the main challenges in developing a practical path planner for free navigation zones arises from the fact that the space of all robot controls—and hence trajectories—is continuous, leading to a complex continuous-variable optimization landscape. [Figure 1: Junior, our entry in the DARPA Urban Challenge, was used in all experiments. Junior is equipped with several LIDAR and RADAR units, and a high-accuracy inertial measurement system.] Much of prior work on search algorithms for path planning (Ersson and Hu 2001; Koenig and Likhachev 2002; Ferguson and Stentz 2005; Nash et al. 2007) yields fast algorithms for discrete state spaces, but those algorithms tend to produce paths that are non-smooth and do not generally satisfy the non-holonomic constraints of the vehicle. An alternative approach that guarantees kinematic feasibility is forward search in continuous coordinates, e.g., using rapidly exploring random trees (RRTs) (Kavraki et al. 1996; LaValle 1998; Plaku, Kavraki, and Vardi 2007). The key to making such continuous search algorithms practical for online implementations lies in an efficient guiding heuristic. Another approach is to directly formulate the path-planning problem as a non-linear optimization problem in the space of controls or parametrized curves (Cremean et al. 2006), but in practice guaranteeing fast convergence of such programs is difficult due to local minima. Our algorithm builds on the existing work discussed above, and consists of two main phases. The first step uses a heuristic search in continuous coordinates that guarantees kinematic feasibility of computed trajectories. While lacking theoretical optimality guarantees, in practice this first",
"title": ""
},
{
"docid": "5c32ca62b8ffcc8dd59f424e02a542cd",
"text": "We develop a systematic approach for analyzing client-server applications that aim to hide sensitive user data from untrusted servers. We then apply it to Mylar, a framework that uses multi-key searchable encryption (MKSE) to build Web applications on top of encrypted data.\n We demonstrate that (1) the Popa-Zeldovich model for MKSE does not imply security against either passive or active attacks; (2) Mylar-based Web applications reveal users' data and queries to passive and active adversarial servers; and (3) Mylar is generically insecure against active attacks due to system design flaws. Our results show that the problem of securing client-server applications against actively malicious servers is challenging and still unsolved.\n We conclude with general lessons for the designers of systems that rely on property-preserving or searchable encryption to protect data from untrusted servers.",
"title": ""
},
{
"docid": "9e3e23b3918d46738a3e03ccdf60c4df",
"text": "A 33-year-old man was found 20 cm above the floor, compressed by a rubbish container in the elevator, in an unusually awkward position. The scene investigation corresponded exactly with the localization of the injuries found on the victim. This is a case of death by thorax compression without other fatal factors, in which the force causing the chest compression was clearly determined by the autopsy and scene investigation as accidental traumatic asphyxia.",
"title": ""
},
{
"docid": "d71c2f3d1a10b5a2cb33247129bfd8e0",
"text": "PURPOSE OF REVIEW\nTo review the current practice in the field of auricular reconstruction and to highlight the recent advances reported in the medical literature.\n\n\nRECENT FINDINGS\nThe majority of surgeons who perform auricular reconstruction continue to employ the well-established techniques developed by Brent and Nagata. Surgery takes between two and four stages, with the initial stage being construction of a framework of autogenous rib cartilage which is implanted into a subcutaneous pocket. Several modifications of these techniques have been reported. More recently, synthetic frameworks have been employed instead of autogenous rib cartilage. For this procedure, the implant is generally covered with a temporoparietal flap and a skin graft at the first stage of surgery. Tissue engineering is a rapidly developing field, and there have been several articles related to the field of auricular reconstruction. These show great potential to offer a solution to the challenge associated with construction of a viable autogenous cartilage framework, whilst avoiding donor-site morbidity.\n\n\nSUMMARY\nThis article gives an overview of the current practice in the field of auricular reconstruction and summarizes the recent surgical developments and relevant tissue engineering research.",
"title": ""
},
{
"docid": "1c22ee7dc93c35b45a817866c822f0e7",
"text": "Despite the recent advances in test generation, fully automatic software testing remains a dream: Ultimately, any generated test input depends on a test oracle that determines correctness, and, except for generic properties such as “the program shall not crash”, such oracles require human input in one form or another. CrowdSourcing is a recently popular technique to automate computations that cannot be performed by machines, but only by humans. A problem is split into small chunks, that are then solved by a crowd of users on the Internet. In this paper we investigate whether it is possible to exploit CrowdSourcing to solve the oracle problem: We produce tasks asking users to evaluate CrowdOracles - assertions that reflect the current behavior of the program. If the crowd determines that an assertion does not match the behavior described in the code documentation, then a bug has been found. Our experiments demonstrate that CrowdOracles are a viable solution to automate the oracle problem, yet taming the crowd to get useful results is a difficult task.",
"title": ""
},
{
"docid": "a631dc73c63afd43affb9c9b1df07755",
"text": "Spectral clustering is a leading and popular technique in unsupervised data analysis. Two of its major limitations are scalability and generalization of the spectral embedding (i.e., out-of-sample-extension). In this paper we introduce a deep learning approach to spectral clustering that overcomes the above shortcomings. Our network, which we call SpectralNet, learns a map that embeds input data points into the eigenspace of their associated graph Laplacian matrix and subsequently clusters them. We train SpectralNet using a procedure that involves constrained stochastic optimization. Stochastic optimization allows it to scale to large datasets, while the constraints, which are implemented using a special-purpose output layer, allow us to keep the network output orthogonal. Moreover, the map learned by SpectralNet naturally generalizes the spectral embedding to unseen data points. To further improve the quality of the clustering, we replace the standard pairwise Gaussian affinities with affinities leaned from unlabeled data using a Siamese network. Additional improvement can be achieved by applying the network to code representations produced, e.g., by standard autoencoders. Our end-to-end learning procedure is fully unsupervised. In addition, we apply VC dimension theory to derive a lower bound on the size of SpectralNet. State-of-the-art clustering results are reported on the Reuters dataset. Our implementation is publicly available at https://github.com/kstant0725/SpectralNet.",
"title": ""
},
{
"docid": "60c03017f7254c28ba61348d301ae612",
"text": "Code flaws or vulnerabilities are prevalent in software systems and can potentially cause a variety of problems including deadlock, information loss, or system failure. A variety of approaches have been developed to try and detect the most likely locations of such code vulnerabilities in large code bases. Most of them rely on manually designing features (e.g. complexity metrics or frequencies of code tokens) that represent the characteristics of the code. However, all suffer from challenges in sufficiently capturing both semantic and syntactic representation of source code, an important capability for building accurate prediction models. In this paper, we describe a new approach, built upon the powerful deep learning Long Short Term Memory model, to automatically learn both semantic and syntactic features in code. Our evaluation on 18 Android applications demonstrates that the prediction power obtained from our learned features is equal or even superior to what is achieved by state of the art vulnerability prediction models: 3%–58% improvement for within-project prediction and 85% for cross-project prediction.",
"title": ""
},
{
"docid": "38173a209cd61ef7d2550d7dad5d2e93",
"text": "FinFET device has been proposed as a promising substitute for the traditional bulk CMOS-based device at the nanoscale, due to its extraordinary properties such as improved channel controllability, high ON/OFF current ratio, reduced short-channel effects, and relative immunity to gate line-edge roughness. In addition, the near-ideal subthreshold behavior indicates the potential application of FinFET circuits in the near-threshold supply voltage regime, which consumes an order of magnitude less energy than the regular strong-inversion circuits operating in the super-threshold supply voltage regime. This paper presents a design flow of creating standard cells by using the FinFET 5nm technology node, including both near-threshold and super-threshold operations, and building a Liberty-format standard cell library. The circuit synthesis results of various combinational and sequential circuits based on the 5nm FinFET standard cell library show up to 40X circuit speed improvement and three orders of magnitude energy reduction compared to those of 45nm bulk CMOS technology.",
"title": ""
},
{
"docid": "42f7b11d84110d124a23cdd34545bb93",
"text": "Joint extraction of entities and relations is an important task in information extraction. To tackle this problem, we first propose a novel tagging scheme that can convert the joint extraction task to a tagging problem. Then, based on our tagging scheme, we study different end-to-end models to extract entities and their relations directly, without identifying entities and relations separately. We conduct experiments on a public dataset produced by the distant supervision method, and the experimental results show that the tagging based methods are better than most of the existing pipelined and joint learning methods. What’s more, the end-to-end model proposed in this paper achieves the best results on the public dataset.",
"title": ""
}
] |
scidocsrr
|
b9a83faba440f02d5853082d0604fad5
|
Image denoising with norm weighted fusion estimators
|
[
{
"docid": "a19c27371c6bf366fddabc2fd3f277b7",
"text": "Simultaneous sparse coding (SSC) or nonlocal image representation has shown great potential in various low-level vision tasks, leading to several state-of-the-art image restoration techniques, including BM3D and LSSC. However, it still lacks a physically plausible explanation about why SSC is a better model than conventional sparse coding for the class of natural images. Meanwhile, the problem of sparsity optimization, especially when tangled with dictionary learning, is computationally difficult to solve. In this paper, we take a low-rank approach toward SSC and provide a conceptually simple interpretation from a bilateral variance estimation perspective, namely that singular-value decomposition of similar packed patches can be viewed as pooling both local and nonlocal information for estimating signal variances. Such perspective inspires us to develop a new class of image restoration algorithms called spatially adaptive iterative singular-value thresholding (SAIST). For noise data, SAIST generalizes the celebrated BayesShrink from local to nonlocal models; for incomplete data, SAIST extends previous deterministic annealing-based solution to sparsity optimization through incorporating the idea of dictionary learning. In addition to conceptual simplicity and computational efficiency, SAIST has achieved highly competent (often better) objective performance compared to several state-of-the-art methods in image denoising and completion experiments. Our subjective quality results compare favorably with those obtained by existing techniques, especially at high noise levels and with a large amount of missing data.",
"title": ""
},
{
"docid": "cda19d99a87ca769bb915167f8a842e8",
"text": "Sparse coding---that is, modelling data vectors as sparse linear combinations of basis elements---is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples. A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.",
"title": ""
}
] |
[
{
"docid": "4aa6103dca92cf8663139baf93f78a80",
"text": "We propose a unified approach for summarization based on the analysis of video structures and video highlights. Our approach emphasizes both the content balance and perceptual quality of a summary. Normalized cut algorithm is employed to globally and optimally partition a video into clusters. A motion attention model based on human perception is employed to compute the perceptual quality of shots and clusters. The clusters, together with the computed attention values, form a temporal graph similar to Markov chain that inherently describes the evolution and perceptual importance of video clusters. In our application, the flow of a temporal graph is utilized to group similar clusters into scenes, while the attention values are used as guidelines to select appropriate sub-shots in scenes for summarization.",
"title": ""
},
{
"docid": "0ea92e1f3071ae469cc97e430e4591bb",
"text": "Organizations, be they private or public, often collect personal information about individuals who are their customers or clients. The personal information of an individual is private and sensitive and has to be secured from data mining algorithms which an adversary may apply to get access to the private information. In this paper we consider the problem of securing this private and sensitive information when it is used in a random forest classifier in the framework of differential privacy. We have incorporated the concept of differential privacy into the classical random forest algorithm. Experimental results show that quality functions such as information gain, max operator and gini index give almost equal accuracy regardless of their sensitivity to the noise. Also, the accuracy of the classical random forest and the differentially private random forest is almost equal for different sizes of datasets. The proposed algorithm works for datasets with categorical as well as continuous attributes.",
"title": ""
},
{
"docid": "71b5a4d02be14868302f1b60d0a26484",
"text": "In cloud computing, data owners host their data on cloud servers and users (data consumers) can access the data from cloud servers. Due to the data outsourcing, however, this new paradigm of data hosting service also introduces new security challenges, which requires an independent auditing service to check the data integrity in the cloud. Some existing remote integrity checking methods can only serve for static archive data and, thus, cannot be applied to the auditing service since the data in the cloud can be dynamically updated. Thus, an efficient and secure dynamic auditing protocol is desired to convince data owners that the data are correctly stored in the cloud. In this paper, we first design an auditing framework for cloud storage systems and propose an efficient and privacy-preserving auditing protocol. Then, we extend our auditing protocol to support the data dynamic operations, which is efficient and provably secure in the random oracle model. We further extend our auditing protocol to support batch auditing for both multiple owners and multiple clouds, without using any trusted organizer. The analysis and simulation results show that our proposed auditing protocols are secure and efficient, and in particular reduce the computation cost of the auditor.",
"title": ""
},
{
"docid": "c04cc8c930b534d57f729d9e53fd283b",
"text": "This paper presents a morphological classification of languages from the IR perspective. Linguistic typology research has shown that the morphological complexity of each language of the world can be described by two variables, index of synthesis and index of fusion. These variables provide a theoretical basis for IR research handling morphological issues. A common theoretical framework is needed in particular due to the increasing significance of cross-language retrieval research and CLIR systems processing different languages. The paper elaborates the linguistic morphological typology for the purposes of IR research. It is studied how the indices of synthesis and fusion could be used as practical tools in monoand cross-lingual IR research. The need for semantic and syntactic typologies is discussed. The paper also reviews studies done in different languages on the effects of morphology and stemming in IR.",
"title": ""
},
{
"docid": "12e338b699fd5747afdb93ba07c3a672",
"text": "Domain shift refers to the well known problem that a model trained in one source domain performs poorly when applied to a target domain with different statistics. Domain Generalization (DG) techniques attempt to alleviate this issue by producing models which by design generalize well to novel testing domains. We propose a novel meta-learning method for domain generalization. Rather than designing a specific model that is robust to domain shift as in most previous DG work, we propose a model agnostic training procedure for DG. Our algorithm simulates train/test domain shift during training by synthesizing virtual testing domains within each mini-batch. The meta-optimization objective requires that steps to improve training domain performance should also improve testing domain performance. This meta-learning procedure trains models with good generalization ability to novel domains. We evaluate our method and achieve state of the art results on a recent cross-domain image classification benchmark, as well demonstrating its potential on two classic reinforcement learning tasks.",
"title": ""
},
{
"docid": "5c4a81dd06b5c80ba7c32a9ac1673a4f",
"text": "We present a complete software architecture for reliable grasping of household objects. Our work combines aspects such as scene interpretation from 3D range data, grasp planning, motion planning, and grasp failure identification and recovery using tactile sensors. We build upon, and add several new contributions to the significant prior work in these areas. A salient feature of our work is the tight coupling between perception (both visual and tactile) and manipulation, aiming to address the uncertainty due to sensor and execution errors. This integration effort has revealed new challenges, some of which can be addressed through system and software engineering, and some of which present opportunities for future research. Our approach is aimed at typical indoor environments, and is validated by long running experiments where the PR2 robotic platform was able to consistently grasp a large variety of known and unknown objects. The set of tools and algorithms for object grasping presented here have been integrated into the open-source Robot Operating System (ROS).",
"title": ""
},
{
"docid": "8e7c2943eb6df575bf847cd67b6424dc",
"text": "Today, money laundering poses a serious threat not only to financial institutions but also to the nation. This criminal activity is becoming more and more sophisticated and seems to have moved from the cliché of drug trafficking to financing terrorism, and surely not forgetting personal gain. Most international financial institutions have been implementing anti-money laundering solutions to fight investment fraud. However, traditional investigative techniques consume numerous man-hours. Recently, data mining approaches have been developed and are considered well-suited techniques for detecting money laundering activities. Within the scope of a collaboration project for the purpose of developing a new solution for the anti-money laundering units in an international investment bank, we proposed a simple and efficient data mining-based solution for anti-money laundering. In this paper, we present this solution developed as a tool and show some preliminary experiment results with real transaction datasets.",
"title": ""
},
{
"docid": "6961b34ae6e5043be5f777dbd7818ebf",
"text": "Sign language is the communication medium for the deaf and the mute people. It uses hand gestures along with the facial expressions and the body language to convey the intended message. This paper proposes a novel approach of interpreting the sign language using the portable smart glove. LED-LDR pair on each finger senses the signing gesture and couples the analog voltage to the microcontroller. The microcontroller MSP430G2553 converts these analog voltage values to digital samples and the ASCII code of the letter gestured is wirelessly transmitted using the ZigBee. Upon reception, the letter corresponding to the received ASCII code is displayed on the computer and the corresponding audio is played.",
"title": ""
},
{
"docid": "db31a02d996b0a36d0bf215b7b7e8354",
"text": "This paper presents methods to analyze functional brain networks and signals from graph spectral perspectives. The notion of frequency and filters traditionally defined for signals supported on regular domains such as discrete time and image grids has been recently generalized to irregular graph domains and defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into pieces corresponding to smooth or rapid variations. We relate graph frequency with principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we notice a strong association between graph spectral properties of brain networks and the level of exposure to tasks performed and recognize the most contributing and important frequency signatures at different levels of task familiarity.",
"title": ""
},
{
"docid": "41353a12a579f72816f1adf3cba154dd",
"text": "The crux of our initialization technique is n-gram selection, which assists neural networks to extract important n-gram features at the beginning of the training process. In the following tables, we illustrate those selected n-grams of different classes and datasets to understand our technique intuitively. Since all of MR, SST-1, SST-2, CR, and MPQA are sentiment classification datasets, we only report the selected n-grams of SST-1 (Table 1). N-grams selected by our method in SUBJ and TREC are shown in Table 2 and Table 3.",
"title": ""
},
{
"docid": "9009f20f639de20d28ba01fac60db9d0",
"text": "We propose strategies for selecting a good neural network architecture for modeling any specific data set. Our approach involves efficiently searching the space of possible architectures and selecting a \"best\" architecture based on estimates of generalization performance. Since an exhaustive search over the space of architectures is computationally infeasible, we propose heuristic strategies which dramatically reduce the search complexity. These employ directed search algorithms, including selecting the number of nodes via sequential network construction (SNC), sensitivity based pruning (SBP) of inputs, and optimal brain damage (OBD) pruning for weights. A selection criterion, the estimated generalization performance or prediction risk, is used to guide the heuristic search and to choose the final network. Both predicted squared error (PSE) and nonlinear cross-validation (NCV) are used for estimating the prediction risk from the available data. We apply these heuristic search and prediction risk estimation techniques to the problem of predicting corporate bond ratings. This problem is very attractive as a case study, since it is characterized by a limited set of data and by the lack of a complete a priori model which could be used to impose a structure to the network architecture.",
"title": ""
},
{
"docid": "720b7ede75f47e9ce4dc13b1876dbf33",
"text": "The organization of lateral septal connections has been re-examined with respect to its newly defined subdivisions, using anterograde (PHAL) and retrograde (fluorogold) axonal tracer methods. The results confirm that progressively more ventral transverse bands in the hippocampus (defined by the orientation of the trisynaptic circuit) innervate progressively more ventral, transversely oriented sheets in the lateral septum. In addition, hippocampal field CA3 projects selectively to the caudal part of the lateral septal nucleus, which occupies topologically lateral regions of the transverse sheets, whereas field CA1 and the subiculum project selectively to the rostral and ventral parts of the lateral septal nucleus, which occupy topologically medial regions of the transverse sheets. Finally, the evidence suggests that progressively more ventral hippocampal bands innervate progressively thicker lateral septal sheets. In contrast, ascending inputs to the lateral septum appear to define at least 20 vertically oriented bands or subdivisions arranged orthogonal to the hippocampal input (Risold, P.Y. and Swanson, L.W., Chemoarchitecture of the rat lateral septal nucleus, Brain Res. Rev., 24 (1997) 91-113). Hypothalamic nuclei forming parts of behavior-specific subsystems share bidirectional connections with specific subdivisions of the lateral septal nucleus (especially the rostral part), suggesting that specific domains in the hippocampus may influence specific hypothalamic behavioral systems. In contrast, the caudal part of the lateral septal nucleus projects to the lateral hypothalamus and to the supramammillary nucleus, which projects back to the hippocampus and receives its major inputs from brainstem cell groups thought to regulate behavioral state. The neural system mediating defensive behavior shows these features rather clearly, and what is known about its organization is discussed in some detail.",
"title": ""
},
{
"docid": "4f6638d19d3c4ba3ac970007e41a3682",
"text": "A novel learning framework is proposed for anomalous behaviour detection in a video surveillance scenario, so that a classifier which distinguishes between normal and anomalous behaviour patterns can be incrementally trained with the assistance of a human operator. We consider the behaviour of pedestrians in terms of motion trajectories, and parametrise these trajectories using the control points of approximating cubic spline curves. This paper demonstrates an incremental semi-supervised one-class learning procedure in which unlabelled trajectories are combined with occasional examples of normal behaviour labelled by a human operator. This procedure is found to be effective on two different datasets, indicating that a human operator could potentially train the system to detect anomalous behaviour by providing only occasional interventions (a small percentage of the total number of observations).",
"title": ""
},
{
"docid": "2cebd2fd12160d2a3a541989293f10be",
"text": "A compact Vivaldi antenna array printed on thick substrate and fed by a Substrate Integrated Waveguides (SIW) structure has been developed. The antenna array utilizes a compact SIW binary divider to significantly minimize the feed structure insertion losses. The low-loss SIW binary divider has a common novel Grounded Coplanar Waveguide (GCPW) feed to provide a wideband transition to the SIW and to sustain a good input match while preventing higher order modes excitation. The antenna array was designed, fabricated, and thoroughly investigated. Detailed simulations of the antenna and its feed, in addition to its relevant measurements, will be presented in this paper.",
"title": ""
},
{
"docid": "5f35ed926a267dc9f80d110e87c06e5a",
"text": "Face detection is one of the most studied topics in computer vision literature, not only because of the challenging nature of face as an object, but also due to the countless applications that require the application of face detection as a first step. During the past 15 years, tremendous progress has been made due to the availability of data in unconstrained capture conditions (so-called 'in-the-wild') through the Internet, the effort made by the community to develop publicly available benchmarks, as well as the progress in the development of robust computer vision algorithms. In this paper, we survey the recent advances in real-world face detection techniques, beginning with the seminal Viola-Jones face detector methodology. These techniques are roughly categorized into two general schemes: rigid templates, learned mainly via boosting based methods or by the application of deep neural networks, and deformable models that describe the face by its parts. Representative methods will be described in detail, along with a few additional successful methods that we briefly go through at the end. Finally, we survey the main databases used for the evaluation of face detection algorithms and recent benchmarking efforts, and discuss the future of face detection.",
"title": ""
},
{
"docid": "e7374affb280ac8c24d45f99a8b62c98",
"text": "Deep generative models (DGMs) can effectively capture the underlying distributions of complex data by learning multilayered representations and performing inference. However, it is relatively insufficient to boost the discriminative ability of DGMs. This paper presents max-margin deep generative models (mmDGMs) and a class-conditional variant (mmDCGMs), which explore the strongly discriminative principle of max-margin learning to improve the predictive performance of DGMs in both supervised and semi-supervised learning, while retaining the generative capability. In semi-supervised learning, we use the predictions of a max-margin classifier as the missing labels instead of performing full posterior inference for efficiency; we also introduce additional max-margin and label-balance regularization terms of unlabeled data for effectiveness. We develop an efficient doubly stochastic subgradient algorithm for the piecewise linear objectives in different settings. Empirical results on various datasets demonstrate that: (1) max-margin learning can significantly improve the prediction performance of DGMs and meanwhile retain the generative ability; (2) in supervised learning, mmDGMs are competitive to the best fully discriminative networks when employing convolutional neural networks as the generative and recognition models; and (3) in semi-supervised learning, mmDCGMs can perform efficient inference and achieve state-of-the-art classification results on several benchmarks.",
"title": ""
},
{
"docid": "56b8be88bcd56ce8fd730947bb9437fc",
"text": "Cross site scripting (XSS) is one of the major threats to web application security, and research is still underway to find an effective and useful way to analyse the source code of web applications and remove this threat. XSS occurs when malicious scripts are injected into a web application, and it can lead to significant violations at the site or for the user. Several solutions have been recommended for its detection. However, their results do not appear to be effective enough to resolve the issue. This paper recommends a methodology for the detection of XSS in PHP web applications using a genetic algorithm (GA) and static analysis. The methodology enhances the earlier approaches of determining XSS vulnerability in the web application by eliminating the infeasible paths from the control flow graph (CFG). This aids in reducing the false positive rate in the outcomes. The results of the experiments indicated that our methodology is more effective in detecting XSS vulnerability in PHP web applications compared to the earlier studies, in terms of the false positive rates and the concrete susceptible paths determined by the GA generator. Keywords—Web Application Security; Security Vulnerability; Web Testing; Cross Site Scripting; Genetic Algorithm",
"title": ""
},
{
"docid": "83580c373e9f91b021d90f520011a5da",
"text": "Pathfinding for a single agent is the problem of planning a route from an initial location to a goal location in an environment, going around obstacles. Pathfinding for multiple agents also aims to plan such routes for each agent, subject to different constraints, such as restrictions on the length of each path or on the total length of paths, no self-intersecting paths, no intersection of paths/plans, no crossing/meeting each other. It also has variations for finding optimal solutions, e.g., with respect to the maximum path length, or the sum of plan lengths. These problems are important for many real-life applications, such as motion planning, vehicle routing, environmental monitoring, patrolling, computer games. Motivated by such applications, we introduce a formal framework that is general enough to address all these problems: we use the expressive high-level representation formalism and efficient solvers of the declarative programming paradigm Answer Set Programming. We also introduce heuristics to improve the computational efficiency and/or solution quality. We show the applicability and usefulness of our framework by experiments, with randomly generated problem instances on a grid, on a real-world road network, and on a real computer game terrain.",
"title": ""
}
] |
scidocsrr
|
d9051f257ea8a30b1eb58b1fbdfd8261
|
Video2vec: Learning semantic spatio-temporal embeddings for video representation
|
[
{
"docid": "78bd1c7ea28a4af60991b56ccd658d7f",
"text": "The number of categories for action recognition is growing rapidly. It is thus becoming increasingly hard to collect sufficient training data to learn conventional models for each category. This issue may be ameliorated by the increasingly popular “zero-shot learning” (ZSL) paradigm. In this framework a mapping is constructed between visual features and a human interpretable semantic description of each category, allowing categories to be recognised in the absence of any training data. Existing ZSL studies focus primarily on image data, and attribute-based semantic representations. In this paper, we address zero-shot recognition in contemporary video action recognition tasks, using semantic word vector space as the common space to embed videos and category labels. This is more challenging because the mapping between the semantic space and space-time features of videos containing complex actions is more complex and harder to learn. We demonstrate that a simple self-training and data augmentation strategy can significantly improve the efficacy of this mapping. Experiments on human action datasets including HMDB51 and UCF101 demonstrate that our approach achieves the state-of-the-art zero-shot action recognition performance.",
"title": ""
}
] |
[
{
"docid": "1a101ae3faeaa775737799c2324ef603",
"text": "In recent years, with the rapid development and wide application of Internet of Things (IoT) technology, greenhouse technology in agriculture has been moving toward automation and informatization. In this paper, the integration of control networks and information networks through IoT technology is studied based on the actual situation of agricultural production. A remote monitoring system combining the Internet and wireless communications is proposed. An accompanying information management system is also designed. The data collected by the system are provided to agricultural research facilities.",
"title": ""
},
{
"docid": "a00ac4cefbb432ffcc6535dd8fd56880",
"text": "Mobile activity recognition focuses on inferring current user activities by leveraging sensory data available on today's sensor rich mobile phones. Supervised learning with static models has been applied pervasively for mobile activity recognition. In this paper, we propose a novel phone-based dynamic recognition framework with evolving data streams for activity recognition. The novel framework incorporates incremental and active learning for real-time recognition and adaptation in streaming settings. While stream evolves, we refine, enhance and personalise the learning model in order to accommodate the natural drift in a given data stream. Extensive experimental results using real activity recognition data have evidenced that the novel dynamic approach shows improved performance of recognising activities especially across different users.",
"title": ""
},
{
"docid": "f7a36f939cbe9b1d403625c171491837",
"text": "This paper explores the socio-demographic variables (age, gender, ethnicity, education, work status, and disability) and study environment (course programme and course block) that may influence persistence or dropout of students at the Open Polytechnic of New Zealand. We examine to what extent these factors, i.e., enrolment data, help us pre-identify successful and unsuccessful students. The data stored in the Open Polytechnic student management system from 2006 to 2009, covering over 450 students who enrolled in the 71150 Information Systems course, were used to perform a quantitative analysis of study outcome. Based on data mining techniques (such as feature selection and classification trees), the most important factors for student success and a profile of the typical successful and unsuccessful students are identified. The empirical results show the following: (i) the most important factors separating successful from unsuccessful students are ethnicity, course programme, and course block; (ii) among classification tree growing methods, Classification and Regression Tree (CART) was the most successful in growing the tree, with an overall percentage of correct classification of 60.5%; and (iii) both the risk estimated by cross-validation and the gain diagram suggest that all trees based only on enrolment data are not very good at separating successful from unsuccessful students. The implications of these results for academic and administrative staff are discussed.",
"title": ""
},
{
"docid": "10ef85ecd94f8ef30ee5e3cadc3697eb",
"text": "Spam in Online Social Networks (OSNs) is a systemic problem that imposes a threat to these services in terms of undermining their value to advertisers and potential investors, as well as negatively affecting users' engagement. In this work, we present a unique analysis of spam accounts in OSNs viewed through the lens of their behavioral characteristics (i.e., profile properties and social interactions). Our analysis includes over 100 million tweets collected over the course of one month, generated by approximately 30 million distinct user accounts, of which over 7% are suspended or removed due to abusive behaviors and other violations. We show that there exist two behaviorally distinct categories of twitter spammers and that they employ different spamming strategies. The users in these two categories demonstrate different individual properties as well as social interaction patterns. As the Twitter spammers continuously keep creating newer accounts upon being caught, a behavioral understanding of their spamming behavior will be vital in the design of future social media defense mechanisms.",
"title": ""
},
{
"docid": "08804b3859d70c6212bba05c7e792f9a",
"text": "Both linear mixed models (LMMs) and sparse regression models are widely used in genetics applications, including, recently, polygenic modeling in genome-wide association studies. These two approaches make very different assumptions, so are expected to perform well in different situations. However, in practice, for a given dataset one typically does not know which assumptions will be more accurate. Motivated by this, we consider a hybrid of the two, which we refer to as a \"Bayesian sparse linear mixed model\" (BSLMM) that includes both these models as special cases. We address several key computational and statistical issues that arise when applying BSLMM, including appropriate prior specification for the hyper-parameters and a novel Markov chain Monte Carlo algorithm for posterior inference. We apply BSLMM and compare it with other methods for two polygenic modeling applications: estimating the proportion of variance in phenotypes explained (PVE) by available genotypes, and phenotype (or breeding value) prediction. For PVE estimation, we demonstrate that BSLMM combines the advantages of both standard LMMs and sparse regression modeling. For phenotype prediction it considerably outperforms either of the other two methods, as well as several other large-scale regression methods previously suggested for this problem. Software implementing our method is freely available from http://stephenslab.uchicago.edu/software.html.",
"title": ""
},
{
"docid": "4e20e28f7da8c76a6868ed7167a49c1b",
"text": "Nature-inspired algorithms are among the most potent for optimization. The Cuckoo Search (CS) algorithm is one such algorithm, which is efficient in solving optimization problems in varied fields. This paper appraises the basic concepts of the cuckoo search algorithm and its application towards the segmentation of brain tumors from Magnetic Resonance Images (MRI). The human brain is the most complex structure, where identifying tumor-like diseases is extremely challenging because differentiating the components of the brain is complex. The tumor may sometimes occur with the same intensity as normal tissues. The tumor, edema, blood clot and some parts of the brain tissues appear the same and make the work of the radiologist more complex. In general, a brain tumor is detected by a radiologist through a comprehensive analysis of MR images, which takes a substantially longer time. The key innovation is to develop a diagnostic system using the best optimization technique, called cuckoo search, that would assist the radiologist in having a second opinion regarding the presence or absence of a tumor. This paper explores the CS algorithm, performing a profound study of its search mechanisms to discover how efficient it is in detecting tumors, and compares the results with other commonly used optimization algorithms.",
"title": ""
},
{
"docid": "b5070b6b55a7fe64fc18993ad9cd7325",
"text": "STUDY OBJECTIVE\nto determine the efficacy of fish-oil dietary supplements in active rheumatoid arthritis and their effect on neutrophil leukotriene levels.\n\n\nDESIGN\nnonrandomized, double-blinded, placebo-controlled, crossover trial with 14-week treatment periods and 4-week washout periods.\n\n\nSETTING\nacademic medical center, referral-based rheumatology clinic.\n\n\nPATIENTS\nforty volunteers with active, definite, or classical rheumatoid arthritis. Five patients dropped out, and two were removed for noncompliance.\n\n\nINTERVENTIONS\ntreatment with nonsteroidal anti-inflammatory drugs, slow-acting antirheumatic drugs, and prednisone was continued. Twenty-one patients began with a daily dosage of 2.7 g of eicosapentaenoic acid and 1.8 g of docosahexaenoic acid given in 15 MAX-EPA capsules (R.P. Scherer, Clearwater, Florida), and 19 began with identical-appearing placebos. The background diet was unchanged.\n\n\nMEASUREMENTS AND MAIN RESULTS\nthe following results favored fish oil over placebo after 14 weeks: mean time to onset of fatigue improved by 156 minutes (95% confidence interval, 1.2 to 311.0 minutes), and number of tender joints decreased by 3.5 (95% CI, -6.0 to -1.0). Other clinical measures favored fish oil as well but did not reach statistical significance. Neutrophil leukotriene B4 production was correlated with the decrease in number of tender joints (Spearman rank correlation r=0.53; p less than 0.05). There were no statistically significant differences in hemoglobin level, sedimentation rate, or presence of rheumatoid factor or in patient-reported adverse effects. An effect from the fish oil persisted beyond the 4-week washout period.\n\n\nCONCLUSIONS\nfish-oil ingestion results in subjective alleviation of active rheumatoid arthritis and reduction in neutrophil leukotriene B4 production. Further studies are needed to elucidate mechanisms of action and optimal dose and duration of fish-oil supplementation.",
"title": ""
},
{
"docid": "3f5eed1f718e568dc3ba9abbcd6bfedd",
"text": "The automatic recognition of spontaneous emotions from speech is a challenging task. On the one hand, acoustic features need to be robust enough to capture the emotional content for various styles of speaking, while on the other, machine learning algorithms need to be insensitive to outliers while being able to model the context. Whereas the latter has been tackled by the use of Long Short-Term Memory (LSTM) networks, the former is still under very active investigation, even though more than a decade of research has provided a large set of acoustic descriptors. In this paper, we propose a solution to the problem of 'context-aware' emotionally relevant feature extraction, by combining Convolutional Neural Networks (CNNs) with LSTM networks, in order to automatically learn the best representation of the speech signal directly from the raw time representation. In this novel work on so-called end-to-end speech emotion recognition, we show that the use of the proposed topology significantly outperforms the traditional approaches based on signal processing techniques for the prediction of spontaneous and natural emotions on the RECOLA database.",
"title": ""
},
{
"docid": "cae269a1eee20846aa2ea83cbf1d0ecc",
"text": "Metformin has utility in cancer prevention and treatment, though the mechanisms for these effects remain elusive. Through genetic screening in C. elegans, we uncover two metformin response elements: the nuclear pore complex (NPC) and acyl-CoA dehydrogenase family member-10 (ACAD10). We demonstrate that biguanides inhibit growth by inhibiting mitochondrial respiratory capacity, which restrains transit of the RagA-RagC GTPase heterodimer through the NPC. Nuclear exclusion renders RagC incapable of gaining the GDP-bound state necessary to stimulate mTORC1. Biguanide-induced inactivation of mTORC1 subsequently inhibits growth through transcriptional induction of ACAD10. This ancient metformin response pathway is conserved from worms to humans. Both restricted nuclear pore transit and upregulation of ACAD10 are required for biguanides to reduce viability in melanoma and pancreatic cancer cells, and to extend C. elegans lifespan. This pathway provides a unified mechanism by which metformin kills cancer cells and extends lifespan, and illuminates potential cancer targets. PAPERCLIP.",
"title": ""
},
{
"docid": "6cf3f0b1cb7a687d0c04dc91c574cda8",
"text": "In recent years, crowdsourcing has become essential in a wide range of Web applications. One of the biggest challenges of crowdsourcing is the quality of crowd answers, as workers have wide-ranging levels of expertise and the worker community may contain faulty workers. Although various techniques for quality control have been proposed, a post-processing phase in which crowd answers are validated is still required. Validation is typically conducted by experts, whose availability is limited and who incur high costs. Therefore, we develop a probabilistic model that helps to identify the most beneficial validation questions in terms of both improvement of result correctness and detection of faulty workers. Our approach allows us to guide the expert's work by collecting input on the most problematic cases, thereby achieving a set of high quality answers even if the expert does not validate the complete answer set. Our comprehensive evaluation using both real-world and synthetic datasets demonstrates that our techniques save up to 50% of expert efforts compared to baseline methods when striving for perfect result correctness. In absolute terms, for most cases, we achieve close to perfect correctness after expert input has been sought for only 20% of the questions.",
"title": ""
},
{
"docid": "89a04e656c8e42a78363a5087771b58d",
"text": "Analyzing the security of Wearable Internet-of-Things (WIoT) devices is considered a complex task due to their heterogeneous nature. In addition, there is currently no mechanism that performs security testing for WIoT devices in different contexts. In this article, we propose an innovative security testbed framework targeted at wearable devices, where a set of security tests are conducted, and a dynamic analysis is performed by realistically simulating environmental conditions in which WIoT devices operate. The architectural design of the proposed testbed and a proof-of-concept, demonstrating a preliminary analysis and the detection of context-based attacks executed by smartwatch devices, are presented.",
"title": ""
},
{
"docid": "9b018c07a07a9cf5656f853f71d72d14",
"text": "Generic steganalysis aims to detect the presence of covert communication by identifying the given test data as stego/cover media. Thresholded adjacent pixel differences using different scan paths have been used to highlight feeble embedding artifacts created by a low-rate embedding process. The scan paths normally made use of in the embedding process have been utilized for a steganalytic scheme. A co-occurrence matrix derived from thresholded adjacent pixel differences serves as the feature vector, aiding detection of stego images carrying very minimal payloads.",
"title": ""
},
{
"docid": "3cf81ab6772fdfb6471f6e711d8c5b90",
"text": "Data center networks usually employ the scale-out model to provide high bisection bandwidth for applications. A large amount of data is required to be transferred frequently between servers across multiple paths. However, traditional load balancing algorithms like equal-cost multi-path routing are not suitable for rapidly varying traffic in data center networks. Based on the special data center topologies and traffic characteristics, researchers have recently proposed some novel traffic scheduling mechanisms to balance traffic. In this paper, we present a comprehensive survey of recent solutions for load balancing in data center networks. First, recently proposed data center network topologies and the studies of traffic characteristics are introduced. Second, the definition of the load-balancing problem is described. Third, we analyze the differences between data center load balancing mechanisms and traditional Internet traffic scheduling. Then, we present an in-depth overview of recent data center load balancing mechanisms. Finally, we analyze the performance of these solutions and discuss future research directions.",
"title": ""
},
{
"docid": "bc6cbf7da118c01d74914d58a71157ac",
"text": "Currently, there is increasing interest in text-to-speech (TTS) synthesis using sequence-to-sequence models with attention. These models are end-to-end, meaning that they learn both co-articulation and duration properties directly from text and speech. Since these models are entirely data-driven, they need large amounts of data to generate synthetic speech with good quality. However, in challenging speaking styles, such as Lombard speech, it is difficult to record sufficiently large speech corpora. Therefore, in this study we propose a transfer learning method to adapt a sequence-to-sequence based TTS system from a normal speaking style to Lombard style. Moreover, we experiment with a WaveNet vocoder in the synthesis of Lombard speech. We conducted subjective evaluations to assess the performance of the adapted TTS systems. The subjective evaluation results indicated that an adaptation system with the WaveNet vocoder clearly outperformed the conventional deep neural network based TTS system in the synthesis of Lombard speech.",
"title": ""
},
{
"docid": "99d17b558e4ecbcb4cb63d90a9ce2b2d",
"text": "PURPOSE\nManitoba Oculotrichoanal (MOTA) syndrome is an autosomal recessive disorder present in First Nations families that is characterized by ocular (cryptophthalmos), facial, and genital anomalies. At the commencement of this study, its genetic basis was undefined.\n\n\nMETHODS\nHomozygosity analysis was employed to map the causative locus using DNA samples from four probands of Cree ancestry. After single nucleotide polymorphism (SNP) genotyping, data were analyzed and exported to PLINK to identify regions identical by descent (IBD) and common to the probands. Candidate genes within and adjacent to the IBD interval were sequenced to identify pathogenic variants, with analyses of potential deletions or duplications undertaken using the B-allele frequency and log(2) ratio of SNP signal intensity.\n\n\nRESULTS\nAlthough no shared IBD region >1 Mb was evident on preliminary analysis, adjusting the criteria to permit the detection of smaller homozygous IBD regions revealed one 330 Kb segment on chromosome 9p22.3 present in all 4 probands. This interval, comprising 152 SNPs, lies 16 Kb downstream of FRAS1-related extracellular matrix protein 1 (FREM1), and no copy number variations were detected either in the IBD region or FREM1. Subsequent sequencing of both genes in the IBD region, followed by FREM1, did not reveal any mutations.\n\n\nCONCLUSIONS\nThis study illustrates the utility of studying geographically isolated populations to identify genomic regions responsible for disease through analysis of small numbers of affected individuals. The location of the IBD region 16 kb from FREM1 suggests the phenotype in these patients is attributable to a variant outside of FREM1, potentially in a regulatory element, whose identification may prove tractable to next generation sequencing. In the context of recent identification of FREM1 coding mutations in a proportion of MOTA cases, characterization of such additional variants offers scope both to enhance understanding of FREM1's role in cranio-facial biology and may facilitate genetic counselling in populations with high prevalences of MOTA to reduce the incidence of this disorder.",
"title": ""
},
{
"docid": "527e750a6047100cba1f78a3036acb9b",
"text": "This paper presents a Generative Adversarial Network (GAN) to model multi-turn dialogue generation, which trains a latent hierarchical recurrent encoder-decoder simultaneously with a discriminative classifier that makes the prior approximate the posterior. Experiments show that our model achieves better results.",
"title": ""
},
{
"docid": "95f1862369f279f20fc1fb10b8b41ea8",
"text": "This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Intrusion Detection in Wireless Ad-Hoc Networks / editors, Nabendu Chaki and Rituparna Chaki. Includes bibliographical references and index. Contents: Preface ix; About the Editors xi; Contributors xiii; Chapter 1, Introduction, 1, Novarun Deb, Manali Chakraborty, and Nabendu Chaki; Chapter 2, Architecture and Organization Issues, 43, Manali Chakraborty, Novarun Deb, Debdutta Barman Roy, and Rituparna Chaki; Chapter 3, Routing for …",
"title": ""
},
{
"docid": "77773ddeb723473343334694ccd7e42b",
"text": "The increasing installation of distributed energy resources during the last years has lead to a fundamental change of the power system structure. As a consequence, utility operators are faced with new challenges in grid planning and operation. New and intelligent approaches - like smart grids - show promising results in increasing the hosting capacity for distributed and renewable resources. Standardized automation, control, and communication systems are important keys to realize such intelligent methods. In this paper, a standard-based control approach for distributed energy resources is introduced and implemented. It uses the IEC 61850 interoperability approach as well as the IEC 61499 reference model for distributed automation. Elementary implementation guidelines are provided to handle the hierarchical architecture of distributed control applications. In order to show the advantages of the proposed approach, a simulation example and a laboratory test are demonstrated using a prototypical open-source-based implementation.",
"title": ""
},
{
"docid": "55ce1bccc3d7b71aab416a82b7c3edf9",
"text": "Hypervisors use software switches to steer packets to and from virtual machines (VMs). These switches frequently need upgrading and customization—to support new protocol headers or encapsulations for tunneling and overlays, to improve measurement and debugging features, and even to add middlebox-like functions. Software switches are typically based on a large body of code, including kernel code, and changing the switch is a formidable undertaking requiring domain mastery of network protocol design and developing, testing, and maintaining a large, complex codebase. Changing how a software switch forwards packets should not require intimate knowledge of its implementation. Instead, it should be possible to specify how packets are processed and forwarded in a high-level domain-specific language (DSL) such as P4, and compiled to run on a software switch. We present PISCES, a software switch derived from Open vSwitch (OVS), a hard-wired hypervisor switch, whose behavior is customized using P4. PISCES is not hard-wired to specific protocols; this independence makes it easy to add new features. We also show how the compiler can analyze the high-level specification to optimize forwarding performance. Our evaluation shows that PISCES performs comparably to OVS and that PISCES programs are about 40 times shorter than equivalent changes to OVS source code.",
"title": ""
}
] |
scidocsrr
|
0599c7ce355ec7246b139a7bab39d91e
|
Real-Time Bidding Benchmarking with iPinYou Dataset
|
[
{
"docid": "d8982dd146a28c7d2779c781f7110ed5",
"text": "We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under that budget. We cast the budget optimization problem as a Markov Decision Process (MDP) with censored observations, and propose a learning algorithm based on the wellknown Kaplan-Meier or product-limit estimator. We validate the performance of this algorithm by comparing it to several others on a large set of search auction data from Microsoft adCenter, demonstrating fast convergence to optimal performance.",
"title": ""
},
{
"docid": "c77fad43abe34ecb0a451a3b0b5d684e",
"text": "Search engine click logs provide an invaluable source of relevance information, but this information is biased. A key source of bias is presentation order: the probability of click is influenced by a document's position in the results page. This paper focuses on explaining that bias, modelling how probability of click depends on position. We propose four simple hypotheses about how position bias might arise. We carry out a large data-gathering effort, where we perturb the ranking of a major search engine, to see how clicks are affected. We then explore which of the four hypotheses best explains the real-world position effects, and compare these to a simple logistic regression model. The data are not well explained by simple position models, where some users click indiscriminately on rank 1 or there is a simple decay of attention over ranks. A 'cascade' model, where users view results from top to bottom and leave as soon as they see a worthwhile document, is our best explanation for position bias in early ranks.",
"title": ""
},
{
"docid": "3f90af944ed7603fa7bbe8780239116a",
"text": "Display advertising has been a significant source of revenue for publishers and ad networks in online advertising ecosystem. One important business model in online display advertising is Ad Exchange marketplace, also called non-guaranteed delivery (NGD), in which advertisers buy targeted page views and audiences on a spot market through real-time auction. In this paper, we describe a bid landscape forecasting system in NGD marketplace for any advertiser campaign specified by a variety of targeting attributes. In the system, the impressions that satisfy the campaign targeting attributes are partitioned into multiple mutually exclusive samples. Each sample is one unique combination of quantified attribute values. We develop a divide-and-conquer approach that breaks down the campaign-level forecasting problem. First, utilizing a novel star-tree data structure, we forecast the bid for each sample using non-linear regression by gradient boosting decision trees. Then we employ a mixture-of-log-normal model to generate campaign-level bid distribution based on the sample-level forecasted distributions. The experiment results of a system developed with our approach show that it can accurately forecast the bid distributions for various campaigns running on the world's largest NGD advertising exchange system, outperforming two baseline methods in term of forecasting errors.",
"title": ""
}
] |
[
{
"docid": "6ad07075bdeff6e662b3259ba39635be",
"text": "We discuss a new deblurring problems in this paper. Focus measurements play a fundamental role in image processing techniques. Most traditional methods neglect spatial information in the frequency domain. Therefore, this study analyzed image data in the frequency domain to determine the value of spatial information. but instead misleading noise reduction results . We found that the local feature is not always a guide for noise reduction. This finding leads to a new method to measure the image edges in focus deblurring. We employed an all-in-focus measure in the frequency domain, based on the energy level of frequency components. We also used a multi-circle enhancement model to analyze this spatial information to provide a more accurate method for measuring images. We compared our results with those using other methods in similar studies. Findings demonstrate the effectiveness of our new method.",
"title": ""
},
{
"docid": "5c0f462ef605581c8ec52acf287b1127",
"text": "This paper presents SVF, a tool that enables scalable and precise interprocedural Static Value-Flow analysis for C programs by leveraging recent advances in sparse analysis. SVF, which is fully implemented in LLVM, allows value-flow construction and pointer analysis to be performed in an iterative manner, thereby providing increasingly improved precision for both. SVF accepts points- to information generated by any pointer analysis (e.g., Andersen’s analysis) and constructs an interprocedural memory SSA form, in which the def-use chains of both top-level and address-taken variables are captured. Such value-flows can be subsequently exploited to support various forms of program analysis or enable more precise pointer analysis (e.g., flow-sensitive analysis) to be performed sparsely. By dividing a pointer analysis into three loosely coupled components: Graph, Rules and Solver, SVF provides an extensible interface for users to write their own solutions easily. SVF is publicly available at http://unsw-corg.github.io/SVF.",
"title": ""
},
{
"docid": "824b0e8a66699965899169738df7caa9",
"text": "Much recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. In this paper we investigate whether this direct approach succeeds due to, or despite, the fact that it avoids the explicit representation of high-level information. We propose a method of incorporating high-level concepts into the successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. We achieve the best reported results on both image captioning and VQA on several benchmark datasets, and provide an analysis of the value of explicit high-level concepts in V2L problems.",
"title": ""
},
{
"docid": "2dcfeee8dc8607578e00c8e73079f4ec",
"text": "A new algorithm is proposed for efficient stereo and novel view synthesis. Given the video streams acquired by two synchronized cameras the proposed algorithm synthesises images from a virtual camera in arbitrary position near the physical cameras. The new technique is based on an improved, dynamic-programming, stereo algorithm for efficient novel view generation. The two main contributions of this paper are: (i) a new four state matching graph for dense stereo dynamic programming, that supports accurate occlusion labelling; (ii) a compact geometric derivation for novel view synthesis by direct projection of the minimum cost surface. Furthermore, the paper presents an algorithm for the temporal maintenance of a background model to enhance the rendering of occlusions and reduce temporal artefacts (flicker); and a cost aggregation algorithm that acts directly in the three-dimensional matching cost space. The proposed algorithm has been designed to work with input images with large disparity range, a common practical situation. The enhanced occlusion handling capabilities of the new dynamic programming algorithm are evaluated against those of the most powerful state-of-the-art dynamic programming and graph-cut techniques. Four-state DP is also evaluated against the disparity-based Middlebury error metrics and its performance found to be amongst the best of the efficient algorithms. A number of examples demonstrate the robustness of four-state DP to artefacts in stereo video streams. This includes demonstrations of cyclopean view synthesis in extended conversational sequences, synthesis from a freely translating virtual camera and, finally, basic 3D scene editing.",
"title": ""
},
{
"docid": "5673fc81ba9a1d26531bcf7a1572e873",
"text": "Spatio-temporal channel information obtained via channel sounding is invaluable for implementing equalizers, multi-antenna systems, and dynamic modulation schemes in next-generation wireless systems. The most straightforward means of performing channel measurements is in the frequency domain using a vector network analyzer (VNA). However, the high cost of VNAs often leads engineers to seek more economical solutions by measuring the wireless channel in the time domain. The bandwidth compression of the sliding correlator channel sounder makes it the preferred means of performing time-domain channel measurements.",
"title": ""
},
{
"docid": "a73da9191651ae5d0330d6f64f838f67",
"text": "Language selection (or control) refers to the cognitive mechanism that controls which language to use at a given moment and context. It allows bilinguals to selectively communicate in one target language while minimizing the interferences from the nontarget language. Previous studies have suggested the participation in language control of different brain areas. However, the question remains whether the selection of one language among others relies on a language-specific neural module or general executive regions that also allow switching between different competing behavioral responses including the switching between various linguistic registers. In this functional magnetic resonance imaging study, we investigated the neural correlates of language selection processes in German-French bilingual subjects during picture naming in different monolingual and bilingual selection contexts. We show that naming in the first language in the bilingual context (compared with monolingual contexts) increased activation in the left caudate and anterior cingulate cortex. Furthermore, the activation of these areas is even more extended when the subjects are using a second weaker language. These findings show that language control processes engaged in contexts during which both languages must remain active recruit the left caudate and the anterior cingulate cortex (ACC) in a manner that can be distinguished from areas engaged in intralanguage task switching.",
"title": ""
},
{
"docid": "2c289744ea8ae9d8f0c6ce4ba356b6cb",
"text": "The mission of the IPTS is to provide customer-driven support to the EU policy-making process by researching science-based responses to policy challenges that have both a socioeconomic and a scientific or technological dimension. Legal Notice Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use which might be made of this publication. (*) Certain mobile telephone operators do not allow access to 00 800 numbers or these calls may be billed.",
"title": ""
},
{
"docid": "7ca43cfa9af9e40a5b53c60a2b2fb67f",
"text": "In this paper we have proposed a control technique for the automatic generation control of multi generating power unit of the interconnected power system. This technique established the relationship between the economic load dispatch and load forecasting mechanism to the classical concepts of the load frequency control (LFC). The LFC system monitors to keep the power system frequency at nominal value, generator output according to the load demand and net interchange scheduled tie line power flows within prescribed limit among the different control area of the power system. Due to relatively fast area load demand fluctuations and accordingly slow response of instantaneous estimate of area control error (ACE), we need some load forecasting technique for better dynamic system response as well as improved & effective load frequency control to the power system. Load prediction technique has been accomplished using the klaman filter prediction recursive algorithms and a bank of hourly predicted load data is obtained and then the concepts of 5 minute look ahead forecasting technique is applied and finally total load is shared among the different generating units according to the calculation of economic load dispatch via participation factor’s. Results and Discussion section of this paper of simulated interconnected system’s graphs support this new technique wisely.",
"title": ""
},
{
"docid": "d7c236983c54213f17a0d8db886d5f2f",
"text": "Traffic light detection is an important system because it can alert driver on upcoming traffic light so that he/she can anticipate a head of time. In this paper we described our work on detecting traffic light color using machine learning approach. Using HSV color representation, our approach is to extract features based on an area of X×X pixels. Traffic light color model is then created by applying a learning algorithm on a set of examples of features representing pixels of traffic and non-traffic light colors. The learned model is then used to classify whether an area of pixels contains traffic light color or not. Evaluation of this approach reveals that it significantly improves the detection performance over the one based on value-range color segmentation technique.",
"title": ""
},
{
"docid": "4b18d2665f1bc6e9576237d88e15c74e",
"text": "Beta Regression, an extension of generalized linear models, can estimate the effect of explanatory variables on data falling within the (0,1) interval. Recent developments in Beta Regression theory extend the support interval to now include 0 and 1. The %Beta_Regression macro is updated to now allow for Zero-One Inflated Beta Regression.",
"title": ""
},
{
"docid": "324d5ad29582bc7924fa80d77f0b6c0d",
"text": "We propose a method to design linear deformation subspaces, unifying linear blend skinning and generalized barycentric coordinates. Deformation subspaces cut down the time complexity of variational shape deformation methods and physics-based animation (reduced-order physics). Our subspaces feature many desirable properties: interpolation, smoothness, shape-awareness, locality, and both constant and linear precision. We achieve these by minimizing a quadratic deformation energy, built via a discrete Laplacian inducing linear precision on the domain boundary. Our main advantage is speed: subspace bases are solutions to a sparse linear system, computed interactively even for generously tessellated domains. Users may seamlessly switch between applying transformations at handles and editing the subspace by adding, removing or relocating control handles. The combination of fast computation and good properties means that designing the right subspace is now just as creative as manipulating handles. This paradigm shift in handle-based deformation opens new opportunities to explore the space of shape deformations.",
"title": ""
},
{
"docid": "cdb87a9db48b78e193d9229282bd3b67",
"text": "While large-scale automatic grading of student programs for correctness is widespread, less effort has focused on automating feedback for good programming style:} the tasteful use of language features and idioms to produce code that is not only correct, but also concise, elegant, and revealing of design intent. We hypothesize that with a large enough (MOOC-sized) corpus of submissions to a given programming problem, we can observe a range of stylistic mastery from naïve to expert, and many points in between, and that we can exploit this continuum to automatically provide hints to learners for improving their code style based on the key stylistic differences between a given learner's submission and a submission that is stylistically slightly better. We are developing a methodology for analyzing and doing feature engineering on differences between submissions, and for learning from instructor-provided feedback as to which hints are most relevant. We describe the techniques used to do this in our prototype, which will be deployed in a residential software engineering course as an alpha test prior to deploying in a MOOC later this year.",
"title": ""
},
{
"docid": "4bba56323edd0d2bc1baca07c1cee14e",
"text": "In this paper, we propose Personalized Markov Embedding (PME), a next-song recommendation strategy for online karaoke users. By modeling the sequential singing behavior, we first embed songs and users into a Euclidean space in which distances between songs and users reflect the strength of their relationships. Then, given each user's last song, we can generate personalized recommendations by ranking the candidate songs according to the embedding. Moreover, PME can be trained without any requirement of content information. Finally, we perform an experimental evaluation on a real world data set provided by ihou.com which is an online karaoke website launched by iFLYTEK, and the results clearly demonstrate the effectiveness of PME.",
"title": ""
},
{
"docid": "9715187b184dc900c7b6f35fd6091665",
"text": "In order to recover and fully charge batteries in electric vehicles, smart battery chargers should not only work under different loading conditions and output voltage regulations (close to zero to 1.5 times the nominal output voltage), but also provide a ripple-free charging current for battery packs and a noise-free environment for the battery management system (BMS). In this paper, an advanced LLC design procedure is investigated to provide advantageous extreme regulation and eliminate detrimental burst mode operation. A modified, special LLC tank driven by both variable frequency and phase shift proves to be a successful solution to achieve all the regulation requirements for battery charging (from recovery, bulk, equalization, to finish). The proposed solution can eliminate the negative impact of burst mode noises on the BMS, provide a ripple-free charging current for batteries in different states of charge, reduce the switching frequency variation, and facilitate the EMI filter and magnetic components designs procedure. In order to fully consider the characteristics of the full bridge LLC resonant converter, especially the output voltage regulation range and soft transitions of the MOSFETs in the fixed frequency phase shift mode, a new set of analytical equations is obtained for the LLC resonant converter with consideration of separated primary and secondary leakage inductances of the high frequency transformer. Based on the proposed strategy and analytical equations, multivariate statistical design methodology is employed to design and optimize a 120 VDC, 3-kW battery charger. The experimental results exhibit the excellent performance of the resulting converter, which has a peak efficiency of 96.5% with extreme regulation capability.",
"title": ""
},
{
"docid": "ff418efbdd2381692f01b5cdc94143d5",
"text": "The U.S. legislation at both the federal and state levels mandates certain organizations to inform customers about information uses and disclosures. Such disclosures are typically accomplished through privacy policies, both online and offline. Unfortunately, the policies are not easy to comprehend, and, as a result, online consumers frequently do not read the policies provided at healthcare Web sites. Because these policies are often required by law, they should be clear so that consumers are likely to read them and to ensure that consumers can comprehend these policies. This, in turn, may increase consumer trust and encourage consumers to feel more comfortable when interacting with online organizations. In this paper, we present results of an empirical study, involving 993 Internet users, which compared various ways to present privacy policy information to online consumers. Our findings suggest that users perceive typical, paragraph-form policies to be more secure than other forms of policy representation, yet user comprehension of such paragraph-form policies is poor as compared to other policy representations. The results of this study can help managers create more trustworthy policies, aid compliance officers in detecting deceptive organizations, and serve legislative bodies by providing tangible evidence as to the ineffectiveness of current privacy policies.",
"title": ""
},
{
"docid": "459cf6ad034a332a44815a922e21b27f",
"text": "Distributed Denial of Service (DDoS) attacks generate enormous packets by a large number of agents and can easily exhaust the computing and communication resources of a victim within a short period of time. In this paper, we propose a method for proactive detection of DDoS attack by exploiting its architecture which consists of the selection of handlers and agents, the communication and compromise, and attack. We look into the procedures of DDoS attack and then select variables based on these features. After that, we perform cluster analysis for proactive detection of the attack. We experiment with 2000 DARPA Intrusion Detection Scenario Specific Data Set in order to evaluate our method. The results show that each phase of the attack scenario is partitioned well and we can detect precursors of DDoS attack as well as the attack itself. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "36a1e7716d6cdac89911ca0b52c019ff",
"text": "Some recent sequence-to-sequence models like the Transformer (Vaswani et al., 2017) can score all output posiQons in parallel. We propose a simple algorithmic technique that exploits this property to generate mulQple tokens in parallel at decoding Qme with liTle to no loss in quality. Our fastest models exhibit wall-clock speedups of up to 4x over standard greedy decoding on the tasks of machine translaQon and image super-resoluQon.",
"title": ""
},
{
"docid": "c99b283a66d3f9a6afeaf2c74338937c",
"text": "If we want to describe the action of someone who is looking out a window for an extended time, how do we choose between the words gazing, staring, and peering? What exactly is the difference between an rgument, a dispute, and a row? In this paper, we describe our research in progress on the problem of lexical choice and the representations of world knowledge and of lexical structure and meaning that the task requires. In particular, we wish to deal with nuances and subtleties of denotation and connotation--shades of meaning and of style--such as those illustrated by the examples above. We are studying the task in two related contexts: machine translation, and the generation of multilingual text from a single representation of content. This work brings together several elements of our earlier research: unilingual lexical choice (Miezitis 1988); multilingual generation (R6sner and Stede 1992a,b); representing and preserving stylistic nuances in translation (DiMarco 1990; DiMarco and Hirst 1990; Mah 1991); and, more generally, analyzing and generating stylistic nuances in text (DiMarco and Hirst 1993; DiMarco et al 1992; MakutaGiluk 1991; Maknta-Giluk and DiMarco 1993; BenHassine 1992; Green 1992a,b, 1993; Hoyt forthcoming). In the present paper, we concentrate on issues in lexical representation. We describe a methodology, based on dictionary usage notes, that we are using to discover the dimensions along which similar words can be differentiated, and we discuss a two-part representation for lexical differentiation. (Our related work on lexical choice itself and its integration with other components of text generation is discussed by Stede (1993a,b, forthcoming).) aspects of their usage. 
1 Such differences can include the collocational constraints of the words (e.g., groundhog and woodchuck denote the same set of animals; yet Groundhog Day, * Woodchuck Day) and the stylistic and interpersonal connotations of the words (e.g., die, pass away, snuff it; slim, skinny; police oI~icer, cop, pig). In addition, many groups of words are plesionyms (Cruse 1986)--that is, nearly synonymous; forest and woods, for example, or stared and gazed, or the German words einschrauben, festschrauben, and festziehen. ~ The notions of synonymy and plesionymy can be made more precise by means of a notion of semantic distance (such as that invoked by Hirst (1987), for example, lexical disambiguation); but this is troublesome to formalize satisfactorily. In this paper it will suffice to rely on an intuitive understanding. We consider two dimensions along which words can vary: semantic and stylistic, or, equivalently, denotative and connotative. If two words differ semantically (e.g., mist, fog), then substituting one for the other in a sentence or discourse will not necessarily preserve truth conditions; the denotations are not identical. If two words differ (solely) in stylistic features (e.g., frugal, stingy), then intersubstitution does preserve truth conditions, but the connotation--the stylistic and interpersonal effect of the sentence--is changed, s Many of the semantic distinctions between plesionyms do not lend themselves to neat, taxonomic differentiation; rather, they are fuzzy, with plesionyms often having an area of overlap. For example, the boundary between forest and wood ’tract of trees’ is vague, and there are some situations in which either word might be equally appropriate. 4",
"title": ""
}
] |
scidocsrr
|
ca4c6ab34b338b397d873f2b99443095
|
Automatic Lymphocyte Detection in H&E Images with Deep Neural Networks
|
[
{
"docid": "b27abf20ae53f963f54e8b5aab03213c",
"text": "We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm.",
"title": ""
}
] |
[
{
"docid": "f052fae696370910cc59f48552ddd889",
"text": "Decisions involve many intangibles that need to be traded off. To do that, they have to be measured along side tangibles whose measurements must also be evaluated as to, how well, they serve the objectives of the decision maker. The Analytic Hierarchy Process (AHP) is a theory of measurement through pairwise comparisons and relies on the judgements of experts to derive priority scales. It is these scales that measure intangibles in relative terms. The comparisons are made using a scale of absolute judgements that represents, how much more, one element dominates another with respect to a given attribute. The judgements may be inconsistent, and how to measure inconsistency and improve the judgements, when possible to obtain better consistency is a concern of the AHP. The derived priority scales are synthesised by multiplying them by the priority of their parent nodes and adding for all such nodes. An illustration is included.",
"title": ""
},
{
"docid": "0d8c38444954a0003117e7334195cb00",
"text": "Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.",
"title": ""
},
{
"docid": "363dc30dbf42d5309366ec109c445c48",
"text": "There has been significant recent interest in fast imaging with sparse sampling. Conventional imaging methods are based on Shannon-Nyquist sampling theory. As such, the number of required samples often increases exponentially with the dimensionality of the image, which limits achievable resolution in high-dimensional scenarios. The partially-separable function (PSF) model has previously been proposed to enable sparse data sampling in this context. Existing methods to leverage PSF structure utilize tailored data sampling strategies, which enable a specialized two-step reconstruction procedure. This work formulates the PSF reconstruction problem using the matrix-recovery framework. The explicit matrix formulation provides new opportunities for data acquisition and image reconstruction with rank constraints. Theoretical results from the emerging field of low-rank matrix recovery (which generalizes theory from sparse-vector recovery) and our empirical results illustrate the potential of this new approach.",
"title": ""
},
{
"docid": "10d8bbea398444a3fb6e09c4def01172",
"text": "INTRODUCTION\nRecent years have witnessed a growing interest in improving bus safety operations worldwide. While in the United States buses are considered relatively safe, the number of bus accidents is far from being negligible, triggering the introduction of the Motor-coach Enhanced Safety Act of 2011.\n\n\nMETHOD\nThe current study investigates the underlying risk factors of bus accident severity in the United States by estimating a generalized ordered logit model. Data for the analysis are retrieved from the General Estimates System (GES) database for the years 2005-2009.\n\n\nRESULTS\nResults show that accident severity increases: (i) for young bus drivers under the age of 25; (ii) for drivers beyond the age of 55, and most prominently for drivers over 65 years old; (iii) for female drivers; (iv) for very high (over 65 mph) and very low (under 20 mph) speed limits; (v) at intersections; (vi) because of inattentive and risky driving.",
"title": ""
},
{
"docid": "849fc01a49b95b4ef6248f6cf1b89639",
"text": "State-based testing is frequently used in software testing. Test data generation is one of the key issues in software testing. A properly generated test suite may not only locate the errors in a software system, but also help in reducing the high cost associated with software testing. It is often desired that test data in the form of test sequences within a test suite can be automatically generated to achieve required test coverage. This paper proposes an Ant Colony Optimization approach to test data generation for the state-based software testing. Keywords— Software testing, ant colony optimization, UML.",
"title": ""
},
{
"docid": "f70b85ef3d070d1d342a958f5d94fb72",
"text": "A joint torque estimation technique utilizing the existing structural elasticity of robotic joints with harmonic drive transmission is proposed in this paper. Joint torque sensing is one of the key techniques for achieving high-performance robot control, especially for robots working in unstructured environments. The proposed joint torque estimation technique uses link-side position measurement along with a proposed harmonic derive model to realize stiff and sensitive torque estimation. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor, and the results have attested the effectiveness of the proposed torque estimation technique.",
"title": ""
},
{
"docid": "f24bba45a1905cd4658d52bc7e9ee046",
"text": "In continuous action domains, standard deep reinforcement learning algorithms like DDPG suffer from inefficient exploration when facing sparse or deceptive reward problems. Conversely, evolutionary and developmental methods focusing on exploration like Novelty Search, QualityDiversity or Goal Exploration Processes explore more robustly but are less efficient at fine-tuning policies using gradient-descent. In this paper, we present the GEP-PG approach, taking the best of both worlds by sequentially combining a Goal Exploration Process and two variants of DDPG. We study the learning performance of these components and their combination on a low dimensional deceptive reward problem and on the larger Half-Cheetah benchmark. We show that DDPG fails on the former and that GEP-PG improves over the best DDPG variant in both environments. Supplementary videos and discussion can be found at frama.link/gep_pg, the code at github.com/flowersteam/geppg.",
"title": ""
},
{
"docid": "59291cb1c13ab274f06b619698784e23",
"text": "We present a new class of Byzantine-tolerant State Machine Replication protocols for asynchronous environments that we term Byzantine Chain Replication. We demonstrate two implementations that present different trade-offs between performance and security, and compare these with related work. Leveraging an external reconfiguration service, these protocols are not based on Byzantine consensus, do not require majoritybased quorums during normal operation, and the set of replicas is easy to reconfigure. One of the implementations is instantiated with t+ 1 replicas to tolerate t failures and is useful in situations where perimeter security makes malicious attacks unlikely. Applied to in-memory BerkeleyDB replication, it supports 20,000 transactions per second while a fully Byzantine implementation supports 12,000 transactions per second—about 70% of the throughput of a non-replicated database.",
"title": ""
},
{
"docid": "c399a885345466505cfbaf8c175533b7",
"text": "Science is going through two rapidly changing phenomena: one is the increasing capabilities of the computers and software tools from terabytes to petabytes and beyond, and the other is the advancement in high-throughput molecular biology producing piles of data related to genomes, transcriptomes, proteomes, metabolomes, interactomes, and so on. Biology has become a data intensive science and as a consequence biology and computer science have become complementary to each other bridged by other branches of science such as statistics, mathematics, physics, and chemistry. The combination of versatile knowledge has caused the advent of big-data biology, network biology, and other new branches of biology. Network biology for instance facilitates the system-level understanding of the cell or cellular components and subprocesses. It is often also referred to as systems biology. The purpose of this field is to understand organisms or cells as a whole at various levels of functions and mechanisms. Systems biology is now facing the challenges of analyzing big molecular biological data and huge biological networks. This review gives an overview of the progress in big-data biology, and data handling and also introduces some applications of networks and multivariate analysis in systems biology.",
"title": ""
},
{
"docid": "1c3a87fd2e10a9799e7c0a79be635816",
"text": "According to Network Effect literature network externalities lead to market failure due to Pareto-inferior coordination results. We show that the assumptions and simplifications implicitly used for modeling standardization processes fail to explain the real-world variety of diffusion courses in today’s dynamic IT markets and derive requirements for a more general model of network effects. We argue that Agent-based Computational Economics provides a solid basis for meeting these requirements by integrating evolutionary models from Game Theory and Institutional Economics.",
"title": ""
},
{
"docid": "cf9fe52efd734c536d0a7daaf59a9bcd",
"text": "Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.",
"title": ""
},
{
"docid": "38f85a10e8f8b815974f5e42386b1fa3",
"text": "Because Facebook is available on hundreds of millions of desktop and mobile computing platforms around the world and because it is available on many different kinds of platforms (from desktops and laptops running Windows, Unix, or OS X to hand held devices running iOS, Android, or Windows Phone), it would seem to be the perfect place to conduct steganography. On Facebook, information hidden in image files will be further obscured within the millions of pictures and other images posted and transmitted daily. Facebook is known to alter and compress uploaded images so they use minimum space and bandwidth when displayed on Facebook pages. The compression process generally disrupts attempts to use Facebook for image steganography. This paper explores a method to minimize the disruption so JPEG images can be used as steganography carriers on Facebook.",
"title": ""
},
{
"docid": "d67a93dde102bdcd2dd1a72c80aacd6b",
"text": "Network intrusion detection systems have become a standard component in security infrastructures. Unfortunately, current systems are poor at detecting novel attacks without an unacceptable level of false alarms. We propose that the solution to this problem is the application of an ensemble of data mining techniques which can be applied to network connection data in an offline environment, augmenting existing real-time sensors. In this paper, we expand on our motivation, particularly with regard to running in an offline environment, and our interest in multisensor and multimethod correlation. We then review existing systems, from commercial systems, to research based intrusion detection systems. Next we survey the state of the art in the area. Standard datasets and feature extraction turned out to be more important than we had initially anticipated, so each can be found under its own heading. Next, we review the actual data mining methods that have been proposed or implemented. We conclude by summarizing the open problems in this area and proposing a new research project to answer some of these open problems.",
"title": ""
},
{
"docid": "355d7eaf0841a939aa6bef1ceced1187",
"text": "Volcanic eruptions are an important natural cause of climate change on many timescales. A new capability to predict the climatic response to a large tropical eruption for the succeeding 2 years will prove valuable to society. In addition, to detect and attribute anthropogenic influences on climate, including effects of greenhouse gases, aerosols, and ozone-depleting chemicals, it is crucial to quantify the natural fluctuations so as to separate them from anthropogenic fluctuations in the climate record. Studying the responses of climate to volcanic eruptions also helps us to better understand important radiative and dynamical processes that respond in the climate system to both natural and anthropogenic forcings. Furthermore, modeling the effects of volcanic eruptions helps us to improve climate models that are needed to study anthropogenic effects. Large volcanic eruptions inject sulfur gases into the stratosphere, which convert to sulfate aerosols with an e-folding residence time of about 1 year. Large ash particles fall out much quicker. The radiative and chemical effects of this aerosol cloud produce responses in the climate system. By scattering some solar radiation back to space, the aerosols cool the surface, but by absorbing both solar and terrestrial radiation, the aerosol layer heats the stratosphere. For a tropical eruption this heating is larger in the tropics than in the high latitudes, producing an enhanced pole-to-equator temperature gradient, especially in winter. In the Northern Hemisphere winter this enhanced gradient produces a stronger polar vortex, and this stronger jet stream produces a characteristic stationary wave pattern of tropospheric circulation, resulting in winter warming of Northern Hemisphere continents. This indirect advective effect on temperature is stronger than the radiative cooling effect that dominates at lower latitudes and in the summer. 
The volcanic aerosols also serve as surfaces for heterogeneous chemical reactions that destroy stratospheric ozone, which lowers ultraviolet absorption and reduces the radiative heating in the lower stratosphere, but the net effect is still heating. Because this chemical effect depends on the presence of anthropogenic chlorine, it has only become important in recent decades. For a few days after an eruption the amplitude of the diurnal cycle of surface air temperature is reduced under the cloud. On a much longer timescale, volcanic effects played a large role in interdecadal climate change of the Little Ice Age. There is no perfect index of past volcanism, but more ice cores from Greenland and Antarctica will improve the record. There is no evidence that volcanic eruptions produce El Niño events, but the climatic effects of El Niño and volcanic eruptions must be separated to understand the climatic response to each.",
"title": ""
},
{
"docid": "87748c1fc9dc379c2225c92d2218e278",
"text": "If components (denoted by horizontal and vertical axis in Figure 2a) are correlated, then samples (points in Figure 2a) are in a non-spherical shape, then eigenvalues are mutually different. Hence correlation leads to non-uniformity of eigenvalues. Since the eigenvectors are orthogonal by design, it suffices to focus on eigenvalues only. To reduce correlation, we encourage the eigenvalues to be uniform (Figure 2b). Rotation does not affect eigenvalues or uncorrelation. For a component matrix A and rotation matrix R, A>A equals to A>R>RA and they have the same eigendecomposition (say UEU>). Ensuring the eigenvalue matrix E is close to identity implies the latent components are rotations of the orthonormal (and hence uncorrelated) eigenvectors.",
"title": ""
},
{
"docid": "0caa6d4623fb0414facb76ccd8eaa235",
"text": "Because of large amounts of unstructured text data generated on the Internet, text mining is believed to have high commercial value. Text mining is the process of extracting previously unknown, understandable, potential and practical patterns or knowledge from the collection of text data. This paper introduces the research status of text mining. Then several general models are described to know text mining in the overall perspective. At last we classify text mining work as text categorization, text clustering, association rule extraction and trend analysis according to applications.",
"title": ""
},
{
"docid": "62388d506e6f9b500a6395ae82543d57",
"text": "BACKGROUND\nThe shapes of the eyebrow and upper eyelid are distinctive facial landmarks. In cosmetic and reconstructive procedures, maintenance of the anatomical relations of these landmarks ensures a pleasing postoperative appearance.\n\n\nOBJECTIVES\nThe authors establish normal values for eyelid anthropometry in an Indian population.\n\n\nMETHOD\nThis prospective study included 216 patients between the ages of 16 and 60 years, divided into three groups by age (Groups A to C: 16 to 30 years, 31 to 45 years, 46 to 60 years, respectively) and sex. All patients were photographed from a frontal view, with measurements taken from these photographs. Parameters included the distance between the medial canthus and the lateral canthus (ie, the width of the palpebral fissure), the distance between the open upper eyelid margin and the lower eyelid margin, (ie, the vertical dimension of the palpebral fissure), the intercanthal distance, the interpupillary distance, and the height of the open upper lid. All measured values were analyzed by independent t-test.\n\n\nRESULTS\nThere was a significant increase in palpebral fissure from Group B to Group C. A significant increase was also observed in intercanthal distance as age progressed beyond 45 years. There was a significant decrease in the interpupillary distance as age increased-from Group A to Group B and from Group B to Group C-and a similar increase in eyelid height in that age progression.\n\n\nCONCLUSIONS\nThe anatomy of the Indian population is distinct in that the palpebral fissure in men is less than that in women. It appears that changes in the eye become more pronounced after 45 years, including an increase in palpebral fissure, intercanthal distance, and height of the upper lid, along with a decrease in interpupillary distance.",
"title": ""
},
{
"docid": "39710768ed8ec899e412cccae7e7d262",
"text": "Traditional classification algorithms assume that training and test data come from similar distributions. This assumption is violated in adversarial settings, where malicious actors modify instances to evade detection. A number of custom methods have been developed for both adversarial evasion attacks and robust learning. We propose the first systematic and general-purpose retraining framework which can: a) boost robustness of an arbitrary learning algorithm, in the face of b) a broader class of adversarial models than any prior methods. We show that, under natural conditions, the retraining framework minimizes an upper bound on optimal adversarial risk, and show how to extend this result to account for approximations of evasion attacks. Extensive experimental evaluation demonstrates that our retraining methods are nearly indistinguishable from state-of-the-art algorithms for optimizing adversarial risk, but are more general and far more scalable. The experiments also confirm that without retraining, our adversarial framework dramatically reduces the effectiveness of learning. In contrast, retraining significantly boosts robustness to evasion attacks without significantly compromising overall accuracy.",
"title": ""
},
{
"docid": "f5bb79e1f4d7ee7a23f9841078971d1c",
"text": "In the present paper we describe TectoMT, a multi-purpose open-source NLP framework. It allows for fast and efficient development of NLP applications by exploiting a wide range of software modules already integrated in TectoMT, such as tools for sentence segmentation, tokenization, morphological analysis, POS tagging, shallow and deep syntax parsing, named entity recognition, anaphora resolution, tree-to-tree translation, natural language generation, word-level alignment of parallel corpora, and other tasks. One of the most complex applications of TectoMT is the English-Czech machine translation system with transfer on deep syntactic (tectogrammatical) layer. Several modules are available also for other languages (German, Russian, Arabic). Where possible, modules are implemented in a language-independent way, so they can be reused in many applications.",
"title": ""
},
{
"docid": "11f404d45daeb02087383b9ea933457c",
"text": "Distributed Denial of Service (DDoS) flooding attacks are one of the biggest concerns for security professionals. DDoS flooding attacks are typically explicit attempts to disrupt legitimate users' access to services. Attackers usually gain access to a large number of computers by exploiting their vulnerabilities to set up attack armies (i.e., Botnets). Once an attack army has been set up, an attacker can invoke a coordinated, large-scale attack against one or more targets. Developing a comprehensive defense mechanism against identified and anticipated DDoS flooding attacks is a desired goal of the intrusion detection and prevention research community. However, the development of such a mechanism requires a comprehensive understanding of the problem and the techniques that have been used thus far in preventing, detecting, and responding to various DDoS flooding attacks. In this paper, we explore the scope of the DDoS flooding attack problem and attempts to combat it. We categorize the DDoS flooding attacks and classify existing countermeasures based on where and when they prevent, detect, and respond to the DDoS flooding attacks. Moreover, we highlight the need for a comprehensive distributed and collaborative defense approach. Our primary intention for this work is to stimulate the research community into developing creative, effective, efficient, and comprehensive prevention, detection, and response mechanisms that address the DDoS flooding problem before, during and after an actual attack.",
"title": ""
}
] |
scidocsrr
|
d77f3fabbf9924d1baecc318e4a62e1e
|
The Wisdom of Nature : An Evolutionary Heuristic for Human Enhancement
|
[
{
"docid": "8adb07a99940383139f0d4ed32f68f7c",
"text": "The gene ASPM (abnormal spindle-like microcephaly associated) is a specific regulator of brain size, and its evolution in the lineage leading to Homo sapiens was driven by strong positive selection. Here, we show that one genetic variant of ASPM in humans arose merely about 5800 years ago and has since swept to high frequency under strong positive selection. These findings, especially the remarkably young age of the positively selected variant, suggest that the human brain is still undergoing rapid adaptive evolution.",
"title": ""
}
] |
[
{
"docid": "79c331cf08ebecf8de5809dfd6ab74d9",
"text": "Geographical Information System (GIS) and Global Positioning System (GPS) technologies are expanding their traditional applications to embrace a stream of consumer-focused, location-based applications. Through an integration with handheld devices capable of wireless communication and mobile computing, a wide range of what might be generically referred to as \"Location-Based Services\" (LBS) may be offered to mobile users. A location-based service is able to provide targetted spatial information to mobile workers and consumers. These include utility location information, personal or asset tracking, concierge and routeguidance information, to name just a few of the possible LBS. The technologies and applications of LBS will play an ever increasingly important role in the modern, mobile, always-connected society. This paper endeavours to provide some background to the technology underlying location-based services and to discuss some issues related to developing and launching LBS. These include whether wireless mobile technologies are ready to support LBS, which mobile positioning technologies can be used and what are their shortcomings, and how GIS developers manipulate spatial information to generate appropriate map images on mobile devices (such as cell phones and PDAs). In addition the authors discuss such issues as interoperability, privacy protection and the market demand for LBS.",
"title": ""
},
{
"docid": "5e9cc7e7933f85b6cffe103c074105d4",
"text": "Substrate-integrated waveguides (SIWs) maintain the advantages of planar circuits (low loss, low profile, easy manufacturing, and integration in a planar circuit board) and improve the quality factor of filter resonators. Empty substrate-integrated waveguides (ESIWs) substantially reduce the insertion losses, because waves propagate through air instead of a lossy dielectric. The first ESIW used a simple tapering transition that cannot be used for thin substrates. A new transition has recently been proposed, which includes a taper also in the microstrip line, not only inside the ESIW, and so it can be used for all substrates, although measured return losses are only 13 dB. In this letter, the cited transition is improved by placing via holes that prevent undesired radiation, as well as two holes that help to ensure good accuracy in the mechanization of the input iris, thus allowing very good return losses (over 20 dB) in the measured results. A design procedure that allows the successful design of the proposed new transition is also provided. A back-to-back configuration of the improved new transition has been successfully manufactured and measured.",
"title": ""
},
{
"docid": "33eeb883ae070fdc1b5a1eb656bce6b9",
"text": "Traffic Congestion is one of many serious global problems in all great cities resulted from rapid urbanization which always exert negative externalities upon society. The solution of traffic congestion is highly geocentric and due to its heterogeneous nature, curbing congestion is one of the hard tasks for transport planners. It is not possible to suggest unique traffic congestion management framework which could be absolutely applied for every great cities. Conversely, it is quite feasible to develop a framework which could be used with or without minor adjustment to deal with congestion problem. So, the main aim of this paper is to prepare a traffic congestion mitigation framework which will be useful for urban planners, transport planners, civil engineers, transport policy makers, congestion management researchers who are directly or indirectly involved or willing to involve in the task of traffic congestion management. Literature review is the main source of information of this study. In this paper, firstly, traffic congestion is defined on the theoretical point of view and then the causes of traffic congestion are briefly described. After describing the causes, common management measures, using worldwide, are described and framework for supply side and demand side congestion management measures are prepared.",
"title": ""
},
{
"docid": "cb9f8b5dd48a490a15f9fc78af605b8b",
"text": "A novel online algorithm to segment multiple objects in a video sequence is proposed in this work. We develop the collaborative detection, tracking, and segmentation (CDTS) technique to extract multiple segment tracks accurately. First, we jointly use object detector and tracker to generate multiple bounding box tracks for objects. Second, we transform each bounding box into a pixel-wise segment, by employing the alternate shrinking and expansion (ASE) segmentation. Third, we refine the segment tracks, by detecting object disappearance and reappearance cases and merging overlapping segment tracks. Experimental results show that the proposed algorithm significantly surpasses the state-of-the-art conventional algorithms on benchmark datasets.",
"title": ""
},
{
"docid": "a7623185df940b128af6187d7d1e0b9c",
"text": "Inflammasomes are high-molecular-weight protein complexes that are formed in the cytosolic compartment in response to danger- or pathogen-associated molecular patterns. These complexes enable activation of an inflammatory protease caspase-1, leading to a cell death process called pyroptosis and to proteolytic cleavage and release of pro-inflammatory cytokines interleukin (IL)-1β and IL-18. Along with caspase-1, inflammasome components include an adaptor protein, ASC, and a sensor protein, which triggers the inflammasome assembly in response to a danger signal. The inflammasome sensor proteins are pattern recognition receptors belonging either to the NOD-like receptor (NLR) or to the AIM2-like receptor family. While the molecular agonists that induce inflammasome formation by AIM2 and by several other NLRs have been identified, it is not well understood how the NLR family member NLRP3 is activated. Given that NLRP3 activation is relevant to a range of human pathological conditions, significant attempts are being made to elucidate the molecular mechanism of this process. In this review, we summarize the current knowledge on the molecular events that lead to activation of the NLRP3 inflammasome in response to a range of K (+) efflux-inducing danger signals. We also comment on the reported involvement of cytosolic Ca (2+) fluxes on NLRP3 activation. We outline the recent advances in research on the physiological and pharmacological mechanisms of regulation of NLRP3 responses, and we point to several open questions regarding the current model of NLRP3 activation.",
"title": ""
},
{
"docid": "bc4d717db3b3470d7127590b8d165a5d",
"text": "In this paper, we develop a general formalism for describing the C++ programming language, and regular enough to cope with proposed extensions (such as concepts) for C++0x that affect its type system. Concepts are a mechanism for checking template arguments currently being developed to help cope with the massive use of templates in modern C++. The main challenges in developing a formalism for C++ are scoping, overriding, overloading, templates, specialization, and the C heritage exposed in the built-in types. Here, we primarily focus on templates and overloading.",
"title": ""
},
{
"docid": "d1069c06341e484e7f3b5ab7a4a49a2d",
"text": "In a \"nutrition transition\", the consumption of foods high in fats and sweeteners is increasing throughout the developing world. The transition, implicated in the rapid rise of obesity and diet-related chronic diseases worldwide, is rooted in the processes of globalization. Globalization affects the nature of agri-food systems, thereby altering the quantity, type, cost and desirability of foods available for consumption. Understanding the links between globalization and the nutrition transition is therefore necessary to help policy makers develop policies, including food policies, for addressing the global burden of chronic disease. While the subject has been much discussed, tracing the specific pathways between globalization and dietary change remains a challenge. To help address this challenge, this paper explores how one of the central mechanisms of globalization, the integration of the global marketplace, is affecting the specific diet patterns. Focusing on middle-income countries, it highlights the importance of three major processes of market integration: (I) production and trade of agricultural goods; (II) foreign direct investment in food processing and retailing; and (III) global food advertising and promotion. The paper reveals how specific policies implemented to advance the globalization agenda account in part for some recent trends in the global diet. Agricultural production and trade policies have enabled more vegetable oil consumption; policies on foreign direct investment have facilitated higher consumption of highly-processed foods, as has global food marketing. These dietary outcomes also reflect the socioeconomic and cultural context in which these policies are operating. 
An important finding is that the dynamic, competitive forces unleashed as a result of global market integration facilitates not only convergence in consumption habits (as is commonly assumed in the \"Coca-Colonization\" hypothesis), but adaptation to products targeted at different niche markets. This convergence-divergence duality raises the policy concern that globalization will exacerbate uneven dietary development between rich and poor. As high-income groups in developing countries accrue the benefits of a more dynamic marketplace, lower-income groups may well experience convergence towards poor quality obseogenic diets, as observed in western countries. Global economic policies concerning agriculture, trade, investment and marketing affect what the world eats. They are therefore also global food and health policies. Health policy makers should pay greater attention to these policies in order to address some of the structural causes of obesity and diet-related chronic diseases worldwide, especially among the groups of low socioeconomic status.",
"title": ""
},
{
"docid": "51bed6a9474603f79f44ebfc4815f33c",
"text": "The adoption of metamaterials in the development of terahertz (THz) antennas has led to tremendous progresses in the THz field. In this paper, a reconfigurable THz patch antenna based on graphene is presented, whose resonance frequency can be changed depending on the applied voltage. By using an array of split ring resonators (SRR) also made of graphene, both bandwidth and radiation properties are enhanced; it is found that both the resonance frequency and bandwidth change with the applied voltage.",
"title": ""
},
{
"docid": "17bd8497b30045267f77572c9bddb64f",
"text": "0007-6813/$ see front matter D 200 doi:10.1016/j.bushor.2004.11.006 * Corresponding author. E-mail addresses: cseelos@sscg.org jmair@iese.edu (J. Mair).",
"title": ""
},
{
"docid": "4c2c19b22607c2cc5ba2ebc8ca1c47dc",
"text": "We present our approach for robotic perception in cluttered scenes that led to winning the recent Amazon Robotics Challenge (ARC) 2017. Next to small objects with shiny and transparent surfaces, the biggest challenge of the 2017 competition was the introduction of unseen categories. In contrast to traditional approaches which require large collections of annotated data and many hours of training, the task here was to obtain a robust perception pipeline with only few minutes of data acquisition and training time. To that end, we present two strategies that we explored. One is a deep metric learning approach that works in three separate steps: semantic-agnostic boundary detection, patch classification and pixel-wise voting. The other is a fully-supervised semantic segmentation approach with efficient dataset collection. We conduct an extensive analysis of the two methods on our ARC 2017 dataset. Interestingly, only few examples of each class are sufficient to fine-tune even very deep convolutional neural networks for this specific task.",
"title": ""
},
{
"docid": "6f8e565aff657cbc1b65217d72ead3ab",
"text": "This paper explores patterns of adoption and use of information and communications technology (ICT) by small and medium sized enterprises (SMEs) in the southwest London and Thames Valley region of England. The paper presents preliminary results of a survey of around 400 SMEs drawn from four economically significant sectors in the region: food processing, transport and logistics, media and Internet services. The main objectives of the study were to explore ICT adoption and use patterns by SMEs, to identify factors enabling or inhibiting the successful adoption and use of ICT, and to explore the effectiveness of government policy mechanisms at national and regional levels. While our main result indicates a generally favourable attitude to ICT amongst the SMEs surveyed, it also suggests a failure to recognise ICT’s strategic potential. A surprising result was the overwhelming ignorance of regional, national and European Union wide policy initiatives to support SMEs. This strikes at the very heart of regional, national and European policy that have identified SMEs as requiring specific support mechanisms. Our findings from one of the UK’s most productive regions therefore have important implications for policy aimed at ICT adoption and use by SMEs.",
"title": ""
},
{
"docid": "0a65c096f91206c868f05bea9acc28fd",
"text": "This paper presents a review on recent developments in BLDC motor controllers and studies on four quadrant operation of BLDC drive along with active PFC. The main areas reviewed include Sensor-less control, Direct Torque Control (DTC), Fuzzy logic control, controller for four quadrant operation and active Power Factor Corrected (PFC) converter fed BLDC motor drive. A comprehensive study has been done on four quadrant operation and active PFC converter fed BLDC motor drive with simulation in MATLAB/SIMULINK. The proposed control algorithm for four quadrant operation detects the speed reversal requirement and changes the quadrant of operation accordingly. In PFC converter fed BLDC motor drive, a Boost converter working in continuous current mode is designed to improve the supply power factor.",
"title": ""
},
{
"docid": "10973f1a045d05084039f05e92578f9a",
"text": "Determination of credit portfolio loss distributions is essential for the valuation and risk management of multi-name credit derivatives such as CDOs. The default time model has recently become a market standard approach for capturing the default correlation, which is one of the main drivers for the portfolio loss. However, the default time model yields very different default dependency compared with a continuous-time credit migration model. To build a connection between them, we calibrate the correlation parameter of a single-factor Gaussian copula model to portfolio loss distribution determined from a multi-step credit migration simulation. The deal correlation is produced as a measure of the portfolio average correlation effect that links the two models. Procedures for obtaining the portfolio loss distributions in both models are described in the paper and numerical results are presented.",
"title": ""
},
{
"docid": "2ea99ae4dd94095e7f758353d35839ca",
"text": "An increasing number of companies rely on distributed data storage and processing over large clusters of commodity machines for critical business decisions. Although plain MapReduce systems provide several benefits, they carry certain limitations that impact developer productivity and optimization opportunities. Higher level programming languages plus conceptual data models have recently emerged to address such limitations. These languages offer a single machine programming abstraction and are able to perform sophisticated query optimization and apply efficient execution strategies. In massively distributed computation, data shuffling is typically the most expensive operation and can lead to serious performance bottlenecks if not done properly. An important optimization opportunity in this environment is that of judicious placement of repartitioning operators and choice of alternative implementations. In this paper we discuss advanced partitioning strategies, their implementation, and how they are integrated in the Microsoft Scope system. We show experimentally that our approach significantly improves performance for a large class of real-world jobs.",
"title": ""
},
{
"docid": "06fca2fd3cdaab1029d447f0e0823184",
"text": "The purpose of the present study was to experimentally assess the effect of cognitive strategies of association and dissociation while running on central nervous activation. A total of 30 long distance runners volunteered for the study. The study protocol consisted of three sessions (scheduled on three different days): (1) maximal incremental treadmill test, (2) associative task session, and (3) dissociative task session. The order of sessions 2 and 3 was counterbalanced. During sessions 2 and 3, participants performed a 55 min treadmill run at moderate intensity. Both associative and dissociative task responses were monitored and recorded in real time through dynamic measure tools. Consequently, it was possible to have objective control of the attentional tasks. Results showed a positive session (exercise+attentional task) effect for central nervous activation. The benefits of aerobic exercise at moderate intensity for the performance of self-regulation cognitive tasks are highlighted. The used methodology is proposed as a valid and dynamic option to study cognitions while running in order to overcome the retrospective approach.",
"title": ""
},
{
"docid": "d5cd7276621f73b7a7c2e2e4bed10e22",
"text": "A large number of algorithms have been proposed for doing feature subset selection. The goal of this paper is to evaluate the quality of feature subsets generated by the various algorithms, and also compare their computational requirements. Our results show that the sequential forward floating selection (SFFS) algorithm, proposed by Pudil et al., dominates the other algorithms tested. This paper also illustrates the dangers of using feature selection in small sample size situations. It gives the results of applying feature selection to land use classification of SAR satellite images using four different texture models. Pooling features derived from different texture models, followed by a feature selection results in a substantial improvement in the classification accuracy. Application of feature selection to classification of handprinted characters illustrates the value of feature selection in reducing the number of features needed for classifier design.",
"title": ""
},
{
"docid": "f0b8a918283eb3238f91e06bc56afa31",
"text": "Here I examine each of the major issues raised by Priem and Butler (this issue) about my 1991 article and subsequent resource-based research. While it turns out that Priem and Butler's direct criticisms of the 1991 article are unfounded, they do remind resource-based researchers of some important requirements of this kind of research. I also discuss some important issues not raised by Priem and Butler—the resolutions of which will be necessary if a more complete resource-based theory of strategic advantage is to be developed.",
"title": ""
},
{
"docid": "0222814440107fe89c13a790a6a3833e",
"text": "This paper presents a third method of generation and detection of a single-sideband signal. The method is basically different from either the conventional filter or phasing method in that no sharp cutoff filters or wide-band 90° phase-difference networks are needed. This system is especially suited to keeping the signal energy confined to the desired bandwidth. Any unwanted sideband occupies the same band as the desired sideband, and the unwanted sideband in the usual sense is not present.",
"title": ""
},
{
"docid": "1303f7a3ddec79951e1b0e7480cdc04e",
"text": "Despite the availability of many effective antihypertensive drugs, the drug therapy for resistant hypertension remains a prominent problem. Reviews offer only the general recommendations of increasing dosage and adding drugs, offering clinicians little guidance with respect to the specifics of selecting medications and dosages. A simplified decision tree for drug selection that would be effective in most cases is needed. This review proposes such an approach. The approach is mechanism-based, targeting treatment at three hypertensive mechanisms: (1) sodium/volume, (2) the renin-angiotensin system (RAS), and (3) the sympathetic nervous system (SNS). It assumes baseline treatment with a 2-drug combination directed at sodium/volume and the RAS and recommends proceeding with one or both of just two treatment options: (1) strengthening the diuretic regimen, possibly with the addition of spironolactone, and/or (2) adding agents directed at the SNS, usually a β-blocker or combination of an α- and a β-blocker. The review calls for greater research and clinical attention directed to: (1) assessment of clinical clues that can help direct treatment toward either sodium/volume or the SNS, (2) increased recognition of the role of neurogenic (SNS-mediated) hypertension in resistant hypertension, (3) increased recognition of the effective but underutilized combination of α- + β-blockade, and (4) drug pharmacokinetics and dosing.",
"title": ""
},
{
"docid": "df9722b1cbdf217d26c20bd69dc775eb",
"text": "Personal servers are an attractive concept: people carry around a device that takes care of computing, storage and communication on their behalf in a pervasive computing environment. So far personal servers have mainly been considered for accessing personal information. In this paper, we consider personal servers in the context of a digital key system. Digital keys are an interesting alternative to physical keys for mail or goods delivery companies whose employees access tens of private buildings every day. We present a digital key system tailored for the current incarnation of personal servers, i.e., a Bluetooth-enabled mobile phone. We describe how to use Bluetooth for this application, we present a simple authentication protocol and we provide a detailed analysis of response time and energy consumption on the mobile phone.",
"title": ""
}
] |
scidocsrr
|
c0e6d3bbd8ee51dc07809ab4eaa5607b
|
Robust lane detection in urban environments
|
[
{
"docid": "261f146b67fd8e13d1ad8c9f6f5a8845",
"text": "Vision based automatic lane tracking system requires information such as lane markings, road curvature and leading vehicle be detected before capturing the next image frame. Placing a camera on the vehicle dashboard and capturing the forward view results in a perspective view of the road image. The perspective view of the captured image somehow distorts the actual shape of the road, which involves the width, height, and depth. Respectively, these parameters represent the x, y and z components. As such, the image needs to go through a pre-processing stage to remedy the distortion using a transformation technique known as an inverse perspective mapping (IPM). This paper outlines the procedures involved.",
"title": ""
},
{
"docid": "2ed9db3d174d95e5b97c4fe26ca6c8ac",
"text": "One of the more startling effects of road related accidents is the economic and social burden they cause. Between 750,000 and 880,000 people died globally in road related accidents in 1999 alone, with an estimated cost of US$518 billion [11]. One way of combating this problem is to develop Intelligent Vehicles that are selfaware and act to increase the safety of the transportation system. This paper presents the development and application of a novel multiple-cue visual lane tracking system for research into Intelligent Vehicles (IV). Particle filtering and cue fusion technologies form the basis of the lane tracking system which robustly handles several of the problems faced by previous lane tracking systems such as shadows on the road, unreliable lane markings, dramatic lighting changes and discontinuous changes in road characteristics and types. Experimental results of the lane tracking system running at 15Hz will be discussed, focusing on the particle filter and cue fusion technology used.",
"title": ""
}
] |
[
{
"docid": "d588743b29df9a064275f4d680c80be8",
"text": "This review examines the efficacy and safety of fractional CO2 lasers for the treatment of atrophic scarring secondary to acne vulgaris. We reviewed 20 papers published between 2008 and 2013 that conducted clinical studies using fractional CO2 lasers to treat atrophic scarring. We discuss the prevalence and pathogenesis of acne scarring, as well as the laser mechanism. The histologic findings are included to highlight the ability of these lasers to induce the collagen reorganization and formation that improves scar appearance. We considered the number of treatments and different laser settings to determine which methods achieve optimal outcomes. We noted unique treatment regimens that yielded superior results. An overview of adverse effects is included to identify the most common ones. We concluded that more studies need to be done using uniform treatment parameters and reporting in order to establish which fractional CO2 laser treatment approaches allow for the greatest scar improvement.",
"title": ""
},
{
"docid": "0f563146a4b5db032cbe52d04930e066",
"text": "Clustering problems are central to many knowledge discovery and data mining tasks. However, most existing clustering methods can only work with fixed-dimensional representations of data patterns. In this paper, we study the clustering of data patterns that are represented as sequences or time series possibly of different lengths. We propose a model-based approach to this problem using mixtures of autoregressive moving average (ARMA) models. We derive an expectation-maximization (EM) algorithm for learning the mixing coefficients as well as the parameters of the component models. The algorithm can determine the number of clusters in the data automatically. Experiments were conducted on a number of simulated and real datasets. Results from the experiments show that our method compares favorably with another method recently proposed by others for similar time series clustering problems.",
"title": ""
},
{
"docid": "74bac9b30cb29eb67df0bdc71f3c4583",
"text": "BACKGROUND\nMedical practitioners use survival models to explore and understand the relationships between patients' covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems.\n\n\nMETHODS\nWe introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations.\n\n\nRESULTS\nWe perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and effectiveness of different treatment options to show how DeepSurv can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how its personalized treatment recommendations would increase the survival time of a set of patients.\n\n\nCONCLUSIONS\nThe predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient's characteristics on their risk of failure.",
"title": ""
},
{
"docid": "ec361784976ab4d00b50c89d308a13ad",
"text": "In this paper, we apply text mining and topic modelling to understand public mental health. We focus on identifying common mental health topics across two anonymous social media platforms: Reddit and a mobile journalling/mood-tracking app. Furthermore, we analyze journals from the app to uncover relationships between topics, journal visibility (private vs. visible to other users of the app), and user-labelled sentiment. Our main findings are that 1) anxiety and depression are shared on both platforms; 2) users of the journalling app keep routine topics such as eating private, and these topics rarely appear on Reddit; and 3) sleep was a critical theme on the journalling app and had an unexpectedly negative sentiment.",
"title": ""
},
{
"docid": "491ddda3cf5acf013b99cdb477acfc9e",
"text": "As we outsource more of our decisions and activities to machines with various degrees of autonomy, the question of clarifying the moral and legal status of their autonomous behaviour arises. There is also an ongoing discussion on whether artificial agents can ever be liable for their actions or become moral agents. Both in law and ethics, the concept of liability is tightly connected with the concept of ability. But as we work to develop moral machines, we also push the boundaries of existing categories of ethical competency and autonomy. This makes the question of responsibility particularly difficult. Although new classification schemes for ethical behaviour and autonomy have been discussed, these need to be worked out in far more detail. Here we address some issues with existing proposals, highlighting especially the link between ethical competency and autonomy, and the problem of anchoring classifications in an operational understanding of what we mean by a moral",
"title": ""
},
{
"docid": "972be3022e7123be919d9491a6dafe1c",
"text": "An improved coaxial high-voltage vacuum insulator applied in a Tesla-type generator, model TPG700, has been designed and tested for high-power microwave (HPM) generation. The design improvements include: changing the connection type of the insulator to the conductors from insertion to tangential, making the insulator thickness uniform, and using Nylon as the insulation material. Transient field simulation shows that the electric field (E-field) distribution within the improved insulator is much more uniform and that the average E-field on the two insulator surfaces is decreased by approximately 30% compared with the previous insulator at a voltage of 700 kV. Key structures such as the anode and the cathode shielding rings of the insulator have been optimized to significantly reduce E-field stresses. Aging experiments and experiments for HPM generation with this insulator were conducted based on a relativistic backward-wave oscillator. The preliminary test results show that the output voltage is larger than 700 kV and the HPM power is about 1 GW. Measurements show that the insulator is well within allowable E-field stresses on both the vacuum insulator surface and the cathode shielding ring.",
"title": ""
},
{
"docid": "ea33b26333eaa1d92f3c42688eb8aba5",
"text": "Code to implement network protocols can be either inside the kernel of an operating system or in user-level processes. Kernel-resident code is hard to develop, debug, and maintain, but user-level implementations typically incur significant overhead and perform poorly.\nThe performance of user-level network code depends on the mechanism used to demultiplex received packets. Demultiplexing in a user-level process increases the rate of context switches and system calls, resulting in poor performance. Demultiplexing in the kernel eliminates unnecessary overhead.\nThis paper describes the packet filter, a kernel-resident, protocol-independent packet demultiplexer. Individual user processes have great flexibility in selecting which packets they will receive. Protocol implementations using the packet filter perform quite well, and have been in production use for several years.",
"title": ""
},
{
"docid": "654f50ccb20720fdb49a2326ae014ba9",
"text": "OBJECTIVE\nThis study was undertaken to describe the distribution of pelvic organ support stages in a population of women seen at outpatient gynecology clinics for routine gynecologic health care.\n\n\nSTUDY DESIGN\nThis was an observational study. Women seen for routine gynecologic health care at four outpatient gynecology clinics were recruited to participate. After informed consent was obtained general biographic data were collected regarding obstetric history, medical history, and surgical history. Women then underwent a pelvic examination. Pelvic organ support was measured and described according to the pelvic organ prolapse quantification system. Stages of support were evaluated by variable for trends with Pearson chi(2) statistics.\n\n\nRESULTS\nA total of 497 women were examined. The average age was 44 years, with a range of 18 to 82 years. The overall distribution of pelvic organ prolapse quantification system stages was as follows: stage 0, 6.4%; stage 1, 43.3%; stage 2, 47.7%; and stage 3, 2.6%. No subjects examined had pelvic organ prolapse quantification system stage 4 prolapse. Variables with a statistically significant trend toward increased pelvic organ prolapse quantification system stage were advancing age, increasing gravidity and parity, increasing number of vaginal births, delivery of a macrosomic infant, history of hysterectomy or pelvic organ prolapse operations, postmenopausal status, and hypertension.\n\n\nCONCLUSION\nThe distribution of the pelvic organ prolapse quantification system stages in the population revealed a bell-shaped curve, with most subjects having stage 1 or 2 support. Few subjects had either stage 0 (excellent support) or stage 3 (moderate to severe pelvic support defects) results. There was a statistically significant trend toward increased pelvic organ prolapse quantification system stage of support among women with many of the historically quoted etiologic factors for the development of pelvic organ prolapse.",
"title": ""
},
{
"docid": "367268c67657a43d1b981347e8175153",
"text": "In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to the finite-sum minimization problems. Different from the vanilla SGD and other modern stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple recursive framework for updating stochastic gradient estimates; when comparing to SAG/SAGA, SARAH does not require a storage of past gradients. The linear convergence rate of SARAH is proven under strong convexity assumption. We also prove a linear convergence rate (in the strongly convex case) for an inner loop of SARAH, the property that SVRG does not possess. Numerical experiments demonstrate the efficiency of our algorithm.",
"title": ""
},
{
"docid": "91f8e39777636124d449d1f2829f47de",
"text": "We propose CAEMSI, a cross-domain analytic evaluation methodology for Style Imitation (SI) systems, based on a set of statistical significance tests that allow hypotheses comparing two corpora to be tested. Typically, SI systems are evaluated using human participants, however, this type of approach has several weaknesses. For humans to provide reliable assessments of an SI system, they must possess a sufficient degree of domain knowledge, which can place significant limitations on the pool of participants. Furthermore, both human bias against computer-generated artifacts, and the variability of participants’ assessments call the reliability of the results into question. Most importantly, the use of human participants places limitations on the number of generated artifacts and SI systems which can be feasibly evaluated. Directly motivated by these shortcomings, CAEMSI provides a robust and scalable approach to the evaluation problem. Normalized Compression Distance, a domain-independent distance metric, is used to measure the distance between individual artifacts within a corpus. The difference between corpora is measured using test statistics derived from these inter-artifact distances, and permutation testing is used to determine the significance of the difference. We provide empirical evidence validating the statistical significance tests, using datasets from two distinct domains.",
"title": ""
},
{
"docid": "d9df73b22013f7055fe8ff28f3590daa",
"text": "The iterations of many sparse estimation algorithms are comprised of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It is therefore quite natural to examine the degree to which a learned network model might act as a viable surrogate for traditional sparse estimation in domains where ample training data is available. While the possibility of a reduced computational budget is readily apparent when a ceiling is imposed on the number of layers, our work primarily focuses on estimation accuracy. In particular, it is well-known that when a signal dictionary has coherent columns, as quantified by a large RIP constant, then most tractable iterative algorithms are unable to find maximally sparse representations. In contrast, we demonstrate both theoretically and empirically the potential for a trained deep network to recover minimal ℓ0-norm representations in regimes where existing methods fail. The resulting system is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers that can disrupt the estimation of surface normals from a 3D scene.",
"title": ""
},
{
"docid": "7ec5faf2081790e7baa1832d5f9ab5bd",
"text": "Text detection in complex background images is a challenging task for intelligent vehicles. Almost all widely used systems focus on commonly used languages, while text detection for minority languages, such as the Uyghur language, has received much less attention. In this paper, we propose an effective Uyghur language text detection system for complex background images. First, a new channel-enhanced maximally stable extremal regions (MSERs) algorithm is put forward to detect component candidates. Second, a two-layer filtering mechanism is designed to remove most non-character regions. Third, the remaining component regions are connected into short chains, and the short chains are extended by a novel extension algorithm to connect the missed MSERs. Finally, a two-layer chain elimination filter is proposed to prune the non-text chains. To evaluate the system, we build a new data set of various Uyghur texts with complex backgrounds. Extensive experimental comparisons show that our system is clearly effective for Uyghur language text detection in complex background images. The F-measure is 85%, which is much better than the state-of-the-art performance of 75.5%.",
"title": ""
},
{
"docid": "e6f39c99c98770efeb99ba5ed03b9fd9",
"text": "UNLABELLED\nGames and their use in rehabilitation have formed a new and rapidly growing area of research. A critical hardware component of rehabilitation programs is the input device that measures the patients' movements. After Microsoft released Kinect, extensive research has been initiated on its applications as an input device for rehabilitation. However, since most of the works in this area rely on a qualitative determination of the joints' movements rather than an accurate quantitative one, detailed analysis of patients' movements is hindered. The aim of this article is to determine the accuracy of the Kinect's joint tracking. To fulfill this task, a model of upper body was fabricated. The displacements of the joint centers were estimated by Kinect at different positions and were then compared with the actual ones from measurement. Moreover, the dependency of Kinect's error on distance and joint type was measured and analyzed.\n\n\nIMPLICATIONS FOR REHABILITATION\nIt measures and reports the accuracy of a sensor that can be directly used for monitoring physical therapy exercises. Using this sensor facilitates remote rehabilitation.",
"title": ""
},
{
"docid": "86ba97e91a8c2bcb1015c25df7c782db",
"text": "After a knee joint surgery, due to severe pain and immobility of the patient, the tissue around the knee becomes harder and knee stiffness will occur, which may cause many problems such as scar tissue swelling, bleeding, and fibrosis. A CPM (Continuous Passive Motion) machine is an apparatus that is used for patient recovery, restoring the moving abilities of the knee, and reducing tissue swelling, after the knee joint surgery. This device prevents frozen joint syndrome (adhesive capsulitis), joint stiffness, and articular cartilage destruction by stimulating joint tissues, and flowing synovial fluid and blood around the knee joint. In this study, a new, light, and portable CPM machine with an appropriate interface is designed and manufactured. The knee joint can be rotated from -15° to 120° with a pace of 0.1 degree/sec to 1 degree/sec by this machine. One of the most important advantages of this new machine is its user-friendly interface. This apparatus is controlled via an Android-based application; therefore, users can operate this machine easily via their own smartphones without the need for an extra controlling device. Besides, because of its apt size, this machine is a portable device. Smooth movement without any vibration and adjustability for different anatomies are other merits of this new CPM machine.",
"title": ""
},
{
"docid": "baae0ce9d52f47386447b729ff174b62",
"text": "Receptor for advanced glycation end products (RAGE) is a member of the immunoglobulin superfamily of cell surface molecules and engages diverse ligands relevant to distinct pathological processes. One class of RAGE ligands includes glycoxidation products, termed advanced glycation end products, which occur in diabetes, at sites of oxidant stress in tissues, and in renal failure and amyloidoses. RAGE also functions as a signal transduction receptor for amyloid beta peptide, known to accumulate in Alzheimer disease in both affected brain parenchyma and cerebral vasculature. Interaction of RAGE with these ligands enhances receptor expression and initiates a positive feedback loop whereby receptor occupancy triggers increased RAGE expression, thereby perpetuating another wave of cellular activation. Sustained expression of RAGE by critical target cells, including endothelium, smooth muscle cells, mononuclear phagocytes, and neurons, in proximity to these ligands, sets the stage for chronic cellular activation and tissue damage. In a model of accelerated atherosclerosis associated with diabetes in genetically manipulated mice, blockade of cell surface RAGE by infusion of a soluble, truncated form of the receptor completely suppressed enhanced formation of vascular lesions. Amelioration of atherosclerosis in these diabetic/atherosclerotic animals by soluble RAGE occurred in the absence of changes in plasma lipids or glycemia, emphasizing the contribution of a lipid- and glycemia-independent mechanism(s) to atherogenesis, which we postulate to be interaction of RAGE with its ligands. Future studies using mice in which RAGE expression has been genetically manipulated and with selective low molecular weight RAGE inhibitors will be required to definitively assign a critical role for RAGE activation in diabetic vasculopathy. However, sustained receptor expression in a microenvironment with a plethora of ligand makes possible prolonged receptor stimulation, suggesting that interaction of cellular RAGE with its ligands could be a factor contributing to a range of important chronic disorders.",
"title": ""
},
{
"docid": "138fc7af52066e890b45afd96debbe91",
"text": "We present a general scheme for analyzing the performance of a generic localization algorithm for multilateration (MLAT) systems (or for other distributed sensor, passive localization technology). MLAT systems are used for airport surface surveillance and are based on time difference of arrival measurements of Mode S signals (replies and 1,090 MHz extended squitter, or 1090ES). In the paper, we propose to consider a localization algorithm as composed of two components: a data model and a numerical method, both being properly defined and described. In this way, the performance of the localization algorithm can be related to the proper combination of statistical and numerical performances. We present and review a set of data models and numerical methods that can describe most localization algorithms. We also select a set of existing localization algorithms that can be considered as the most relevant, and we describe them under the proposed classification. We show that the performance of any localization algorithm has two components, i.e., a statistical one and a numerical one. The statistical performance is related to providing unbiased and minimum variance solutions, while the numerical one is related to ensuring the convergence of the solution. Furthermore, we show that a robust localization (i.e., statistically and numerically efficient) strategy for airport surface surveillance has to be composed of two specific kinds of algorithms. Finally, an accuracy analysis, by using real data, is performed for the analyzed algorithms; some general guidelines are drawn and conclusions are provided.",
"title": ""
},
{
"docid": "30c796b96ab06a017bb02993158c3260",
"text": "Vectorization has been an important method of using data-level parallelism to accelerate scientific workloads on vector machines such as Cray for the past three decades. In the last decade it has also proven useful for accelerating multi-media and embedded applications on short SIMD architectures such as MMX, SSE and AltiVec. Most of the focus has been directed at innermost loops, effectively executing their iterations concurrently as much as possible. Outer loop vectorization refers to vectorizing a level of a loop nest other than the innermost, which can be beneficial if the outer loop exhibits greater data-level parallelism and locality than the innermost loop. Outer loop vectorization has traditionally been performed by interchanging an outer-loop with the innermost loop, followed by vectorizing it at the innermost position. A more direct unroll-and-jam approach can be used to vectorize an outer-loop without involving loop interchange, which can be especially suitable for short SIMD architectures.\n In this paper we revisit the method of outer loop vectorization, paying special attention to properties of modern short SIMD architectures. We show that even though current optimizing compilers for such targets do not apply outer-loop vectorization in general, it can provide significant performance improvements over innermost loop vectorization. Our implementation of direct outer-loop vectorization, available in GCC 4.3, achieves speedup factors of 3.13 and 2.77 on average across a set of benchmarks, compared to 1.53 and 1.39 achieved by innermost loop vectorization, when running on a Cell BE SPU and PowerPC970 processors respectively. Moreover, outer-loop vectorization provides new reuse opportunities that can be vital for such short SIMD architectures, including efficient handling of alignment. We present an optimization tapping such opportunities, capable of further boosting the performance obtained by outer-loop vectorization to achieve average speedup factors of 5.26 and 3.64.",
"title": ""
},
{
"docid": "f1a5c64dae0b41324ffeef568769e6e5",
"text": "Media content has become the major traffic of Internet and will keep on increasing rapidly. Various innovative media applications, services, devices have emerged and people tend to consume more media contents. We are meeting a media revolution. But media processing requires great capacity and capability of computing resources. Meanwhile cloud computing has emerged as a prosperous technology and the cloud computing platform has become a fundamental facility providing various services, great computing power, massive storage and bandwidth with modest cost. The integration of cloud computing and media processing is therefore a natural choice for both of them, and hence comes forth the media cloud. In this paper we make a comprehensive overview on the recent media cloud research work. We first discuss the challenges of the media cloud, and then summarize its architecture, the processing, and its storage and delivery mechanisms. As the result, we propose a new architecture for the media cloud. At the end of this paper, we make suggestions on how to build a media cloud and propose several future research topics as the conclusion.",
"title": ""
},
{
"docid": "abbe8df334ebea53b1b3770851019a2c",
"text": "Blockchain, a distributed secure digital ledger technology, is a relatively recent development with potentially transformational implications for economy and society. Its specific characteristics enable new decentralized models of distributed and trusted transactions. This position paper explores the implications of blockchain for collaborative networked organizations. In particular we aim at understanding the implications for companies in various economic sectors, and how new forms of networked organizations and new business models will be enabled. We also will focus on enablers of blockchain innovations, in particular with respect to governance of blockchain-based platforms and business networks. The paper results in a discussion of research challenges in the field of blockchain-enabled collaborative networked",
"title": ""
}
] |
scidocsrr
|
169a02aec0f94ab2ae787a050ed22cc8
|
A Maturity Model for Assessing the Digital Readiness of Manufacturing Companies
|
[
{
"docid": "4e143c7d29dae1bd4ee05be94ef0478b",
"text": "Since the Software Engineering Institute has launch ed the Capability Maturity Model almost twenty years ago, hundreds of maturity models have been pr oposed by researchers and practitioners across multiple application domains. With process orientat ion being a central paradigm of organizational design and continuous process improvement taking to p positions on CIO agendas, maturity models are also prospering in business process management. Although the application of maturity models is increasing in quantity and breadth, the concept of maturity models is frequently subject to criticism. Indeed, numerous shortcomings have been disclosed r ef rring to both maturity models as design products and the process of maturity model design. Whereas research has already substantiated the design process, there is no holistic understanding of the principles of form and function – that is, t he design principles – maturity models should meet. We therefore propose a pragmatic, yet well-founded framework of general design principles justified by existing literature and grouped according to typical purposes of use. The framework is demonstrated using an exemplary set of maturity models related to business process management. We finally give a b rief outlook on implications and topics for further research.",
"title": ""
},
{
"docid": "8cb6a2a3014bd3a7f945abd4cb2ffe88",
"text": "In order to identify and explore the strength and weaknesses of particular organizational designs, a wide range of maturity models have been developed by both, practitioners and academics over the past years. However, a systematization and generalization of the procedure on how to design maturity models as well as a synthesis of design science research with the rather behavioural field of organization theory is still lacking. Trying to combine the best of both fields, a first design proposition of a situational maturity model is presented in this paper. The proposed maturity model design is illustrated with the help of an instantiation for the healthcare domain.",
"title": ""
},
{
"docid": "c24e523997eac6d1be9e2a2f38150fc0",
"text": "We address the assessment and improvement of the software maintenance function by proposing improvements to the software maintenance standards and introducing a proposed maturity model for daily software maintenance activities: Software Maintenance Maturity Model (SM). The software maintenance function suffers from a scarcity of management models to facilitate its evaluation, management, and continuous improvement. The SM addresses the unique activities of software maintenance while preserving a structure similar to that of the CMMi4 maturity model. It is designed to be used as a complement to this model. The SM is based on practitioners experience, international standards, and the seminal literature on software maintenance. We present the models purpose, scope, foundation, and architecture, followed by its initial validation.",
"title": ""
}
] |
[
{
"docid": "bc58f2f9f6f5773f5f8b2696d9902281",
"text": "Software development is a complicated process and requires careful planning to produce high quality software. In large software development projects, release planning may involve a lot of unique challenges. Due to time, budget and some other constraints, potentially there are many problems that may possibly occur. Subsequently, project managers have been trying to identify and understand release planning, challenges and possible resolutions which might help them in developing more effective and successful software products. This paper presents the findings from an empirical study which investigates release planning challenges. It takes a qualitative approach using interviews and observations with practitioners and project managers at five large software banking projects in Informatics Services Corporation (ISC) in Iran. The main objective of this study is to explore and increase the understanding of software release planning challenges in several software companies in a developing country. A number of challenges were elaborated and discussed in this study within the domain of software banking projects. These major challenges are classified into two main categories: the human-originated including people cooperation, disciplines and abilities; and the system-oriented including systematic approaches, resource constraints, complexity, and interdependency among the systems.",
"title": ""
},
{
"docid": "13774d2655f2f0ac575e11991eae0972",
"text": "This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic convergence rate of the method by assuming a property based on the Kurdyka– Lojasiewicz inequality. The proposed algorithms are tested on nonnegative matrix and tensor factorization, as well as matrix and tensor recovery from incomplete observations. The tests include synthetic data and hyperspectral data, as well as image sets from the CBCL and ORL databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality. The MATLAB code of nonnegative matrix/tensor decomposition and completion, along with a few demos, are accessible from the authors’ homepages.",
"title": ""
},
{
"docid": "1202e46fcc6c2f88b81fcf153ed4fd7d",
"text": "Recently, several high dimensional classification methods have been proposed to automatically discriminate between patients with Alzheimer's disease (AD) or mild cognitive impairment (MCI) and elderly controls (CN) based on T1-weighted MRI. However, these methods were assessed on different populations, making it difficult to compare their performance. In this paper, we evaluated the performance of ten approaches (five voxel-based methods, three methods based on cortical thickness and two methods based on the hippocampus) using 509 subjects from the ADNI database. Three classification experiments were performed: CN vs AD, CN vs MCIc (MCI who had converted to AD within 18 months, MCI converters - MCIc) and MCIc vs MCInc (MCI who had not converted to AD within 18 months, MCI non-converters - MCInc). Data from 81 CN, 67 MCInc, 39 MCIc and 69 AD were used for training and hyperparameters optimization. The remaining independent samples of 81 CN, 67 MCInc, 37 MCIc and 68 AD were used to obtain an unbiased estimate of the performance of the methods. For AD vs CN, whole-brain methods (voxel-based or cortical thickness-based) achieved high accuracies (up to 81% sensitivity and 95% specificity). For the detection of prodromal AD (CN vs MCIc), the sensitivity was substantially lower. For the prediction of conversion, no classifier obtained significantly better results than chance. We also compared the results obtained using the DARTEL registration to that using SPM5 unified segmentation. DARTEL significantly improved six out of 20 classification experiments and led to lower results in only two cases. Overall, the use of feature selection did not improve the performance but substantially increased the computation times.",
"title": ""
},
{
"docid": "b3fb796dc943121e4a8114f8ba5e8d97",
"text": "HyperLogLog Counting is widely used in cardinality estimation. It is the foundation of many algorithms in data analysis, commodity recommendation and database optimization. Facing the large scale internet business like electronic commerce, internet companies have an urgent requirement of distributed real-time cardinality estimation with high accuracy and low time cost. In this paper, we propose a distributed real-time cardinality estimation algorithm named Hermes. Hermes adjusts the estimated cardinality dynamically according to the result of HyperLogLog Counting and also optimizes the data distribution strategy of existing distributed cardinality estimation algorithms. Experiments have been carried out and the results show that Hermes has lower estimation error and time cost compared with existing algorithms.",
"title": ""
},
{
"docid": "9c707afc8a0312ebab0ebd1b7fcb4c47",
"text": "This paper develops analytical principles for torque ripple reduction in interior permanent magnet (IPM) synchronous machines. The significance of slot harmonics and the benefits of stators with odd number of slots per pole pair are highlighted. Based on these valuable analytical insights, this paper proposes coordination of the selection of stators with odd number of slots per pole pair and IPM rotors with multiple layers of flux barriers in order to reduce torque ripple. The effectiveness of using stators with odd number of slots per pole pair in reducing torque ripple is validated by applying a finite-element-based Monte Carlo optimization method to four IPM machine topologies, which are combinations of two stator topologies (even or odd number of slots per pole pair) and two IPM rotor topologies (one- or two-layer). It is demonstrated that the torque ripple can be reduced to less than 5% by selecting a stator with an odd number of slots per pole pair and the IPM rotor with optimized barrier configurations, without using stator/rotor skewing or rotor pole shaping.",
"title": ""
},
{
"docid": "53b43126d066f5e91d7514f5da754ef3",
"text": "This paper describes a computationally inexpensive, yet high performance trajectory generation algorithm for omnidirectional vehicles. It is shown that the associated nonlinear control problem can be made tractable by restricting the set of admissible control functions. The resulting problem is linear with coupled control efforts and a near-optimal control strategy is shown to be piecewise constant (bang-bang type). A very favorable trade-off between optimality and computational efficiency is achieved. The proposed algorithm is based on a small number of evaluations of simple closed-form expressions and is thus extremely efficient. The low computational cost makes this method ideal for path planning in dynamic environments.",
"title": ""
},
{
"docid": "cc57e023628ec7ca1bfc91c40fc58341",
"text": "The design of electromagnetic interference (EMI) input filters, needed for switched power converters to fulfill the regulatory standards, is typically associated with high development effort. This paper presents a guideline for a simplified differential-mode (DM) filter design. First, a procedure to estimate the required filter attenuation based on the total input rms current using only a few equations is given. Second, a volume optimization of the needed DM filter based on the previously calculated filter attenuation and volumetric component parameters is introduced. It is shown that a minimal volume can be found for a certain optimal number of filter stages. The considerations are exemplified for two single-phase power factor correction converters operated in continuous and discontinuous conduction modes, respectively. Finally, EMI measurements done with a 300-W power converter prototype prove the proposed filter design method.",
"title": ""
},
{
"docid": "31873424960073962d3d8eba151f6a4b",
"text": "Multiple view data, which have multiple representations from different feature spaces or graph spaces, arise in various data mining applications such as information retrieval, bioinformatics and social network analysis. Since different representations could have very different statistical properties, how to learn a consensus pattern from multiple representations is a challenging problem. In this paper, we propose a general model for multiple view unsupervised learning. The proposed model introduces the concept of mapping function to make the different patterns from different pattern spaces comparable and hence an optimal pattern can be learned from the multiple patterns of multiple representations. Under this model, we formulate two specific models for two important cases of unsupervised learning, clustering and spectral dimensionality reduction; we derive an iterating algorithm for multiple view clustering, and a simple algorithm providing a global optimum to multiple spectral dimensionality reduction. We also extend the proposed model and algorithms to evolutionary clustering and unsupervised learning with side information. Empirical evaluations on both synthetic and real data sets demonstrate the effectiveness of the proposed model and algorithms.",
"title": ""
},
{
"docid": "06459f19ea1f29973110549543b289fd",
"text": "The way mobile computing devices and applications are developed, deployed and used today does not meet the expectations of the user community and falls far short of the potential for pervasive computing. This paper challenges the mobile computing community by questioning the roles of devices, applications, and a user's environment. A vision of pervasive computing is described, along with attributes of a new application model that supports this vision, and a set of challenges that must be met in order to bring the vision to reality.",
"title": ""
},
{
"docid": "87748bcc07ab498218233645bdd4dd0c",
"text": "This paper proposes a method of recognizing and classifying the basic activities such as forward and backward motions by applying a deep learning framework on passive radio frequency (RF) signals. The echoes from the moving body possess unique pattern which can be used to recognize and classify the activity. A passive RF sensing test- bed is set up with two channels where the first one is the reference channel providing the un- altered echoes of the transmitter signals and the other one is the surveillance channel providing the echoes of the transmitter signals reflecting from the moving body in the area of interest. The echoes of the transmitter signals are eliminated from the surveillance signals by performing adaptive filtering. The resultant time series signal is classified into different motions as predicted by proposed novel method of convolutional neural network (CNN). Extensive amount of training data has been collected to train the model, which serves as a reference benchmark for the later studies in this field.",
"title": ""
},
{
"docid": "0b1db23ae4767d7653e3198919706e99",
"text": "Greenhouse cultivation has evolved from simple covered rows of open-fields crops to highly sophisticated controlled environment agriculture (CEA) facilities that projected the image of plant factories for urban agriculture. The advances and improvements in CEA have promoted the scientific solutions for the efficient production of plants in populated cities and multi-story buildings. Successful deployment of CEA for urban agriculture requires many components and subsystems, as well as the understanding of the external influencing factors that should be systematically considered and integrated. This review is an attempt to highlight some of the most recent advances in greenhouse technology and CEA in order to raise the awareness for technology transfer and adaptation, which is necessary for a successful transition to urban agriculture. This study reviewed several aspects of a high-tech CEA system including improvements in the frame and covering materials, environment perception and data sharing, and advanced microclimate control and energy optimization models. This research highlighted urban agriculture and its derivatives, including vertical farming, rooftop greenhouses and plant factories which are the extensions of CEA and have emerged as a response to the growing population, environmental degradation, and urbanization that are threatening food security. Finally, several opportunities and challenges have been identified in implementing the integrated CEA and vertical farming for urban agriculture.",
"title": ""
},
{
"docid": "7487b31ad0dce0b24ad20c25a67f2bf8",
"text": "A large number of novel encodings for bag of visual words models have been proposed in the past two years to improve on the standard histogram of quantized local features. Examples include locality-constrained linear encoding [23], improved Fisher encoding [17], super vector encoding [27], and kernel codebook encoding [20]. While several authors have reported very good results on the challenging PASCAL VOC classification data by means of these new techniques, differences in the feature computation and learning algorithms, missing details in the description of the methods, and different tuning of the various components, make it impossible to compare directly these methods and hard to reproduce the results reported. This paper addresses these shortcomings by carrying out a rigorous evaluation of these new techniques by: (1) fixing the other elements of the pipeline (features, learning, tuning); (2) disclosing all the implementation details, and (3) identifying both those aspects of each method which are particularly important to achieve good performance, and those aspects which are less critical. This allows a consistent comparative analysis of these encoding methods. Several conclusions drawn from our analysis cannot be inferred from the original publications.",
"title": ""
},
{
"docid": "7875910ad044232b4631ecacfec65656",
"text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "bc892fe2a369f701e0338085eaa0bdbd",
"text": "In his In the blink of an eye,Walter Murch, the Oscar-awarded editor of the English Patient, Apocalypse Now, and many other outstanding movies, devises the Rule of Six—six criteria for what makes a good cut. On top of his list is \"to be true to the emotion of the moment,\" a quality more important than advancing the story or being rhythmically interesting. The cut has to deliver a meaningful, compelling, and emotion-rich \"experience\" to the audience. Because, \"what they finally remember is not the editing, not the camerawork, not the performances, not even the story—it’s how they felt.\" Technology for all the right reasons applies this insight to the design of interactive products and technologies—the domain of Human-Computer Interaction,Usability Engineering,and Interaction Design. It takes an experiential approach, putting experience before functionality and leaving behind oversimplified calls for ease, efficiency, and automation or shallow beautification. Instead, it explores what really matters to humans and what it needs to make technology more meaningful. The book clarifies what experience is, and highlights five crucial aspects and their implications for the design of interactive products. It provides reasons why we should bother with an experiential approach, and presents a detailed working model of experience useful for practitioners and academics alike. It closes with the particular challenges of an experiential approach for design. The book presents its view as a comprehensive, yet entertaining blend of scientific findings, design examples, and personal anecdotes.",
"title": ""
},
{
"docid": "6c8445b5fec9022a968d3551efb8972b",
"text": "Face Recognition by a robot or machine is one of the challenging research topics in the recent years. It has become an active research area which crosscuts several disciplines such as image processing, pattern recognition, computer vision, neural networks and robotics. For many applications, the performances of face recognition systems in controlled environments have achieved a satisfactory level. However, there are still some challenging issues to address in face recognition under uncontrolled conditions. The variation in illumination is one of the main challenging problems that a practical face recognition system needs to deal with. It has been proven that in face recognition, differences caused by illumination variations are more significant than differences between individuals (Adini et al., 1997). Various methods have been proposed to solve the problem. These methods can be classified into three categories, named face and illumination modeling, illumination invariant feature extraction and preprocessing and normalization. In this chapter, an extensive and state-of-the-art study of existing approaches to handle illumination variations is presented. Several latest and representative approaches of each category are presented in detail, as well as the comparisons between them. Moreover, to deal with complex environment where illumination variations are coupled with other problems such as pose and expression variations, a good feature representation of human face should not only be illumination invariant, but also robust enough against pose and expression variations. Local binary pattern (LBP) is such a local texture descriptor. In this chapter, a detailed study of the LBP and its several important extensions is carried out, as well as its various combinations with other techniques to handle illumination invariant face recognition under a complex environment. 
By generalizing different strategies in handling illumination variations and evaluating their performances, several promising directions for future research have been suggested. This chapter is organized as follows. Several famous methods of face and illumination modeling are introduced in Section 2. In Section 3, latest and representative approaches of illumination invariant feature extraction are presented in detail. More attentions are paid on quotient-image-based methods. In Section 4, the normalization methods on discarding low frequency coefficients in various transformed domains are introduced with details. In Section 5, a detailed introduction of the LBP and its several important extensions is presented, as well as its various combinations with other face recognition techniques. In Section 6, comparisons between different methods and discussion of their advantages and disadvantages are presented. Finally, several promising directions as the conclusions are drawn in Section 7.",
"title": ""
},
{
"docid": "fc2046c92508cb0d6fe2b60c0eb8d2be",
"text": "Voting is an inherent process in a democratic society. Other methods for expressing the society participants’ will for example caucuses in US party elections or Landsgemeine in Switzerland can be inconvenient for the citizens and logistically difficult to organize. Furthermore, beyond inconvenience, there may be legitimate reasons for not being able to take part in the voting process, e.g. being deployed overseas in military or being on some other official assignment. Even more, filling in paper ballots and counting them is error-prone and time-consuming process. A well-known controversy took place during US presidental election in 2000 [Florida recount 2000], when a partial recount of the votes could have changed the outcome of the elections. As the recount was cancelled by the court, the actual result was not never known. Decline in elections’ participation rate has been observed in many old democracies [Summers 2016] and it should be the decision-makers goal to bring the electorate back to the polling booths. One way to do that would be to use internet voting. In this method, the ballots are cast using a personal computer or a smart phone and it sent over the internet to the election committee. However, there have been several critics against the internet voting methods [Springall et al. 2014]. In this report we consider, how to make internet voting protocols more secure by using blockchain.",
"title": ""
},
{
"docid": "27a583d33644887ad126e8e4844dd2e3",
"text": "In this work, we will explore different approaches used in Cross-Lingual Information Retrieval (CLIR) systems. Mainly, CLIR systems which use statistical machine translation (SMT) systems to translate queries into collection language. This will include using SMT systems as a black box or as a white box, also the SMT systems that are tuned towards better CLIR performance. After that, we will present our approach to rerank the alternative translations using machine learning regression model. This includes also introducing our set of features which we used to train the model. After that, we adapt this reranker for new languages. We also present our query expansion approach using word-embeddings model that is trained on medical data. Finally we reinvestigate translating the document collection into query language, then we present our future work.",
"title": ""
},
{
"docid": "35dc1eed6439bae9c74605e75bf8b3a2",
"text": "We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence sufficient conditions are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.",
"title": ""
},
{
"docid": "9f84ec96cdb45bcf333db9f9459a3d86",
"text": "A novel printed crossed dipole with broad axial ratio (AR) bandwidth is proposed. The proposed dipole consists of two dipoles crossed through a 90°phase delay line, which produces one minimum AR point due to the sequentially rotated configuration and four parasitic loops, which generate one additional minimum AR point. By combining these two minimum AR points, the proposed dipole achieves a broadband circularly polarized (CP) performance. The proposed antenna has not only a broad 3 dB AR bandwidth of 28.6% (0.75 GHz, 2.25-3.0 GHz) with respect to the CP center frequency 2.625 GHz, but also a broad impedance bandwidth for a voltage standing wave ratio (VSWR) ≤2 of 38.2% (0.93 GHz, 1.97-2.9 GHz) centered at 2.435 GHz and a peak CP gain of 8.34 dBic. Its arrays of 1 × 2 and 2 × 2 arrangement yield 3 dB AR bandwidths of 50.7% (1.36 GHz, 2-3.36 GHz) with respect to the CP center frequency, 2.68 GHz, and 56.4% (1.53 GHz, 1.95-3.48 GHz) at the CP center frequency, 2.715 GHz, respectively. This paper deals with the designs and experimental results of the proposed crossed dipole with parasitic loop resonators and its arrays.",
"title": ""
}
] |
scidocsrr
|
4cc97f01ed0be002bc18abbe3dc0a186
|
Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition
|
[
{
"docid": "0102748c7f9969fb53a3b5ee76b6eefe",
"text": "Face verification is the task of deciding, by analyzing face images, whether a person is who he/she claims to be. This is very challenging due to image variations in lighting, pose, facial expression, and age. The task boils down to computing the distance between two face vectors. As such, appropriate distance metrics are essential for face verification accuracy. In this paper we propose a new method, named Cosine Similarity Metric Learning (CSML), for learning a distance metric for facial verification. The use of cosine similarity in our method leads to an effective learning algorithm which can improve the generalization ability of any given metric. Our method is tested on the state-of-the-art dataset, the Labeled Faces in the Wild (LFW), and has achieved the highest accuracy in the literature. Face verification has been extensively researched for decades. The reason for its popularity is its non-intrusiveness and wide range of practical applications, such as access control, video surveillance, and telecommunication. The biggest challenge in face verification comes from the numerous variations of a face image, due to changes in lighting, pose, facial expression, and age. It is a very difficult problem, especially using images captured in a totally uncontrolled environment, for instance, images from surveillance cameras, or from the Web. Over the years, many public face datasets have been created for researchers to advance the state of the art and make their methods comparable. This practice has proved to be extremely useful. FERET [1] is the first popular face dataset freely available to researchers. It was created in 1993 and since then research in face recognition has advanced considerably. Researchers have come very close to fully recognizing all the frontal images in FERET [2,3,4,5,6]. However, these methods are not robust enough to deal with non-frontal face images. Recently a new face dataset named the Labeled Faces in the Wild (LFW) [7] was created.
LFW is a full protocol for evaluating face verification algorithms. Unlike FERET, LFW is designed for unconstrained face verification. Faces in LFW can vary in all possible ways due to pose, lighting, expression, age, scale, and misalignment (Figure 1). Methods for frontal images cannot cope with these variations and as such many researchers have turned to machine learning to develop learning based face verification methods [8,9]. One of these approaches is to learn a transformation matrix from the data so that the Euclidean distance can perform better in the new subspace. Learning such a transformation matrix is equivalent to learning a Mahalanobis metric in the original space [10]. Xing et al. [11] used semidefinite programming to learn a Mahalanobis distance metric for clustering. Their algorithm aims to minimize the sum of squared distances between similarly labeled inputs, while maintaining a lower bound on the sum of distances between differently labeled inputs. Goldberger et al. [10] proposed Neighbourhood Component Analysis (NCA), a distance metric learning algorithm especially designed to improve kNN classification. The algorithm learns a Mahalanobis distance by minimizing the leave-one-out cross validation error of the kNN classifier on a training set. Because it uses a softmax activation function to convert distance to probability, the gradient computation step is expensive. Weinberger et al. [12] proposed a method that learns a matrix designed to improve the performance of kNN classification. The objective function is composed of two terms. The first term minimizes the distance between target neighbours. The second term is a hinge-loss that encourages target neighbours to be at least one distance unit closer than points from other classes. It requires information about the class of each sample. As a result, their method is not applicable for the restricted setting in LFW (see section 2.1).
Recently, Davis et al. [13] have taken an information theoretic approach to learning a Mahalanobis metric under a wide range of possible constraints and prior knowledge on the Mahalanobis distance. Their method regularizes the learned matrix to make it as close as possible to a known prior matrix. The closeness is measured as a Kullback-Leibler divergence between two Gaussian distributions corresponding to the two matrices. In this paper, we propose a new method named Cosine Similarity Metric Learning (CSML). There are two main contributions. The first contribution is that we have shown cosine similarity to be an effective alternative to Euclidean distance in the metric learning problem. The second contribution is that CSML can improve the generalization ability of an existing metric significantly in most cases. Our method is different from all the above methods in terms of distance measures. All of the other methods use Euclidean distance to measure the dissimilarities between samples in the transformed space whilst our method uses cosine similarity, which leads to a simple and effective metric learning method. The rest of this paper is structured as follows. Section 2 presents the CSML method in detail. Section 3 presents how CSML can be applied to face verification. Experimental results are presented in section 4. Finally, the conclusion is given in section 5. 1 Cosine Similarity Metric Learning The general idea is to learn a transformation matrix from training data so that cosine similarity performs well in the transformed subspace. The performance is measured by cross validation error (cve). 1.1 Cosine similarity Cosine similarity (CS) between two vectors x and y is defined as: CS(x, y) = (x^T y) / (‖x‖ ‖y‖). Cosine similarity has a special property that makes it suitable for metric learning: the resulting similarity measure is always within the range of −1 and +1.
As shown in section 1.3, this property allows the objective function to be simple and effective. 1.2 Metric learning formulation Let {(x_i, y_i, l_i)}_{i=1}^{s} denote a training set of s labeled samples with pairs of input vectors x_i, y_i ∈ R^m and binary class labels l_i ∈ {1, 0} which indicate whether x_i and y_i match or not. The goal is to learn a linear transformation A : R^m → R^d (d ≤ m), which we will use to compute cosine similarities in the transformed subspace as: CS(x, y, A) = ((Ax)^T (Ay)) / (‖Ax‖ ‖Ay‖) = (x^T A^T A y) / (√(x^T A^T A x) √(y^T A^T A y)). Specifically, we want to learn the linear transformation that minimizes the cross validation error when similarities are measured in this way. We begin by defining the objective function. 1.3 Objective function First, we define positive and negative sample index sets Pos and Neg as:",
"title": ""
}
] |
[
{
"docid": "fa88a823e05586bd3000461992a29af9",
"text": "Evaluation metrics for image captioning face two challenges. Firstly, commonly used metrics such as CIDEr, METEOR, ROUGE and BLEU often do not correlate well with human judgments. Secondly, each metric has well known blind spots to pathological caption constructions, and rule-based metrics lack provisions to repair such blind spots once identified. For example, the newly proposed SPICE correlates well with human judgments, but fails to capture the syntactic structure of a sentence. To address these two challenges, we propose a novel learning based discriminative evaluation metric that is directly trained to distinguish between human and machine-generated captions. In addition, we further propose a data augmentation scheme to explicitly incorporate pathological transformations as negative examples during training. The proposed metric is evaluated with three kinds of robustness tests and its correlation with human judgments. Extensive experiments show that the proposed data augmentation scheme not only makes our metric more robust toward several pathological transformations, but also improves its correlation with human judgments. Our metric outperforms other metrics on both caption level human correlation in Flickr 8k and system level human correlation in COCO. The proposed approach could serve as a learning based evaluation metric that is complementary to existing rule-based metrics.",
"title": ""
},
{
"docid": "cae689b8a27b05318088a16eaccd85b4",
"text": "In recent years, electronic products have been required to provide more functionality, miniaturization, higher performance, reliability and low cost. Therefore, the IC chip is required to deliver more signal I/O and better electrical characteristics under the same package footprint. The None-Lead Bump Array (NBA) Chip Scale Structure has been developed to meet those requirements, offering better electrical performance, more I/O accommodation and higher transmission speed. To evaluate the NBA package capability, the solder joint life, package warpage, die corner stress and thermal performance are characterized. Firstly, investigations on the warpage, die corner stress and thermal performance of the NBA-QFN structure are performed using the Finite Element Method (FEM). Secondly, experiments are conducted on the solder joint reliability performance with different solder coverage and standoff height. In the conclusion of this study, the simulation results show that NBA-QFN has no warpage risk, lower die corner stress and better thermal performance than TFBGA. Besides that, the simulation results show good agreement with experimental data. From the drop test study, solder coverage of less than 50% and standoff height lower than 40um yield better solder joint life than other configurations.",
"title": ""
},
{
"docid": "38fd6a2b2ea49fda599a70ec7e803cde",
"text": "The role of trace elements in biological systems has been described in several animals. However, the knowledge in fish is mainly limited to iron, copper, manganese, zinc and selenium as components of body fluids, cofactors in enzymatic reactions, structural units of non-enzymatic macromolecules, etc. Investigations in fish are comparatively complicated as both dietary intake and waterborne mineral uptake have to be considered in determining the mineral budgets. The importance of trace minerals as essential ingredients in diets, although in small quantities, is also evident in fish.",
"title": ""
},
{
"docid": "4ac083b7e2900eb5cc80efd6022c76c1",
"text": "We investigate the problem of reconstructing normals, albedo and lights of Lambertian surfaces in uncalibrated photometric stereo under the perspective projection model. Our analysis is based on establishing the integrability constraint. In the orthographic projection case, it is well-known that when such constraint is imposed, a solution can be identified only up to 3 parameters, the so-called generalized bas-relief (GBR) ambiguity. We show that in the perspective projection case the solution is unique. We also propose a closed-form solution which is simple, efficient and robust. We test our algorithm on synthetic data and publicly available real data. Our quantitative tests show that our method outperforms all prior work of uncalibrated photometric stereo under orthographic projection.",
"title": ""
},
{
"docid": "264fef3aa71df1f661f2b94461f9634c",
"text": "This paper presents a new control method for cascaded connected H-bridge converter-based static compensators. These converters have classically been commutated at fundamental line frequencies, but the evolution of power semiconductors has allowed the increase of switching frequencies and power ratings of these devices, permitting the use of pulsewidth modulation techniques. This paper mainly focuses on dc-bus voltage balancing problems and proposes a new control technique (individual voltage balancing strategy), which solves these balancing problems, maintaining the delivered reactive power equally distributed among all the H-bridges of the converter.",
"title": ""
},
{
"docid": "c926d9a6b6fe7654e8409ae855bdeb20",
"text": "A low-power, 40-Gb/s optical transceiver front-end is demonstrated in a 45-nm silicon-on-insulator (SOI) CMOS process. Both single-ended and differential optical modulators are demonstrated with floating-body transistors to reach output swings of more than 2 VPP and 4 VPP, respectively. A single-ended gain of 7.6 dB is measured over 33 GHz. The optical receiver consists of a transimpedance amplifier (TIA) and post-amplifier with 55 dB ·Ω of transimpedance over 30 GHz. The group-delay variation is ±3.9 ps over the 3-dB bandwidth and the average input-referred noise density is 20.5 pA/(√Hz) . The TIA consumes 9 mW from a 1-V supply for a transimpedance figure of merit of 1875 Ω /pJ. This represents the lowest power consumption for a transmitter and receiver operating at 40 Gb/s in a CMOS process.",
"title": ""
},
{
"docid": "60556a58af0196cc0032d7237636ec52",
"text": "This paper investigates what students understand about algorithm efficiency before receiving any formal instruction on the topic. We gave students a challenging search problem and two solutions, then asked them to identify the more efficient solution and to justify their choice. Many students did not use the standard worst-case analysis of algorithms; rather they chose other metrics, including average-case, better for more cases, better in all cases, one algorithm being more correct, and better for real-world scenarios. Students were much more likely to choose the correct algorithm when they were asked to trace the algorithms on specific examples; this was true even if they traced the algorithms incorrectly.",
"title": ""
},
{
"docid": "368996ab544c51c540afe129ffb65275",
"text": "Humans are experts at high-fidelity imitation – closely mimicking a demonstration, often in one attempt. Humans use this ability to quickly solve a task instance, and to bootstrap learning of new tasks. Achieving these abilities in autonomous agents is an open problem. In this paper, we introduce an off-policy RL algorithm (MetaMimic) to narrow this gap. MetaMimic can learn both (i) policies for high-fidelity one-shot imitation of diverse novel skills, and (ii) policies that enable the agent to solve tasks more efficiently than the demonstrators. MetaMimic relies on the principle of storing all experiences in a memory and replaying these to learn massive deep neural network policies by off-policy RL. This paper introduces, to the best of our knowledge, the largest existing neural networks for deep RL and shows that larger networks with normalization are needed to achieve one-shot high-fidelity imitation on a challenging manipulation task. The results also show that both types of policy can be learned from vision, in spite of the task rewards being sparse, and without access to demonstrator actions.",
"title": ""
},
{
"docid": "771b1e44b26f749f6ecd9fe515159d9c",
"text": "In spoken dialog systems, dialog state tracking refers to the task of correctly inferring the user's goal at a given turn, given all of the dialog history up to that turn. This task is challenging because of speech recognition and language understanding errors, yet good dialog state tracking is crucial to the performance of spoken dialog systems. This paper presents results from the third Dialog State Tracking Challenge, a research community challenge task based on a corpus of annotated logs of human-computer dialogs, with a blind test set evaluation. The main new feature of this challenge is that it studied the ability of trackers to generalize to new entities - i.e. new slots and values not present in the training data. This challenge received 28 entries from 7 research teams. About half the teams substantially exceeded the performance of a competitive rule-based baseline, illustrating not only the merits of statistical methods for dialog state tracking but also the difficulty of the problem.",
"title": ""
},
{
"docid": "c171254eae86ce30c475c4355ed8879f",
"text": "The rapid growth of connected things across the globe has been brought about by the deployment of the Internet of things (IoTs) at home, in organizations and industries. The innovation of smart things is envisioned through various protocols, but the most prevalent protocols are pub-sub protocols such as Message Queue Telemetry Transport (MQTT) and Advanced Message Queuing Protocol (AMQP). An emerging paradigm of communication architecture for IoTs support is Fog computing in which events are processed near to the place they occur for efficient and fast response time. One of the major concerns in the adoption of Fog computing based publishsubscribe protocols for the Internet of things is the lack of security mechanisms because the existing security protocols such as SSL/TSL have a large overhead of computations, storage and communications. To address these issues, we propose a secure, Fog computing based publish-subscribe lightweight protocol using Elliptic Curve Cryptography (ECC) for the Internet of Things. We present analytical proofs and results for resource efficient security, comparing to the existing protocols of traditional Internet.",
"title": ""
},
{
"docid": "e016c72bf2c3173d5c9f4973d03ab380",
"text": "SDN controllers demand tight performance guarantees over the control plane actions performed by switches. For example, traffic engineering techniques that frequently reconfigure the network require guarantees on the speed of reconfiguring the network. Initial experiments show that poor performance of Ternary Content-Addressable Memory (TCAM) control actions (e.g., rule insertion) can inflate application performance by a factor of 2x! Yet, modern switches provide no guarantees for these important control plane actions -- inserting, modifying, or deleting rules.\n In this paper, we present the design and evaluation of Hermes, a practical and immediately deployable framework that offers a novel method for partitioning and optimizing switch TCAM to enable performance guarantees. Hermes builds on recent studies on switch performance and provides guarantees by trading-off a nominal amount of TCAM space for assured performance. We evaluated Hermes using large-scale simulations. Our evaluations show that with less than 5% overheads, Hermes provides 5ms insertion guarantees that translates into an improvement of application level metrics by up to 80%. Hermes is more than 50% better than existing state of the art techniques and provides significant improvement for traditional networks running BGP.",
"title": ""
},
{
"docid": "501d6ec6163bc8b93fd728412a3e97f3",
"text": "This short paper describes our ongoing research on Greenhouse a zero-positive machine learning system for time-series anomaly detection.",
"title": ""
},
{
"docid": "0b2ae99927b9006fd41b07e4d58a2e82",
"text": "Our increasingly digital life provides a wealth of data about our behavior, beliefs, mood, and well-being. This data provides some insight into the lives of patients outside the healthcare setting, and in aggregate can be insightful for the person's mental health and emotional crisis. Here, we introduce this community to some of the recent advancement in using natural language processing and machine learning to provide insight into mental health of both individuals and populations. We advocate using these linguistic signals as a supplement to those that are collected in the health care system, filling in some of the so-called “whitespace” between visits.",
"title": ""
},
{
"docid": "e9229d3ab3e9ec7e5020e50ca23ada0b",
"text": "Human beings have been recently reviewed as ‘metaorganisms’ as a result of a close symbiotic relationship with the intestinal microbiota. This assumption imposes a more holistic view of the ageing process where dynamics of the interaction between environment, intestinal microbiota and host must be taken into consideration. Age-related physiological changes in the gastrointestinal tract, as well as modification in lifestyle, nutritional behaviour, and functionality of the host immune system, inevitably affect the gut microbial ecosystem. Here we review the current knowledge of the changes occurring in the gut microbiota of old people, especially in the light of the most recent applications of the modern molecular characterisation techniques. The hypothetical involvement of the age-related gut microbiota unbalances in the inflamm-aging, and immunosenescence processes will also be discussed. Increasing evidence of the importance of the gut microbiota homeostasis for the host health has led to the consideration of medical/nutritional applications of this knowledge through the development of probiotic and prebiotic preparations specific for the aged population. The results of the few intervention trials reporting the use of pro/prebiotics in clinical conditions typical of the elderly will be critically reviewed.",
"title": ""
},
{
"docid": "c9bc670fae6dd0f2274bb18492260372",
"text": "We present an efficient GPU-based parallel LSH algorithm to perform approximate k-nearest neighbor computation in high-dimensional spaces. We use the Bi-level LSH algorithm, which can compute k-nearest neighbors with higher accuracy and is amenable to parallelization. During the first level, we use the parallel RP-tree algorithm to partition datasets into several groups so that items similar to each other are clustered together. The second level involves computing the Bi-Level LSH code for each item and constructing a hierarchical hash table. The hash table is based on parallel cuckoo hashing and Morton curves. In the query step, we use GPU-based work queues to accelerate short-list search, which is one of the main bottlenecks in LSH-based algorithms. We demonstrate the results on large image datasets with 200,000 images which are represented as 512 dimensional vectors. In practice, our GPU implementation can obtain more than 40X acceleration over a single-core CPU-based LSH implementation.",
"title": ""
},
{
"docid": "fdba7b3ae6e266b938eeb73f5fd93962",
"text": "Prostatic artery embolization (PAE) is an alternative treatment for benign prostatic hyperplasia. Complications are primarily related to non-target embolization. We report a case of ischemic rectitis in a 76-year-old man with significant lower urinary tract symptoms due to benign prostatic hyperplasia, probably related to non-target embolization. Magnetic resonance imaging revealed an 85.5-g prostate and urodynamic studies confirmed inferior vesical obstruction. PAE was performed bilaterally. During the first 3 days of follow-up, a small amount of blood mixed in the stool was observed. Colonoscopy identified rectal ulcers at day 4, which had then disappeared by day 16 post PAE without treatment. PAE is a safe, effective procedure with a low complication rate, but interventionalists should be aware of the risk of rectal non-target embolization.",
"title": ""
},
{
"docid": "e79e94549bca30e3a4483f7fb9992932",
"text": "The use of semantic technologies and Semantic Web ontologies in particular have enabled many recent developments in information integration, search engines, and reasoning over formalised knowledge. Ontology Design Patterns have been proposed to be useful in simplifying the development of Semantic Web ontologies by codifying and reusing modelling best practices. This thesis investigates the quality of Ontology Design Patterns. The main contribution of the thesis is a theoretically grounded and partially empirically evaluated quality model for such patterns including a set of quality characteristics, indicators, measurement methods and recommendations. The quality model is based on established theory on information system quality, conceptual model quality, and ontology evaluation. It has been tested in a case study setting and in two experiments. The main findings of this thesis are that the quality of Ontology Design Patterns can be identified, formalised and measured, and furthermore, that these qualities interact in such a way that ontology engineers using patterns need to make tradeoffs regarding which qualities they wish to prioritise. The developed model may aid them in making these choices. This work has been supported by Jönköping University. Department of Computer and Information Science Linköping University SE-581 83 Linköping, Sweden",
"title": ""
},
{
"docid": "ecd144226fdb065c2325a0d3131fd802",
"text": "The unknown and the invisible exploit the unwary and the uninformed for illicit financial gain and reputation damage.",
"title": ""
},
{
"docid": "a2e597c8e4ff156eaa72a4981b81df8d",
"text": "OBJECTIVE\nAggregation and deposition of amyloid beta (Abeta) in the brain is thought to be central to the pathogenesis of Alzheimer's disease (AD). Recent studies suggest that cerebrospinal fluid (CSF) Abeta levels are strongly correlated with AD status and progression, and may be a meaningful endophenotype for AD. Mutations in presenilin 1 (PSEN1) are known to cause AD and change Abeta levels. In this study, we have investigated DNA sequence variation in the presenilin (PSEN1) gene using CSF Abeta levels as an endophenotype for AD.\n\n\nMETHODS\nWe sequenced the exons and flanking intronic regions of PSEN1 in clinically characterized research subjects with extreme values of CSF Abeta levels.\n\n\nRESULTS\nThis novel approach led directly to the identification of a disease-causing mutation in a family with late-onset AD.\n\n\nINTERPRETATION\nThis finding suggests that CSF Abeta may be a useful endophenotype for genetic studies of AD. Our results also suggest that PSEN1 mutations can cause AD with a large range in age of onset, spanning both early- and late-onset AD.",
"title": ""
},
{
"docid": "72fec6dc287b0aa9aea97a22268c1125",
"text": "Given a symmetric matrix, what is the nearest correlation matrix, that is, the nearest symmetric positive semidefinite matrix with unit diagonal? This problem arises in the finance industry, where the correlations are between stocks. For distance measured in two weighted Frobenius norms we characterize the solution using convex analysis. We show how the modified alternating projections method can be used to compute the solution for the more commonly used of the weighted Frobenius norms. In the finance application the original matrix has many zero or negative eigenvalues; we show that for a certain class of weights the nearest correlation matrix has correspondingly many zero eigenvalues and that this fact can be exploited in the computation.",
"title": ""
}
] |
scidocsrr
|
f1ee3d65fae8212a76e30e038be722c6
|
How to Protect ADS-B: Confidentiality Framework and Efficient Realization Based on Staged Identity-Based Encryption
|
[
{
"docid": "47c723b0c41fb26ed7caa077388e2e1b",
"text": "Automatic dependent surveillance-broadcast (ADS-B) is the communications protocol currently being rolled out as part of next-generation air transportation systems. As the heart of modern air traffic control, it will play an essential role in the protection of two billion passengers per year, in addition to being crucial to many other interest groups in aviation. The inherent lack of security measures in the ADS-B protocol has long been a topic in both the aviation circles and in the academic community. Due to recently published proof-of-concept attacks, the topic is becoming ever more pressing, particularly with the deadline for mandatory implementation in most airspaces fast approaching. This survey first summarizes the attacks and problems that have been reported in relation to ADS-B security. Thereafter, it surveys both the theoretical and practical efforts that have been previously conducted concerning these issues, including possible countermeasures. In addition, the survey seeks to go beyond the current state of the art and gives a detailed assessment of security measures that have been developed more generally for related wireless networks such as sensor networks and vehicular ad hoc networks, including a taxonomy of all considered approaches.",
"title": ""
},
{
"docid": "6d18ef0d7e78a970c46c7c8f68675e85",
"text": "Aircraft data communications and networking are key enablers for civilian air transportation systems to meet projected aviation demands of the next 20 years and beyond. In this paper, we show how the envisioned e-enabled aircraft plays a central role in streamlining system modernization efforts. We show why performance targets such as safety, security, capacity, efficiency, environmental benefit, travel comfort, and convenience will heavily depend on communications, networking and cyber-physical security capabilities of the e-enabled aircraft. The paper provides a comprehensive overview of the state-of-the-art research and standardization efforts. We highlight unique challenges, recent advances, and open problems in enhancing operations as well as certification of the future e-enabled aircraft.",
"title": ""
},
{
"docid": "d83853692581644f3a86ad0e846c48d2",
"text": "This paper investigates cyber security issues with automatic dependent surveillance broadcast (ADS-B) based air traffic control. Before wide-scale deployment in civil aviation, any airborne or ground-based technology must be ensured to have no adverse impact on safe and profitable system operations, both under normal conditions and failures. With ADS-B, there is a lack of a clear understanding about vulnerabilities, how they can impact airworthiness and what failure conditions they can potentially induce. The proposed work streamlines a threat assessment methodology for security evaluation of ADS-B based surveillance. To the best of our knowledge, this work is the first to identify the need for mechanisms to secure ADS-B based airborne surveillance and propose a security solution. This paper presents preliminary findings and results of the ongoing investigation.12",
"title": ""
}
] |
[
{
"docid": "a49b2152082aa23f9b90d298064b9733",
"text": "The number of steps required to compute a function depends, in general, on the type of computer that is used, on the choice of computer program, and on the input-output code. Nevertheless, the results obtained in this paper are so general as to be nearly independent of these considerations.\nA function is exhibited that requires an enormous number of steps to be computed, yet has a “nearly quickest” program: Any other program for this function, no matter how ingeniously designed it may be, takes practically as many steps as this nearly quickest program.\nA different function is exhibited with the property that no matter how fast a program may be for computing this function another program exists for computing the function very much faster.",
"title": ""
},
{
"docid": "75519b3621d66f55202ce4cbecc8bff1",
"text": "belief-network inference Adnan Darwiche and Gregory Provan Rockwell Science Center 1049 Camino Dos Rios Thousand Oaks, CA 91360 fdarwiche, provang@risc.rockwell.com Abstract We describe a new paradigm for implementing inference in belief networks, which consists of two steps: (1) compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG); and (2) answering queries using a simple evaluation algorithm. Each non-leaf node of a Q-DAG represents a numeric operation, a number, or a symbol for evidence. Each leaf node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. It appears that Q-DAGs can be generated using any of the standard algorithms for exact inference in belief networks | we show how they can be generated using the clustering algorithm. The time and space complexity of a Q-DAG generation algorithm is no worse than the time complexity of the inference algorithm on which it is based. The complexity of a Q-DAG evaluation algorithm is linear in the size of the Q-DAG, and such inference amounts to a standard evaluation of the arithmetic expression it represents. The main value of Q-DAGs is in reducing the software and hardware resources required to utilize belief networks in on-line, real-world applications. The proposed framework also facilitates the development of on-line inference on di erent software and hardware platforms due to the simplicity of the Q-DAG evaluation algorithm.",
"title": ""
},
{
"docid": "9817009ca281ae09baf45b5f8bdef87d",
"text": "The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental results show the superior performance of our approach on spectral image classification, community detection, vertex classification and matrix completion tasks.",
"title": ""
},
{
"docid": "df155f17d4d810779ee58bafcaab6f7b",
"text": "OBJECTIVE\nTo explore the types, prevalence and associated variables of cyberbullying among students with intellectual and developmental disability attending special education settings.\n\n\nMETHODS\nStudents (n = 114) with intellectual and developmental disability who were between 12-19 years of age completed a questionnaire containing questions related to bullying and victimization via the internet and cellphones. Other questions concerned sociodemographic characteristics (IQ, age, gender, diagnosis), self-esteem and depressive feelings.\n\n\nRESULTS\nBetween 4-9% of students reported bullying or victimization of bullying at least once a week. Significant associations were found between cyberbullying and IQ, frequency of computer usage and self-esteem and depressive feelings. No associations were found between cyberbullying and age and gender.\n\n\nCONCLUSIONS\nCyberbullying is prevalent among students with intellectual and developmental disability in special education settings. Programmes should be developed to deal with this issue in which students, teachers and parents work together.",
"title": ""
},
{
"docid": "fe194d00c129e05f17e7926d15f37c37",
"text": "Synthesis, simulation and experiment of unequally spaced resonant slotted-waveguide antenna arrays based on the infinite wavelength propagation property of composite right/left-handed (CRLH) waveguide has been demonstrated in this paper. Both the slot element spacing and excitation amplitude of the antenna array can be adjusted to tailor the radiation pattern. A specially designed shorted CRLH waveguide, as the feed structure of the antenna array, is to work at the infinite wavelength propagation frequency. This ensures that all unequally spaced slot elements along the shorted CRLH waveguide wall can be excited either inphase or antiphase. Four different unequally spaced resonant slotted-waveguide antenna arrays are designed to form pencil, flat-topped and difference beam patterns. Through the synthesis, simulation and experiment, it proves that the proposed arrays are able to exhibit better radiation performances than conventional resonant slotted-waveguide antenna arrays.",
"title": ""
},
{
"docid": "7ff0befa9e6d5694228a8199cd3c1c8c",
"text": "This article examined the effects of product aesthetics on several outcome variables in usability tests. Employing a computer simulation of a mobile phone, 60 adolescents (14-17 yrs) were asked to complete a number of typical tasks of mobile phone users. Two functionally identical mobile phones were manipulated with regard to their visual appearance (highly appealing vs not appealing) to determine the influence of appearance on perceived usability, performance measures and perceived attractiveness. The results showed that participants using the highly appealing phone rated their appliance as being more usable than participants operating the unappealing model. Furthermore, the visual appearance of the phone had a positive effect on performance, leading to reduced task completion times for the attractive model. The study discusses the implications for the use of adolescents in ergonomic research.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "c04ae48f1ff779da8a565653c0976636",
"text": "It is widely agreed on that most cognitive processes are contextual in the sense that they depend on the environment, or context, inside which they are carried on. Even concentrating on the issue of contextuality in reasoning, many different notions of context can be found in the Artificial Intelligence literature, see for instance [Giunchiglia 1991a, Giunchiglia & Weyhrauch 1988, Guha 1990, Guha & Lenat 1990, Shoham 1991, McCarthy 1990b]. Our intuition is that reasoning is usually performed on a subset of the global knowledge base; we never consider all we know but only a very small subset of it. The notion of context is used as a means of formalizing this idea of localization. Roughly speaking, we take a context to be the set of facts used locally to prove a given goal plus the inference routines used to reason about them (which in general are different for different sets of facts). Our perspective is similar to that proposed in [McCarthy 1990b, McCarthy 1991]. The goal of this paper is to propose an epistemologically adequate theory of reasoning with contexts. The emphasis is on motivations and intuitions, rather than on technicalities. The two basic definitions are reported in appendix A. Ideas are described incrementally with increasing level of detail. Thus, section 2 describes why contexts are an important notion to consider as part of our ontology. This is achieved also by comparing contexts with situations, another ontologically very important concept. Section 3 then goes more into the technical details and proposes that contexts should be formalized as particular mathematical objects, namely as logical theories. Reasoning with contexts is then formalized as a set of deductions, each deduction carried out inside a context, connected by appropriate \"bridge rules\". Finally, section 4 describes how an important example of common sense reasoning, reasoning about reasoning, can be formalized as multicontextual reasoning.",
"title": ""
},
{
"docid": "9c183992492880d8b6e1a644e014a72f",
"text": "Repeated measures analyses of variance are the method of choice in many studies from experimental psychology and the neurosciences. Data from these fields are often characterized by small sample sizes, high numbers of factor levels of the within-subjects factor(s), and nonnormally distributed response variables such as response times. For a design with a single within-subjects factor, we investigated Type I error control in univariate tests with corrected degrees of freedom, the multivariate approach, and a mixed-model (multilevel) approach (SAS PROC MIXED) with Kenward-Roger's adjusted degrees of freedom. We simulated multivariate normal and nonnormal distributions with varied population variance-covariance structures (spherical and nonspherical), sample sizes (N), and numbers of factor levels (K). For normally distributed data, as expected, the univariate approach with Huynh-Feldt correction controlled the Type I error rate with only very few exceptions, even if samples sizes as low as three were combined with high numbers of factor levels. The multivariate approach also controlled the Type I error rate, but it requires N ≥ K. PROC MIXED often showed acceptable control of the Type I error rate for normal data, but it also produced several liberal or conservative results. For nonnormal data, all of the procedures showed clear deviations from the nominal Type I error rate in many conditions, even for sample sizes greater than 50. Thus, none of these approaches can be considered robust if the response variable is nonnormally distributed. The results indicate that both the variance heterogeneity and covariance heterogeneity of the population covariance matrices affect the error rates.",
"title": ""
},
{
"docid": "2119a6fcc721124690d6cc2fe6552724",
"text": "A development of humanoid robot HRP-2 is presented in this paper. HRP-2 is a humanoid robotics platform, which we developed in phase two of HRP. HRP was a humanoid robotics project, which had run by the Ministry of Economy, Trade and Industry (METI) of Japan from 1998FY to 2002FY for five years. The ability of the biped locomotion of HRP-2 is improved so that HRP-2 can cope with uneven surface, can walk at two third level of human speed, and can walk on a narrow path. The ability of whole body motion of HRP-2 is also improved so that HRP-2 can get up by a humanoid robot's own self if HRP-2 tips over safely. In this paper, the appearance design, the mechanisms, the electrical systems, specifications, and features upgraded from its prototype are also introduced.",
"title": ""
},
{
"docid": "9df6e9bd41b7a5c48f10cd542fa5e6d9",
"text": "Many machine learning problems can be interpreted as learning for matching two types of objects (e.g., images and captions, users and products, queries and documents, etc.). The matching level of two objects is usually measured as the inner product in a certain feature space, while the modeling effort focuses on mapping of objects from the original space to the feature space. This schema, although proven successful on a range of matching tasks, is insufficient for capturing the rich structure in the matching process of more complicated objects. In this paper, we propose a new deep architecture to more effectively model the complicated matching relations between two objects from heterogeneous domains. More specifically, we apply this model to matching tasks in natural language, e.g., finding sensible responses for a tweet, or relevant answers to a given question. This new architecture naturally combines the localness and hierarchy intrinsic to the natural language problems, and therefore greatly improves upon the state-of-the-art models.",
"title": ""
},
{
"docid": "13800973a4bc37f26319c0bb76fce731",
"text": "Light fields are a powerful concept in computational imaging and a mainstay in image-based rendering; however, so far their acquisition required either carefully designed and calibrated optical systems (micro-lens arrays), or multi-camera/multi-shot settings. Here, we show that fully calibrated light field data can be obtained from a single ordinary photograph taken through a partially wetted window. Each drop of water produces a distorted view on the scene, and the challenge of recovering the unknown mapping from pixel coordinates to refracted rays in space is a severely underconstrained problem. The key idea behind our solution is to combine ray tracing and low-level image analysis techniques (extraction of 2D drop contours and locations of scene features seen through drops) with state-of-the-art drop shape simulation and an iterative refinement scheme to enforce photo-consistency across features that are seen in multiple views. This novel approach not only recovers a dense pixel-to-ray mapping, but also the refractive geometry through which the scene is observed, to high accuracy. We therefore anticipate that our inherently self-calibrating scheme might also find applications in other fields, for instance in materials science where the wetting properties of liquids on surfaces are investigated.",
"title": ""
},
{
"docid": "37653b46f34b1418ad7dbfc59cbfe16a",
"text": "The Nonlinear autoregressive exogenous (NARX) model, which predicts the current value of a time series based upon its previous values as well as the current and past values of multiple driving (exogenous) series, has been studied for decades. Despite the fact that various NARX models have been developed, few of them can capture the long-term temporal dependencies appropriately and select the relevant driving series to make predictions. In this paper, we propose a dual-stage attention-based recurrent neural network (DA-RNN) to address these two issues. In the first stage, we introduce an input attention mechanism to adaptively extract relevant driving series (a.k.a., input features) at each time step by referring to the previous encoder hidden state. In the second stage, we use a temporal attention mechanism to select relevant encoder hidden states across all time steps. With this dual-stage attention scheme, our model can not only make predictions effectively, but can also be easily interpreted. Thorough empirical studies based upon the SML 2010 dataset and the NASDAQ 100 Stock dataset demonstrate that the DA-RNN can outperform state-of-the-art methods for time series prediction.",
"title": ""
},
{
"docid": "7bc81d5c42266a75fe46d99a76b0861d",
"text": "Stem cells continue to garner attention by the news media and play a role in public and policy discussions of emerging technologies. As new media platforms develop, it is important to understand how different news media represents emerging stem cell technologies and the role these play in public discussions. We conducted a comparative analysis of newspaper and sports websites coverage of one recent high profile case: Gordie Howe’s stem cell treatment in Mexico. Using qualitative coding methods, we analyzed news articles and readers’ comments from Canadian and US newspapers and sports websites. Results indicate that the efficacy of stem cell treatments is often assumed in news coverage and readers’ comments indicate a public with a wide array of beliefs and perspectives on stem cells and their clinical efficacy. Media coverage that presents uncritical perspectives on unproven stem cell therapies may create patient expectations, may have an affect on policy discussions, and help to feed the marketing of unproven therapies. However, news coverage that provides more balanced or critical coverage of unproven stem cell treatments may also inspire more critical discussion, as reflected in readers’ comments.",
"title": ""
},
{
"docid": "9e35454e25d78714576f140928d4a666",
"text": "Learning commonsense knowledge from natural language text is nontrivial due to reporting bias: people rarely state the obvious, e.g., “My house is bigger than me.” However, while rarely stated explicitly, this trivial everyday knowledge does influence the way people talk about the world, which provides indirect clues to reason about the world. For example, a statement like, “Tyler entered his house” implies that his house is bigger than Tyler. In this paper, we present an approach to infer relative physical knowledge of actions and objects along five dimensions (e.g., size, weight, and strength) from unstructured natural language text. We frame knowledge acquisition as joint inference over two closely related problems: learning (1) relative physical knowledge of object pairs and (2) physical implications of actions when applied to those object pairs. Empirical results demonstrate that it is possible to extract knowledge of actions and objects from language and that joint inference over different types of knowledge improves performance.",
"title": ""
},
{
"docid": "b3449b09e45cb56e2dbd91d82c18752a",
"text": "Applications with a dynamic workload demand need access to a flexible infrastructure to meet performance guarantees and minimize resource costs. While cloud computing provides the elasticity to scale the infrastructure on demand, cloud service providers lack control and visibility of user space applications, making it difficult to accurately scale the underlying infrastructure. Thus, the burden of scaling falls on the user. In this paper, we propose a new cloud service, Dependable Compute Cloud (DC2), that automatically scales the infrastructure to meet the user-specified performance requirements. DC2 employs Kalman filtering to automatically learn the (possibly changing) system parameters for each application, allowing it to proactively scale the infrastructure to meet performance guarantees. DC2 is designed for the cloud it is application-agnostic and does not require any offline application profiling or benchmarking. Our implementation results on OpenStack using a multi-tier application under a range of workload traces demonstrate the robustness and superiority of DC2 over existing rule-based approaches.",
"title": ""
},
{
"docid": "42b1052a0d1e1536228b1b90602051ea",
"text": "Improving the quality of healthcare and the prospects of \"aging in place\" using wireless sensor technology requires solving difficult problems in scale, energy management, data access, security, and privacy. We present AlarmNet, a novel system for assisted living and residential monitoring that uses a two-way flow of data and analysis between the front- and back-ends to enable context-aware protocols that are tailored to residents' individual patterns of living. AlarmNet integrates environmental, physiological, and activity sensors in a scalable heterogeneous architecture. The SenQ query protocol provides real-time access to data and lightweight in-network processing. Circadian activity rhythm analysis learns resident activity patterns and feeds them back into the network to aid context-aware power management and dynamic privacy policies.",
"title": ""
},
{
"docid": "61f5586aa35d4804c336f88603fc18a6",
"text": "The authors use the term, “Group Model Building” (Richardson and Andersen 1995; Vennix 1996; 1999) to refer to a bundle of techniques used to construct system dynamics models working directly with client groups on key strategic decisions. We use facilitated face-to-face meetings to elicit model structure and to engage client teams directly in the process of model conceptualization, formulation, analysis, and decision making.",
"title": ""
},
{
"docid": "b4462bf06bac13af9e40023019619a78",
"text": "Successful schools ensure that all students master basic skills such as reading and math and have strong backgrounds in other subject areas, including science, history, and foreign language. Recently, however, educators and parents have begun to support a broader educational agenda – one that enhances teachers’ and students’ social and emotional skills. Research indicates that social and emotional skills are associated with success in many areas of life, including effective teaching, student learning, quality relationships, and academic performance. Moreover, a recent meta-analysis of over 300 studies showed that programs designed to enhance social and emotional learning significantly improve students’ social and emotional competencies as well as academic performance. Incorporating social and emotional learning programs into school districts can be challenging, as programs must address a variety of topics in order to be successful. One organization, the Collaborative for Academic, Social, and Emotional Learning (CASEL), provides leadership for researchers, educators, and policy makers to advance the science and practice of school-based social and emotional learning programs. According to CASEL, initiatives to integrate programs into schools should include training on social and emotional skills for both teachers and students, and should receive backing from all levels of the district, including the superintendent, school principals, and teachers. Additionally, programs should be field-tested, evidence-based, and founded on sound",
"title": ""
}
] |
scidocsrr
|
00c876c636eb89f05b9aedcdca7fcee3
|
Modeling avalanche breakdown for ESD diodes in integrated circuits
|
[
{
"docid": "3c4219212dfeb01d2092d165be0cfb44",
"text": "Classical substrate noise analysis considers the silicon resistivity of an integrated circuit only as doping dependent besides neglecting diffusion currents as well. In power circuits minority carriers are injected into the substrate and propagate by drift–diffusion. In this case the conductivity of the substrate is spatially modulated and this effect is particularly important in high injection regime. In this work a description of the coupling between majority and minority drift–diffusion currents is presented. A distributed model of the substrate is then proposed to take into account the conductivity modulation and its feedback on diffusion processes. The model is expressed in terms of equivalent circuits in order to be fully compatible with circuit simulators. The simulation results are then discussed for diodes and bipolar transistors and compared to the ones obtained from physical device simulations and measurements. 2014 Published by Elsevier Ltd.",
"title": ""
}
] |
[
{
"docid": "d7ee1f283cf930310743c98ad8137bcf",
"text": "The volume and complexity of diagnostic imaging is increasing at a pace faster than the availability of human expertise to interpret it. Artificial intelligence has shown great promise in classifying two-dimensional photographs of some common diseases and typically relies on databases of millions of annotated images. Until now, the challenge of reaching the performance of expert clinicians in a real-world clinical pathway with three-dimensional diagnostic scans has remained unsolved. Here, we apply a novel deep learning architecture to a clinically heterogeneous set of three-dimensional optical coherence tomography scans from patients referred to a major eye hospital. We demonstrate performance in making a referral recommendation that reaches or exceeds that of experts on a range of sight-threatening retinal diseases after training on only 14,884 scans. Moreover, we demonstrate that the tissue segmentations produced by our architecture act as a device-independent representation; referral accuracy is maintained when using tissue segmentations from a different type of device. Our work removes previous barriers to wider clinical use without prohibitive training data requirements across multiple pathologies in a real-world setting. A novel deep learning architecture performs device-independent tissue segmentation of clinical 3D retinal images followed by separate diagnostic classification that meets or exceeds human expert clinical diagnoses of retinal disease.",
"title": ""
},
{
"docid": "24c744337d831e541f347bbdf9b6b48a",
"text": "Modelling and animation of crawler UGV's caterpillars is a complicated task, which has not been completely resolved in ROS/Gazebo simulators. In this paper, we proposed an approximation of track-terrain interaction of a crawler UGV, perform modelling and simulation of Russian crawler robot \"Engineer\" within ROS/Gazebo and visualize its motion in ROS/RViz software. Finally, we test the proposed model in heterogeneous robot group navigation scenario within uncertain Gazebo environment.",
"title": ""
},
{
"docid": "f1582ae3d1ce78c1ad84ab5e552e29bd",
"text": "The emergence of sensory-guided behavior depends on sensorimotor coupling during development. How sensorimotor experience shapes neural processing is unclear. Here, we show that the coupling between motor output and visual feedback is necessary for the functional development of visual processing in layer 2/3 (L2/3) of primary visual cortex (V1) of the mouse. Using a virtual reality system, we reared mice in conditions of normal or random visuomotor coupling. We recorded the activity of identified excitatory and inhibitory L2/3 neurons in response to transient visuomotor mismatches in both groups of mice. Mismatch responses in excitatory neurons were strongly experience dependent and driven by a transient release from inhibition mediated by somatostatin-positive interneurons. These data are consistent with a model in which L2/3 of V1 computes a difference between an inhibitory visual input and an excitatory locomotion-related input, where the balance between these two inputs is finely tuned by visuomotor experience.",
"title": ""
},
{
"docid": "d9324f415de22d8f2dfbc49c0f81d241",
"text": "Agriculture has been one of the most important industries in human history since it provides humans with absolutely indispensable resources such as food, fiber, and energy. The agriculture industry could be further developed by employing new technologies, in particular, the Internet of Things (IoT). In this paper, we present a connected farm based on IoT systems, which aims to provide smart farming systems for end users. A detailed design and implementation for connected farms are illustrated, and its advantages are explained with service scenarios compared to previous smart farms. We hope this work will show the power of IoT as a disruptive technology helping across multi industries including agriculture.",
"title": ""
},
{
"docid": "d527daf7ae59c7bcf0989cad3183efbe",
"text": "In today’s Web, Web services are created and updated on the fly. It’s already beyond the human ability to analysis them and generate the composition plan manually. A number of approaches have been proposed to tackle that problem. Most of them are inspired by the researches in cross-enterprise workflow and AI planning. This paper gives an overview of recent research efforts of automatic Web service composition both from the workflow and AI planning research community.",
"title": ""
},
{
"docid": "03a6425423516d0f978bb5f8abe0d62d",
"text": "Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence/robotics communities. We will argue that the attempts to allow machines to make ethical decisions or to have rights are misguided. Instead we propose a new science of safety engineering for intelligent artificial agents. In particular we issue a challenge to the scientific community to develop intelligent systems capable of proving that they are in fact safe even under recursive selfimprovement.",
"title": ""
},
{
"docid": "fd9411cfa035139010be0935d9e52865",
"text": "This paper presents a robotic manipulation system capable of autonomously positioning a multi-segment soft fluidic elastomer robot in three dimensions. Specifically, we present an extremely soft robotic manipulator morphology that is composed entirely from low durometer elastomer, powered by pressurized air, and designed to be both modular and durable. To understand the deformation of a single arm segment, we develop and experimentally validate a static deformation model. Then, to kinematically model the multi-segment manipulator, we use a piece-wise constant curvature assumption consistent with more traditional continuum manipulators. In addition, we define a complete fabrication process for this new manipulator and use this process to make multiple functional prototypes. In order to power the robot’s spatial actuation, a high capacity fluidic drive cylinder array is implemented, providing continuously variable, closed-circuit gas delivery. Next, using real-time data from a vision system, we develop a processing and control algorithm that generates realizable kinematic curvature trajectories and controls the manipulator’s configuration along these trajectories. Lastly, we experimentally demonstrate new capabilities offered by this soft fluidic elastomer manipulation system such as entering and advancing through confined three-dimensional environments as well as conforming to goal shape-configurations within a sagittal plane under closed-loop control.",
"title": ""
},
{
"docid": "b9b634c93f2cc216370a94128aeab596",
"text": "Life-cycle models of labor supply predict a positive relationship between hours supplied and transitory changes in wages. We tested this prediction ",
"title": ""
},
{
"docid": "670556463e3204a98b1e407ea0619a1f",
"text": "1 Ekaterina Prasolova-Forland, IDI, NTNU, Sem Salandsv 7-9, N-7491 Trondheim, Norway ekaterip@idi.ntnu.no Abstract This paper discusses awareness support in educational context, focusing on the support offered by collaborative virtual environments. Awareness plays an important role in everyday educational activities, especially in engineering courses where projects and group work is an integral part of the curriculum. In this paper we will provide a general overview of awareness in computer supported cooperative work and then focus on the awareness mechanisms offered by CVEs. We will also discuss the role and importance of these mechanisms in educational context and make some comparisons between awareness support in CVEs and in more traditional tools.",
"title": ""
},
{
"docid": "5e9d63bfc3b4a66e0ead79a2d883adfe",
"text": "Cloud computing is becoming a major trend for delivering and accessing infrastructure on demand via the network. Meanwhile, the usage of FPGAs (Field Programmable Gate Arrays) for computation acceleration has made significant inroads into multiple application domains due to their ability to achieve high throughput and predictable latency, while providing programmability, low power consumption and time-to-value. Many types of workloads, e.g. databases, big data analytics, and high performance computing, can be and have been accelerated by FPGAs. As more and more workloads are being deployed in the cloud, it is appropriate to consider how to make FPGAs and their capabilities available in the cloud. However, such integration is non-trivial due to issues related to FPGA resource abstraction and sharing, compatibility with applications and accelerator logics, and security, among others. In this paper, a general framework for integrating FPGAs into the cloud is proposed and a prototype of the framework is implemented based on OpenStack, Linux-KVM and Xilinx FPGAs. The prototype enables isolation between multiple processes in multiple VMs, precise quantitative acceleration resource allocation, and priority-based workload scheduling. Experimental results demonstrate the effectiveness of this prototype, an acceptable overhead, and good scalability when hosting multiple VMs and processes.",
"title": ""
},
{
"docid": "958cde8dec4d8df9c6b6d83a7740e2d0",
"text": "Distributed applications use replication, implemented by protocols like Paxos, to ensure data availability and transparently mask server failures. This paper presents a new approach to achieving replication in the data center without the performance cost of traditional methods. Our work carefully divides replication responsibility between the network and protocol layers. The network orders requests but does not ensure reliable delivery – using a new primitive we call ordered unreliable multicast (OUM). Implementing this primitive can be achieved with near-zero-cost in the data center. Our new replication protocol, NetworkOrdered Paxos (NOPaxos), exploits network ordering to provide strongly consistent replication without coordination. The resulting system not only outperforms both latencyand throughput-optimized protocols on their respective metrics, but also yields throughput within 2% and latency within 16 μs of an unreplicated system – providing replication without the performance cost.",
"title": ""
},
{
"docid": "4f2e6de82e2a79ce26a4b26b3177e977",
"text": "The World Wide Web has become the hotbed of a multi-billion dollar underground economy among cyber criminals whose victims range from individual Internet users to large corporations and even government organizations. As phishing attacks are increasingly being used by criminals to facilitate their cyber schemes, it is important to develop effective phishing detection tools. In this paper, we propose a rule-based method to detect phishing webpages. We first study a number of phishing websites to examine various tactics employed by phishers and generate a rule set based on observations. We then use Decision Tree and Logistic Regression learning algorithms to apply the rules and achieve 95-99% accuracy, with a false positive rate of 0.5-1.5% and modest false negatives. Thus, it is demonstrated that our rulebased method for phishing detection achieves performance comparable to learning machine based methods, with the great advantage of understandable rules derived from experience. KeywordsPhishing attack, phishing website, rule-based, machine learning, phishing detection, decision tree",
"title": ""
},
{
"docid": "a814fedf9bedf31911f8db43b0d494a5",
"text": "A critical period for language learning is often defined as a sharp decline in learning outcomes with age. This study examines the relevance of the critical period for English speaking proficiency among immigrants in the US. It uses microdata from the 2000 US Census, a model of language acquisition, and a flexible specification of an estimating equation based on 64 age-at-migration dichotomous variables. Self-reported English speaking proficiency among immigrants declines more-or-less monotonically with age at migration, and this relationship is not characterized by any sharp decline or discontinuity that might be considered consistent with a “critical” period. The findings are robust across the various immigrant samples, and between the genders. (110 words).",
"title": ""
},
{
"docid": "f6266e5c4adb4fa24cc353dccccaf6db",
"text": "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widelyused topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With help of these visualization modules, we can interactively refine the clustering results in various ways.",
"title": ""
},
{
"docid": "218c5fdd541a839094e8010ed6a56d22",
"text": "In this paper, we propose a consistent-aware deep learning (CADL) framework for person re-identification in a camera network. Unlike most existing person re-identification methods which identify whether two body images are from the same person, our approach aims to obtain the maximal correct matches for the whole camera network. Different from recently proposed camera network based re-identification methods which only consider the consistent information in the matching stage to obtain a global optimal association, we exploit such consistent-aware information under a deep learning framework where both feature representation and image matching are automatically learned with certain consistent constraints. Specifically, we reach the global optimal solution and balance the performance between different cameras by optimizing the similarity and association iteratively. Experimental results show that our method obtains significant performance improvement and outperforms the state-of-the-art methods by large margins.",
"title": ""
},
{
"docid": "1b5427ff132a4ace0031b667eb6ff5f3",
"text": "The obesity epidemic shows no signs of abating. There is an urgent need to push back against the environmental forces that are producing gradual weight gain in the population. Using data from national surveys, we estimate that affecting energy balance by 100 kilocalories per day (by a combination of reductions in energy intake and increases in physical activity) could prevent weight gain in most of the population. This can be achieved by small changes in behavior, such as 15 minutes per day of walking or eating a few less bites at each meal. Having a specific behavioral target for the prevention of weight gain may be key to arresting the obesity epidemic.",
"title": ""
},
{
"docid": "34976e12739060a443ad0cfbb373fd3b",
"text": "The detection of failures is a fundamental issue for fault-tolerance in distributed systems. Recently, many people have come to realize that failure detection ought to be provided as some form of generic service, similar to IP address lookup or time synchronization. However, this has not been successful so far; one of the reasons being the fact that classical failure detectors were not designed to satisfy several application requirements simultaneously. We present a novel abstraction, called accrual failure detectors, that emphasizes flexibility and expressiveness and can serve as a basic building block to implementing failure detectors in distributed systems. Instead of providing information of a binary nature (trust vs. suspect), accrual failure detectors output a suspicion level on a continuous scale. The principal merit of this approach is that it favors a nearly complete decoupling between application requirements and the monitoring of the environment. In this paper, we describe an implementation of such an accrual failure detector, that we call the /spl phi/ failure detector. The particularity of the /spl phi/ failure detector is that it dynamically adjusts to current network conditions the scale on which the suspicion level is expressed. We analyzed the behavior of our /spl phi/ failure detector over an intercontinental communication link over a week. Our experimental results show that if performs equally well as other known adaptive failure detection mechanisms, with an improved flexibility.",
"title": ""
},
{
"docid": "35724d9d93c5780cac4287fc866a3529",
"text": "Advancing research into autonomous micro aerial vehicle navigation requires data structures capable of representing indoor and outdoor 3D environments. The vehicle must be able to update the map structure in real time using readings from range-finding sensors when mapping unknown areas; it must also be able to look up occupancy information from the map for the purposes of localization and path-planning. Mapping models that have been used for these tasks include voxel grids, multi-level surface maps, and octrees. In this paper, we suggest a new approach to 3D mapping using a multi-volume occupancy grid, or MVOG. MVOGs explicitly store information about both obstacles and free space. This allows us to correct previous potentially erroneous sensor readings by incrementally fusing in new positive or negative sensor information. In turn, this enables extracting more reliable probabilistic information about the occupancy of 3D space. MVOGs outperform existing probabilistic 3D mapping methods in terms of memory usage, due to the fact that observations are grouped together into continuous vertical volumes to save space. We describe the techniques required for mapping using MVOGs, and analyze their performance using indoor and outdoor experimental data.",
"title": ""
},
{
"docid": "a688f040f616faff3db13be4b1c052df",
"text": "Intracellular fucoidanase was isolated from the marine bacterium, Formosa algae strain KMM 3553. The first appearance of fucoidan enzymatic hydrolysis products in a cell-free extract was detected after 4 h of bacterial growth, and maximal fucoidanase activity was observed after 12 h of growth. The fucoidanase displayed maximal activity in a wide range of pH values, from 6.5 to 9.1. The presence of Mg2+, Ca2+ and Ba2+ cations strongly activated the enzyme; however, Cu2+ and Zn2+ cations had inhibitory effects on the enzymatic activity. The enzymatic activity of fucoidanase was considerably reduced after prolonged (about 60 min) incubation of the enzyme solution at 45 °C. The fucoidanase catalyzed the hydrolysis of fucoidans from Fucus evanescens and Fucus vesiculosus, but not from Saccharina cichorioides. The fucoidanase also did not hydrolyze carrageenan. Desulfated fucoidan from F. evanescens was hydrolysed very weakly in contrast to deacetylated fucoidan, which was hydrolysed more actively compared to the native fucoidan from F. evanescens. Analysis of the structure of the enzymatic products showed that the marine bacteria, F. algae, synthesized an α-l-fucanase with an endo-type action that is specific for 1→4-bonds in a polysaccharide molecule built up of alternating three- and four-linked α-l-fucopyranose residues sulfated mainly at position 2.",
"title": ""
},
{
"docid": "0d95f43ba40942b83e5f118b01ebf923",
"text": "Containers are a lightweight virtualization method for running multiple isolated Linux systems under a common host operating system. Container-based computing is revolutionizing the way applications are developed and deployed. A new ecosystem has emerged around the Docker platform to enable container based computing. However, this revolution has yet to reach the HPC community. In this paper, we provide background on Linux Containers and Docker, and how they can be of value to the scientific and HPC community. We will explain some of the use cases that motivate the need for user defined images and the uses of Docker. We will describe early work in deploying and integrating Docker into an HPC environment, and some of the pitfalls and challenges we encountered. We will discuss some of the security implications of using Docker and how we have addressed those for a shared user system typical of HPC centers. We will also provide performance measurements to illustrate the low overhead of containers. While our early work has been on cluster-based/CS-series systems, we will describe some preliminary assessment of supporting Docker on Cray XC series supercomputers, and a potential partnership with Cray to explore the feasibility and approaches to using Docker on large systems. Keywords-Docker; User Defined Images; containers; HPC systems",
"title": ""
}
] |
scidocsrr
|
1f5ff340f15bcde7cc6736ffda487d6c
|
Adaptive Loss Minimization for Semi-Supervised Elastic Embedding
|
[
{
"docid": "04ba17b4fc6b506ee236ba501d6cb0cf",
"text": "We propose a family of learning algorithms based on a new form f regularization that allows us to exploit the geometry of the marginal distribution. We foc us on a semi-supervised framework that incorporates labeled and unlabeled data in a general-p u pose learner. Some transductive graph learning algorithms and standard methods including Suppor t Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize pr op rties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theor e ical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we ob tain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semiupervised algorithms are able to use unlabeled data effectively. Finally we have a brief discuss ion of unsupervised and fully supervised learning within our general framework.",
"title": ""
},
{
"docid": "6228f059be27fa5f909f58fb60b2f063",
"text": "We propose a unified manifold learning framework for semi-supervised and unsupervised dimension reduction by employing a simple but effective linear regression function to map the new data points. For semi-supervised dimension reduction, we aim to find the optimal prediction labels F for all the training samples X, the linear regression function h(X) and the regression residue F0 = F - h(X) simultaneously. Our new objective function integrates two terms related to label fitness and manifold smoothness as well as a flexible penalty term defined on the residue F0. Our Semi-Supervised learning framework, referred to as flexible manifold embedding (FME), can effectively utilize label information from labeled data as well as a manifold structure from both labeled and unlabeled data. By modeling the mismatch between h(X) and F, we show that FME relaxes the hard linear constraint F = h(X) in manifold regularization (MR), making it better cope with the data sampled from a nonlinear manifold. In addition, we propose a simplified version (referred to as FME/U) for unsupervised dimension reduction. We also show that our proposed framework provides a unified view to explain and understand many semi-supervised, supervised and unsupervised dimension reduction techniques. Comprehensive experiments on several benchmark databases demonstrate the significant improvement over existing dimension reduction algorithms.",
"title": ""
},
{
"docid": "da168a94f6642ee92454f2ea5380c7f3",
"text": "One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.",
"title": ""
}
] |
[
{
"docid": "e0cc48dc60f6c79befb8584cee95e9ea",
"text": "Neural Network approaches to time series prediction are briefly discussed, and the need to specify an appropriately sized input window identified. Relevant theoretical results from dynamic systems theory are introduced, and the number of false neighbours heuristic is described, as a means of finding the correct embedding dimension, and thence window size. The method is applied to three time series and the resulting generalisation performance of the trained feed-forward neural network predictors is analysed. It is shown that the heuristics can provide useful information in defining the appropriate network architecture.",
"title": ""
},
{
"docid": "dbafe7db0387b56464ac630404875465",
"text": "Recognition of body posture and motion is an important physiological function that can keep the body in balance. Man-made motion sensors have also been widely applied for a broad array of biomedical applications including diagnosis of balance disorders and evaluation of energy expenditure. This paper reviews the state-of-the-art sensing components utilized for body motion measurement. The anatomy and working principles of a natural body motion sensor, the human vestibular system, are first described. Various man-made inertial sensors are then elaborated based on their distinctive sensing mechanisms. In particular, both the conventional solid-state motion sensors and the emerging non solid-state motion sensors are depicted. With their lower cost and increased intelligence, man-made motion sensors are expected to play an increasingly important role in biomedical systems for basic research as well as clinical diagnostics.",
"title": ""
},
{
"docid": "6fa454fc02b5f52e08e6ab0de657ed6b",
"text": "Large numbers of children in the world are acquiring one language as their native language and subsequently learning another. There are also many children who are acquiring two or more languages simultaneously in early childhood as part of the natural consequences of being a member of bilingual families and communities. Because bilingualism brings about advantages to children that have an effect on their future development, understanding differences between monolinguals and bilinguals becomes a question of interest. However, on tests of vocabulary bilinguals frequently seem to perform at lower levels than monolinguals (Ben Zeev, 1977b; Doyle, Champagne, & Segalowitz, 1978). The reason for this seems to be that bilingual children have to learn two different labels for everything, which reduces the frequency of a particular word in either language (Ben Zeev, 1977b). This makes the task of acquiring, sorting, and differentiating vocabulary and meaning in two languages much more difficult when compared to the monolingual child’s task in one language (Doyle et al., 1978). Many researchers (Genesee & Nicoladis, 1995; Patterson, 1998; Pearson, Fernandez, and Oller, 1993) have raised questions about the appropriateness of using monolingual vocabulary norms to evaluate bilinguals. In the past, when comparing monolingual and bilingual performance, researchers mainly considered only one language of the bilingual (Ben Zeev, 1977b; Bialystok, 1988; Doyle et al., 1978). However, there is considerable evidence of a vocabulary overlap in the lexicon of bilingual children’s two languages, differing from child to child (Umbel, Pearson, Fernandez, and Oller, 1992). This vocabulary overlap is attributed to the child acquiring each language in different contexts resulting in some areas of complementary knowledge across the two languages (Saunders, 1982). 
It is crucial to examine both languages of bilingual children and account for this overlap in order to assess the size of bilinguals’ vocabulary with validity. This has been very difficult to do, since there are a few standardized measures for vocabulary knowledge in two languages concurrently and no measure are normed for bilingual preschool age children. It has been suggested that when the vocabulary scores of tests in both languages of the bilingual child are combined, their vocabulary equals or exceeds that of monolingual children (Bialystok, 1988; Doyle et al., 1978; Genesee & Nicoladis, 1995). However, this measure of Total Vocabulary (total scores achieved in language A + language B) is not sufficient for the examination of differences in vocabulary size of bilinguals and monolinguals due to the vocabulary overlap. A measure of total unique words or Conceptual Vocabulary, which is a combination of vocabulary scores in both languages considering words describing the same concept as one word, provides additional information about bilinguals’ vocabulary size with regards to knowledge of concepts. Pearson et al. (1993) conducted the only study considering both Total Vocabulary (language A + language B) and Conceptual Vocabulary (language A U language B) for bilingual children in comparison to their monolingual peers. Based on a sample of 25 simultaneous English/Spanish bilinguals and 35 monolinguals it was suggested that there exists no basis for concluding that the bilingual children were slower to develop early vocabulary than were their monolingual peers. There is a possibility that quite the opposite is true with regards to vocabulary comprehension when both languages are involved. There is a need for further study evaluating vocabulary size of preschool bilinguals to verify patterns identified by Pearson et al. (1993).",
"title": ""
},
{
"docid": "7ec6790b96e9185bf822eea3a27ad7ab",
"text": "Multi-level converter architectures have been explored for a variety of applications including high-power DC-AC inverters and DC-DC converters. In this work, we explore flying-capacitor multi-level (FCML) DC-DC topologies as a class of hybrid switched-capacitor/inductive converter. Compared to other candidate architectures in this area (e.g. Series-Parallel, Dickson), FCML converters have notable advantages such as the use of single-rated low-voltage switches, potentially lower switching loss, lower passive component volume, and enable regulation across the full VDD-VOUT range. It is shown that multimode operation, including previously published resonant and dynamic off-time modulation, form a single set of techniques that can be used to extend high efficiency over a wide power density range. Some of the general operating considerations of FCML converters, such as the challenge of maintaining voltage balance on flying capacitors, are shown to be of equal concern in other soft-switched SC converter topologies. Experimental verification from a 24V:12V, 3-level converter is presented to show multimode operation with a nominally 2:1 topology. A second 50V:7V 4-level FCML converter demonstrates operation with variable regulation. A method is presented to balance flying capacitor voltages through low frequency closed-loop feedback.",
"title": ""
},
{
"docid": "b69dd5f570f9a1996fe743d5038dbc6c",
"text": "With the development of deep learning and artificial intelligence, more and more research apply neural networks to natural language processing tasks. However, while the majority of these research take English corpus as the dataset, few studies have been done using Chinese corpus. Meanwhile, Existing Chinese processing algorithms typically regard Chinese word or Chinese character as the basic unit but ignore the deeper information into the Chinese character. In Chinese linguistic, strokes are the basic unit of Chinese character who are similar to letters of the English word. Inspired by the recent success of deep learning at character-level, we delve deeper to Chinese stroke level for Chinese language processing and developed it into service for Chinese text classification. In this paper, we dig the basic feature of the strokes considering the similar Chinese character components and propose a new method to leverage Chinese stroke for learning the continuous representation of Chinese character and develop it into a service for Chinese text classification. We develop a dedicated neural architecture based on the convolutional neural network to effectively learn character embedding and apply it to Chinese word similarity judgment and Chinese text classification. Both experiments results show that the stroke level method is effective for Chinese language processing.",
"title": ""
},
{
"docid": "7530a79035a1d2b73d7ef5e38dda942b",
"text": "Representing images and videos with Symmetric Positive Definite (SPD) matrices, and considering the Riemannian geometry of the resulting space, has been shown to yield high discriminative power in many visual recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices –especially of high-dimensional ones– comes at a high cost that limits the applicability of existing techniques. In this paper, we introduce algorithms able to handle high-dimensional SPD matrices by constructing a lower-dimensional SPD manifold. To this end, we propose to model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. This lets us formulate dimensionality reduction as the problem of finding a projection that yields a low-dimensional manifold either with maximum discriminative power in the supervised scenario, or with maximum variance of the data in the unsupervised one. We show that learning can be expressed as an optimization problem on a Grassmann manifold and discuss fast solutions for special cases. Our evaluation on several classification tasks evidences that our approach leads to a significant accuracy gain over state-of-the-art methods.",
"title": ""
},
{
"docid": "6dfe8b18e3d825b2ecfa8e6b353bbb99",
"text": "In the last decade tremendous effort has been put in the study of the Apollonian circle packings. Given the great variety of mathematics it exhibits, this topic has attracted experts from different fields: number theory, homogeneous dynamics, expander graphs, group theory, to name a few. The principle investigator (PI) contributed to this program in his PhD studies. The scenery along the way formed the horizon of the PI at his early mathematical career. After his PhD studies, the PI has successfully applied tools and ideas from Apollonian circle packings to the studies of topics from various fields, and will continue this endeavor in his proposed research. The proposed problems are roughly divided into three categories: number theory, expander graphs, geometry. Each of which will be discussed in depth in later sections. Since Apollonian circle packing provides main inspirations for this proposal, let’s briefly review how it comes up and what has been done. We start with four mutually circles, with one circle bounding the other three. We can repeatedly inscribe more and more circles into curvilinear triangular gaps as illustrated in Figure 1, and we call the resultant set an Apollonian circle packing, which consists of infinitely many circles.",
"title": ""
},
{
"docid": "1757d8eee607b80b6b590ed8ca1e77b2",
"text": "The proximity of cells in three-dimensional (3D) organization maximizes the cell-cell communication and signaling that are critical for cell function. In this study, 3D cell aggregates composed of human umbilical vein endothelial cells (HUVECs) and cord-blood mesenchymal stem cells (cbMSCs) were used for therapeutic neovascularization to rescue tissues from critical limb ischemia. Within the cell aggregates, homogeneously mixed HUVECs and cbMSCs had direct cell-cell contact with expressions of endogenous extracellular matrices and adhesion molecules. Although dissociated HUVECs/cbMSCs initially formed tubular structures on Matrigel, the grown tubular network substantially regressed over time. Conversely, 3D HUVEC/cbMSC aggregates seeded on Matrigel exhibited an extensive tubular network that continued to expand without regression. Immunostaining experiments show that, by differentiating into smooth muscle cell (SMC) lineages, the cbMSCs stabilize the HUVEC-derived tubular network. The real-time PCR analysis results suggest that, through myocardin, TGF-β signaling regulates the differentiation of cbMSCs into SMCs. Transplantation of 3D HUVEC/cbMSC aggregates recovered blood perfusion in a mouse model of hindlimb ischemia more effectively compared to their dissociated counterparts. The experimental results confirm that the transplanted 3D HUVEC/cbMSC aggregates enhanced functional vessel formation within the ischemic limb and protected it from degeneration. The 3D HUVEC/cbMSC aggregates can therefore facilitate the cell-based therapeutic strategies for modulating postnatal neovascularization.",
"title": ""
},
{
"docid": "9151a96cd2d1552dc15e0d5ff07b6108",
"text": "Making correct decisions often requires analysing large volumes of textual information. Text Mining is a budding new field that endeavours to garner meaningful information from natural language text. Text Mining is the process of applying automatic methods to analyse and structure textual data in order to create useable knowledge from previously unstructured information. Text Mining is inherently interdisciplinary, borrowing heavily from neighbouring fields such as data mining and computational linguistics. Some real application to define the state-of-the-art in Text Mining and to single out future needs and scenarios are collected.",
"title": ""
},
{
"docid": "c59cae78ce3482450776755b9d9d5199",
"text": "Traditional information systems return answers after a user submits a complete query. Users often feel “left in the dark” when they have limited knowledge about the underlying data and have to use a try-and-see approach for finding information. A recent trend of supporting autocomplete in these systems is a first step toward solving this problem. In this paper, we study a new information-access paradigm, called “type-ahead search” in which the system searches the underlying data “on the fly” as the user types in query keywords. It extends autocomplete interfaces by allowing keywords to appear at different places in the underlying data. This framework allows users to explore data as they type, even in the presence of minor errors. We study research challenges in this framework for large amounts of data. Since each keystroke of the user could invoke a query on the backend, we need efficient algorithms to process each query within milliseconds. We develop various incremental-search algorithms for both single-keyword queries and multi-keyword queries, using previously computed and cached results in order to achieve a high interactive speed. We develop novel techniques to support fuzzy search by allowing mismatches between query keywords and answers. We have deployed several real prototypes using these techniques. One of them has been deployed to support type-ahead search on the UC Irvine people directory, which has been used regularly and well received by users due to its friendly interface and high efficiency.",
"title": ""
},
{
"docid": "5d953232681e6815ccd85e2b1b600465",
"text": "Bandwidth and power constraints are the main concerns in current wireless networks because mul tihop ad hoc mobile wireless networks rely on each node in the network to act as a router and packet forwarder This dependency places bandwidth power and computation demands on mobile hosts which must be taken into account when choosing the best routing protocol In recent years protocols that build routes based on demand have been proposed The major goal of on demand routing protocols is to minimize control traffic overhead In this paper we perform a simulation and performance study on some routing protocols for ad hoc networks Distributed Bellman Ford a traditional table driven routing algorithm is simulated to evaluate its performance in multihop wireless networks In addition two on demand routing protocols Dynamic Source Routing and Associativity Based Routing with distinctive route selection algorithms are simulated in a common environment to quantitatively mea sure and contrast their performance The final selection of an appropriate protocol will depend on a variety of factors which are discussed in this paper",
"title": ""
},
{
"docid": "9128809af50519d2d0ef3a0ee520e569",
"text": "It has been experimentally observed that distributed implementations of mini-batch stochastic gradient descent (SGD) algorithms exhibit speedup saturation and decaying generalization ability beyond a particular batch-size. In this work, we present an analysis hinting that high similarity between concurrently processed gradients may be a cause of this performance degradation. We introduce the notion of gradient diversity that measures the dissimilarity between concurrent gradient updates, and show its key role in the performance of mini-batch SGD. We prove that on problems with high gradient diversity, mini-batch SGD is amenable to better speedups, while maintaining the generalization performance of serial (one sample) SGD. We further establish lower bounds on convergence where mini-batch SGD slows down beyond a particular batch-size, solely due to the lack of gradient diversity. We provide experimental evidence indicating the key role of gradient diversity in distributed learning, and discuss how heuristics like dropout, Langevin dynamics, and quantization can improve it.",
"title": ""
},
{
"docid": "d880535f198a1f0a26b18572f674b829",
"text": "Human Activity Recognition (HAR) aims to identify the actions performed by humans using signals collected from various sensors embedded in mobile devices. In recent years, deep learning techniques have further improved HAR performance on several benchmark datasets. In this paper, we propose one-dimensional Convolutional Neural Network (1D CNN) for HAR that employs a divide and conquer-based classifier learning coupled with test data sharpening. Our approach leverages a two-stage learning of multiple 1D CNN models; we first build a binary classifier for recognizing abstract activities, and then build two multi-class 1D CNN models for recognizing individual activities. We then introduce test data sharpening during prediction phase to further improve the activity recognition accuracy. While there have been numerous researches exploring the benefits of activity signal denoising for HAR, few researches have examined the effect of test data sharpening for HAR. We evaluate the effectiveness of our approach on two popular HAR benchmark datasets, and show that our approach outperforms both the two-stage 1D CNN-only method and other state of the art approaches.",
"title": ""
},
{
"docid": "e7ff760dddadf1de42cfc0553f286fe6",
"text": "Fluorine-containing amino acids are valuable probes for the biophysical characterization of proteins. Current methods for (19)F-labeled protein production involve time-consuming genetic manipulation, compromised expression systems and expensive reagents. We show that Escherichia coli BL21, the workhorse of protein production, can utilise fluoroindole for the biosynthesis of proteins containing (19)F-tryptophan.",
"title": ""
},
{
"docid": "c2c38481e67fa3cb63a6e784c9d9144d",
"text": "Some property and casualty insurers use automated detection systems to help to decide whether or not to investigate claims suspected of fraud. Claim screening systems benefit from the coded experience of previously investigated claims. The embedded detection models typically consist of scoring devices relating fraud indicators to some measure of suspicion of fraud. In practice these scoring models often focus on minimizing the error rate rather than on the cost of (mis)classification. We show that focusing on cost is a profitable approach. We analyse the effects of taking into account information on damages and audit costs early on in the screening process. We discuss several scenarios using real-life data. The findings suggest that with claim amount information available at screening time detection rules can be accommodated to increase expected profits. Our results show the value of cost-sensitive claim fraud screening and provide guidance on how to render this strategy operational. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d1622f3a2cf81758fa2084506dcd65f2",
"text": "Students who enrol in the undergraduate program on informatics at the Hellenic Open University (HOU) demonstrate significant difficulties in advancing beyond the introductory courses. We have embarked in an effort to analyse their academic performance throughout the academic year, as measured by the homework assignments, and attempt to derive short rules that explain and predict success or failure in the final exams. In this paper we review previous approaches, compare them with genetic algorithm based induction of decision trees and argue why our approach has a potential for developing into an alert tool.",
"title": ""
},
{
"docid": "5df21fff08a770787ddce9224c611364",
"text": "Data clustering is an important data mining technology that plays a crucial role in numerous scientific applications. However, it is challenging due to the size of datasets has been growing rapidly to extra-large scale in the real world. Meanwhile, MapReduce is a desirable parallel programming platform that is widely applied in kinds of data process fields. In this paper, we propose an efficient parallel density-based clustering algorithm and implement it by a 4-stages MapReduce paradigm. Furthermore, we adopt a quick partitioning strategy for large scale non-indexed data. We study the metric of merge among bordering partitions and make optimizations on it. At last, we evaluate our work on real large scale datasets using Hadoop platform. Results reveal that the speedup and scale up of our work are very efficient.",
"title": ""
},
{
"docid": "3a1cc60b1b6729e06f178ab62d19c59c",
"text": "The Web 2.0 wave brings, among other aspects, the Programmable Web:increasing numbers of Web sites provide machine-oriented APIs and Web services. However, most APIs are only described with text in HTML documents. The lack of machine-readable API descriptions affects the feasibility of tool support for developers who use these services. We propose a microformat called hRESTS (HTML for RESTful Services) for machine-readable descriptions of Web APIs, backed by a simple service model. The hRESTS microformat describes main aspects of services, such as operations, inputs and outputs. We also present two extensions of hRESTS:SA-REST, which captures the facets of public APIs important for mashup developers, and MicroWSMO, which provides support for semantic automation.",
"title": ""
},
{
"docid": "b342443400c85277d4f980a39198ded0",
"text": "We present several optimizations to SPHINCS, a stateless hash-based signature scheme proposed by Bernstein et al. in 2015: PORS, a more secure variant of the HORS few-time signature scheme used in SPHINCS; secret key caching, to speed-up signing and reduce signature size; batch signing, to amortize signature time and reduce signature size when signing multiple messages at once; mask-less constructions to reduce the key size and simplify the scheme; and Octopus, a technique to eliminate redundancies from authentication paths in Merkle trees. Based on a refined analysis of the subset resilience problem, we show that SPHINCS’ parameters can be modified to reduce the signature size while retaining a similar security level and computation time. We then propose Gravity-SPHINCS, our variant of SPHINCS embodying the aforementioned tricks. Gravity-SPHINCS has shorter keys (32 and 64 bytes instead of ≈ 1 KB), shorter signatures (≈ 30 KB instead of 41 KB), and faster signing and verification for the same security level as SPHINCS.",
"title": ""
},
{
"docid": "99ebd04c11db731653ba4b8f26c46208",
"text": "This letter presents a novel computationally efficient and robust pattern tracking method based on a time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly.",
"title": ""
}
] |
scidocsrr
|
7ecdfc08152fce6e5449249f3a8cafd3
|
Word Embeddings, Analogies, and Machine Learning: Beyond king - man + woman = queen
|
[
{
"docid": "f8cb44e765ad86bd18e5401283c7e0bf",
"text": "Distributional models represent a word through the contexts in which it has been observed. They can be used to predict similarity in meaning, based on the distributional hypothesis, which states that two words that occur in similar contexts tend to have similar meanings. Distributional approaches are often implemented in vector space models. They represent a word as a point in high-dimensional space, where each dimension stands for a context item, and a word’s coordinates represent its context counts. Occurrence in similar contexts then means proximity in space. In this survey we look at the use of vector space models to describe the meaning of words and phrases: the phenomena that vector space models address, and the techniques that they use to do so. Many word meaning phenomena can be described in terms of semantic similarity: synonymy, priming, categorization, and the typicality of a predicate’s arguments. But vector space models can do more than just predict semantic similarity. They are a very flexible tool, because they can make use of all of linear algebra, with all its data structures and operations. The dimensions of a vector space can stand for many things: context words, or non-linguistic context like images, or properties of a concept. And vector space models can use matrices or higher-order arrays instead of vectors for representing more complex relationships. Polysemy is a tough problem for distributional approaches, as a representation that is learned from all of a word’s contexts will conflate the different senses of the word. It can be addressed, using either clustering or vector combination techniques. Finally, we look at vector space models for phrases, which are usually constructed by combining word vectors. Vector space models for phrases can predict phrase similarity, and some argue that they can form the basis for a general-purpose representation framework for natural language semantics.",
"title": ""
},
{
"docid": "c45517d21c40bb935b0e1ff4d4ecdf85",
"text": "Recognizing analogies, synonyms, antonyms, and associations appear to be four distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks have been treated independently, using a wide variety of algorithms. These four semantic classes, however, are a tiny sample of the full range of semantic phenomena, and we cannot afford to create ad hoc algorithms for each semantic phenomenon; we need to seek a unified approach. We propose to subsume a broad range of phenomena under analogies. To limit the scope of this paper, we restrict our attention to the subsumption of synonyms, antonyms, and associations. We introduce a supervised corpus-based machine learning algorithm for classifying analogous word pairs, and we show that it can solve multiple-choice SAT analogy questions, TOEFL synonym questions, ESL synonym-antonym questions, and similar-associated-both questions from cognitive psychology.",
"title": ""
},
{
"docid": "7d11d25dc6cd2822d7f914b11b7fe640",
"text": "The authors analyze three critical components in training word embeddings: model, corpus, and training parameters. They systematize existing neural-network-based word embedding methods and experimentally compare them using the same corpus. They then evaluate each word embedding in three ways: analyzing its semantic properties, using it as a feature for supervised tasks, and using it to initialize neural networks. They also provide several simple guidelines for training good word embeddings.",
"title": ""
},
{
"docid": "79a02a35c02858a6510fc92b9eadde4e",
"text": "Distributed word representations have been demonstrated to be effective in capturing semantic and syntactic regularities. Unsupervised representation learning from large unlabeled corpora can learn similar representations for those words that present similar cooccurrence statistics. Besides local occurrence statistics, global topical information is also important knowledge that may help discriminate a word from another. In this paper, we incorporate category information of documents in the learning of word representations and to learn the proposed models in a documentwise manner. Our models outperform several state-of-the-art models in word analogy and word similarity tasks. Moreover, we evaluate the learned word vectors on sentiment analysis and text classification tasks, which shows the superiority of our learned word vectors. We also learn high-quality category embeddings that reflect topical meanings.",
"title": ""
}
] |
[
{
"docid": "0614f84f0a5d62f707d545943b936667",
"text": "A new input-output coupled inductor (IOCI) is proposed for reducing current ripples and magnetic components. Moreover, a current-source-type circuit using active-clamp mechanism and a current doubler with synchronous rectifier are presented to achieve high efficiency in low input-output voltage applications. The configuration of the IOCI is realized by three windings on a common core, and has the properties of an input inductor at the input-side and two output inductors at the output- side. An active clamped ripple-free dc-dc converter using the proposed IOCI is analyzed in detail and optimized for high power efficiency. Experimental results for 80 W (5 V/16 A) at a constant switching frequency of 100 kHz are obtained to show the performance of the proposed converter.",
"title": ""
},
{
"docid": "850483f2db17a4f5d5a48db80d326dd3",
"text": "The Internet has revolutionized healthcare by offering medical information ubiquitously to patients via the web search. The healthcare status, complex medical information needs of patients are expressed diversely and implicitly in their medical text queries. Aiming to better capture a focused picture of user's medical-related information search and shed insights on their healthcare information access strategies, it is challenging yet rewarding to detect structured user intentions from their diversely expressed medical text queries. We introduce a graph-based formulation to explore structured concept transitions for effective user intent detection in medical queries, where each node represents a medical concept mention and each directed edge indicates a medical concept transition. A deep model based on multi-task learning is introduced to extract structured semantic transitions from user queries, where the model extracts word-level medical concept mentions as well as sentence-level concept transitions collectively. A customized graph-based mutual transfer loss function is designed to impose explicit constraints and further exploit the contribution of mentioning a medical concept word to the implication of a semantic transition. We observe an 8% relative improvement in AUC and 23% relative reduction in coverage error by comparing the proposed model with the best baseline model for the concept transition inference task on real-world medical text queries.",
"title": ""
},
{
"docid": "8f6806ba2f75e3671efa2aa390d79b40",
"text": "Applying amendments to multi-element contaminated soils can have contradictory effects on the mobility, bioavailability and toxicity of specific elements, depending on the amendment. Trace elements and PAHs were monitored in a contaminated soil amended with biochar and greenwaste compost over 60 days field exposure, after which phytotoxicity was assessed by a simple bio-indicator test. Copper and As concentrations in soil pore water increased more than 30 fold after adding both amendments, associated with significant increases in dissolved organic carbon and pH, whereas Zn and Cd significantly decreased. Biochar was most effective, resulting in a 10 fold decrease of Cd in pore water and a resultant reduction in phytotoxicity. Concentrations of PAHs were also reduced by biochar, with greater than 50% decreases of the heavier, more toxicologically relevant PAHs. The results highlight the potential of biochar for contaminated land remediation.",
"title": ""
},
{
"docid": "0ce06f95b1dafcac6dad4413c8b81970",
"text": "User acceptance of artificial intelligence agents might depend on their ability to explain their reasoning, which requires adding an interpretability layer that facilitates users to understand their behavior. This paper focuses on adding an interpretable layer on top of Semantic Textual Similarity (STS), which measures the degree of semantic equivalence between two sentences. The interpretability layer is formalized as the alignment between pairs of segments across the two sentences, where the relation between the segments is labeled with a relation type and a similarity score. We present a publicly available dataset of sentence pairs annotated following the formalization. We then develop a system trained on this dataset which, given a sentence pair, explains what is similar and different, in the form of graded and typed segment alignments. When evaluated on the dataset, the system performs better than an informed baseline, showing that the dataset and task are well-defined and feasible. Most importantly, two user studies show how the system output can be used to automatically produce explanations in natural language. Users performed better when having access to the explanations, providing preliminary evidence that our dataset and method to automatically produce explanations is useful in real applications.",
"title": ""
},
{
"docid": "513239885e48a729e6f80a2df2e061c7",
"text": "Schemes for FPE enable one to encrypt Social Security numbers (SSNs), credit card numbers (CCNs), and the like, doing so in such a way that the ciphertext has the same format as the plaintext. In the case of SSNs, for example, this means that the ciphertext, like the plaintext, consists of a nine decimal-digit string. Similarly, encryption of a 16-digit CCN results in a 16-digit ciphertext. FPE is rapidly emerging as a useful cryptographic tool, with applications including financial-information security, data sanitization, and transparently encrypting fields in a legacy database.",
"title": ""
},
{
"docid": "4107e9288ea64d039211acf48a091577",
"text": "The trisomy 18 syndrome can result from a full, mosaic, or partial trisomy 18. The main clinical findings of full trisomy 18 consist of prenatal and postnatal growth deficiency, characteristic facial features, clenched hands with overriding fingers and nail hypoplasia, short sternum, short hallux, major malformations, especially of the heart, andprofound intellectual disability in the surviving older children. The phenotype of partial trisomy 18 is extremely variable. The aim of this article is to systematically review the scientific literature on patients with partial trisomy 18 in order to identify regions of chromosome 18 that may be responsible for the specific clinical features of the trisomy 18 syndrome. We confirmed that trisomy of the short arm of chromosome 18 does not seem to cause the major features. However, we found candidate regions on the long arm of chromosome 18 for some of the characteristic clinical features, and a thus a phenotypic map is proposed. Our findings confirm the hypothesis that single critical regions/candidate genes are likely to be responsible for specific characteristics of the syndrome, while a single critical region for the whole Edwards syndrome phenotype is unlikely to exist.",
"title": ""
},
{
"docid": "f0d17b259b699bc7fb7e8f525ec64db0",
"text": "Developing Intelligent Systems involves artificial intelligence approaches including artificial neural networks. Here, we present a tutorial of Deep Neural Networks (DNNs), and some insights about the origin of the term “deep”; references to deep learning are also given. Restricted Boltzmann Machines, which are the core of DNNs, are discussed in detail. An example of a simple two-layer network, performing unsupervised learning for unlabeled data, is shown. Deep Belief Networks (DBNs), which are used to build networks with more than two layers, are also described. Moreover, examples for supervised learning with DNNs performing simple prediction and classification tasks, are presented and explained. This tutorial includes two intelligent pattern recognition applications: handwritten digits (benchmark known as MNIST) and speech recognition.",
"title": ""
},
{
"docid": "0b87e22007cef7546d7503821919e50b",
"text": "This review focuses on the antibacterial activities of visible light-responsive titanium dioxide (TiO2) photocatalysts. These photocatalysts have a range of applications including disinfection, air and water cleaning, deodorization, and pollution and environmental control. Titanium dioxide is a chemically stable and inert material, and can continuously exert antimicrobial effects when illuminated. The energy source could be solar light; therefore, TiO2 photocatalysts are also useful in remote areas where electricity is insufficient. However, because of its large band gap for excitation, only biohazardous ultraviolet (UV) light irradiation can excite TiO2, which limits its application in the living environment. To extend its application, impurity doping, through metal coating and controlled calcination, has successfully modified the substrates of TiO2 to expand its absorption wavelengths to the visible light region. Previous studies have investigated the antibacterial abilities of visible light-responsive photocatalysts using the model bacteria Escherichia coli and human pathogens. The modified TiO2 photocatalysts significantly reduced the numbers of surviving bacterial cells in response to visible light illumination. They also significantly reduced the activity of bacterial endospores; reducing their toxicity while retaining their germinating abilities. It is suggested that the photocatalytic killing mechanism initially damages the surfaces weak points of the bacterial cells, before totally breakage of the cell membranes. The internal bacterial components then leak from the cells through the damaged sites. Finally, the photocatalytic reaction oxidizes the cell debris. In summary, visible light-responsive TiO2 photocatalysts are more convenient than the traditional UV light-responsive TiO2 photocatalysts because they do not require harmful UV light irradiation to function. 
These photocatalysts, thus, provide a promising and feasible approach for disinfection of pathogenic bacteria; facilitating the prevention of infectious diseases.",
"title": ""
},
{
"docid": "78829447a6cbf0aa020ef098a275a16d",
"text": "Black soldier fly (BSF), Hermetia illucens (L.) is widely used in bio-recycling of human food waste and manure of livestock. Eggs of BSF were commonly collected by egg-trapping technique for mass rearing. To find an efficient lure for BSF egg-trapping, this study compared the number of egg batch trapped by different lures, including fruit, food waste, chicken manure, pig manure, and dairy manure. The result showed that fruit wastes are the most efficient on trapping BSF eggs. To test the effects of fruit species, number of egg batch trapped by three different fruit species, papaya, banana, and pineapple were compared, and no difference were found among fruit species. Environmental factors including temperature, relative humidity, and light intensity were measured and compared in different study sites to examine their effects on egg-trapping. The results showed no differences on temperature, relative humidity, and overall light intensity between sites, but the stability of light environment differed between sites. BSF tend to lay more eggs in site with stable light environment.",
"title": ""
},
{
"docid": "08a62894bac4e272530d1630e720c7ad",
"text": "Recently, along with the rapid development of mobile communication technology, edge computing theory and techniques have been attracting more and more attentions from global researchers and engineers, which can significantly bridge the capacity of cloud and requirement of devices by the network edges, and thus can accelerate the content deliveries and improve the quality of mobile services. In order to bring more intelligence to the edge systems, compared to traditional optimization methodology, and driven by the current deep learning techniques, we propose to integrate the Deep Reinforcement Learning techniques and Federated Learning framework with the mobile edge systems, for optimizing the mobile edge computing, caching and communication. And thus, we design the “In-Edge AI” framework in order to intelligently utilize the collaboration among devices and edge nodes to exchange the learning parameters for a better training and inference of the models, and thus to carry out dynamic system-level optimization and application-level enhancement while reducing the unnecessary system communication load. “In-Edge AI” is evaluated and proved to have near-optimal performance but relatively low overhead of learning, while the system is cognitive and adaptive to the mobile communication systems. Finally, we discuss several related challenges and opportunities for unveiling a promising upcoming future of “In-Edge AI”.",
"title": ""
},
{
"docid": "74959e138f7defce9bf7df2198b46a90",
"text": "In the game industry, especially for free to play games, player retention and purchases are important issues. There have been several approaches investigated towards predicting them by players' behaviours during game sessions. However, most current methods are only available for specific games because the data representations utilised are usually game specific. This work intends to use frequency of game events as data representations to predict both players' disengagement from game and the decisions of their first purchases. This method is able to provide better generality because events exist in every game and no knowledge of any event but their frequency is needed. In addition, this event frequency based method will also be compared with a recent work by Runge et al. [1] in terms of disengagement prediction.",
"title": ""
},
{
"docid": "c995426196ad943df2f5a4028a38b781",
"text": "Today it is quite common for people to exchange hundreds of comments in online conversations (e.g., blogs). Often, it can be very difficult to analyze and gain insights from such long conversations. To address this problem, we present a visual text analytic system that tightly integrates interactive visualization with novel text mining and summarization techniques to fulfill information needs of users in exploring conversations. At first, we perform a user requirement analysis for the domain of blog conversations to derive a set of design principles. Following these principles, we present an interface that visualizes a combination of various metadata and textual analysis results, supporting the user to interactively explore the blog conversations. We conclude with an informal user evaluation, which provides anecdotal evidence about the effectiveness of our system and directions for further design.",
"title": ""
},
{
"docid": "99549d037b403f78f273b3c64181fd21",
"text": "From social media has emerged continuous needs for automatic travel recommendations. Collaborative filtering (CF) is the most well-known approach. However, existing approaches generally suffer from various weaknesses. For example , sparsity can significantly degrade the performance of traditional CF. If a user only visits very few locations, accurate similar user identification becomes very challenging due to lack of sufficient information for effective inference. Moreover, existing recommendation approaches often ignore rich user information like textual descriptions of photos which can reflect users' travel preferences. The topic model (TM) method is an effective way to solve the “sparsity problem,” but is still far from satisfactory. In this paper, an author topic model-based collaborative filtering (ATCF) method is proposed to facilitate comprehensive points of interest (POIs) recommendations for social users. In our approach, user preference topics, such as cultural, cityscape, or landmark, are extracted from the geo-tag constrained textual description of photos via the author topic model instead of only from the geo-tags (GPS locations). Advantages and superior performance of our approach are demonstrated by extensive experiments on a large collection of data.",
"title": ""
},
{
"docid": "ad96c93d4a27ec8a5a1a8168519977ff",
"text": "BACKGROUND\nMovement velocity is an acute resistance-training variable that can be manipulated to potentially optimize dynamic muscular strength development. However, it is unclear whether performing faster or slower repetitions actually influences dynamic muscular strength gains.\n\n\nOBJECTIVE\nWe conducted a systematic review and meta-analysis to examine the effect of movement velocity during resistance training on dynamic muscular strength.\n\n\nMETHODS\nFive electronic databases were searched using terms related to movement velocity and resistance training. Studies were deemed eligible for inclusion if they met the following criteria: randomized and non-randomized comparative studies; published in English; included healthy adults; used isotonic resistance-exercise interventions directly comparing fast or explosive training to slower movement velocity training; matched in prescribed intensity and volume; duration ≥4 weeks; and measured dynamic muscular strength changes.\n\n\nRESULTS\nA total of 15 studies were identified that investigated movement velocity in accordance with the criteria outlined. Fast and moderate-slow resistance training were found to produce similar increases in dynamic muscular strength when all studies were included. However, when intensity was accounted for, there was a trend for a small effect favoring fast compared with moderate-slow training when moderate intensities, defined as 60-79% one repetition maximum, were used (effect size 0.31; p = 0.06). Strength gains between conditions were not influenced by training status and age.\n\n\nCONCLUSIONS\nOverall, the results suggest that fast and moderate-slow resistance training improve dynamic muscular strength similarly in individuals within a wide range of training statuses and ages. Resistance training performed at fast movement velocities using moderate intensities showed a trend for superior muscular strength gains as compared to moderate-slow resistance training. 
Both training practices should be considered for novice to advanced, young and older resistance trainers targeting dynamic muscular strength.",
"title": ""
},
{
"docid": "f7e773113b9006256ab51d975c8f53c5",
"text": "Received 12/4/2013 Accepted 19/6/2013 (006063) 1 Laboratorio Integral de Investigación en Alimentos – LIIA, Instituto Tecnológico de Tepic – ITT, Av. Tecnológico, 2595, CP 63175, Tepic, Nayarit, México, e-mail: efimontalvo@gmail.com 2 Dirección General de Innovación Tecnológica, Centro de Excelencia, Universidad Autónoma de Tamaulipas – UAT, Ciudad Victoria, Tamaulipas, México 3 Centro de Investigación en Ciencia Aplicada y Tecnología Avanzada – CICATA, Instituto Politécnico Nacional – IPN, Querétaro, Querétaro, México *Corresponding author Effect of high hydrostatic pressure on antioxidant content of ‘Ataulfo’ mango during postharvest maturation Viviana Guadalupe ORTEGA1, José Alberto RAMÍREZ2, Gonzalo VELÁZQUEZ3, Beatriz TOVAR1, Miguel MATA1, Efigenia MONTALVO1*",
"title": ""
},
{
"docid": "47d2ebd3794647708d41c6b3d604e796",
"text": "Most stream data classification algorithms apply the supervised learning strategy which requires massive labeled data. Such approaches are impractical since labeled data are usually hard to obtain in reality. In this paper, we build a clustering feature decision tree model, CFDT, from data streams having both unlabeled and a small number of labeled examples. CFDT applies a micro-clustering algorithm that scans the data only once to provide the statistical summaries of the data for incremental decision tree induction. Micro-clusters also serve as classifiers in tree leaves to improve classification accuracy and reinforce the any-time property. Our experiments on synthetic and real-world datasets show that CFDT is highly scalable for data streams while generating high classification accuracy with high speed.",
"title": ""
},
{
"docid": "510a43227819728a77ff0c7fa06fa2d0",
"text": "The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases.",
"title": ""
},
{
"docid": "4c4bfcadd71890ccce9e58d88091f6b3",
"text": "With the dramatic growth of the game industry over the past decade, its rapid inclusion in many sectors of today’s society, and the increased complexity of games, game development has reached a point where it is no longer humanly possible to use only manual techniques to create games. Large parts of games need to be designed, built, and tested automatically. In recent years, researchers have delved into artificial intelligence techniques to support, assist, and even drive game development. Such techniques include procedural content generation, automated narration, player modelling and adaptation, and automated game design. This research is still very young, but already the games industry is taking small steps to integrate some of these techniques in their approach to design. The goal of this seminar was to bring together researchers and industry representatives who work at the forefront of artificial intelligence (AI) and computational intelligence (CI) in games, to (1) explore and extend the possibilities of AI-driven game design, (2) to identify the most viable applications of AI-driven game design in the game industry, and (3) to investigate new approaches to AI-driven game design. To this end, the seminar included a wide range of researchers and developers, including specialists in AI/CI for abstract games, commercial video games, and serious games. Thus, it fostered a better understanding of and unified vision on AI-driven game design, using input from both scientists as well as AI specialists from industry. Seminar November 19–24, 2017 – http://www.dagstuhl.de/17471 1998 ACM Subject Classification I.2.1 Artificial Intelligence Games",
"title": ""
},
{
"docid": "b19fa7fa211e36b0049fd5745e30f0c3",
"text": "Multilevel clock-and-data recovery (CDR) systems are analyzed, modeled, and designed. A stochastic analysis provides probability density functions that are used to estimate the effect of intersymbol interference (ISI) and additive white noise on the characteristics of the phase detector (PD) in the CDR. A slope detector based novel multilevel bang-bang CDR architecture is proposed and modeled using the stochastic analysis and its performance compared with a typical multilevel Alexander PD-based CDR for equal-loop bandwidths. The rms jitter of the CDRs are predicted using a linear jitter model and a Markov chain and verified using behavioral simulations. Jitter tolerance simulations are also employed to compare the two CDRs. Both analytical calculations and behavioral simulations predict that at equal-loop bandwidths, the proposed architecture is superior to the Alexander type CDR at large ISI and low signal-to-noise ratios.",
"title": ""
},
{
"docid": "4c0427bd87ef200484f0a510e8acb0de",
"text": "Recent deep learning (DL) models are moving more and more to dynamic neural network (NN) architectures, where the NN structure changes for every data sample. However, existing DL programming models are inefficient in handling dynamic network architectures because of: (1) substantial overhead caused by repeating dataflow graph construction and processing every example; (2) difficulties in batched execution of multiple samples; (3) inability to incorporate graph optimization techniques such as those used in static graphs. In this paper, we present “Cavs”, a runtime system that overcomes these bottlenecks and achieves efficient training and inference of dynamic NNs. Cavs represents a dynamic NN as a static vertex function F and a dynamic instance-specific graph G. It avoids the overhead of repeated graph construction by only declaring and constructing F once, and allows for the use of static graph optimization techniques on pre-defined operations in F . Cavs performs training and inference by scheduling the execution of F following the dependencies in G, hence naturally exposing batched execution opportunities over different samples. Experiments comparing Cavs to state-of-the-art frameworks for dynamic NNs (TensorFlow Fold, PyTorch and DyNet) demonstrate the efficacy of our approach: Cavs achieves a near one order of magnitude speedup on training of dynamic NN architectures, and ablations verify the effectiveness of our proposed design and optimizations.",
"title": ""
}
] |
scidocsrr
|
fed9694336c6085ed06a590e0c821402
|
New Simple-Structured AC Solid-State Circuit Breaker
|
[
{
"docid": "6af7f70f0c9b752d3dbbe701cb9ede2a",
"text": "This paper addresses real and reactive power management strategies of electronically interfaced distributed generation (DG) units in the context of a multiple-DG microgrid system. The emphasis is primarily on electronically interfaced DG (EI-DG) units. DG controls and power management strategies are based on locally measured signals without communications. Based on the reactive power controls adopted, three power management strategies are identified and investigated. These strategies are based on 1) voltage-droop characteristic, 2) voltage regulation, and 3) load reactive power compensation. The real power of each DG unit is controlled based on a frequency-droop characteristic and a complementary frequency restoration strategy. A systematic approach to develop a small-signal dynamic model of a multiple-DG microgrid, including real and reactive power management strategies, is also presented. The microgrid eigenstructure, based on the developed model, is used to 1) investigate the microgrid dynamic behavior, 2) select control parameters of DG units, and 3) incorporate power management strategies in the DG controllers. The model is also used to investigate sensitivity of the design to changes of parameters and operating point and to optimize performance of the microgrid system. The results are used to discuss applications of the proposed power management strategies under various microgrid operating conditions",
"title": ""
},
{
"docid": "d8255047dc2e28707d711f6d6ff19e30",
"text": "This paper discusses the design of a 10 kV and 200 A hybrid dc circuit breaker suitable for the protection of the dc power systems in electric ships. The proposed hybrid dc circuit breaker employs a Thompson coil based ultrafast mechanical switch (MS) with the assistance of two additional solid-state power devices. A low-voltage (80 V) metal–oxide–semiconductor field-effect transistors (MOSFETs)-based commutating switch (CS) is series connected with the MS to realize the zero current turn-OFF of the MS. In this way, the arcing issue with the MS is avoided. A 15 kV SiC emitter turn-OFF thyristor-based main breaker (MB) is parallel connected with the MS and CS branch to interrupt the fault current. A stack of MOVs parallel with the MB are used to clamp the voltage across the hybrid dc circuit breaker during interruption. This paper focuses on the electronic parts of the hybrid dc circuit breaker, and a companion paper will elucidate the principle and operation of the fast acting MS and the overall operation of the hybrid dc circuit breaker. The selection and design of both the high-voltage and low-voltage electronic components in the hybrid dc circuit breaker are presented in this paper. The turn-OFF capability of the MB with and without snubber circuit is experimentally tested, validating its suitability for the hybrid dc circuit breaker application. The CSs’ conduction performances are tested up to 200 A, and its current commutating during fault current interruption is also analyzed. Finally, the hybrid dc circuit breaker demonstrated a fast current interruption within 2 ms at 7 kV and 100 A.",
"title": ""
}
] |
[
{
"docid": "d38f9ef3248bf54b7a073beaa186ad42",
"text": "Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments and other perturbations. Existing tracking-by-detection methods either ignore this problem, or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be downweighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On the OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3.8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets.",
"title": ""
},
{
"docid": "99ba1fd6c96dad6d165c4149ac2ce27a",
"text": "In order to solve the unsupervised domain adaptation problem, recent methods focus on the use of adversarial learning to learn the common representation among domains. Although many designs are proposed, they seem to ignore the negative influence of domain-specific characteristics in the transferring process. Besides, they also tend to obliterate these characteristics when extracted, although they are useful for other tasks and somehow help preserve the data. Taking these issues into account, in this paper, we want to design a novel domain-adaptation architecture which disentangles learned features into multiple parts to answer the questions: what features to transfer across domains and what to preserve within domains for other tasks. Towards this, besides jointly matching domain distributions at both image-level and feature-level, we offer a new idea on feature exchange across domains, combined with a novel feed-back loss and a semantic consistency loss, to not only enhance the transferability of the learned common features but also preserve data and semantic information during the exchange process. By performing domain adaptation on two standard digit datasets – MNIST and USPS – we show that our architecture can solve not only the full transfer problem but also the partial transfer problem efficiently. The translated image results also demonstrate the potential of our architecture in image style transfer applications.",
"title": ""
},
{
"docid": "04d7b3e3584d89d5a3bc5c22c3fd1438",
"text": "With the widespread use of information technologies, information networks are becoming increasingly popular to capture complex relationships across various disciplines, such as social networks, citation networks, telecommunication networks, and biological networks. Analyzing these networks sheds light on different aspects of social life such as the structure of societies, information diffusion, and communication patterns. In reality, however, the large scale of information networks often makes network analytic tasks computationally expensive or intractable. Network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information. This facilitates the original network to be easily handled in the new vector space for further analysis. In this survey, we perform a comprehensive review of the current literature on network representation learning in the data mining and machine learning field. We propose new taxonomies to categorize and summarize the state-of-the-art network representation learning techniques according to the underlying learning mechanisms, the network information intended to preserve, as well as the algorithmic designs and methodologies. We summarize evaluation protocols used for validating network representation learning including published benchmark datasets, evaluation methods, and open source algorithms. We also perform empirical studies to compare the performance of representative algorithms on common datasets, and analyze their computational complexity. Finally, we suggest promising research directions to facilitate future study.",
"title": ""
},
{
"docid": "0742314b8099dce0eadaa12f96579209",
"text": "Smart utility network (SUN) communications are an essential part of the smart grid. Major vendors realized the importance of universal standards and participated in the IEEE 802.15.4g standardization effort. Due to the fact that many vendors already have proprietary solutions deployed in the field, the standardization effort was a challenge, but after three years of hard work, the IEEE 802.15.4g standard was published on April 28th, 2012. The publication of this standard is a first step towards establishing common and consistent communication specifications for utilities deploying smart grid technologies. This paper summarizes the technical essence of the standard and how it can be used in smart utility networks.",
"title": ""
},
{
"docid": "38d7107de35f3907c0e42b111883613e",
"text": "On-line social networks have become a massive communication and information channel for users world-wide. In particular, the microblogging platform Twitter is characterized by short-text message exchanges at extremely high rates. In this type of scenario, the detection of emerging topics in text streams becomes an important research area, essential for identifying relevant new conversation topics, such as breaking news and trends. Although emerging topic detection in text is a well established research area, its application to large volumes of streaming text data is quite novel. This makes scalability, efficiency and rapidness the key requirements for any emerging topic detection algorithm in this type of environment.\n Our research addresses the aforementioned problem by focusing on detecting significant and unusual bursts in keyword arrival rates, or bursty keywords. We propose a scalable and fast on-line method that uses normalized individual frequency signals per term and a windowing variation technique. This method reports keyword bursts which can be composed of single or multiple terms, ranked according to their importance. The average complexity of our method is O(n log n), where n is the number of messages in the time window. This complexity allows our approach to be scalable for large streaming datasets. If bursts are only detected and not ranked, the algorithm retains linear complexity O(n), making it the fastest in comparison to the current state-of-the-art. We validate our approach by comparing our performance to similar systems using the TREC Tweet 2011 Challenge tweets, obtaining 91% of matches with LDA, an off-line gold standard used in similar evaluations. In addition, we study Twitter messages related to the SuperBowl football events in 2011 and 2013.",
"title": ""
},
{
"docid": "c69d15a44bcb779394df5776e391ec23",
"text": "Ankylosing spondylitis (AS) is a chronic inflammatory rheumatic disease, characterized by pain and structural and functional impairments, such as reduced mobility and axial deformity, which lead to diminished quality of life. Its treatment includes not only drugs, but also nonpharmacological therapy. Exercise appears to be a promising modality. The aim of this study is to review the current evidence and evaluate the role of exercise, either on land or in water, for the management of patients with AS in the biological era. Systematic review of the literature published until November 2016 in Medline, Embase, Cochrane Library, Web of Science and Scopus databases. Thirty-five studies were included for further analysis (30 concerning land exercise and 5 concerning water exercise; combined or not with biological drugs), comprising a total of 2515 patients. Most studies showed a positive effect of exercise on Bath Ankylosing Spondylitis Disease Activity Index, Bath Ankylosing Spondylitis Functional Index, pain, mobility, function and quality of life. The benefit was statistically significant in randomized controlled trials. Results support a multimodal approach, including educational sessions and maintaining a home-based program. This study highlights the important role of exercise in the management of AS; therefore, it should be encouraged and individually prescribed. More studies with good methodological quality are needed to strengthen the results and to define the specific characteristics of exercise programs that determine better results.",
"title": ""
},
{
"docid": "699836a5b2caf6acde02c4bad16c2795",
"text": "The drilling end-effector is a key unit in an autonomous drilling robot. The perpendicularity of the hole has an important influence on the quality of airplane assembly. Aiming at robot drilling perpendicularity, a micro-adjusting attitude mechanism and a surface normal measurement algorithm are proposed in this paper. In the mechanism, two rounded eccentric discs are used and the small one is embedded in the big one, which keeps the drill’s point static when adjusting the drill’s attitude. Thus, relocating the drill’s point after adjusting the drill attitude can be avoided. Before the micro-adjusting process, four non-coplanar points in space are used to determine a unique sphere. The normal at the drilling point is measured by four laser ranging sensors. The adjusting angles at which the motors should be rotated to adjust attitude can be calculated by using the deviation between the normal and the drill axis. Finally, the motors will drive the two eccentric discs to achieve the micro-adjusting process. Experiments on a drilling robot system and the results demonstrate that the adjusting mechanism and the algorithm for surface normal measurement are effective with high accuracy and efficiency. (1) A micro attitude-adjusting mechanism is designed to adjust the drill’s attitude so that drilling follows the surface normal at the drilling point, improving hole perpendicularity; the drill tip remains fixed before and after adjustment, improving drilling efficiency. (2) Four laser ranging sensors are used, based on the principle that four non-coplanar points in space determine a unique sphere, to measure the normal vector at the drilling point in preparation for adjusting the drill’s attitude.",
"title": ""
},
{
"docid": "a05a953097e5081670f26e85c4b8e397",
"text": "In European science and technology policy, various styles have been developed and institutionalised to govern the ethical challenges of science and technology innovations. In this paper, we give an account of the most dominant styles of the past 30 years, particularly in Europe, seeking to show their specific merits and problems. We focus on three styles of governance: a technocratic style, an applied ethics style, and a public participation style. We discuss their merits and deficits, and use this analysis to assess the potential of the recently established governance approach of 'Responsible Research and Innovation' (RRI). Based on this analysis, we reflect on the current shaping of RRI in terms of 'doing governance'.",
"title": ""
},
{
"docid": "80666930dbabe1cd9d65af762cc4b150",
"text": "Accurate electronic health records are important for clinical care and research as well as ensuring patient safety. It is crucial for misspelled words to be corrected in order to ensure that medical records are interpreted correctly. This paper describes the development of a spelling correction system for medical text. Our spell checker is based on Shannon's noisy channel model, and uses an extensive dictionary compiled from many sources. We also use named entity recognition, so that names are not wrongly corrected as misspellings. We apply our spell checker to three different types of free-text data: clinical notes, allergy entries, and medication orders; and evaluate its performance on both misspelling detection and correction. Our spell checker achieves detection performance of up to 94.4% and correction accuracy of up to 88.2%. We show that high-performance spelling correction is possible on a variety of clinical documents.",
"title": ""
},
{
"docid": "78bc13c6b86ea9a8fda75b66f665c39f",
"text": "We propose a stochastic answer network (SAN) to explore multi-step inference strategies in Natural Language Inference. Rather than directly predicting the results given the inputs, the model maintains a state and iteratively refines its predictions. Our experiments show that SAN achieves state-of-the-art results on three benchmarks: the Stanford Natural Language Inference (SNLI) dataset, the Multi-Genre Natural Language Inference (MultiNLI) dataset and the Quora Question Pairs dataset.",
"title": ""
},
{
"docid": "53ae229e708297bf73cf3a33b32e42da",
"text": "Signal-dependent phase variation, AM/PM, along with amplitude variation, AM/AM, are known to determine nonlinear distortion characteristics of current-mode PAs. However, these distortion effects have been treated separately, putting more weight on the amplitude distortion, while the AM/PM generation mechanisms are yet to be fully understood. Hence, the aim of this work is to present a large-signal physical model that can describe both the AM/AM and AM/PM PA nonlinear distortion characteristics and their internal relationship.",
"title": ""
},
{
"docid": "c6d25017a6cba404922933672a18d08a",
"text": "The Internet of Things (IoT) makes smart objects the ultimate building blocks in the development of cyber-physical smart pervasive frameworks. The IoT has a variety of application domains, including health care. The IoT revolution is redesigning modern health care with promising technological, economic, and social prospects. This paper surveys advances in IoT-based health care technologies and reviews the state-of-the-art network architectures/platforms, applications, and industrial trends in IoT-based health care solutions. In addition, this paper analyzes distinct IoT security and privacy features, including security requirements, threat models, and attack taxonomies from the health care perspective. Further, this paper proposes an intelligent collaborative security model to minimize security risk; discusses how different innovations such as big data, ambient intelligence, and wearables can be leveraged in a health care context; addresses various IoT and eHealth policies and regulations across the world to determine how they can facilitate economies and societies in terms of sustainable development; and provides some avenues for future research on IoT-based health care based on a set of open issues and challenges.",
"title": ""
},
{
"docid": "e33fd686860657a93a0e47807b4cbe24",
"text": "Planning optimal paths for large numbers of robots is computationally expensive. In this thesis, we present a new framework for multirobot path planning called subdimensional expansion, which initially plans for each robot individually, and then coordinates motion among the robots as needed. More specifically, subdimensional expansion initially creates a one-dimensional search space embedded in the joint configuration space of the multirobot system. When the search space is found to be blocked during planning by a robot-robot collision, the dimensionality of the search space is locally increased to ensure that an alternative path can be found. As a result, robots are only coordinated when necessary, which reduces the computational cost of finding a path. Subdimensional expansion is a flexible framework that can be used with multiple planning algorithms. For discrete planning problems, subdimensional expansion can be combined with A* to produce the M* algorithm, a complete and optimal multirobot path planning problem. When the configuration space of individual robots is too large to be explored effectively with A*, subdimensional expansion can be combined with probabilistic planning algorithms to produce sRRT and sPRM. M* is then extended to solve variants of the multirobot path planning algorithm. We present the Constraint Manifold Subsearch (CMS) algorithm to solve problems where robots must dynamically form and dissolve teams with other robots to perform cooperative tasks. Uncertainty M* (UM*) is a variant of M* that handles systems with probabilistic dynamics. Finally, we apply M* to multirobot sequential composition. Results are validated with extensive simulations and experiments on multiple physical robots.",
"title": ""
},
{
"docid": "73d31d63cfaeba5fa7c2d2acc4044ca0",
"text": "Plastics in the marine environment have become a major concern because of their persistence at sea, and adverse consequences to marine life and potentially human health. Implementing mitigation strategies requires an understanding and quantification of marine plastic sources, taking spatial and temporal variability into account. Here we present a global model of plastic inputs from rivers into oceans based on waste management, population density and hydrological information. Our model is calibrated against measurements available in the literature. We estimate that between 1.15 and 2.41 million tonnes of plastic waste currently enters the ocean every year from rivers, with over 74% of emissions occurring between May and October. The top 20 polluting rivers, mostly located in Asia, account for 67% of the global total. The findings of this study provide baseline data for ocean plastic mass balance exercises, and assist in prioritizing future plastic debris monitoring and mitigation strategies.",
"title": ""
},
{
"docid": "e3853e259c3ae6739dcae3143e2074a8",
"text": "A new reference collection of patent documents for training and testing automated categorization systems is established and described in detail. This collection is tailored for automating the attribution of international patent classification codes to patent applications and is made publicly available for future research work. We report the results of applying a variety of machine learning algorithms to the automated categorization of English-language patent documents. This procedure involves a complex hierarchical taxonomy, within which we classify documents into 114 classes and 451 subclasses. Several measures of categorization success are described and evaluated. We investigate how best to resolve the training problems related to the attribution of multiple classification codes to each patent document.",
"title": ""
},
{
"docid": "f160dd844c54dafc8c5265ff0e4d4a05",
"text": "The increasing number of smart phones presents a significant opportunity for the development of m-payment services. Despite the predicted success of m-payment, the market remains immature in most countries. This can be explained by the lack of agreement on standards and business models for all stakeholders in m-payment ecosystem. In this paper, the STOF business model framework is employed to analyze m-payment services from the point of view of one of the key players in the ecosystem i.e., banks. We apply Analytic Hierarchy Process (AHP) method to analyze the critical design issues for four domains of the STOF model. The results of the analysis show that service domain is the most important, followed by technology, organization and finance domains. Security related issues are found to be the most important by bank representatives. The future research can be extended to the m-payment ecosystem by collecting data from different actors from the ecosystem.",
"title": ""
},
{
"docid": "f3d0ae1db485b95b8b6931f8c6f2ea40",
"text": "Spoken language understanding (SLU) is a core component of a spoken dialogue system. In the traditional architecture of dialogue systems, the SLU component treats each utterance independent of each other, and then the following components aggregate the multi-turn information in the separate phases. However, there are two challenges: 1) errors from previous turns may be propagated and then degrade the performance of the current turn; 2) knowledge mentioned in the long history may not be carried into the current turn. This paper addresses the above issues by proposing an architecture using end-to-end memory networks to model knowledge carryover in multi-turn conversations, where utterances encoded with intents and slots can be stored as embeddings in the memory and the decoding phase applies an attention model to leverage previously stored semantics for intent prediction and slot tagging simultaneously. The experiments on Microsoft Cortana conversational data show that the proposed memory network architecture can effectively extract salient semantics for modeling knowledge carryover in the multi-turn conversations and outperform the results using the state-of-the-art recurrent neural network framework (RNN) designed for single-turn SLU.",
"title": ""
},
{
"docid": "b2283fb23a199dbfec42b76dec31ac69",
"text": "Highly accurate indoor localization and tracking of smart phones is critical to pervasive applications. Most radio-based solutions either exploit error-prone power-distance models or require a labor-intensive site survey to construct an RSS fingerprint database. This study offers a new perspective: exploiting RSS readings by their contrast relationship rather than their absolute values, leading to three observations and functions called turn verifying, room distinguishing and entrance discovering. On this basis, we design WaP (WiFi-Assisted Particle filter), an indoor localization and tracking system exploiting particle filters to combine dead reckoning, RSS-based analysis and knowledge of the floor plan. The only prerequisites of WaP are the floor plan and coarse knowledge of which room each AP resides in. The WaP prototype is realized on off-the-shelf smartphones with a limited particle number, typically 400, and validated in a college building covering 1362 m2. Experiment results show that WaP achieves an average localization error of 0.71 m over 100 trajectories by 8 pedestrians.",
"title": ""
},
{
"docid": "10634117fd51d94f9b12b9f0ed034f65",
"text": "Our corpus of descriptive text contains a significant number of long-distance pronominal references (8.4% of the total). In order to account for how these pronouns are interpreted, we re-examine Grosz and Sidner’s theory of the attentional state, and in particular the use of the global focus to supplement centering theory. Our corpus evidence concerning these long-distance pronominal references, as well as studies of the use of descriptions, proper names and ambiguous uses of pronouns, lead us to conclude that a discourse focus stack mechanism of the type proposed by Sidner is essential to account for the use of these referring expressions. We suggest revising the Grosz & Sidner framework by allowing for the possibility that an entity in a focus space may have special status.",
"title": ""
},
{
"docid": "1840d879044662bfb1e6b2ea3ee9c2c8",
"text": "Working memory (WM) training has been reported to benefit abilities as diverse as fluid intelligence (Jaeggi et al., Proceedings of the National Academy of Sciences of the United States of America, 105:6829-6833, 2008) and reading comprehension (Chein & Morrison, Psychonomic Bulletin & Review, 17:193-199, 2010), but transfer is not always observed (for reviews, see Morrison & Chein, Psychonomics Bulletin & Review, 18:46-60, 2011; Shipstead et al., Psychological Bulletin, 138:628-654, 2012). In contrast, recent WM training studies have consistently reported improvement on the trained tasks. The basis for these training benefits has received little attention, however, and it is not known which WM components and/or processes are being improved. Therefore, the goal of the present study was to investigate five possible mechanisms underlying the effects of adaptive dual n-back training on working memory (i.e., improvements in executive attention, updating, and focus switching, as well as increases in the capacity of the focus of attention and short-term memory). In addition to a no-contact control group, the present study also included an active control group whose members received nonadaptive training on the same task. All three groups showed significant improvements on the n-back task from pretest to posttest, but adaptive training produced larger improvements than did nonadaptive training, which in turn produced larger improvements than simply retesting. Adaptive, but not nonadaptive, training also resulted in improvements on an untrained running span task that measured the capacity of the focus of attention. No other differential improvements were observed, suggesting that increases in the capacity of the focus of attention underlie the benefits of adaptive dual n-back training.",
"title": ""
}
] |
scidocsrr
|
df343f7c434386cbe83582a84e00fc2a
|
On Feature Matching and Image Registration for Two-dimensional Forward-scan Sonar Imaging
|
[
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k, and the problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index Terms—Pattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
}
] |
[
{
"docid": "f1e5f8ab0b2ce32553dd5e08f1113b36",
"text": "We examined the hypothesis that an excess accumulation of intramuscular lipid (IMCL) is associated with insulin resistance and that this may be mediated by the oxidative capacity of muscle. Nine sedentary lean (L) and 11 obese (O) subjects, 8 obese subjects with type 2 diabetes mellitus (D), and 9 lean, exercise-trained (T) subjects volunteered for this study. Insulin sensitivity (M) determined during a hyperinsulinemic (40 mU x m(-2)min(-1)) euglycemic clamp was greater (P < 0.01) in L and T, compared with O and D (9.45 +/- 0.59 and 10.26 +/- 0.78 vs. 5.51 +/- 0.61 and 1.15 +/- 0.83 mg x min(-1)kg fat free mass(-1), respectively). IMCL in percutaneous vastus lateralis biopsy specimens by quantitative image analysis of Oil Red O staining was approximately 2-fold higher in D than in L (3.04 +/- 0.39 vs. 1.40 +/- 0.28% area as lipid; P < 0.01). IMCL was also higher in T (2.36 +/- 0.37), compared with L (P < 0.01). The oxidative capacity of muscle determined with succinate dehydrogenase staining of muscle fibers was higher in T, compared with L, O, and D (50.0 +/- 4.4, 36.1 +/- 4.4, 29.7 +/- 3.8, and 33.4 +/- 4.7 optical density units, respectively; P < 0.01). IMCL was negatively associated with M (r = -0.57, P < 0.05) when endurance-trained subjects were excluded from the analysis, and this association was independent of body mass index. However, the relationship between IMCL and M was not significant when trained individuals were included. There was a positive association between the oxidative capacity and M among nondiabetics (r = 0.37, P < 0.05). In summary, skeletal muscle of trained endurance athletes is markedly insulin sensitive and has a high oxidative capacity, despite having an elevated lipid content. In conclusion, the capacity for lipid oxidation may be an important mediator of the association between excess muscle lipid accumulation and insulin resistance.",
"title": ""
},
{
"docid": "d69a8dde296d21f4e3334f436deefdf1",
"text": "In this work, we demonstrate that 3D poses in video can be effectively estimated with a fully convolutional model based on dilated temporal convolutions over 2D keypoints. We also introduce back-projection, a simple and effective semi-supervised training method that leverages unlabeled video data. We start with predicted 2D keypoints for unlabeled video, then estimate 3D poses and finally back-project to the input 2D keypoints. In the supervised setting, our fully-convolutional model outperforms the previous best result from the literature by 6 mm mean per-joint position error on Human3.6M, corresponding to an error reduction of 11%, and the model also shows significant improvements on HumanEva-I. Moreover, experiments with back-projection show that it comfortably outperforms previous state-of-the-art results in semisupervised settings where labeled data is scarce. Code and models are available at https://github.com/ facebookresearch/VideoPose3D",
"title": ""
},
{
"docid": "68bb5cb195c910e0a52c81a42a9e141c",
"text": "With advances in brain-computer interface (BCI) research, a portable few- or single-channel BCI system has become necessary. Most recent BCI studies have demonstrated that the common spatial pattern (CSP) algorithm is a powerful tool in extracting features for multiple-class motor imagery. However, since the CSP algorithm requires multi-channel information, it is not suitable for a few- or single-channel system. In this study, we applied a short-time Fourier transform to decompose a single-channel electroencephalography signal into the time-frequency domain and construct multi-channel information. Using the reconstructed data, the CSP was combined with a support vector machine to obtain high classification accuracies from channels of both the sensorimotor and forehead areas. These results suggest that motor imagery can be detected with a single channel not only from the traditional sensorimotor area but also from the forehead area.",
"title": ""
},
{
"docid": "473f51629f0267530a02472fb1e5b7ac",
"text": "It has been widely reported that a large number of ERP implementations fail to meet expectations. This is indicative, firstly, of the magnitude of the problems involved in ERP systems implementation and, secondly, of the importance of the ex-ante evaluation and selection process of ERP software. This paper argues that ERP evaluation should extend its scope beyond operational improvements arising from the ERP software/product per se to the strategic impact of ERP on the competitive position of the organisation. Due to the complexity of ERP software, the intangible nature of both costs and benefits, which evolve over time, and the organisational, technological and behavioural impact of ERP, a broad perspective of the ERP systems evaluation process is needed. The evaluation has to be both quantitative and qualitative and requires an estimation of the perceived costs and benefits throughout the life-cycle of ERP systems. The paper concludes by providing a framework of the key issues involved in the selection process of ERP software and the associated costs and benefits. European Journal of Information Systems (2001) 10, 204–215.",
"title": ""
},
{
"docid": "4a80d4ecb00fd27b29f342794213fc41",
"text": "Rapid and accurate analysis of platelet count plays an important role in evaluating hemorrhagic status. Therefore, we evaluated platelet counting performance of a hematology analyzer, Celltac F (MEK-8222, Nihon Kohden Corporation, Tokyo, Japan), that features easy use with low reagent consumption and high throughput while occupying minimal space in the clinical laboratory. All blood samples were anticoagulated with dipotassium ethylenediaminetetraacetic acid (EDTA-2K). The samples were stored at room temperature (18°C-22°C) and tested within 4 hours of phlebotomy. We evaluated the counting ability of the Celltac F hematology analyzer by comparing it with the platelet counts obtained by the flow cytometry method that ISLH and ICSH recommended, and also the manual visual method by Unopette (Becton Dickinson Vacutainer Systems). The ICSH/ISLH reference method is based on the fact that platelets can be stained with monoclonal antibodies to CD41 and/or CD61. The dilution ratio was optimized after the precision, coincidence events, and debris counts were confirmed by the reference method. Good correlation of platelet count between the Celltac F and the ICSH/ISLH reference method (r = 0.99), and the manual visual method (r = 0.93) were obtained. The regressions were y = 0.90x + 9.0 and y = 1.11x + 8.4, respectively. We conclude that the Celltac F hematology analyzer for platelet counting was well suited to the ICSH/ISLH reference method for rapidness and reliability.",
"title": ""
},
{
"docid": "dd6b922a2cced45284cd1c67ad3be247",
"text": "Today’s interconnected socio-economic and environmental challenges require the combination and reuse of existing integrated modelling solutions. This paper contributes to this overall research area, by reviewing a wide range of currently available frameworks, systems and emerging technologies for integrated modelling in the environmental sciences. Based on a systematic review of the literature, we group related studies and papers into viewpoints and elaborate on shared and diverging characteristics. Our analysis shows that component-based modelling frameworks and scientific workflow systems have been traditionally used for solving technical integration challenges, but ultimately, the appropriate framework or system strongly depends on the particular environmental phenomenon under investigation. The study also shows that in general individual integrated modelling solutions do not benefit from components and models that are provided by others. It is this island (or silo) situation, which results in low levels of model reuse for multi-disciplinary settings. This seems mainly due to the fact that the field as such is highly complex and diverse. A unique integrated modelling solution, which is capable of dealing with any environmental scenario, seems to be unaffordable because of the great variety of data formats, models, environmental phenomena, stakeholder networks, user perspectives and social aspects. Nevertheless, we conclude that the combination of modelling tools, which address complementary viewpoints such as service-based combined with scientific workflow systems, or resource-modelling on top of virtual research environments could lead to sustainable information systems, which would advance model sharing, reuse and integration. Next steps for improving this form of multi-disciplinary interoperability are sketched.",
"title": ""
},
{
"docid": "f418441593da8db1dcbaa922cccc21fa",
"text": "Sentiment analysis, as a heatedly-discussed research topic in the area of information extraction, has attracted more attention from the beginning of this century. With the rapid development of the Internet, especially the rising popularity of Web2.0 technology, network user has become not only the content maker, but also the receiver of information. Meanwhile, benefiting from the development and maturity of the technology in natural language processing and machine learning, we can widely employ sentiment analysis on subjective texts. In this paper, we propose a supervised learning method on fine-grained sentiment analysis to meet the new challenges by exploring new research ideas and methods to further improve the accuracy and practicability of sentiment analysis. First, this paper presents an improved strength computation method of sentiment word. Second, this paper introduces a sentiment information joint recognition model based on Conditional Random Fields and analyzes the related knowledge of the basic and semantic features. Finally, the experimental results show that our approach and a demo system are feasible and effective.",
"title": ""
},
{
"docid": "a73275f83b94ee3fb1675a125edbb55a",
"text": "Treatment of biowaste, the predominant waste fraction in low- and middle-income settings, offers public health, environmental and economic benefits by converting waste into a hygienic product, diverting it from disposal sites, and providing a source of income. This article presents a comprehensive overview of 13 biowaste treatment technologies, grouped into four categories: (1) direct use (direct land application, direct animal feed, direct combustion), (2) biological treatment (composting, vermicomposting, black soldier fly treatment, anaerobic digestion, fermentation), (3) physico-chemical treatment (transesterification, densification), and (4) thermo-chemical treatment (pyrolysis, liquefaction, gasification). Based on a literature review and expert consultation, the main feedstock requirements, process conditions and treatment products are summarized, and the challenges and trends, particularly regarding the applicability of each technology in the urban low- and middle-income context, are critically discussed. An analysis of the scientific articles published from 2005 to 2015 reveals substantial differences in the amount and type of research published for each technology, a fact that can partly be explained with the development stage of the technologies. Overall, publications from case studies and field research seem disproportionately underrepresented for all technologies. One may argue that this reflects the main task of researchers—to conduct fundamental research for enhanced process understanding—but it may also be a result of the traditional embedding of the waste sector in the discipline of engineering science, where socio-economic and management aspects are seldom object of the research. More unbiased, well-structured and reproducible evidence from case studies at scale could foster the knowledge transfer to practitioners and enhance the exchange between academia, policy and practice.",
"title": ""
},
{
"docid": "ba94bc5f5762017aed0c307ce89c0558",
"text": "Carsharing has emerged as an alternative to vehicle ownership and is a rapidly expanding global market. Particularly through the flexibility of free-floating models, car sharing complements public transport since customers do not need to return cars to specific stations. We present a novel data analytics approach that provides decision support to car sharing operators -- from local start-ups to global players -- in maneuvering this constantly growing and changing market environment. Using a large set of rental data, as well as zero-inflated and geographically weighted regression models, we derive indicators for the attractiveness of certain areas based on points of interest in their vicinity. These indicators are valuable for a variety of operational and strategic decisions. As a demonstration project, we present a case study of Berlin, where the indicators are used to identify promising regions for business area expansion.",
"title": ""
},
{
"docid": "422caa6ceb9713bee7ebfb64f9c46b8f",
"text": "he persistence of illegal activity throughout human history and some of its apparent regularities have long attracted the attention of economists. For example, Adam Smith (1776 [1937], p. 670) observed that crime and the demand for protection from crime are both motivated by the accumulation of property. William Paley (1785 [1822]) presented a penetrating analysis of factors responsible for differences in the actual magnitudes of probability and severity of sanctions for different crimes. Jeremy Bentham, the father of utilitarianism, focused considerable attention on the calculus of both offenders' behavior and the optimal response by the legal authorities. It was not until the late 1960s, however, that economists reconnected with the subject, using modern economic analysis.' In this paper I shall focus on two of the main themes that characterize the literature on crime in the last three decades. The first is the evolution of a \"market model\" that offers a comprehensive framework for studying the problem. Like the classical approach, the model builds on the assumption that offenders, as members of the human race, respond to incentives. Of course, not every single offender does so. But willful engagement in even the most reprehensible violations of legal and moral codes does not preclude an ability to make self-serving choices, and this has been the justification for applying economic analysis to all illegal activities, from speeding and tax evasion to murder.",
"title": ""
},
{
"docid": "9dd245f75092adc8d8bb2b151275789b",
"text": "Current model free learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.",
"title": ""
},
{
"docid": "3867ff9ac24349b17e50ec2a34e84da4",
"text": "Each generation that enters the workforce brings with it its own unique perspectives and values, shaped by the times of their life, about work and the work environment; thus posing atypical human resources management challenges. Following the completion of an extensive quantitative study conducted in Cyprus, and by adopting a qualitative methodology, the researchers aim to further explore the occupational similarities and differences of the two prevailing generations, X and Y, currently active in the workplace. Moreover, the study investigates the effects of the perceptual generational differences on managing the diverse hospitality workplace. Industry implications, recommendations for stakeholders as well as directions for further scholarly research are discussed.",
"title": ""
},
{
"docid": "b0e94a0fdaf280d9e1942befdc4ac660",
"text": "In SCARA robots, which are often used in industrial applications, all joint axes are parallel, covering three degrees of freedom in translation and one degree of freedom in rotation. Therefore, conventional approaches for the hand-eye calibration of articulated robots cannot be used for SCARA robots. In this paper, we present a new linear method that is based on dual quaternions and extends the work of Daniilidis 1999 (IJRR) for SCARA robots. To improve the accuracy, a subsequent nonlinear optimization is proposed. We address several practical implementation issues and show the effectiveness of the method by evaluating it on synthetic and real data.",
"title": ""
},
{
"docid": "3194a0dd979b668bb25afb10260c30d2",
"text": "An octa-band antenna for 5.7-in mobile phones with the size of 80 mm × 6 mm × 5.8 mm is proposed and studied. The proposed antenna is composed of a coupled line, a monopole branch, and a ground branch. By using the 0.25-, 0.5-, and 0.75-wavelength modes, the lower band (704–960 MHz) and the higher band (1710–2690 MHz) are covered. The working mechanism is analyzed based on the S-parameters and the surface current distributions. The attractive merits of the proposed antenna are that the nonground portion height is only 6 mm and any lumped element is not used. A prototype of the proposed antenna is fabricated and measured. The measured −6 dB impedance bandwidths are 350 MHz (0.67–1.02 GHz) and 1.27 GHz (1.65–2.92 GHz) at the lower and higher bands, respectively, which can cover the LTE700, GSM850, GSM900, GSM1800, GSM1900, UMTS, LTE2300, and LTE2500 bands. The measured patterns, gains, and efficiencies are presented.",
"title": ""
},
{
"docid": "b819c10fb84e576cb6444023246b91b0",
"text": "BCAAs (leucine, isoleucine, and valine), particularly leucine, have anabolic effects on protein metabolism by increasing the rate of protein synthesis and decreasing the rate of protein degradation in resting human muscle. Also, during recovery from endurance exercise, BCAAs were found to have anabolic effects in human muscle. These effects are likely to be mediated through changes in signaling pathways controlling protein synthesis. This involves phosphorylation of the mammalian target of rapamycin (mTOR) and sequential activation of 70-kD S6 protein kinase (p70 S6 kinase) and the eukaryotic initiation factor 4E-binding protein 1. Activation of p70 S6 kinase, and subsequent phosphorylation of the ribosomal protein S6, is associated with enhanced translation of specific mRNAs. When BCAAs were supplied to subjects during and after one session of quadriceps muscle resistance exercise, an increase in mTOR, p70 S6 kinase, and S6 phosphorylation was found in the recovery period after the exercise with no effect of BCAAs on Akt or glycogen synthase kinase 3 (GSK-3) phosphorylation. Exercise without BCAA intake led to a partial phosphorylation of p70 S6 kinase without activating the enzyme, a decrease in Akt phosphorylation, and no change in GSK-3. It has previously been shown that leucine infusion increases p70 S6 kinase phosphorylation in an Akt-independent manner in resting subjects; however, a relation between mTOR and p70 S6 kinase has not been reported previously. The results suggest that BCAAs activate mTOR and p70 S6 kinase in human muscle in the recovery period after exercise and that GSK-3 is not involved in the anabolic action of BCAAs on human muscle. J. Nutr. 136: 269S–273S, 2006.",
"title": ""
},
{
"docid": "d6ed9594536cada2d857a876fd9e21ae",
"text": "With the increasing growth of the computing technology and network technology, it also increases data storage demands. Data Security has become a crucial issue in electronic communication. Secret writing has come up as a solution, and plays a vital role in data security system. It uses some algorithms to scramble data into unreadable text which might be only being decrypted by party those having the associated key. These algorithms consume a major amount of computing resources such as memory and battery power and computation time. This paper accomplishes comparative analysis of encryption standards DES, AES and RSA considering various parameters such as computation time, memory usages. A cryptographic tool is used for performing experiments. Experimental results are given to analyze the effectiveness of symmetric and asymmetric algorithms. Keywords— Encryption, secret key encryption, public key encryption, DES, AES, RSA encryption, Symmetric",
"title": ""
},
{
"docid": "9a57dfbbd233c851ae972403c67c35d5",
"text": "It is well established that women’s preferences for masculinity are contingent on their own market-value and the duration of the sought relationship, but few studies have investigated similar effects in men. Here, we tested whether men’s attractiveness predicts their preferences for feminine face shape in women when judging for longand short-term relationship partners. We found that attractive men expressed a stronger preference for facial femininity compared to less attractive men. The relationship was evident when men judged women for a short-term, but not for a long-term, relationship. These findings suggest that market-value may influence men’s preferences for feminine characteristics in women’s faces and indicate that men’s preferences may be subject to facultative variation to a greater degree than was previously thought. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5457f45fa815a4d96b39e982b79836bd",
"text": "Liu, He M., Purdue University, August 2016. Image Quality Estimation: Software for Objective Evaluation. Major Professor: Amy R. Reibman. Digital images are widely used in our daily lives and the quality of images is important to the viewing experience. Low quality images may be blurry or contain noise or compression artifacts. Humans can easily estimate image quality, but it is not practical to use human subjects to measure image quality in real applications. Image Quality Estimators (QE) are algorithms that evaluate image qualities automatically. These QEs compute scores of any input images to represent their qualities. This thesis mainly focuses on evaluating the performance of QEs. Two approaches used in this work are objective software analysis and the subjective database design. For the first, we create a software consisting of functional modules to test QE performances. These modules can load images from subjective databases or generate distortion images from any input images. Their QE scores are computed and analyzed by the statistical method module so that they can be easily interpreted and reported. Some modules in this software are combined and formed into a published software package: Stress Testing Image Quality Estimators (STIQE). In addition to the QE analysis software, a new subjective database is designed and implemented using both online and in-lab subjective tests. The database is designed using the pairwise comparison method and the subjective quality scores are computed using the Bradley-Terry model and Maximum Likelihood Estimation (MLE). While four testing phases are designed for this database, only phase 1 is reported in this",
"title": ""
},
{
"docid": "d76d09ca1e87eb2e08ccc03428c62be0",
"text": "Face recognition has the perception of a solved problem, however when tested at the million-scale exhibits dramatic variation in accuracies across the different algorithms [11]. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos created with the goal to level playing field for large scale face recognition. We contrast our results with findings from the other two large-scale benchmarks MegaFace Challenge and MS-Celebs-1M where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms, trained on MF2, were able to achieve state of the art and comparable results to algorithms trained on massive private sets, 2) some outperformed themselves once trained on MF2, 3) invariance to aging suffers from low accuracies as in MegaFace, identifying the need for larger age variations possibly within identities or adjustment of algorithms in future testing.",
"title": ""
},
{
"docid": "bb75aa9bbe07e635493b123eaaadf74d",
"text": "Right ventricular (RV) pacing increases the incidence of atrial fibrillation (AF) and hospitalization rate for heart failure. Many patients with sinus node dysfunction (SND) are implanted with a DDDR pacemaker to ensure the treatment of slowly conducted atrial fibrillation and atrioventricular (AV) block. Many pacemakers are never reprogrammed after implantation. This study aims to evaluate the effectiveness of programming DDIR with a long AV delay in patients with SND and preserved AV conduction as a possible strategy to reduce RV pacing in comparison with a nominal DDDR setting including an AV search hysteresis. In 61 patients (70 ± 10 years, 34 male, PR < 200 ms, AV-Wenckebach rate at ≥130 bpm) with symptomatic SND a DDDR pacemaker was implanted. The cumulative prevalence of right ventricular pacing was assessed according to the pacemaker counter in the nominal DDDR-Mode (AV delay 150/120 ms after atrial pacing/sensing, AV search hysteresis active) during the first postoperative days and in DDIR with an individually programmed long fixed AV delay after 100 days (median). With the nominal DDDR mode the median incidence of right ventricular pacing amounted to 25.2%, whereas with DDIR and long AV delay the median prevalence of RV pacing was significantly reduced to 1.1% (P < 0.001). In 30 patients (49%) right ventricular pacing was almost completely (<1%) eliminated, n = 22 (36%) had >1% <20% and n = 4 (7%) had >40% right ventricular pacing. The median PR interval was 161 ms. The median AV interval with DDIR was 280 ms. The incidence of right ventricular pacing in patients with SND and preserved AV conduction, who are treated with a dual chamber pacemaker, can significantly be reduced by programming DDIR with a long, individually adapted AV delay when compared with a nominal DDDR setting, but nonetheless in some patients this strategy produces a high proportion of disadvantageous RV pacing. The DDIR mode with long AV delay provides an effective strategy to reduce unnecessary right ventricular pacing but the effect has to be verified in every single patient.",
"title": ""
}
] |
scidocsrr
|
4a44cc2e398b3d487398599e64809a59
|
A Crowd Monitoring Framework using Emotion Analysis of Social Media for Emergency Management in Mass Gatherings
|
[
{
"docid": "b6260c8d87bdab38bbebb821def51f6b",
"text": "The understanding of crowd behaviour in semi-confined spaces is an important part of the design of new pedestrian facilities, for major layout modifications to existing areas and for the daily management of sites subject to crowd traffic. Conventional manual measurement techniques are not suitable for comprehensive data collection of patterns of site occupation and movement. Real-time monitoring is tedious and tiring, but safety-critical. This article presents some image processing techniques which, using existing closed-circuit television systems, can support both data collection and on-line monitoring of crowds. The application of these methods could lead to a better understanding of crowd behaviour, improved design of the built environment and increased pedestrian safety.",
"title": ""
},
{
"docid": "49740b1faa60a212297926fec63de0ce",
"text": "In addition to information, text contains attitudinal, and more specifically, emotional content. This paper explores the text-based emotion prediction problem empirically, using supervised machine learning with the SNoW learning architecture. The goal is to classify the emotional affinity of sentences in the narrative domain of children’s fairy tales, for subsequent usage in appropriate expressive rendering of text-to-speech synthesis. Initial experiments on a preliminary data set of 22 fairy tales show encouraging results over a naïve baseline and BOW approach for classification of emotional versus non-emotional contents, with some dependency on parameter tuning. We also discuss results for a tripartite model which covers emotional valence, as well as feature set alternations. In addition, we present plans for a more cognitively sound sequential model, taking into consideration a larger set of basic emotions.",
"title": ""
}
] |
[
{
"docid": "4315cbfa13e9a32288c1857f231c6410",
"text": "The likelihood of soft errors increase with system complexity, reduction in operational voltages, exponential growth in transistors per chip, increases in clock frequencies and device shrinking. As the memory bit-cell area is condensed, single event upset that would have formerly despoiled only a single bit-cell are now proficient of upsetting multiple contiguous memory bit-cells per particle strike. While these error types are beyond the error handling capabilities of the frequently used error correction codes (ECCs) for single bit, the overhead associated with moving to more sophisticated codes for multi-bit errors is considered to be too costly. To address this issue, this paper presents a new approach to detect and correct multi-bit soft error by using Horizontal-Vertical-Double-Bit-Diagonal (HVDD) parity bits with a comparatively low overhead.",
"title": ""
},
{
"docid": "6fe435aa3a1efe01ef35575bd383efe5",
"text": "In order to survive in the present day global competitive environment, it now becomes essential for the manufacturing organizations to take prompt and correct decisions regarding effective use of their scarce resources. Various multi-criteria decision-making (MCDM) methods are now available to help those organizations in choosing the best decisive course of actions. In this paper, the applicability of weighted aggregated sum product assessment (WASPAS) method is explored as an effective MCDM tool while solving eight manufacturing decision making problems, such as selection of cutting fluid, electroplating system, forging condition, arc welding process, industrial robot, milling condition, machinability of materials, and electro-discharge micro-machining process parameters. It is observed that this method has the capability of accurately ranking the alternatives in all the considered selection problems. The effect of the parameter λ on the ranking performance of WASPAS method is also studied.",
"title": ""
},
{
"docid": "eab5044761dabda84529fc41fb6022ba",
"text": "Fundamental frequency (f0) estimation from polyphonic music includes the tasks of multiple-f0, melody, vocal, and bass line estimation. Historically these problems have been approached separately, and only recently, using learning-based approaches. We present a multitask deep learning architecture that jointly estimates outputs for various tasks including multiple-f0, melody, vocal and bass line estimation, and is trained using a large, semi-automatically annotated dataset. We show that the multitask model outperforms its single-task counterparts, and explore the effect of various design decisions in our approach, and show that it performs better or at least competitively when compared against strong baseline methods.",
"title": ""
},
{
"docid": "bd8b7b892060d8099217ef8553c79b71",
"text": "Purpose: The purpose of this study is to examine the barriers that SMEs are experiencing when confronted with the need to implement e-commerce to sustain their competitiveness. E-commerce is the medium that leads to economic growth of a country. Small and Medium Enterprises (SMEs) play an important role in contributing to the Gross Domestic Product and reducing the unemployment. However, there are some specific factors that inhibit the implementation of e-commerce among SMEs. Design/methodology/approach: A questionnaire approach was employed in this study and 160 questionnaires have been distributed but only 91usable questionnaires have been collected from SMEs. Literature found that main barriers to e-commerce adoption among SMEs are organizational barriers, financial barriers, technical barriers, legal and regulatory barriers, and behavioral barriers. Findings: Of this study showed that all these barriers carried an average influence on ecommerce adoption. The most important factor barriers of e-commerce adoption are legal and regulatory barriers followed by technical barriers, whereas lack of internet security is the highest barrier factor that inhibits the implementation of e-commerce in SMEs followed by the requirement to undertake additional training and skill development. Practical implications: This paper is useful for the management of SMEs in understanding and gaining insights into the real and potential barriers to e-commerce adoption. This can help the organization to design strategy in taking up barriers tactfully to its advantage.",
"title": ""
},
{
"docid": "fe7668dd82775cf02116faacd1dd945f",
"text": "In the last years, the advent of unmanned aerial vehicles (UAVs) for civilian remote sensing purposes has generated a lot of interest because of the various new applications they can offer. One of them is represented by the automatic detection and counting of cars. In this paper, we propose a novel car detection method. It starts with a feature extraction process based on scalar invariant feature transform (SIFT) thanks to which a set of keypoints is identified in the considered image and opportunely described. Successively, the process discriminates between keypoints assigned to cars and those associated with all remaining objects by means of a support vector machine (SVM) classifier. Experimental results have been conducted on a real UAV scene. They show how the proposed method allows providing interesting detection performances.",
"title": ""
},
{
"docid": "5523695d47205129d0e5f6916d2d14f1",
"text": "A phenomenal growth in the number of credit card transactions, especially for online purchases, has recently led to a substantial rise in fraudulent activities. Implementation of efficient fraud detection systems has thus become imperative for all credit card issuing banks to minimize their losses. In real life, fraudulent transactions are interspersed with genuine transactions and simple pattern matching is not often sufficient to detect them accurately. Thus, there is a need for combining both anomaly detection as well as misuse detection techniques. In this paper, we propose to use two-stage sequence alignment in which a profile analyzer (PA) first determines the similarity of an incoming sequence of transactions on a given credit card with the genuine cardholder's past spending sequences. The unusual transactions traced by the profile analyzer are next passed on to a deviation analyzer (DA) for possible alignment with past fraudulent behavior. The final decision about the nature of a transaction is taken on the basis of the observations by these two analyzers. In order to achieve online response time for both PA and DA, we suggest a new approach for combining two sequence alignment algorithms BLAST and SSAHA.",
"title": ""
},
{
"docid": "5ff263cf4a73c202741c46d5582a960a",
"text": "Sentiment analysis; Sentiment classification; Feature selection; Emotion detection; Transfer learning; Building resources Abstract Sentiment Analysis (SA) is an ongoing field of research in text mining field. SA is the computational treatment of opinions, sentiments and subjectivity of text. This survey paper tackles a comprehensive overview of the last update in this field. Many recently proposed algorithms’ enhancements and various SA applications are investigated and presented briefly in this survey. These articles are categorized according to their contributions in the various SA techniques. The related fields to SA (transfer learning, emotion detection, and building resources) that attracted researchers recently are discussed. The main target of this survey is to give nearly full image of SA techniques and the related fields with brief details. The main contributions of this paper include the sophisticated categorizations of a large number of recent articles and the illustration of the recent trend of research in the sentiment analysis and its related areas. 2014 Production and hosting by Elsevier B.V. on behalf of Ain Shams University.",
"title": ""
},
{
"docid": "c55afb93606ddb88f0a9274f06eca68b",
"text": "Social media use continues to grow and is especially prevalent among young adults. It is surprising then that, in spite of this enhanced interconnectivity, young adults may be lonelier than other age groups, and that the current generation may be the loneliest ever. We propose that only image-based platforms (e.g., Instagram, Snapchat) have the potential to ameliorate loneliness due to the enhanced intimacy they offer. In contrast, text-based platforms (e.g., Twitter, Yik Yak) offer little intimacy and should have no effect on loneliness. This study (N 1⁄4 253) uses a mixed-design survey to test this possibility. Quantitative results suggest that loneliness may decrease, while happiness and satisfaction with life may increase, as a function of image-based social media use. In contrast, text-based media use appears ineffectual. Qualitative results suggest that the observed effects may be due to the enhanced intimacy offered by imagebased (versus text-based) social media use. © 2016 Published by Elsevier Ltd. “The more advanced the technology, on the whole, the more possible it is for a considerable number of human beings to imagine being somebody else.” -sociologist David Riesman.",
"title": ""
},
{
"docid": "ff26c01e6248882ba26b348bcb783913",
"text": "Data warehouses and data marts have long been considered as the unique solution for providing end-users with decisional information. More recently, data lakes have been proposed in order to govern data swamps. However, no formal definition has been proposed in the literature. Existing works are not complete and miss important parts of the topic. In particular, they do not focus on the influence of the data gravity, the infrastructure role of those solutions and of course are proposing divergent definitions and positioning regarding the usage and the interaction with existing decision support system.\n In this paper, we propose a novel definition of data lakes, together with a comparison with other over several criteria as the way to populate them, how to use, what is the Data Lake end user profile. We claim that data lakes are complementary components in decisional information systems and we discuss their position and interactions regarding the other components by proposing an interaction model.",
"title": ""
},
{
"docid": "89a9293fb0fcac7d55cfb44a8032ce71",
"text": "Traditional spectral clustering methods cannot naturally learn the number of communities in a network and often fail to detect smaller community structure in dense networks because they are based upon external community connectivity properties such as graph cuts. We propose an algorithm for detecting community structure in networks called the leader-follower algorithm which is based upon the natural internal structure expected of communities in social networks. The algorithm uses the notion of network centrality in a novel manner to differentiate leaders (nodes which connect different communities) from loyal followers (nodes which only have neighbors within a single community). Using this approach, it is able to naturally learn the communities from the network structure and does not require the number of communities as an input, in contrast to other common methods such as spectral clustering. We prove that it will detect all of the communities exactly for any network possessing communities with the natural internal structure expected in social networks. More importantly, we demonstrate the effectiveness of the leader-follower algorithm in the context of various real networks ranging from social networks such as Facebook to biological networks such as an fMRI based human brain network. We find that the leader-follower algorithm finds the relevant community structure in these networks without knowing the number of communities beforehand. Also, because the leader-follower algorithm detects communities using their internal structure, we find that it can resolve a finer community structure in dense networks than common spectral clustering methods based on external community structure.",
"title": ""
},
{
"docid": "5d9b29c10d878d288a960ae793f2366e",
"text": "We propose a new bandgap reference topology for supply voltages as low as one diode drop (~0.8V). In conventional low-voltage references, supply voltage is limited by the generated reference voltage. Also, the proposed topology generates the reference voltage at the output of the feedback amplifier. This eliminates the need for an additional output buffer, otherwise required in conventional topologies. With the bandgap core biased from the reference voltage, the new topology is also suitable for a low-voltage shunt reference. We fabricated a 1V, 0.35mV/degC reference occupying 0.013mm2 in a 90nm CMOS process",
"title": ""
},
{
"docid": "c117da74c302d9e108970854d79e54fd",
"text": "Entailment recognition is a primary generic task in natural language inference, whose focus is to detect whether the meaning of one expression can be inferred from the meaning of the other. Accordingly, many NLP applications would benefit from high coverage knowledgebases of paraphrases and entailment rules. To this end, learning such knowledgebases from the Web is especially appealing due to its huge size as well as its highly heterogeneous content, allowing for a more scalable rule extraction of various domains. However, the scalability of state-of-the-art entailment rule acquisition approaches from the Web is still limited. We present a fully unsupervised learning algorithm for Webbased extraction of entailment relations. We focus on increased scalability and generality with respect to prior work, with the potential of a large-scale Web-based knowledgebase. Our algorithm takes as its input a lexical–syntactic template and searches the Web for syntactic templates that participate in an entailment relation with the input template. Experiments show promising results, achieving performance similar to a state-of-the-art unsupervised algorithm, operating over an offline corpus, but with the benefit of learning rules for different domains with no additional effort.",
"title": ""
},
{
"docid": "e348191d7cebf51e9df66b99f71031bd",
"text": "7 Give all citizens a modest, yet unconditional income, and let them top it up at will with income from other sources. This exceedingly simple idea has a surprisingly diverse pedigree. In the course of the last two centuries, it has been independently thought up under a variety of names – “territorial dividend” and “state bonus,” for example, “demogrant” and “citizen’s wage,” “universal benefit” and “basic income” – in most cases without much success. In the late sixties and early seventies, it enjoyed a sudden popularity in the United States and was even put forward by a presidential candidate, but it was soon shelved and just about forgotten. In the last two decades, however, it has gradually become the subject of an unprecedented and fast expanding public discussion throughout the European Union. Some see it as a crucial remedy for many social ills, including unemployment and poverty. Others denounce it as a crazy, economically flawed, ethically objectionable proposal, to be forgotten as soon as possible, to be dumped once and for all into the dustbin of the history of ideas. To shed light on this debate, I start off saying more about what basic income is and what it is not, and about what distinguishes it from existing guaranteed income schemes. On this background, it 1",
"title": ""
},
{
"docid": "87a296ad9c3dd7b32b7ed876b9132fb2",
"text": "Reservoir Computing is an attractive paradigm of recurrent neural network architecture, due to the ease of training and existing neuromorphic implementations. Successively applied on speech recognition and time series forecasting, few works have so far studied the behavior of such networks on computer vision tasks. Therefore we decided to investigate the ability of Echo State Networks to classify the digits of the MNIST database. We show that even if ESNs are not able to outperform state-of-the-art convolutional networks, they allow low error thanks to a suitable preprocessing of images. The best performance is obtained with a large reservoir of 4,000~neurons, but committees of smaller reservoirs are also appealing and might be further investigated.",
"title": ""
},
{
"docid": "3533e733f0d418a0be1ec4af7e7740aa",
"text": "Visual depiction of the structure and evolution of science has been proposed as a key strategy for dealing with the large, complex, and increasingly interdisciplinary records of scientific communication. While every such visualization assumes the existence of spatial structures within the system of science, new methods and tools are rarely linked to thorough reflection on the underlying spatial concepts. Meanwhile, geographic information science has adopted a view of geographic space as conceptualized through the duality of discrete objects and continuous fields. This paper argues that conceptualization of science has been dominated by a view of its constituent elements (e.g., authors, articles, journals, disciplines) as discrete objects. It is proposed that, like in geographic information science, alternative concepts could be used for the same phenomenon. For example, one could view an author as either a discrete object at a specific location or as a continuous field occupying all of a discipline. It is further proposed that this duality of spatial concepts can extend to the methods by which low-dimensional geometric models of high-dimensional scientific spaces are created and used. This can result in new methods revealing different kinds of insights. This is demonstrated by a juxtaposition of two visualizations of an author’s intellectual evolution on the basis of either a discrete or continuous conceptualization. © 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c57911c03df15837a800ec491f7ca597",
"text": "This paper presents a novel unifying framework of anytime sparse Gaussian process regression (SGPR) models that can produce good predictive performance fast and improve their predictive performance over time. Our proposed unifying framework reverses the variational inference procedure to theoretically construct a non-trivial, concave functional that is maximized at the predictive distribution of any SGPR model of our choice. As a result, a stochastic natural gradient ascent method can be derived that involves iteratively following the stochastic natural gradient of the functional to improve its estimate of the predictive distribution of the chosen SGPR model and is guaranteed to achieve asymptotic convergence to it. Interestingly, we show that if the predictive distribution of the chosen SGPR model satisfies certain decomposability conditions, then the stochastic natural gradient is an unbiased estimator of the exact natural gradient and can be computed in constant time (i.e., independent of data size) at each iteration. We empirically evaluate the trade-off between the predictive performance vs. time efficiency of the anytime SGPR models on two real-world million-sized datasets.",
"title": ""
},
{
"docid": "a2bd543446fb86da6030ce7f46db9f75",
"text": "This paper presents a risk assessment algorithm for automatic lane change maneuvers on highways. It is capable of reliably assessing a given highway situation in terms of the possibility of collisions and robustly giving a recommendation for lane changes. The algorithm infers potential collision risks of observed vehicles based on Bayesian networks considering uncertainties of its input data. It utilizes two complementary risk metrics (time-to-collision and minimal safety margin) in temporal and spatial aspects to cover all risky situations that can occur for lane changes. In addition, it provides a robust recommendation for lane changes by filtering out uncertain noise data pertaining to vehicle tracking. The validity of the algorithm is tested and evaluated on public highways in real traffic as well as a closed high-speed test track in simulated traffic through in-vehicle testing based on overtaking and overtaken scenarios in order to demonstrate the feasibility of the risk assessment for automatic lane change maneuvers on highways.",
"title": ""
},
{
"docid": "3682143e9cfe7dd139138b3b533c8c25",
"text": "In brushless excitation systems, the rotating diodes can experience open- or short-circuits. For a three-phase synchronous generator under no-load, we present theoretical development of effects of diode failures on machine output voltage. Thereby, we expect the spectral response faced with each fault condition, and we propose an original algorithm for state monitoring of rotating diodes. Moreover, given experimental observations of the spectral behavior of stray flux, we propose an alternative technique. Laboratory tests have proven the effectiveness of the proposed methods for detection of fault diodes, even when the generator has been fully loaded. However, their ability to distinguish between cases of diodes interrupted and short-circuited, has been limited to the no-load condition, and certain loads of specific natures.",
"title": ""
},
{
"docid": "dbf00d8127fc49834f8ed47a69658d4e",
"text": "We present a simple model for the surface of the ocean, suitable for the modeling and rendering of most common waves where the disturbing force is from the wind and the restoring force from gravity.It is based on the Gerstner, or Rankine, model where particles of water describe circular or elliptical stationary orbits. The model can easily produce realistic waves shapes which are varied according to the parameters of the orbits. The surface of the ocean floor affects the refraction and the breaking of waves on the shore. The model can also determine the position, direction, and speed of breakers.The ocean surface is modeled as a parametric surface, permitting the use of traditional rendering methods, including ray-tracing and adaptive subdivision. Animation is easy, since time is built into the model. The foam generated by the breakers is modeled by particle systems whose direction, speed and life expectancy is given by the surface model.To give designers control over the shape of the ocean, the model of the overall surface includes multiple trains of waves, each with its own set of parameters and optional stochastic elements. The overall \"randomness\" and \"short-crestedness\" of the ocean is achieved by a combination of small variations within a train and large variations between trains.Rendered examples of oceans waves generated by the model are given and a 10 second animation is described.",
"title": ""
},
{
"docid": "830f36268b9220d378d9aafaf52f5144",
"text": "Deep Convolutional Neural Networks (DCNNs) achieve invariance to domain transformations (deformations) by using multiple `max-pooling' (MP) layers. In this work we show that alternative methods of modeling deformations can improve the accuracy and efficiency of DCNNs. First, we introduce epitomic convolution as an alternative to the common convolution-MP cascade of DCNNs, that comes with the same computational cost but favorable learning properties. Second, we introduce a Multiple Instance Learning algorithm to accommodate global translation and scaling in image classification, yielding an efficient algorithm that trains and tests a DCNN in a consistent manner. Third we develop a DCNN sliding window detector that explicitly, but efficiently, searches over the object's position, scale, and aspect ratio. We provide competitive image classification and localization results on the ImageNet dataset and object detection results on Pascal VOC2007.",
"title": ""
}
] |
scidocsrr
|
607048a795d01591be1876e687ee657c
|
A Study of Reinforcement Learning for Neural Machine Translation
|
[
{
"docid": "0f699e9f14753b2cbfb7f7a3c7057f40",
"text": "There has been much recent work on training neural attention models at the sequencelevel using either reinforcement learning-style methods or by optimizing the beam. In this paper, we survey a range of classical objective functions that have been widely used to train linear models for structured prediction and apply them to neural sequence to sequence models. Our experiments show that these losses can perform surprisingly well by slightly outperforming beam search optimization in a like for like setup. We also report new state of the art results on both IWSLT’14 German-English translation as well as Gigaword abstractive summarization. On the large WMT’14 English-French task, sequence-level training achieves 41.5 BLEU which is on par with the state of the art.1",
"title": ""
}
] |
[
{
"docid": "97e80e728b53042d6e9962dd03b5df87",
"text": "(1) is an example of an adjectival comparative. In it, the adjective important is flanked by more and a comparative clause headed by than. This article is a survey of recent ideas about the interpretation of comparatives, including (i) the underlying semantics based on the idea of a threshold; (ii) the interpretation of comparative clauses that include quantifiers (brighter than on many other days); (iii) remarks on differentials such as much in (1) above: what they do in the comparative and what they do elsewhere in the language; (iv) the relationship between comparatives and other Degree constructions (e.g. as important, too important); and (v) the types of phrases in which comparatives are found (adjective: tighter versus noun: more water). Given the nature and purpose of this essay, I have tried not to presuppose background in formal semantics and I have departed from standard practice in journal articles by, as much as possible, not interrupting the flow with footnotes and references. There are two appendices. The first provides more analytical detail and there I do rely on formal techniques of natural language semantics. The second covers the sources for the ideas surveyed here.",
"title": ""
},
{
"docid": "408d3db3b2126990611fdc3a62a985ea",
"text": "Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.",
"title": ""
},
{
"docid": "e62daef8b5273096e0f174c73e3674a8",
"text": "A wide range of human-robot collaborative applications in diverse domains such as manufacturing, search-andrescue, health care, the entertainment industry, and social interactions, require an autonomous robot to follow its human companion. Different working environments and applications pose diverse challenges by adding constraints on the choice of sensors, the degree of autonomy, and dynamics of the person-following robot. Researchers have addressed these challenges in many ways and contributed to the development of a large body of literature. This paper provides a comprehensive overview of the literature by categorizing different aspects of person-following by autonomous robots. Also, the corresponding operational challenges are identified based on various design choices for ground, underwater, and aerial scenarios. In addition, state-of-the-art methods for perception, planning, control, and interaction are elaborately discussed and their applicability in varied operational scenarios are presented. Then, qualitative evaluations of some of the prominent methods are performed, corresponding practicalities are illustrated, and their feasibility is analyzed in terms of standard metrics. Furthermore, several prospective application areas are identified, and open problems are highlighted for future research.",
"title": ""
},
{
"docid": "4829d8c0dd21f84c3afbe6e1249d6248",
"text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.",
"title": ""
},
{
"docid": "3427740a87691629bd6cf97792089f62",
"text": "Maintainers face the daunting task of wading through a collection of both new and old revisions, trying to ferret out revisions which warrant personal inspection. One can rank revisions by size/lines of code (LOC), but often, due to the distribution of the size of changes, revisions will be of similar size. If we can't rank revisions by LOC perhaps we can rank by Halstead's and McCabe's complexity metrics? However, these metrics are problematic when applied to code fragments (revisions) written in multiple languages: special parsers are required which may not support the language or dialect used; analysis tools may not understand code fragments. We propose using the statistical moments of indentation as a lightweight, language independent, revision/diff friendly metric which actually proxies classical complexity metrics. We have extensively evaluated our approach against the entire CVS histories of the 278 of the most popular and most active SourceForge projects. We found that our results are linearly correlated and rank-correlated with traditional measures of complexity, suggesting that measuring indentation is a cheap and accurate proxy for code complexity of revisions. Thus ranking revisions by the standard deviation and summation of indentation will be very similar to ranking revisions by complexity.",
"title": ""
},
{
"docid": "b50498964a73a59f54b3a213f2626935",
"text": "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and it is then fine-tuned to recover its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.",
"title": ""
},
{
"docid": "11d06fb5474df44a6bc733bd5cd1263d",
"text": "Understanding how materials that catalyse the oxygen evolution reaction (OER) function is essential for the development of efficient energy-storage technologies. The traditional understanding of the OER mechanism on metal oxides involves four concerted proton-electron transfer steps on metal-ion centres at their surface and product oxygen molecules derived from water. Here, using in situ 18O isotope labelling mass spectrometry, we provide direct experimental evidence that the O2 generated during the OER on some highly active oxides can come from lattice oxygen. The oxides capable of lattice-oxygen oxidation also exhibit pH-dependent OER activity on the reversible hydrogen electrode scale, indicating non-concerted proton-electron transfers in the OER mechanism. Based on our experimental data and density functional theory calculations, we discuss mechanisms that are fundamentally different from the conventional scheme and show that increasing the covalency of metal-oxygen bonds is critical to trigger lattice-oxygen oxidation and enable non-concerted proton-electron transfers during OER.",
"title": ""
},
{
"docid": "058a4f93fb5c24c0c9967fca277ee178",
"text": "We report on the SUM project which applies automatic summarisation techniques to the legal domain. We describe our methodology whereby sentences from the text are classified according to their rhetorical role in order that particular types of sentence can be extracted to form a summary. We describe some experiments with judgments of the House of Lords: we have performed automatic linguistic annotation of a small sample set and then hand-annotated the sentences in the set in order to explore the relationship between linguistic features and argumentative roles. We use state-of-the-art NLP techniques to perform the linguistic annotation using XML-based tools and a combination of rule-based and statistical methods. We focus here on the predictive capacity of tense and aspect features for a classifier.",
"title": ""
},
{
"docid": "1463e545177c0ad5ab87c394b504b1ee",
"text": "The term Cyber-Physical Systems (CPS) typically refers to engineered, physical and biological systems monitored and/or controlled by an embedded computational core. The behaviour of a CPS over time is generally characterised by the evolution of physical quantities, and discrete software and hardware states. In general, these can be mathematically modelled by the evolution of continuous state variables for the physical components interleaved with discrete events. Despite large effort and progress in the exhaustive verification of such hybrid systems, the complexity of CPS models limits formal verification of safety of their behaviour only to small instances. An alternative approach, closer to the practice of simulation and testing, is to monitor and to predict CPS behaviours at simulation-time or at runtime. In this chapter, we summarise the state-of-the-art techniques for qualitative and quantitative monitoring of CPS behaviours. We present an overview of some of the important applications and, finally, we describe the tools supporting CPS monitoring and compare their main features.",
"title": ""
},
{
"docid": "1f45397efcbe3db84f45a1498267593c",
"text": "Multiobjective evolutionary algorithm based on decomposition (MOEA/D) decomposes a multiobjective optimization problem into a set of scalar optimization subproblems and optimizes them in a collaborative manner. Subproblems and solutions are two sets of agents that naturally exist in MOEA/D. The selection of promising solutions for subproblems can be regarded as a matching between subproblems and solutions. Stable matching, proposed in economics, can effectively resolve conflicts of interests among selfish agents in the market. In this paper, we advocate the use of a simple and effective stable matching (STM) model to coordinate the selection process in MOEA/D. In this model, subproblem agents can express their preferences over the solution agents, and vice versa. The stable outcome produced by the STM model matches each subproblem with one single solution, and it trades off convergence and diversity of the evolutionary search. Comprehensive experiments have shown the effectiveness and competitiveness of our MOEA/D algorithm with the STM model. We have also demonstrated that user-preference information can be readily used in our proposed algorithm to find a region that decision makers are interested in.",
"title": ""
},
{
"docid": "c721a66169e3ded24c814b16604855f2",
"text": "When it comes to smart cities, one of the most important components is data. To enable smart city applications, data needs to be collected, stored, and processed to accomplish intelligent tasks. In this paper we discuss smart cities and the use of new and existing technologies to improve multiple aspects of these cities. There are also social and environmental aspects that have become important in smart cities that create concerns regarding ethics and ethical conduct. Thus we discuss various issues relating to the appropriate and ethical use of smart city applications and their data. Many smart city projects are being implemented and here we showcase several examples to provide context for our ethical analysis. Law enforcement, structure efficiency, utility efficiency, and traffic flow control applications are some areas that could have the most gains in smart cities; yet, they are the most pervasive as the applications performing these activities must collect and process the most private data about the citizens. The secure and ethical use of this data must be a top priority within every project. The paper also provides a list of challenges for smart city applications pertaining in some ways to ethics. These challenges are drawn from the studied examples of smart city projects to bring attention to ethical issues and raise awareness of the need to address and regulate such use of data.",
"title": ""
},
{
"docid": "aec273859fedb6550c461548e9ab7c53",
"text": "In this paper, we describe our contribution for the NTCIR-13 Short Text Conversation (STC) Chinese task. Short text conversation remains an important part of social media, gathering much attention recently. The task aims to retrieve or generate a relevant comment given a post. We consider both closed and open domain STC for the retrieval-based and generation-based tracks. To be more specific, the former applies a retrieval-based approach from the given corpus, while the latter utilizes the Web to fulfill the generation-based track. Evaluation results show that our retrieval-based approach performs better than the generation-based one.",
"title": ""
},
{
"docid": "8bd0c280a95f549bd5596fb1f7499e44",
"text": "Mobile devices are becoming ubiquitous. People take pictures via their phone cameras to explore the world on the go. In many cases, they are concerned with the picture-related information. Understanding user intent conveyed by those pictures therefore becomes important. Existing mobile applications employ visual search to connect the captured picture with the physical world. However, they only achieve limited success due to the ambiguity nature of user intent in the picture-one picture usually contains multiple objects. By taking advantage of multitouch interactions on mobile devices, this paper presents a prototype of interactive mobile visual search, named TapTell, to help users formulate their visual intent more conveniently. This kind of search leverages limited yet natural user interactions on the phone to achieve more effective visual search while maintaining a satisfying user experience. We make three contributions in this work. First, we conduct a focus study on the usage patterns and concerned factors for mobile visual search, which in turn leads to the interactive design of expressing visual intent by gesture. Second, we introduce four modes of gesture-based interactions (crop, line, lasso, and tap) and develop a mobile prototype. Third, we perform an in-depth usability evaluation on these different modes, which demonstrates the advantage of interactions and shows that lasso is the most natural and effective interaction mode. We show that TapTell provides a natural user experience to use phone camera and gesture to explore the world. Based on the observation and conclusion, we also suggest some design principles for interactive mobile visual search in the future.",
"title": ""
},
{
"docid": "e9a46aa0c797520a9b192fc5607b3521",
"text": "A common setting for novelty detection assumes that labeled examples from the nominal class are available, but that labeled examples of novelties are unavailable. The standard (inductive) approach is to declare novelties where the nominal density is low, which reduces the problem to density level set estimation. In this paper, we consider the setting where an unlabeled and possibly contaminated sample is also available at learning time. We argue that novelty detection in this semi-supervised setting is naturally solved by a general reduction to a binary classification problem. In particular, a detector with a desired false positive rate can be achieved through a reduction to Neyman-Pearson classification. Unlike the inductive approach, semi-supervised novelty detection (SSND) yields detectors that are optimal (e.g., statistically consistent) regardless of the distribution on novelties. Therefore, in novelty detection, unlabeled data have a substantial impact on the theoretical properties of the decision rule. We validate the practical utility of SSND with an extensive experimental study. We also show that SSND provides distribution-free, learning-theoretic solutions to two well known problems in hypothesis testing. First, our results provide a general solution to the general two-sample problem, that is, the problem of determining whether two random samples arise from the same distribution. Second, a specialization of SSND coincides with the standard p-value approach to multiple testing under the so-called random effects model. Unlike standard rejection regions based on thresholded p-values, the general SSND framework allows for adaptation to arbitrary alternative distributions in multiple dimensions.",
"title": ""
},
{
"docid": "d593c18bf87daa906f83d5ff718bdfd0",
"text": "Information and communications technologies (ICTs) have enabled the rise of so-called “Collaborative Consumption” (CC): the peer-to-peer-based activity of obtaining, giving, or sharing the access to goods and services, coordinated through community-based online services. CC has been expected to alleviate societal problems such as hyper-consumption, pollution, and poverty by lowering the cost of economic coordination within communities. However, beyond anecdotal evidence, there is a dearth of understanding why people participate in CC. Therefore, in this article we investigate people’s motivations to participate in CC. The study employs survey data (N = 168) gathered from people registered onto a CC site. The results show that participation in CC is motivated by many factors such as its sustainability, enjoyment of the activity as well as economic gains. An interesting detail in the result is that sustainability is not directly associated with participation unless it is at the same time also associated with positive attitudes towards CC. This suggests that sustainability might only be an important factor for those people for whom ecological consumption is important. Furthermore, the results suggest that in CC an attitude-behavior gap might exist; people perceive the activity positively and say good things about it, but this good attitude does not necessarily translate into action.",
"title": ""
},
{
"docid": "870ac1e223cc937e5f4416c9b2ee4a89",
"text": "Effective weed control, using either mechanical or chemical means, relies on knowledge of the crop and weed plant occurrences in the field. This knowledge can be obtained automatically by analyzing images collected in the field. Many existing methods for plant detection in images make the assumption that plant foliage does not overlap. This assumption is often violated, reducing the performance of existing methods. This study overcomes this issue by training a convolutional neural network to create a pixel-wise classification of crops, weeds and soil in RGB images from fields, in order to know the exact position of the plants. This training is based on simulated top-down images of weeds and maize in fields. The results show a pixel accuracy over 94% and a 100% detection rate of both maize and weeds, when tested on real images, while a high intersection over union is kept. The system can handle 2.4 images per second for images with a resolution of 1MPix, when using an Nvidia Titan X GPU.",
"title": ""
},
{
"docid": "eaec68f19fc5a168d5bee7b359ae2789",
"text": "New technology in knowledge discovery and data mining (KDD) make it possible to extract valuable information from operational data. Private businesses already use the technology for better management, planning, and marketing. Social welfare government agencies have a wealth of information about the experiences of families and individuals that are the most needy in our society in their administrative databases. These data too can be mined and analyzed with proper application of KDD technology. Such social science research could be priceless for better welfare program administration, program evaluation, and policy analysis. In this paper, we discuss a successful case study involving research in computer science as well as social welfare. In a long standing collaboration between the North Carolina DHHS and the University of North Carolina, we have (1) successfully built a longitudinal information system that tracks the experiences of families and individuals on welfare in NC since 1995 (2) developed a dynamic website reporting on the various aspects of the welfare program at the county level in order to assist county staff in the administration of the welfare program and (3) developed a new method to analyze sequential data, which can detect common patterns of welfare services given over time.",
"title": ""
},
{
"docid": "11b857de21829051b55aa8318c4c97f7",
"text": "An optimized split-gate-enhanced UMOSFET (SGE-UMOS) layout design is proposed, and its mechanism is investigated by 2-D and 3-D simulations. The layout features trench surrounding mesa (TSM): First, it optimizes the distribution of electric field density in the outer active mesa, reduces the electric-field crowding effect, and improves the breakdown voltage of the SGE-UMOS device. Second, it is unnecessary to design the layout corner with a large diameter in the termination region for the TSM structure as the conventional mesa surrounding trench (MST) structure, which is more efficient in terms of silicon usage. Rsp.on is reduced when compared with the MST structure within the same rectangular chip area. The BV of SGE-UMOS is increased from 72 to 115 V, and Rsp.on is reduced by approximately 3.5% as compared with the MST structure, due to the application of the TSM. Finally, it needs five masks in the process, and the trenches in active and termination regions are formed with the same processing steps; hence, the manufacturing process is simplified, and the cost is reduced as well.",
"title": ""
},
{
"docid": "5184b25a4d056b861f5dbae34300344a",
"text": "AFFILIATIONS: Ashouri, Hsu, Sorooshian, and Braithwaite—Center for Hydrometeorology and Remote Sensing, Henry Samueli School of Engineering, Department of Civil and Environmental Engineering, University of California, Irvine, Irvine, California; Knapp and Nelson—NOAA/National Climatic Data Center, Asheville, North Carolina; Cecil—Global Science & Technology, Inc., Asheville, North Carolina; Prat—Cooperative Institute for Climate and Satellites, North Carolina State University, and NOAA/National Climatic Data Center, Asheville, North Carolina. CORRESPONDING AUTHOR: Hamed Ashouri, Center for Hydrometeorology and Remote Sensing, Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697. E-mail: h.ashouri@uci.edu",
"title": ""
},
{
"docid": "6a89658d1200b6d2ee6a33e3bf9cb01f",
"text": "No single software fault-detection technique is capable of addressing all fault-detection concerns. Similarly to software reviews and testing, static analysis tools (or automated static analysis) can be used to remove defects prior to release of a software product. To determine to what extent automated static analysis can help in the economic production of a high-quality product, we have analyzed static analysis faults and test and customer-reported failures for three large-scale industrial software systems developed at Nortel Networks. The data indicate that automated static analysis is an affordable means of software fault detection. Using the orthogonal defect classification scheme, we found that automated static analysis is effective at identifying assignment and checking faults, allowing the later software production phases to focus on more complex, functional, and algorithmic faults. A majority of the defects found by automated static analysis appear to be produced by a few key types of programmer errors and some of these types have the potential to cause security vulnerabilities. Statistical analysis results indicate the number of automated static analysis faults can be effective for identifying problem modules. Our results indicate static analysis tools are complementary to other fault-detection techniques for the economic production of a high-quality software product.",
"title": ""
}
] |
scidocsrr
|
670089b7b19ec3fd4d3c5a3551b9e38d
|
A culturally and linguistically responsive vocabulary approach for young Latino dual language learners.
|
[
{
"docid": "e9477e72249764e28945e4bc3a7e6b1e",
"text": "English language learners (ELLs) who experience slow vocabulary development are less able to comprehend text at grade level than their English-only peers. Such students are likely to perform poorly on assessments in these areas and are at risk of being diagnosed as learning disabled. In this article, we review the research on methods to develop the vocabulary knowledge of ELLs and present lessons learned from the research concerning effective instructional practices for ELLs. The review suggests that several strategies are especially valuable for ELLs, including taking advantage of students’ first language if the language shares cognates with English; ensuring that ELLs know the meaning of basic words, and providing sufficient review and reinforcement. Finally, we discuss challenges in designing effective vocabulary instruction for ELLs. Important issues are determining which words to teach, taking into account the large deficits in second-language vocabulary of ELLs, and working with the limited time that is typically available for direct instruction in vocabulary.",
"title": ""
}
] |
[
{
"docid": "cb4f78047b92b773bc30509ca80438a4",
"text": "In this article, we exploit the problem of annotating a large-scale image corpus by label propagation over noisily tagged web images. To annotate the images more accurately, we propose a novel kNN-sparse graph-based semi-supervised learning approach for harnessing the labeled and unlabeled data simultaneously. The sparse graph constructed by datum-wise one-vs-kNN sparse reconstructions of all samples can remove most of the semantically unrelated links among the data, and thus it is more robust and discriminative than the conventional graphs. Meanwhile, we apply the approximate k nearest neighbors to accelerate the sparse graph construction without loosing its effectiveness. More importantly, we propose an effective training label refinement strategy within this graph-based learning framework to handle the noise in the training labels, by bringing in a dual regularization for both the quantity and sparsity of the noise. We conduct extensive experiments on a real-world image database consisting of 55,615 Flickr images and noisily tagged training labels. The results demonstrate both the effectiveness and efficiency of the proposed approach and its capability to deal with the noise in the training labels.",
"title": ""
},
{
"docid": "8c63ce71aaa0409372efeb3ea392394f",
"text": "This paper describes the application of evolutionary fuzzy systems for subgroup discovery to a medical problem, the study on the type of patients who tend to visit the psychiatric emergency department in a given period of time of the day. In this problem, the objective is to characterise subgroups of patients according to their time of arrival at the emergency department. To solve this problem, several subgroup discovery algorithms have been applied to determine which of them obtains better results. The multiobjective evolutionary algorithm MESDIF for the extraction of fuzzy rules obtains better results and so it has been used to extract interesting information regarding the rate of admission to the psychiatric emergency department.",
"title": ""
},
{
"docid": "c07a0053f43d9e1f98bb15d4af92a659",
"text": "We present a zero-shot learning approach for text classification, predicting which natural language understanding domain can handle a given utterance. Our approach can predict domains at runtime that did not exist at training time. We achieve this extensibility by learning to project utterances and domains into the same embedding space while generating each domain-specific embedding from a set of attributes that characterize the domain. Our model is a neural network trained via ranking loss. We evaluate the performance of this zero-shot approach on a subset of a virtual assistant’s third-party domains and show the effectiveness of the technique on new domains not observed during training. We compare to generative baselines and show that our approach requires less storage and performs better on new domains.",
"title": ""
},
{
"docid": "40ba65504518383b4ca2a6fabff261fe",
"text": "Fig. 1. Noirot and Quennedey's original classification of insect exocrine glands, based on a rhinotermitid sternal gland. The asterisk indicates a subcuticular space. Abbreviations: C, cuticle; D, duct cells; G1, secretory cells class 1; G2, secretory cells class 2; G3, secretory cells class 3; S, campaniform sensilla (modified after Noirot and Quennedey, 1974). ‘Describe the differences between endocrine and exocrine glands’, it sounds a typical exam question from a general biology course during our time at high school. Because of their secretory products being released to the outside world, exocrine glands definitely add flavour to our lives. Everybody is familiar with their secretions, from the salty and perhaps unpleasantly smelling secretions from mammalian sweat glands to the sweet exudates of the honey glands used by some caterpillars to attract ants, from the most painful venoms of bullet ants and scorpions to the precious wax that honeybees use to make their nest combs. Besides these functions, exocrine glands are especially known for the elaboration of a broad spectrum of pheromonal substances, and can also be involved in the production of antibiotics, lubricants, and digestive enzymes. Modern research in insect exocrinology started with the classical works of Charles Janet, who introduced a histological approach to the insect world (Billen and Wilson, 2007). The French school of insect anatomy remained strong since then, and the commonly used classification of insect exocrine glands generally follows the pioneer paper of Charles Noirot and André Quennedey (1974). These authors were leading termite researchers using their extraordinary knowledge on termite glands to understand related phenomena, such as foraging and reproductive behaviour. They distinguish between class 1 with secretory cells adjoining directly to the cuticle, and class 3 with bicellular units made up of a large secretory cell and its accompanying duct cell that carries the secretion to the exterior (Fig. 1). The original classification included also class 2 secretory cells, but these are very rare and are only found in sternal and tergal glands of a cockroach and many termites (and also in the novel nasus gland described in this issue!). This classification became universally used, with the rather strange consequence that the vast majority of insect glands is illogically made up of class 1 and class 3 cells. In a follow-up paper, the uncommon class 2 cells were re-considered as oenocyte homologues (Noirot and Quennedey, 1991). Irrespectively of these objections, their 1974 pioneer paper is a cornerstone of modern works dealing with insect exocrine glands, as is also obvious in the majority of the papers in this special issue. This paper already received 545 citations at Web of Science and 588 at Google Scholar (both on 24 Aug 2015), so one can easily say that all researchers working on insect glands consider this work truly fundamental. Exocrine glands are organs of cardinal importance in all insects. The more common ones include mandibular and labial",
"title": ""
},
{
"docid": "74ea9bde4e265dba15cf9911fce51ece",
"text": "We consider a system aimed at improving the resolution of a conventional airborne radar, looking in the forward direction, by forming an end-fire synthetic array along the airplane line of flight. The system is designed to operate even in slant (non-horizontal) flight trajectories, and it allows imaging along the line of flight. By using the array theory, we analyze system geometry and ambiguity problems, and analytically evaluate the achievable resolution and the required pulse repetition frequency. Processing computational burden is also analyzed, and finally some simulation results are provided.",
"title": ""
},
{
"docid": "7fbc78aead9d65201d921c828b6396cd",
"text": "In developing a humanoid robot, there are two major objectives. One is developing a physical robot having body, hands, and feet resembling those of human beings and being able to similarly control them. The other is to develop a control system that works similarly to our brain, to feel, think, act, and learn like ours. In this article, an architecture of a control systemwith a brain-oriented logical structure for the second objective is proposed. The proposed system autonomously adapts to the environment and implements a clearly defined “consciousness” function, through which both habitual behavior and goaldirected behavior are realized. Consciousness is regarded as a function for effective adaptation at the system-level, based on matching and organizing the individual results of the underlying parallel-processing units. This consciousness is assumed to correspond to how our mind is “aware” when making our moment to moment decisions in our daily life. The binding problem and the basic causes of delay in Libet’s experiment are also explained by capturing awareness in this manner. The goal is set as an image in the system, and efficient actions toward achieving this goal are selected in the goaldirected behavior process. The system is designed as an artificial neural network and aims at achieving consistent and efficient system behavior, through the interaction of highly independent neural nodes. The proposed architecture is based on a two-level design. The first level, which we call the “basic-system,” is an artificial neural network system that realizes consciousness, habitual behavior and explains the binding problem. The second level, which we call the “extended-system,” is an artificial neural network system that realizes goal-directed behavior.",
"title": ""
},
{
"docid": "290b56471b64e150e40211f7a51c1237",
"text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.",
"title": ""
},
{
"docid": "1c60ddeb7e940992094cb8f3913e811a",
"text": "In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. we make the code and trained models publicly available at https://github.com/junfu1115/DANet",
"title": ""
},
{
"docid": "31d2e56c01f53c25c6c9bfcabe21fcbe",
"text": "In this paper, we propose a novel computer vision-based fall detection system for monitoring an elderly person in a home care, assistive living application. Initially, a single camera covering the full view of the room environment is used for the video recording of an elderly person's daily activities for a certain time period. The recorded video is then manually segmented into short video clips containing normal postures, which are used to compose the normal dataset. We use the codebook background subtraction technique to extract the human body silhouettes from the video clips in the normal dataset and information from ellipse fitting and shape description, together with position information, is used to provide features to describe the extracted posture silhouettes. The features are collected and an online one class support vector machine (OCSVM) method is applied to find the region in feature space to distinguish normal daily postures and abnormal postures such as falls. The resultant OCSVM model can also be updated by using the online scheme to adapt to new emerging normal postures and certain rules are added to reduce false alarm rate and thereby improve fall detection performance. From the comprehensive experimental evaluations on datasets for 12 people, we confirm that our proposed person-specific fall detection system can achieve excellent fall detection performance with 100% fall detection rate and only 3% false detection rate with the optimally tuned parameters. This work is a semiunsupervised fall detection system from a system perspective because although an unsupervised-type algorithm (OCSVM) is applied, human intervention is needed for segmenting and selecting of video clips containing normal postures. As such, our research represents a step toward a complete unsupervised fall detection system.",
"title": ""
},
{
"docid": "78744205cf17be3ee5a61d12e6a44180",
"text": "Modeling of photovoltaic (PV) systems is essential for the designers of solar generation plants to do a yield analysis that accurately predicts the expected power output under changing environmental conditions. This paper presents a comparative analysis of PV module modeling methods based on the single-diode model with series and shunt resistances. Parameter estimation techniques within a modeling method are used to estimate the five unknown parameters in the single diode model. Two sets of estimated parameters were used to plot the I-V characteristics of two PV modules, i.e., SQ80 and KC200GT, for the different sets of modeling equations, which are classified into models 1 to 5 in this study. Each model is based on the different combinations of diode saturation current and photogenerated current plotted under varying irradiance and temperature. Modeling was done using MATLAB/Simulink software, and the results from each model were first verified for correctness against the results produced by their respective authors. Then, a comparison was made among the different models (models 1 to 5) with respect to experimentally measured and datasheet I-V curves. The resultant plots were used to draw conclusions on which combination of parameter estimation technique and modeling method best emulates the manufacturer specified characteristics.",
"title": ""
},
{
"docid": "519ca18e1450581eb3a7387568dce7cf",
"text": "This paper illustrates the design of a process compensated bias for asynchronous CML dividers for a low power, high performance LO divide chain operating at 4Ghz of input RF frequency. The divider chain provides division by 4, 8, 12, 16, 20, and 24. It provides a differential CML level signal for the in-loop modulated transmitter, and 25% duty cycle non-overlapping rail to rail waveforms for the I/Q receiver for driving a passive mixer. Asynchronous dividers have been used to realize divide by 3 and 5 with 50% duty cycle, quadrature outputs. All the CML dividers use a process compensated bias to compensate for load resistor variation and tail current variation using dual analog feedback loops. Fabricated in 180nm CMOS technology, the divider chain operates over the industrial temperature range (−40 to 90°C) and provides outputs in the 138–960Mhz range, consuming 2.2mA from a 1.8V regulated supply at the highest output frequency.",
"title": ""
},
{
"docid": "36b232e486ee4c9885a51a1aefc8f12b",
"text": "Graphics processing units (GPUs) are a powerful platform for building high-speed network traffic processing applications using low-cost hardware. Existing systems tap the massively parallel architecture of GPUs to speed up certain computationally intensive tasks, such as cryptographic operations and pattern matching. However, they still suffer from significant overheads due to criticalpath operations that are still being carried out on the CPU, and redundant inter-device data transfers. In this paper we present GASPP, a programmable network traffic processing framework tailored to modern graphics processors. GASPP integrates optimized GPUbased implementations of a broad range of operations commonly used in network traffic processing applications, including the first purely GPU-based implementation of network flow tracking and TCP stream reassembly. GASPP also employs novel mechanisms for tackling control flow irregularities across SIMT threads, and sharing memory context between the network interface and the GPU. Our evaluation shows that GASPP can achieve multi-gigabit traffic forwarding rates even for computationally intensive and complex network operations such as stateful traffic classification, intrusion detection, and packet encryption. Especially when consolidating multiple network applications on the same device, GASPP achieves up to 16.2× speedup compared to standalone GPU-based implementations of the same applications.",
"title": ""
},
{
"docid": "12d564ad22b33ee38078f18a95ed670f",
"text": "Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER.",
"title": ""
},
{
"docid": "256376e1867ee923ff72d3376c3be918",
"text": "Driven by recent vision and graphics applications such as image segmentation and object recognition, computing pixel-accurate saliency values to uniformly highlight foreground objects becomes increasingly important. In this paper, we propose a unified framework called pixelwise image saliency aggregating (PISA) various bottom-up cues and priors. It generates spatially coherent yet detail-preserving, pixel-accurate, and fine-grained saliency, and overcomes the limitations of previous methods, which use homogeneous superpixel based and color only treatment. PISA aggregates multiple saliency cues in a global context, such as complementary color and structure contrast measures, with their spatial priors in the image domain. The saliency confidence is further jointly modeled with a neighborhood consistence constraint into an energy minimization formulation, in which each pixel will be evaluated with multiple hypothetical saliency levels. Instead of using global discrete optimization methods, we employ the cost-volume filtering technique to solve our formulation, assigning the saliency levels smoothly while preserving the edge-aware structure details. In addition, a faster version of PISA is developed using a gradient-driven image subsampling strategy to greatly improve the runtime efficiency while keeping comparable detection accuracy. Extensive experiments on a number of public data sets suggest that PISA convincingly outperforms other state-of-the-art approaches. In addition, with this work, we also create a new data set containing 800 commodity images for evaluating saliency detection.",
"title": ""
},
{
"docid": "9e359f0d7df4e35c934ce01bf5619622",
"text": "This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.",
"title": ""
},
{
"docid": "67ba6914f8d1a50b7da5024567bc5936",
"text": "Abstract—Braille alphabet is an important tool that enables visually impaired individuals to have a comfortable life like those who have normal vision. For this reason, new applications related to the Braille alphabet are being developed. In this study, a new Refreshable Braille Display was developed to help visually impaired individuals learn the Braille alphabet easier. By means of this system, any text downloaded on a computer can be read by the visually impaired individual at that moment by feeling it by his/her hands. Through this electronic device, it was aimed to make learning the Braille alphabet easier for visually impaired individuals with whom the necessary tests were conducted.",
"title": ""
},
{
"docid": "ae5bf888ce9a61981be60b9db6fc2d9c",
"text": "Inverting the hash values by performing brute force computation is one of the latest security threats on password based authentication technique. New technologies are being developed for brute force computation and these increase the success rate of inversion attack. Honeyword base authentication protocol can successfully mitigate this threat by making password cracking detectable. However, the existing schemes have several limitations like Multiple System Vulnerability, Weak DoS Resistivity, Storage Overhead, etc. In this paper we have proposed a new honeyword generation approach, identified as Paired Distance Protocol (PDP) which overcomes almost all the drawbacks of previously proposed honeyword generation approaches. The comprehensive analysis shows that PDP not only attains a high detection rate of 97.23% but also reduces the storage cost to a great extent.",
"title": ""
},
{
"docid": "03aec14861b2b1b4e6f091dc77913a5b",
"text": "Taxonomy is indispensable in understanding natural language. A variety of large scale, usage-based, data-driven lexical taxonomies have been constructed in recent years. Hypernym-hyponym relationship, which is considered as the backbone of lexical taxonomies can not only be used to categorize the data but also enables generalization. In particular, we focus on one of the most prominent properties of the hypernym-hyponym relationship, namely, transitivity, which has a significant implication for many applications. We show that, unlike human crafted ontologies and taxonomies, transitivity does not always hold in data-driven lexical taxonomies. We introduce a supervised approach to detect whether transitivity holds for any given pair of hypernym-hyponym relationships. Besides solving the inferencing problem, we also use the transitivity to derive new hypernym-hyponym relationships for data-driven lexical taxonomies. We conduct extensive experiments to show the effectiveness of our approach.",
"title": ""
},
{
"docid": "d284fff9eed5e5a332bb3cfc612a081a",
"text": "This paper describes the NILC USP system that participated in SemEval-2013 Task 2: Sentiment Analysis in Twitter. Our system adopts a hybrid classification process that uses three classification approaches: rulebased, lexicon-based and machine learning approaches. We suggest a pipeline architecture that extracts the best characteristics from each classifier. Our system achieved an Fscore of 56.31% in the Twitter message-level subtask.",
"title": ""
},
{
"docid": "3ff58e78ac9fe623e53743ad05248a30",
"text": "Clock gating is an effective technique for minimizing dynamic power in sequential circuits. Applying clock-gating at gate-level not only saves time compared to implementing clock-gating in the RTL code but also saves power and can easily be automated in the synthesis process. This paper presents simulation results on various types of clock-gating at different hierarchical levels on a serial peripheral interface (SPI) design. In general power savings of about 30% and 36% reduction on toggle rate can be seen with different complex clock- gating methods with respect to no clock-gating in the design.",
"title": ""
}
] |
scidocsrr
|
3be88697b9e1f17720d351238afc6a71
|
Addictive use of social networking sites can be explained by the interaction of Internet use expectancies, Internet literacy, and psychopathological symptoms
|
[
{
"docid": "9325b8aefbdf9b28f71f891f9f82fd00",
"text": "The aim of this study was to evaluate the extent to which gender and other factors predict the severity of online gaming addiction among Taiwanese adolescents. A total of 395 junior high school students were recruited for evaluation of their experiences playing online games. Severity of addiction, behavioral characteristics, number of stressors, and level of satisfaction with daily life were compared between males and females who had previously played online games. Multiple regression analysis was used to explore gender differences in the relationships between severity of online gaming addiction and a number of variables. This study found that subjects who had previously played online games were predominantly male. Gender differences were also found in the severity of online gaming addiction and motives for playing. Older age, lower self-esteem, and lower satisfaction with daily life were associated with more severe addiction among males, but not among females. Special strategies accounting for gender differences must be implemented to prevent adolescents with risk factors from becoming addicted to online gaming.",
"title": ""
},
{
"docid": "20e10963c305ca422fb025cafc807301",
"text": "The new psychological disorder of Internet addiction is fast accruing both popular and professional recognition. Past studies have indicated that some patterns of Internet use are associated with loneliness, shyness, anxiety, depression, and self-consciousness, but there appears to be little consensus about Internet addiction disorder. This exploratory study attempted to examine the potential influences of personality variables, such as shyness and locus of control, online experiences, and demographics on Internet addiction. Data were gathered from a convenient sample using a combination of online and offline methods. The respondents comprised 722 Internet users mostly from the Net-generation. Results indicated that the higher the tendency of one being addicted to the Internet, the shyer the person is, the less faith the person has, the firmer belief the person holds in the irresistible power of others, and the higher trust the person places on chance in determining his or her own course of life. People who are addicted to the Internet make intense and frequent use of it both in terms of days per week and in length of each session, especially for online communication via e-mail, ICQ, chat rooms, newsgroups, and online games. Furthermore, full-time students are more likely to be addicted to the Internet, as they are considered high-risk for problems because of free and unlimited access and flexible time schedules. Implications to help professionals and student affairs policy makers are addressed.",
"title": ""
}
] |
[
{
"docid": "bef86730221684b8e9236cb44179b502",
"text": "secure software. In order to find the real-life issues, this case study was initiated to investigate whether the existing FDD can withstand requirements change and software security altogether. The case study was performed in controlled environment – in a course called Application Development—a four credit hours course at UTM. The course began by splitting up the class to seven software development groups and two groups were chosen to implement the existing process of FDD. After students were given an introduction to FDD, they started to adapt the processes to their proposed system. Then students were introduced to the basic concepts on how to make software systems secure. Though, they were still new to security and FDD, however, this study produced a lot of interest among the students. The students seemed to enjoy the challenge of creating secure system using FDD model.",
"title": ""
},
{
"docid": "639c8142b14f0eed40b63c0fa7580597",
"text": "The purpose of this study is to give an overlook and comparison of best known data warehouse architectures. Single-layer, two-layer, and three-layer architectures are structure-oriented one that are depending on the number of layers used by the architecture. In independent data marts architecture, bus, hub-and-spoke, centralized and distributed architectures, the main layers are differently combined. Listed data warehouse architectures are compared based on organizational structures, with its similarities and differences. The second comparison gives a look into information quality (consistency, completeness, accuracy) and system quality (integration, flexibility, scalability). Bus, hub-and-spoke and centralized data warehouse architectures got the highest scores in information and system quality assessment.",
"title": ""
},
{
"docid": "5f4c9518ad93c7916010efcae888cefe",
"text": "Honeypots and similar sorts of decoys represent only the most rudimentary uses of deception in protection of information systems. But because of their relative popularity and cultural interest, they have gained substantial attention in the research and commercial communities. In this paper we will introduce honeypots and similar sorts of decoys, discuss their historical use in defense of information systems, and describe some of their uses today. We will then go into a bit of the theory behind deceptions, discuss their limitations, and put them in the greater context of information protection. 1. Background and History Honeypots and other sorts of decoys are systems or components intended to cause malicious actors to attack the wrong targets. Along the way, they produce potentially useful information for defenders. 1.1 Deception fundamentals According to the American Heritage Dictionary of the English Language (1981): \"deception\" is defined as \"the act of deceit\" \"deceit\" is defined as \"deception\". Fundamentally, deception is about exploiting errors in cognitive systems for advantage. History shows that deception is achieved by systematically inducing and suppressing signals entering the target cognitive system. There have been many approaches to the identification of cognitive errors and methods for their exploitation, and some of these will be explored here. For more thorough coverage, see [68]. Honeypots and decoys achieve this by presenting targets that appear to be useful targets for attackers. To quote Jesus Torres, who worked on honeypots as part of his graduate degree at the Naval Postgradua te School: “For a honeypot to work, it needs to have some honey” Honeypots work by providing something that appears to be desirable to the attacker. The attacker, in searching for the honey of interest, comes across the honeypot, and starts to taste of its wares. 
If they are appealing enough, the attacker spends significant time and effort getting at the honey provided. If the attacker has finite resources, the time spent going after the honeypot is time not spent going after other things the honeypot is intended to protect. If the attacker uses tools and techniques in attacking the honeypot, some aspects of those tools and techniques are revealed to the defender in the attack on the honeypot. Decoys, like the chaff used to cause information systems used in missiles to go after the wrong objective, induce some signals into the cognitive system of their target (the missile) that, if successful, cause the missile to go after the chaff instead of its real objective. While some readers might be confused for a moment about the relevance of military operations to normal civilian use of deceptions, this example is particularly useful because it shows how information systems are used to deceive other information systems and it is an example in which only the induction of signals is applied. Of course in tactical situations, the real object of the missile attack may also take other actions to suppress its own signals, and this makes the analogy even better suited for this use. Honeypots and decoys only induce signals; they do not suppress them. While other deceptions that suppress signals may be used in concert with honeypots and decoys, the remainder of this paper will focus on signal induction as a deceptive technique and shy away from signal suppression and combinations of signal suppression and induction. 1.2 Historical Deceptions Since long before 800 B.C. when Sun Tzu wrote \"The Art of War\" [28] deception has been key to success in warfare. Similarly, information protection as a field of study has been around for at least 4,000 years [41]. And long before humans documented the use of deceptions, even before humans existed, deception was common in nature. 
Just as baboons beat their chests, so did early humans, and of course who has not seen the films of Khrushchev at the United Nations beating his shoe on the table and stating “We will bury you!”. While this article is about deceptions involving computer systems, understanding cognitive issues in deception is fundamental to understanding any deception. 1.3 Cognitive Deception Background Many authors have examined facets of deception from both an experiential and cognitive perspective. Chuck Whitlock has built a large part of his career on identifying and demonstrating these sorts of deceptions. [12] His book includes detailed descriptions and examples of scores of common street deceptions. Fay Faron points out that most such confidence efforts are carried out as specific 'plays' and details the anatomy of a 'con' [30]. Bob Fellows [13] takes a detailed approach to how 'magic' and similar techniques exploit human fallibility and cognitive limits to deceive people. Thomas Gilovich [14] provides in-depth analysis of human reasoning fallibility by presenting evidence from psychological studies that demonstrate a number of human reasoning mechanisms resulting in erroneous conclusions. Charles K. West [32] describes the steps in psychological and social distortion of information and provides detailed support for cognitive limits leading to deception. Al Seckel [15] provides about 100 excellent examples of various optical illusions, many of which work regardless of the knowledge of the observer, and some of which are defeated after the observer sees them only once. Donald D. Hoffman [36] expands this into a detailed examination of visual intelligence and how the brain processes visual information. It is particularly noteworthy that the visual cortex consumes a great deal of the total human brain space and that it has a great deal of effect on cognition. Deutsch [47] provides a series of demonstrations of interpretation and misinterpretation of audio information. 
First Karrass [33] then Cialdini [34] have provided excellent summaries of negotiation strategies and the use of influence to gain advantage. Both also explain how to defend against influence tactics. Cialdini [34] provides a simple structure for influence and asserts that much of the effect of influence techniques is built in and occurs below the conscious level for most people. Robertson and Powers [31] have worked out a more detailed low-level theoretical model of cognition based on \"Perceptual Control Theory\" (PCT), but extensions to higher levels of cognition have been highly speculative to date. They define a set of levels of cognition in terms of their order in the control system, but beyond the lowest few levels they have inadequate basis for asserting that these are orders of complexity in the classic control theoretical sense. Their higher level analysis results have also not been shown to be realistic representations of human behaviors. David Lambert [2] provides an extensive collection of examples of deceptions and deceptive techniques mapped into a cognitive model intended for modeling deception in military situations. These are categorized into cognitive levels in Lambert's cognitive model. Charles Handy [37] discusses organizational structures and behaviors and the roles of power and influence within organizations. The National Research Council (NRC) [38] discusses models of human and organizational behavior and how automation has been applied in this area. The NRC report includes scores of examples of modeling techniques and details of simulation implementations based on those models and their applicability to current and future needs. Greene [46] describes the 48 laws of power and, along the way, demonstrates 48 methods that exert compliance forces in an organization. These can be traced to cognitive influences and mapped out using models like Lambert's, Cialdini's, and the one we describe later in this paper. 
Closely related to the subject of deception is the work done by the CIA on the MKULTRA project. [52] A good summary of some of the pre-1990 results on psychological aspects of self deception is provided in Heuer's CIA book on the psychology of intelligence analysis. [49] Heuer goes one step further in trying to start assessing ways to counter deception, and concludes that intelligence analysts can make improvements in their presentation and analysis process. Several other papers on deception detection have been written and substantially summarized in Vrij's book on the subject.[50] All of these books and papers are summarized in more detail in “A Framework for Deception” [68] which provides much of the basis for the historical issues in this paper as well as other related issues in deception not limited to honeypots, decoys, and signal induction deceptions. In addition, most of the computer deception background presented next is derived from this paper. 1.4 Computer Deception Background The most common example of a computer security mechanism based on deception is the response to attempted logins on most modern computer systems. When a user first attempts to access a system, they are asked for a user identification (UID) and password. Regardless of whether the cause of a failed access attempt was the result of a nonexistent UID or an invalid password for that UID, a failed attempt is met with the same message. In text based access methods, the UID is typically requested first and, even if no such UID exists in the system, a password is requested. Clearly, in such systems, the computer can identify that no such UID exists without asking for a password. And yet these systems intentionally suppress the information that no such UID exists and induce a message designed to indicate that the UID does exist. 
In earlier systems where this was not done, attackers exploited the result so as to gain additional information about which UIDs were on the system and this dramatically reduced their difficulty in attack. This is a very widely accepted practice, and when presented as a deception, many people who otherwise object to deceptions in computer systems indicate that this somehow doesn’t count as a d",
"title": ""
},
{
"docid": "02c1c424e4511219cc2e857a3c39de32",
"text": "We propose a unified architecture for next generation cognitive, low cost, mobile internet. The end user platform is able to scale as per the application and network requirements. It takes computing out of the data center and into end user platform. Internet enables open standards, accessible computing and applications programmability on a commodity platform. The architecture is a super-set to present day infrastructure web computing. The Java virtual machine (JVM) derives from the stack architecture. Applications can be developed and deployed on a multitude of host platforms. O(1)→ O(N). Computing and the internet today are more accessible and available to the larger community. Machine learning has made extensive advances with the availability of modern computing. It is used widely in NLP, Computer Vision, Deep learning and AI. A prototype device for mobile could contain N compute and N MB of memory. Keywords— mobile, server, internet",
"title": ""
},
{
"docid": "1474c61cba04ac391079082d175c5532",
"text": "With an increasing understanding of the aging process and the rapidly growing interest in minimally invasive treatments, injectable facial fillers have changed the perspective for the treatment and rejuvenation of the aging face. Other than autologous fat and certain preformed implants, the collagen family products were the only Food and Drug Administration approved soft tissue fillers. But the overwhelming interest in soft tissue fillers had led to the increase in research and development of other products including bioengineered nonpermanent implants and permanent alloplastic implants. As multiple injectable soft tissue fillers and biostimulators are continuously becoming available, it is important to understand the biophysical properties inherent in each, as these constitute the clinical characteristics of the product. This article will review the materials and properties of the currently available soft tissue fillers: hyaluronic acid, calcium hydroxylapatite, poly-l-lactic acid, polymethylmethacrylate, and autologous fat (and aspirated tissue including stem cells).",
"title": ""
},
{
"docid": "bf9e828c9e3ee8d64d387cd518fb6b2d",
"text": "As smartphone penetration saturates, we are witnessing a new trend in personal mobile devices—wearable mobile devices or simply wearables as it is often called. Wearables come in many different forms and flavors targeting different accessories and clothing that people wear. Although small in size, they are often expected to continuously sense, collect, and upload various physiological data to improve quality of life. These requirements put significant demand on improving communication security and reducing power consumption of the system, fueling new research in these areas. In this paper, we first provide a comprehensive survey and classification of commercially available wearables and research prototypes. We then examine the communication security issues facing the popular wearables followed by a survey of solutions studied in the literature. We also categorize and explain the techniques for improving the power efficiency of wearables. Next, we survey the research literature in wearable computing. We conclude with future directions in wearable market and research.",
"title": ""
},
{
"docid": "0e0b0b6b0fdab06fa9d3ebf6a8aefd6b",
"text": "Hippocampal place fields have been shown to reflect behaviorally relevant aspects of space. For instance, place fields tend to be skewed along commonly traveled directions, they cluster around rewarded locations, and they are constrained by the geometric structure of the environment. We hypothesize a set of design principles for the hippocampal cognitive map that explain how place fields represent space in a way that facilitates navigation and reinforcement learning. In particular, we suggest that place fields encode not just information about the current location, but also predictions about future locations under the current transition distribution. Under this model, a variety of place field phenomena arise naturally from the structure of rewards, barriers, and directional biases as reflected in the transition policy. Furthermore, we demonstrate that this representation of space can support efficient reinforcement learning. We also propose that grid cells compute the eigendecomposition of place fields in part because is useful for segmenting an enclosure along natural boundaries. When applied recursively, this segmentation can be used to discover a hierarchical decomposition of space. Thus, grid cells might be involved in computing subgoals for hierarchical reinforcement learning.",
"title": ""
},
{
"docid": "985e8fae88a81a2eec2ca9cc73740a0f",
"text": "Negative symptoms account for much of the functional disability associated with schizophrenia and often persist despite pharmacological treatment. Cognitive behavioral therapy (CBT) is a promising adjunctive psychotherapy for negative symptoms. The treatment is based on a cognitive formulation in which negative symptoms arise and are maintained by dysfunctional beliefs that are a reaction to the neurocognitive impairment and discouraging life events frequently experienced by individuals with schizophrenia. This article outlines recent innovations in tailoring CBT for negative symptoms and functioning, including the use of a strong goal-oriented recovery approach, in-session exercises designed to disconfirm dysfunctional beliefs, and adaptations to circumvent neurocognitive and engagement difficulties. A case illustration is provided.",
"title": ""
},
{
"docid": "37a0c6ac688c7d7f2dd622ebbe3ec184",
"text": "Prior research shows that directly applying phrase-based SMT on lexical tokens to migrate Java to C# produces much semantically incorrect code. A key limitation is the use of sequences in phrase-based SMT to model and translate source code with well-formed structures. We propose mppSMT, a divide-and-conquer technique to address that with novel training and migration algorithms using phrase-based SMT in three phases. First, mppSMT treats a program as a sequence of syntactic units and maps/translates such sequences in two languages to one another. Second, with a syntax-directed fashion, it deals with the tokens within syntactic units by encoding them with semantic symbols to represent their data and token types. This encoding via semantic symbols helps better migration of API usages. Third, the lexical tokens corresponding to each sememe are mapped or migrated. The resulting sequences of tokens are merged together to form the final migrated code. Such divide-and-conquer and syntax-direction strategies enable phrase-based SMT to adapt well to syntactical structures in source code, thus, improving migration accuracy. Our empirical evaluation on several real-world systems shows that 84.8 -- 97.9% and 70 -- 83% of the migrated methods are syntactically and semantically correct, respectively. 26.3 -- 51.2% of total migrated methods are exactly matched to the human-written C# code in the oracle. Compared to Java2CSharp, a rule-based migration tool, it achieves higher semantic accuracy from 6.6 -- 57.7% relatively. Importantly, it does not require manual labeling for training data or manual definition of rules.",
"title": ""
},
{
"docid": "1debcbf981ae6115efcc4a853cd32bab",
"text": "Vision and language understanding has emerged as a subject undergoing intense study in Artificial Intelligence. Among many tasks in this line of research, visual question answering (VQA) has been one of the most successful ones, where the goal is to learn a model that understands visual content at region-level details and finds their associations with pairs of questions and answers in the natural language form. Despite the rapid progress in the past few years, most existing work in VQA have focused primarily on images. In this paper, we focus on extending VQA to the video domain and contribute to the literature in three important ways. First, we propose three new tasks designed specifically for video VQA, which require spatio-temporal reasoning from videos to answer questions correctly. Next, we introduce a new large-scale dataset for video VQA named TGIF-QA that extends existing VQA work with our new tasks. Finally, we propose a dual-LSTM based approach with both spatial and temporal attention, and show its effectiveness over conventional VQA techniques through empirical evaluations.",
"title": ""
},
{
"docid": "4adfa3026fbfceca68a02ee811d8a302",
"text": "Designing a new domain specific language is as any other complex task sometimes error-prone and usually time consuming, especially if the language shall be of high-quality and comfortably usable. Existing tool support focuses on the simplification of technical aspects but lacks support for an enforcement of principles for a good language design. In this paper we investigate guidelines that are useful for designing domain specific languages, largely based on our experience in developing languages as well as relying on existing guidelines on general purpose (GPLs) and modeling languages. We defined guidelines to support a DSL developer to achieve better quality of the language design and a better acceptance among its users.",
"title": ""
},
{
"docid": "9d95535e6aee8acb6a613211223c3341",
"text": "We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds. A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder, and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations of molecules allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our method in the domain of drug-like molecules and also in a set of molecules with fewer that nine heavy atoms.",
"title": ""
},
{
"docid": "b7ec7f1c2cef561a979dae311322dd39",
"text": "We envision that the physical architectural space we inhabit will be a new form of interface between humans and digital information. This paper and video present the design of the ambientROOM, an interface to information for processing in the background of awareness. This information is displayed through various subtle displays of light, sound, and movement. Physical objects are also employed as controls for these “ambient media.”",
"title": ""
},
{
"docid": "984dba43888e7a3572d16760eba6e9a5",
"text": "This study developed an integrated model to explore the antecedents and consequences of online word-of-mouth in the context of music-related communication. Based on survey data from college students, online word-of-mouth was measured with two components: online opinion leadership and online opinion seeking. The results identified innovativeness, Internet usage, and Internet social connection as significant predictors of online word-of-mouth, and online forwarding and online chatting as behavioral consequences of online word-of-mouth. Contrary to the original hypothesis, music involvement was found not to be significantly related to online word-of-mouth. Theoretical implications of the findings and future research directions are discussed.",
"title": ""
},
{
"docid": "3f5aa023f0cda7e56c0004e57a8b60e3",
"text": "The contribution of this paper is two-fold. First, a connection is established between approximating the size of the largest clique in a graph and multi-prover interactive proofs. Second, an efficient multi-prover interactive proof for NP languages is constructed, where the verifier uses very few random bits and communication bits. Last, the connection between cliques and efficient multi-prover interaction proofs, is shown to yield hardness results on the complexity of approximating the size of the largest clique in a graph.\nOf independent interest is our proof of correctness for the multilinearity test of functions.",
"title": ""
},
{
"docid": "c40168c28ca6ae6174ede1046eb2ec8c",
"text": "This paper proposes a wide pulse combined with a narrow-pulse generator for solid-food sterilization. The proposed generator is composed of a full-bridge converter in phase-shift control to generate a high dc-link voltage and a full-bridge inverter associated with an L-C network and a transformer to generate wide pulses combined with narrow pulses. These combined pulses can prevent undesired strong air arcing in free space, reduce power consumption, and save power components, while sterilizing food effectively. The converter and inverter can be operated at high frequencies and with pulse-width-modulation control; thus, its weight and size can be reduced significantly, and its efficiency can correspondingly be improved. Experimental results obtained from a prototype with ±10-kV wide pulses combined with ±10-kV narrow pulses and with 10- to 50-kW peak output power, depending on the pulsewidth of the output pulses, have demonstrated its feasibility.",
"title": ""
},
{
"docid": "8a4ff0af844823400d1ce707fd57e16f",
"text": "In this work, we propose a new language modeling paradigm that has the ability to perform both prediction and moderation of information flow at multiple granularities: neural lattice language models. These models construct a lattice of possible paths through a sentence and marginalize across this lattice to calculate sequence probabilities or optimize parameters. This approach allows us to seamlessly incorporate linguistic intuitions — including polysemy and the existence of multiword lexical items — into our language model. Experiments on multiple language modeling tasks show that English neural lattice language models that utilize polysemous embeddings are able to improve perplexity by 9.95% relative to a word-level baseline, and that a Chinese model that handles multi-character tokens is able to improve perplexity by 20.94% relative to a character-level baseline.",
"title": ""
},
{
"docid": "b2db53f203f2b168ec99bd8e544ff533",
"text": "BACKGROUND\nThis study aimed to analyze the scientific outputs of esophageal and esophagogastric junction (EGJ) cancer and construct a model to quantitatively and qualitatively evaluate pertinent publications from the past decade.\n\n\nMETHODS\nPublications from 2007 to 2016 were retrieved from the Web of Science Core Collection database. Microsoft Excel 2016 (Redmond, WA) and the CiteSpace (Drexel University, Philadelphia, PA) software were used to analyze publication outcomes, journals, countries, institutions, authors, research areas, and research frontiers.\n\n\nRESULTS\nA total of 12,978 publications on esophageal and EGJ cancer were identified published until March 23, 2017. The Journal of Clinical Oncology had the largest number of publications, the USA was the leading country, and the University of Texas MD Anderson Cancer Center was the leading institution. Ajani JA published the most papers, and Jemal A had the highest co-citation counts. Esophageal squamous cell carcinoma ranked the first in research hotspots, and preoperative chemotherapy/chemoradiotherapy ranked the first in research frontiers.\n\n\nCONCLUSION\nThe annual number of publications steadily increased in the past decade. A considerable number of papers were published in journals with high impact factor. Many Chinese institutions engaged in esophageal and EGJ cancer research but significant collaborations among them were not noted. Jemal A, Van Hagen P, Cunningham D, and Enzinger PC were identified as good candidates for research collaboration. Neoadjuvant therapy and genome-wide association study in esophageal and EGJ cancer research should be closely observed.",
"title": ""
},
{
"docid": "4110d0601a31430dd5d415fea453ae43",
"text": "With the fast development of the mobile Internet, the Internet of Things (IoT) has recently found many important applications. However, it still faces many challenges in security and privacy. Blockchain (BC) technology, which underpins the cryptocurrency Bitcoin, has played an important role in the development of decentralized and data-intensive applications running on millions of devices. In this paper, to establish the relationship between IoT and BC for device credibility verification, we propose a framework with layered, intersecting, and self-organizing Blockchain Structures (BCS). In this new framework, each BCS is organized by Blockchain technology. We describe the credibility verification method and show how it provides the verification. The efficiency and security analysis are also given in this paper, including its response time, storage efficiency, and verification. The conducted experiments demonstrate the validity of the proposed method in satisfying the credibility requirement achieved by Blockchain technology and show certain advantages in storage space and response time.",
"title": ""
},
{
"docid": "b576ffcda7637e3c2e45194ab16f8c26",
"text": "This paper presents an asynchronous pipelined all-digital 10-b time-to-digital converter (TDC) with fine resolution, good linearity, and high throughput. Using a 1.5-b/stage pipeline architecture, an on-chip digital background calibration is implemented to correct residue subtraction error in the seven MSB stages. An asynchronous clocking scheme realizes pipeline operation for higher throughput. The TDC was implemented in standard 0.13-μm CMOS technology and has a maximum throughput of 300 MS/s and a resolution of 1.76 ps with a total conversion range of 1.8 ns. The measured DNL and INL were 0.6 LSB and 1.9 LSB, respectively.",
"title": ""
}
] |
scidocsrr
|
968058449c28baf1c6060e88d9e49636
|
Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences
|
[
{
"docid": "69fd3e6e9a1fc407d20b0fb19fc536e3",
"text": "In the last decade, the research topic of automatic analysis of facial expressions has become a central topic in machine vision research. Nonetheless, there is a glaring lack of a comprehensive, readily accessible reference set of face images that could be used as a basis for benchmarks for efforts in the field. This lack of easily accessible, suitable, common testing resource forms the major impediment to comparing and extending the issues concerned with automatic facial expression analysis. In this paper, we discuss a number of issues that make the problem of creating a benchmark facial expression database difficult. We then present the MMI facial expression database, which includes more than 1500 samples of both static images and image sequences of faces in frontal and in profile view displaying various expressions of emotion, single and multiple facial muscle activation. It has been built as a Web-based direct-manipulation application, allowing easy access and easy search of the available images. This database represents the most comprehensive reference set of images for studies on facial expression analysis to date.",
"title": ""
}
] |
[
{
"docid": "bbdf68b20aed9801ece9dc2adaa46ba5",
"text": "A coflow is a collection of parallel flows, and a job consists of a set of coflows. A job is completed only when all of the flows in its coflows complete. Therefore, the completion time of a job is determined by the latest flows in its coflows. To guarantee job completion time and service performance, the job deadline and the dependencies among coflows need to be considered in the scheduling process. However, most existing methods ignore coflow dependencies, which are important for guaranteeing job completion. In this paper, we take coflow dependencies into consideration. To guarantee job completion and performance, we formulate a deadline- and dependency-based model called the MTF scheduler model. The purpose of the MTF model is to minimize the overall completion time under the constraints of deadline and network capacity. Accordingly, we propose a method to schedule dependent coflows. In particular, we treat dependent coflows as a single entity and propose a valuable-coflow-scheduling-first MTF algorithm. We conduct extensive simulations to evaluate the MTF method, which outperforms the conventional shortest-job-first method while guaranteeing job deadlines.",
"title": ""
},
{
"docid": "9ca208c420c5c9e4592bf86f1245056d",
"text": "No one had given Muhammad Ali a chance against George Foreman in the World Heavyweight Championship fight of October 30, 1974. Foreman, none of whose opponents had lasted more than three rounds in the ring, was the strongest, hardest hitting boxer of his generation. Ali, though not as powerful as Foreman, had a slightly faster punch and was lighter on his feet. In the weeks leading up to the fight, however, Foreman had practiced against nimble sparring partners. He was ready. But when the bell rang just after 4:00 a.m. in Kinshasa, something completely unexpected happened. In round two, instead of moving into the ring to meet Foreman, Ali appeared to cower against the ropes. Foreman, now confident of victory, pounded him again and again, while Ali whispered hoarse taunts: “George, you’re not hittin’,” “George, you disappoint me.” Foreman lost his temper, and his punches became a furious blur. To spectators, unaware that the elastic ring ropes were absorbing much of the force of Foreman’s blows, it looked as if Ali would surely fall. By the fifth round, however, Foreman was worn out. And in round eight, as stunned commentators and a delirious crowd looked on, Muhammad Ali knocked George Foreman to the canvas, and the fight was over. The outcome of that now-famous “rumble in the jungle” was completely unexpected. The two fighters were equally motivated to win: Both had boasted of victory, and both had enormous egos. Yet in the end, a fight that should have been over in three rounds went eight, and Foreman’s prodigious punches proved useless against Ali’s rope-a-dope strategy. This fight illustrates an important yet relatively unexplored feature of interstate conflict: how a weak actor’s strategy can make a strong actor’s power irrelevant. How the Weak Win Wars, Ivan Arreguín-Toft",
"title": ""
},
{
"docid": "9acc03449f1b51188257b7e05c561c2a",
"text": "When neural networks process images which do not resemble the distribution seen during training, so called out-of-distribution images, they often make wrong predictions, and do so too confidently. The capability to detect out-of-distribution images is therefore crucial for many real-world applications. We divide out-of-distribution detection between novelty detection —images of classes which are not in the training set but are related to those—, and anomaly detection —images with classes which are unrelated to the training set. By related we mean they contain the same type of objects, like digits in MNIST and SVHN. Most existing work has focused on anomaly detection, and has addressed this problem considering networks trained with the cross-entropy loss. Differently from them, we propose to use metric learning which does not have the drawback of the softmax layer (inherent to cross-entropy methods), which forces the network to divide its prediction power over the learned classes. We perform extensive experiments and evaluate both novelty and anomaly detection, even in a relevant application such as traffic sign recognition, obtaining comparable or better results than previous works.",
"title": ""
},
{
"docid": "5c04f381c2b3de1377e1988b4fb64ecd",
"text": "The study of bullying behavior and its consequences for young people depends on valid and reliable measurement of bullying victimization and perpetration. Although numerous self-report bullying-related measures have been developed, robust evidence of their psychometric properties is scant, and several limitations inhibit their applicability. The Forms of Bullying Scale (FBS), with versions to measure bullying victimization (FBS-V) and perpetration (FBS-P), was developed on the basis of existing instruments, for use with 12- to 15-year-old adolescents to economically, yet comprehensively measure both bullying perpetration and victimization. Measurement properties were estimated. Scale validity was tested using data from 2 independent studies of 3,496 Grade 8 and 783 Grade 8-10 students, respectively. Construct validity of scores on the FBS was shown in confirmatory factor analysis. The factor structure was not invariant across gender. Strong associations between the FBS-V and FBS-P and separate single-item bullying items demonstrated adequate concurrent validity. Correlations, in directions as expected with social-emotional outcomes (i.e., depression, anxiety, conduct problems, and peer support), provided robust evidence of convergent and discriminant validity. Responses to the FBS items were found to be valid and concurrently reliable measures of self-reported frequency of bullying victimization and perpetration, as well as being useful to measure involvement in the different forms of bullying behaviors. (PsycINFO Database Record (c) 2013 APA, all rights reserved).",
"title": ""
},
{
"docid": "627587e2503a2555846efb5f0bca833b",
"text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.",
"title": ""
},
{
"docid": "98e9dff9ba946dc1ea6d50b1271a0685",
"text": "OBJECTIVES\nTo evaluate the effect of Carbopol gel formulations containing pilocarpine on the morphology and morphometry of the vaginal epithelium of castrated rats.\n\n\nMETHODS\nThirty-one female Wistar-Hannover rats were randomly divided into four groups: the control Groups I (n=7, rats in persistent estrus; positive controls) and II (n=7, castrated rats, negative controls) and the experimental Groups, III (n=8) and IV (n=9). Persistent estrus (Group I) was achieved with a subcutaneous injection of testosterone propionate on the second postnatal day. At 90 days postnatal, rats in Groups II, III and IV were castrated and treated vaginally for 14 days with Carbopol gel (vehicle alone) or Carbopol gel containing 5% and 15% pilocarpine, respectively. Next, all of the animals were euthanized and their vaginas were removed for histological evaluation. A non-parametric test with a weighted linear regression model was used for data analysis (p<0.05).\n\n\nRESULTS\nThe morphological evaluation showed maturation of the vaginal epithelium with keratinization in Group I, whereas signs of vaginal atrophy were present in the rats of the other groups. Morphometric examinations showed mean thickness values of the vaginal epithelium of 195.10±12.23 μm, 30.90±1.14 μm, 28.16±2.98 μm and 29.84±2.30 μm in Groups I, II, III and IV, respectively, with statistically significant differences between Group I and the other three groups (p<0.0001) and no differences between Groups II, III and IV (p=0.0809).\n\n\nCONCLUSION\nTopical gel formulations containing pilocarpine had no effect on atrophy of the vaginal epithelium in the castrated female rats.",
"title": ""
},
{
"docid": "23615c8affc64304b2dab6b5d7e9b77b",
"text": "Softmax loss is widely used in deep neural networks for multi-class classification, where each class is represented by a weight vector, a sample is represented as a feature vector, and the feature vector has the largest projection on the weight vector of the correct category when the model correctly classifies a sample. To ensure generalization, weight decay that shrinks the weight norm is often used as regularizer. Different from traditional learning algorithms where features are fixed and only weights are tunable, features are also tunable as representation learning in deep learning. Thus, we propose feature incay to also regularize representation learning, which favors feature vectors with large norm when the samples can be correctly classified. With the feature incay, feature vectors are further pushed away from the origin along the direction of their corresponding weight vectors, which achieves better inter-class separability. In addition, the proposed feature incay encourages intra-class compactness along the directions of weight vectors by increasing the small feature norm faster than the large ones. Empirical results on MNIST, CIFAR10 and CIFAR100 demonstrate feature incay can improve the generalization ability.",
"title": ""
},
{
"docid": "831836deb75aacb54513004daa92e1bf",
"text": "Jean Watson introduced The Theory of Human Caring over thirty years ago to the nursing profession. In the theory it is stated that caring is the essence of nursing and that professional nurses have an obligation to provide the best environment for healing to take place. The theory’s carative factors outlines principles and ideas that should be used by the professional nurse to create the best environment for healing of the patient and of the nurse. This paper will describe and critique Jean Watson’s Theory of Human Caring and discuss how this model has influenced nursing practice. REFLECTIONS ON JEAN WATSON'S THEORY OF HUMAN CARING 3 Reflections on Jean Watson's Theory of Human Caring Florence Nightingale helped define the role of the nurse over one hundred and fifty years ago. Even so, nursing has struggled to find an identity apart from medicine. For years nursing theorists have examined how nursing is unique from medicine. While it was obvious that nursing was a different art than medicine, there was not any scholarly work to illustrate the difference. During the 1950’s nursing began building a body of knowledge, which interpreted and conceptualized the intricacies of nursing. Over the next several decades, nurse theorists rapidly grew the discipline’s foundation. One of the concepts that emerged was nursing as caring. Several theorists have identified caring as being central to nursing; however, Watson’s Theory of Human Caring offers a unique perspective. The theory blends the beliefs and ideas from Eastern and Western cultures to create a spiritual philosophy that can be used throughout nursing practice. This paper will describe and critique Jean Watson’s Theory of Human Caring and discuss how this model has influenced nursing practice. Introduction to the Theory The Theory of Human Caring evolved from Jean Watson’s own desire to develop a deeper understanding of the meaning of humanity and life.
She was also greatly influenced by her background in philosophy, psychology and nursing science. Watson’s first book Nursing: The Philosophy and Science of Caring (1979) was developed to bring a “new meaning and dignity” to nursing care (Watson, 2008). The first book introduced carative factors, which are the foundation of Watson’s Theory of Human Caring. The carative factors offered a holistic perspective to caring for a patient, juxtaposed to the reductionist, biophysical model that was prevalent at the time. Watson believed that without incorporating the carative factors, a nurse was only performing tasks when treating a patient and not offering professional nursing care (Watson, 2008). In Watson’s second book, Nursing: Human Science and Human Care, A Theory of Nursing (1985), she discusses the philosophical and spiritual components of the Theory of Human Caring, as well as expands upon the definition and implications of the transpersonal moment. The second book redefines caring as a combination of scientific actions, consciousness and intentionality, as well as defines the transcendental phenomenology of a transpersonal caring occasion and expands upon the idea of human-to-human connection. Watson’s third book, Postmodern Nursing and Beyond (1999), focuses on the evolution of the consciousness of the clinician. The third book reinforces the ideas of the first two books and further evolves several concepts to include the spiritual realm, the energetic realm, the interconnectedness to all things and the higher power. The philosophy behind each book and the Theory of Human Caring is that all human beings are connected to each other and to a divine spirit or higher power. Furthermore, each interaction between human beings, but specifically between nurses and patients, should be entered into with the intention of connecting with the patient’s spirit or higher source.
Each moment or each act can and should not only facilitate healing in the patient and the nurse, but also transcend both space and time. The components of Watson’s theories include the 10 carative factors, the caritas process, the transpersonal caring relationship, caring moments and caring/healing modalities. Carative factors are the essential characteristics needed by the professional nurse to establish a therapeutic relationship and promote healing. Carative factors are the core of Watson’s philosophy and they are (i) formation of a humanistic-altruistic systems of values, (ii) instillation of faith-hope, (iii) cultivation of sensitivity to one’s self and to others, (iv) development of a helping-trusting human caring relationship, (v) promotion and acceptance of the expression of positive and negative feelings, (vi) systematic use of a creative problem solving and caring process, (vii) promotion of transpersonal teaching-learning, (viii) provision for supportive, protective, and/or corrective mental, physical, societal and spiritual environment, (ix) assistance with gratification of human needs and (x) allowance for existential-phenomenological-spiritual forces. Carative factors are intended to provide a foundation for the discipline of nursing that is developed from understanding and compassion. Watson’s caritas processes are the expansion of the original carative factors and are reflective of Watson’s own personal evolution. The caritas processes provide the tenets for a professional approach to caring, a means by which to practice caring in a spiritual and loving fashion. The transpersonal caring relationship is a relationship that goes beyond one’s self and creates a change in the energetic environment of the nurse and the patient.
A transpersonal caring relationship allows for a relationship between the souls of the individuals and because of this authentic relationship, optimal caring and healing can take place (Watson, 1985). In the transpersonal relationship the caregiver is aware of his/her intention and performs care that is emanating from the heart. When intentionality is focused and delivered from the heart, unseen energetic fields can change and promote an environment for healing. When a nurse is more conscious of his or her self and surroundings, he or she acts from a place of love with each caring moment. Caring moments are any moments in which a nurse has an interaction with a patient or family and is using the carative factors or the caritas process. In order for a caring moment to occur the participation of the nurse and the patient is required. Practice based on the carative factors presents an opportunity for both the nurse and patient to engage in a transpersonal caring moment that benefits the mind, body and soul of each person. The caring/healing modalities are practices that enhance the ability of the care provider to engage in transpersonal relationship and caring moments. Caring/healing exercises can be as simple as centering, being attentive to touch or the communication of specific knowledge. The goal of using Watson’s principles in practice is to enhance the life and experience of the nurse and of the patient. Description of Theory Purpose The Theory of Human Caring was developed based on Watson’s desire to reestablish holistic practice in nursing care and move away from the cold and disconnected scientific model while infusing feeling and caring back into nursing practice (Watson, 2008). The purpose of the theory was to provide a philosophical-ethical foundation from which the nurse could provide care.
The proposed benefit of this theory for both the nurse and the patient is that when each person reveals his or her authentic self and engages in interactions with another being, the energetic field around both of them will change and enhance the healing environment. The theory’s purpose is quite broad, promoting healing and oneness with the universe through caring. The positive impact of these practices is phenomenal and the beauty of the theory is that the caritas processes can be used to enhance any practice. When applied to nursing practice, the theory reestablishes Florence Nightingale’s vision that nursing is a spiritual calling. The deeper message within the theory is that being/relating to others from a place of love can transcend the planes and energetic fields of the universe and promote healing to one’s self and to",
"title": ""
},
{
"docid": "6f05e76961d4ef5fc173bafd5578081f",
"text": "Edmodo is simply a controlled online networking application that can be used by teachers and students to communicate and remain connected. This paper explores the experiences of a group of students who were using the Edmodo platform in their course work. It attempts to use the SAMR (Substitution, Augmentation, Modification and Redefinition) framework of technology integration in education to assess and evaluate technology use in the classroom. The respondents were a group of 62 university students from a Kenyan university whose lecturer had created an Edmodo account and introduced the students to participate in their course work during the September to December 2015 semester. More than 82% of the students found that they had a personal stake in the quality of work presented through the platform and that they were able to take on different subtopics and collaborate to create one final product. This underscores the importance of Edmodo as an environment, with skills already in the hands of the students, that can be used to integrate technology in the classroom.",
"title": ""
},
{
"docid": "d836f8b9c13ba744f39daa5887bed52e",
"text": "Cerebral palsy is the most common cause of childhood-onset, lifelong physical disability in most countries, affecting about 1 in 500 neonates with an estimated prevalence of 17 million people worldwide. Cerebral palsy is not a disease entity in the traditional sense but a clinical description of children who share features of a non-progressive brain injury or lesion acquired during the antenatal, perinatal or early postnatal period. The clinical manifestations of cerebral palsy vary greatly in the type of movement disorder, the degree of functional ability and limitation and the affected parts of the body. There is currently no cure, but progress is being made in both the prevention and the amelioration of the brain injury. For example, administration of magnesium sulfate during premature labour and cooling of high-risk infants can reduce the rate and severity of cerebral palsy. Although the disorder affects individuals throughout their lifetime, most cerebral palsy research efforts and management strategies currently focus on the needs of children. Clinical management of children with cerebral palsy is directed towards maximizing function and participation in activities and minimizing the effects of the factors that can make the condition worse, such as epilepsy, feeding challenges, hip dislocation and scoliosis. These management strategies include enhancing neurological function during early development; managing medical co-morbidities, weakness and hypertonia; using rehabilitation technologies to enhance motor function; and preventing secondary musculoskeletal problems. Meeting the needs of people with cerebral palsy in resource-poor settings is particularly challenging.",
"title": ""
},
{
"docid": "baad68c1adef7b72d78745fe03db0c57",
"text": "In this paper, we propose a new visualization approach based on a Sensitivity Analysis (SA) to extract human understandable knowledge from supervised learning black box data mining models, such as Neural Networks (NNs), Support Vector Machines (SVMs) and ensembles, including Random Forests (RFs). Five SA methods (three of which are purely new) and four measures of input importance (one novel) are presented. Also, the SA approach is adapted to handle discrete variables and to aggregate multiple sensitivity responses. Moreover, several visualizations for the SA results are introduced, such as input pair importance color matrix and variable effect characteristic surface. A wide range of experiments was performed in order to test the SA methods and measures by fitting four well-known models (NN, SVM, RF and decision trees) to synthetic datasets (five regression and five classification tasks). In addition, the visualization capabilities of the SA are demonstrated using four real-world datasets (e.g., bank direct marketing and white wine quality).",
"title": ""
},
{
"docid": "58de521ab563333c2051b590592501a8",
"text": "Prognostics and systems health management (PHM) is an enabling discipline that uses sensors to assess the health of systems, diagnoses anomalous behavior, and predicts the remaining useful performance over the life of the asset. The advent of the Internet of Things (IoT) enables PHM to be applied to all types of assets across all sectors, thereby creating a paradigm shift that is opening up significant new business opportunities. This paper introduces the concepts of PHM and discusses the opportunities provided by the IoT. Developments are illustrated with examples of innovations from manufacturing, consumer products, and infrastructure. From this review, a number of challenges that result from the rapid adoption of IoT-based PHM are identified. These include appropriate analytics, security, IoT platforms, sensor energy harvesting, IoT business models, and licensing approaches.",
"title": ""
},
{
"docid": "e0fc6fc1425bb5786847c3769c1ec943",
"text": "Developing manufacturing simulation models usually requires experts with knowledge of multiple areas including manufacturing, modeling, and simulation software. The expertise requirements increase for virtual factory models that include representations of manufacturing at multiple resolution levels. This paper reports on an initial effort to automatically generate virtual factory models using manufacturing configuration data in standard formats as the primary input. The execution of the virtual factory generates time series data in standard formats mimicking a real factory. Steps are described for auto-generation of model components in a software environment primarily oriented for model development via a graphic user interface. Advantages and limitations of the approach and the software environment used are discussed. The paper concludes with a discussion of challenges in verification and validation of the virtual factory prototype model with its multiple hierarchical models and future directions.",
"title": ""
},
{
"docid": "f6e90401ea52689801b164ef8167814c",
"text": "In this paper, we develop novel, efficient 2D encodings for 3D geometry, which enable reconstructing full 3D shapes from a single image at high resolution. The key idea is to pose 3D shape reconstruction as a 2D prediction problem. To that end, we first develop a simple baseline network that predicts entire voxel tubes at each pixel of a reference view. By leveraging well-proven architectures for 2D pixel-prediction tasks, we attain state-of-the-art results, clearly outperforming purely voxel-based approaches. We scale this baseline to higher resolutions by proposing a memory-efficient shape encoding, which recursively decomposes a 3D shape into nested shape layers, similar to the pieces of a Matryoshka doll. This allows reconstructing highly detailed shapes with complex topology, as demonstrated in extensive experiments; we clearly outperform previous octree-based approaches despite having a much simpler architecture using standard network components. Our Matryoshka networks further enable reconstructing shapes from IDs or shape similarity, as well as shape sampling.",
"title": ""
},
{
"docid": "f022871509e863f6379d76ba80afaa2f",
"text": "Neuroeconomics seeks to gain a greater understanding of decision making by combining theoretical and methodological principles from the fields of psychology, economics, and neuroscience. Initial studies using this multidisciplinary approach have found evidence suggesting that the brain may be employing multiple levels of processing when making decisions, and this notion is consistent with dual-processing theories that have received extensive theoretical consideration in the field of cognitive psychology, with these theories arguing for the dissociation between automatic and controlled components of processing. While behavioral studies provide compelling support for the distinction between automatic and controlled processing in judgment and decision making, less is known if these components have a corresponding neural substrate, with some researchers arguing that there is no evidence suggesting a distinct neural basis. This chapter will discuss the behavioral evidence supporting the dissociation between automatic and controlled processing in decision making and review recent literature suggesting potential neural systems that may underlie these processes.",
"title": ""
},
{
"docid": "a2f062482157efb491ca841cc68b7fd3",
"text": "Coping with malware is getting more and more challenging, given their relentless growth in complexity and volume. One of the most common approaches in literature is using machine learning techniques, to automatically learn models and patterns behind such complexity, and to develop technologies to keep pace with malware evolution. This survey aims at providing an overview on the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e. for the analysis of Portable Executables. We systematize surveyed papers according to their objectives (i.e., the expected output), what information about malware they specifically use (i.e., the features), and what machine learning techniques they employ (i.e., what algorithm is used to process the input and produce the output). We also outline a number of issues and challenges, including those concerning the used datasets, and identify the main current topical trends and how to possibly advance them. In particular, we introduce the novel concept of malware analysis economics, regarding the study of existing trade-offs among key metrics, such as analysis accuracy and economical costs.",
"title": ""
},
{
"docid": "982ebb6c33a1675d3073896e3768212a",
"text": "Morphometric analysis of nuclei play an essential role in cytological diagnostics. Cytological samples contain hundreds or thousands of nuclei that need to be examined for cancer. The process is tedious and time-consuming but can be automated. Unfortunately, segmentation of cytological samples is very challenging due to the complexity of cellular structures. To deal with this problem, we are proposing an approach, which combines convolutional neural network and ellipse fitting algorithm to segment nuclei in cytological images of breast cancer. Images are preprocessed by the colour deconvolution procedure to extract hematoxylin-stained objects (nuclei). Next, convolutional neural network is performing semantic segmentation of preprocessed image to extract nuclei silhouettes. To find the exact location of nuclei and to separate touching and overlapping nuclei, we approximate them using ellipses of various sizes and orientations. They are fitted using the Bayesian object recognition approach. The accuracy of the proposed approach is evaluated with the help of reference nuclei segmented manually. Tests carried out on breast cancer images have shown that the proposed method can accurately segment elliptic-shaped objects.",
"title": ""
},
{
"docid": "bb74cbb76c6efb4a030d2c5653e18842",
"text": "Two new wideband in-phase and out-of-phase balanced power dividing/combining networks are proposed in this paper. Based on matrix transformation, the differential-mode and common-mode equivalent circuits of the two wideband in-phase and out-of-phase networks can be easily deduced. A patterned ground-plane technique is used to realize the strong coupling of the shorted coupled lines for the differential mode. Two planar wideband in-phase and out-of-phase balanced networks with bandwidths of 55.3% and 64.4% for the differential mode with wideband common-mode suppression are designed and fabricated. The theoretical and measured results agree well with each other and show good in-band performances.",
"title": ""
},
{
"docid": "1994ae6f7de73b30729f274e70e4899f",
"text": "Being symmetric positive-definite (SPD), covariance matrix has traditionally been used to represent a set of local descriptors in visual recognition. Recent study shows that kernel matrix can give considerably better representation by modelling the nonlinearity in the local descriptor set. Nevertheless, neither the descriptors nor the kernel matrix is deeply learned. Worse, they are considered separately, hindering the pursuit of an optimal SPD representation. This work proposes a deep network that jointly learns local descriptors, kernel-matrix-based SPD representation, and the classifier via an end-to-end training process. We derive the derivatives for the mapping from a local descriptor set to the SPD representation to carry out backpropagation. Also, we exploit the Daleckǐi-Krěin formula in operator theory to give a concise and unified result on differentiating SPD matrix functions, including the matrix logarithm to handle the Riemannian geometry of kernel matrix. Experiments not only show the superiority of kernel-matrix-based SPD representation with deep local descriptors, but also verify the advantage of the proposed deep network in pursuing better SPD representations for fine-grained image recognition tasks.",
"title": ""
},
{
"docid": "fee7fc5639a66e68666a58ecec8e88d1",
"text": "Most previous studies assert the negative effect of loneliness on social life and an individual's well-being when individuals use the Internet. To expand this previous research tradition, the current study proposes a model to test whether loneliness has a direct or indirect effect on well-being when mediated by self-disclosure and social support. The results show that loneliness has a direct negative impact on well-being but a positive effect on self-disclosure. While self-disclosure positively influences social support, self-disclosure has no impact on well-being, and social support positively influences well-being. The results also show a full mediation effect of social support in the self-disclosure to well-being link. The results imply that even if lonely people's well-being is poor, their well-being can be enhanced through the use of SNSs, including self-presentation and social support from their friends.",
"title": ""
}
] |
scidocsrr
|
943b80cba2f5739940c34b988807349a
|
Apache REEF: Retainable Evaluator Execution Framework
|
[
{
"docid": "47ac4b546fe75f2556a879d6188d4440",
"text": "There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.",
"title": ""
}
] |
[
{
"docid": "6ee26f725bfb63a6ff72069e48404e68",
"text": "OBJECTIVE\nTo determine which routinely collected exercise test variables most strongly correlate with survival and to derive a fitness risk score that can be used to predict 10-year survival.\n\n\nPATIENTS AND METHODS\nThis was a retrospective cohort study of 58,020 adults aged 18 to 96 years who were free of established heart disease and were referred for an exercise stress test from January 1, 1991, through May 31, 2009. Demographic, clinical, exercise, and mortality data were collected on all patients as part of the Henry Ford ExercIse Testing (FIT) Project. Cox proportional hazards models were used to identify exercise test variables most predictive of survival. A \"FIT Treadmill Score\" was then derived from the β coefficients of the model with the highest survival discrimination.\n\n\nRESULTS\nThe median age of the 58,020 participants was 53 years (interquartile range, 45-62 years), and 28,201 (49%) were female. Over a median of 10 years (interquartile range, 8-14 years), 6456 patients (11%) died. After age and sex, peak metabolic equivalents of task and percentage of maximum predicted heart rate achieved were most highly predictive of survival (P<.001). Subsequent addition of baseline blood pressure and heart rate, change in vital signs, double product, and risk factor data did not further improve survival discrimination. The FIT Treadmill Score, calculated as [percentage of maximum predicted heart rate + 12(metabolic equivalents of task) - 4(age) + 43 if female], ranged from -200 to 200 across the cohort, was near normally distributed, and was found to be highly predictive of 10-year survival (Harrell C statistic, 0.811).\n\n\nCONCLUSION\nThe FIT Treadmill Score is easily attainable from any standard exercise test and translates basic treadmill performance measures into a fitness-related mortality risk score. The FIT Treadmill Score should be validated in external populations.",
"title": ""
},
{
"docid": "88bd6fe890ed385ae60ace44ab71db3e",
"text": "Background: While concerns about adverse health outcomes of unintended pregnancies for the mother have been expressed, there has only been limited research on the outcomes of unintended pregnancies. This review provides an overview of antecedents and maternal health outcomes of unintended pregnancies (UIPs) carried to term live",
"title": ""
},
{
"docid": "ee9709e756c90f20506ebbddefaeb309",
"text": "OBJECTIVES/HYPOTHESIS\nTo compare three existing endoscopic scoring systems and a newly proposed modified scoring system for the assessment of patients with chronic rhinosinusitis (CRS).\n\n\nSTUDY DESIGN\nBlinded, prospective cohort study.\n\n\nMETHODS\nCRS patients completed two patient-reported outcome measures (PROMs)-the visual analogue scale (VAS) symptom score and the Sino-Nasal Outcome Test-22 (SNOT-22)-and then underwent a standardized, recorded sinonasal endoscopy. Videos were scored by three blinded rhinologists using three scoring systems: the Lund-Kennedy (LK) endoscopic score; the Discharge, Inflammation, Polyp (DIP) score; and the Perioperative Sinonasal Endoscopic score. The videos were further scored using a modified Lund-Kennedy (MLK) endoscopic scoring system, which retains the LK subscores of polyps, edema, and discharge but eliminates the scoring of scarring and crusting. The systems were compared for test-retest and inter-rater reliability as well as for their correlation with PROMs.\n\n\nRESULTS\nOne hundred two CRS patients were enrolled. The MLK system showed the highest inter-rater and test-retest reliability of all scoring systems. All systems except for the DIP correlated with total VAS scores. The MLK was the only system that correlated with the symptom subscore of the SNOT-22 in both unoperated and postoperative patients.\n\n\nCONCLUSIONS\nModification of the LK system by excluding the subscores of scarring and crusting improves its reliability and its correlation with PROMs. In addition, the MLK system retains the familiarity of the widely used LK system and is applicable to any patient irrespective of surgical status. The MLK system may be a more suitable and reliable endoscopic scoring system for clinical practice and outcomes research.",
"title": ""
},
{
"docid": "79cb7d3bbdb6ebedc3941e8f35897fc9",
"text": "Occurrences of entrapment neuropathies of the lower extremity are relatively infrequent; therefore, these conditions may be underappreciated and difficult to diagnose. Understanding the anatomy of the peripheral nerves and their potential entrapment sites is essential. A detailed physical examination and judicious use of imaging modalities are also vital when establishing a diagnosis. Once an accurate diagnosis is obtained, treatment is aimed at reducing external pressure, minimizing inflammation, correcting any causative foot and ankle deformities, and ultimately releasing any constrictive tissues.",
"title": ""
},
{
"docid": "f3ed5e6eb8fd450830360e9bc1bad340",
"text": "Musical performance requires prediction to operate instruments, to perform in groups and to improvise. We argue, with reference to a number of digital music instruments (DMIs), including two of our own, that predictive machine learning models can help interactive systems to understand their temporal context and ensemble behaviour. We also discuss how recent advances in deep learning highlight the role of prediction in DMIs, by allowing data-driven predictive models with a long memory of past states. We advocate for predictive musical interaction, where a predictive model is embedded in a musical interface, assisting users by predicting unknown states of musical processes. We propose a framework for characterising prediction as relating to the instrumental sound, ongoing musical process, or between members of an ensemble. Our framework shows that different musical interface design configurations lead to different types of prediction. We show that our framework accommodates deep generative models, as well as models for predicting gestural states, or other high-level musical information. We apply our framework to examples from our recent work and the literature, and discuss the benefits and challenges revealed by these systems as well as musical use-cases where prediction is a necessary component.",
"title": ""
},
{
"docid": "ad6672657fc07ed922f1e2c0212b30bc",
"text": "As a generalization of the ordinary wavelet transform, the fractional wavelet transform (FRWT) is a very promising tool for signal analysis and processing. Many of its fundamental properties are already known; however, little attention has been paid to its sampling theory. In this paper, we first introduce the concept of multiresolution analysis associated with the FRWT, and then propose a sampling theorem for signals in FRWT-based multiresolution subspaces. The necessary and sufficient condition for the sampling theorem is derived. Moreover, sampling errors due to truncation and aliasing are discussed. The validity of the theoretical derivations is demonstrated via simulations.",
"title": ""
},
{
"docid": "afeb909f4be9da56dcaeb86d464ec75e",
"text": "Synthesizing expressive speech with appropriate prosodic variations, e.g., various styles, still has much room for improvement. Previous methods have explored to use manual annotations as conditioning attributes to provide variation information. However, the related training data are expensive to obtain and the annotated style codes can be ambiguous and unreliable. In this paper, we explore utilizing the residual error as conditioning attributes. The residual error is the difference between the prediction of a trained average model and the ground truth. We encode the residual error into a style embedding via a neural network-based error encoder. The style embedding is then fed to the target synthesis model to provide information for modeling various style distributions more accurately. The average model and the error encoder are jointly optimized with the target synthesis model. Our proposed method has two advantages: 1) the embedding is automatically learned with no need of manual style annotations, which helps overcome data sparsity and ambiguity limitations; 2) For any unseen audio utterance, the style embedding can be efficiently generated. This enables rapid adaptation to the desired style to be achieved with only a single adaptation utterance. Experimental results show that our proposed method outperforms the baseline model in both speech quality and style similarity.",
"title": ""
},
{
"docid": "b637196c4627fd463ca54d0efeb87370",
"text": "Vision-based lane detection is a critical component of modern automotive active safety systems. Although a number of robust and accurate lane estimation (LE) algorithms have been proposed, computationally efficient systems that can be realized on embedded platforms have been less explored and addressed. This paper presents a framework that incorporates contextual cues for LE to further enhance the performance in terms of both computational efficiency and accuracy. The proposed context-aware LE framework considers the state of the ego vehicle, its surroundings, and the system-level requirements to adapt and scale the LE process resulting in substantial computational savings. This is accomplished by synergistically fusing data from multiple sensors along with the visual data to define the context around the ego vehicle. The context is then incorporated as an input to the LE process to scale it depending on the contextual requirements. A detailed evaluation of the proposed framework on real-world driving conditions shows that the dynamic and static configuration of the lane detection process results in computation savings as high as 90%, without compromising on the accuracy of LE.",
"title": ""
},
{
"docid": "d54ad1a912a0b174d1f565582c6caf1c",
"text": "This paper presents a new novel design of a smart walker for rehabilitation purpose by patients in hospitals and rehabilitation centers. The design features a full frame walker that provides secured and stable support while being foldable and compact. It also has smart features such as telecommunication and patient activity monitoring.",
"title": ""
},
{
"docid": "1eca0e6a170470a483dc25196e6cca63",
"text": "Benchmarks for Cloud Robotics",
"title": ""
},
{
"docid": "987de36823c8dbb9ff13aec4fecd6c9a",
"text": "Previous research has been done on mindfulness and nursing stress but no review has been done to highlight the most up-to-date findings, to justify the recommendation of mindfulness training for the nursing field. The present paper aims to review the relevant studies, derive conclusions, and discuss future direction of research in this field.A total of 19 research papers were reviewed. The majority was intervention studies on the effects of mindfulness-training programs on nursing stress. Higher mindfulness is correlated with lower nursing stress. Mindfulness-based training programs were found to have significant positive effects on nursing stress and psychological well-being. The studies were found to have non-standardized intervention methods, inadequate research designs, small sample size, and lack of systematic follow-up on the sustainability of treatment effects, limiting the generalizability of the results. There is also a lack of research investigation into the underlying mechanism of action of mindfulness on nursing stress. Future research that addresses these limitations is indicated.",
"title": ""
},
{
"docid": "51a2d48f43efdd8f190fd2b6c9a68b3c",
"text": "Textual passwords are often the only mechanism used to authenticate users of a networked system. Unfortunately, many passwords are easily guessed or cracked. In an attempt to strengthen passwords, some systems instruct users to create mnemonic phrase-based passwords. A mnemonic password is one where a user chooses a memorable phrase and uses a character (often the first letter) to represent each word in the phrase.In this paper, we hypothesize that users will select mnemonic phrases that are commonly available on the Internet, and that it is possible to build a dictionary to crack mnemonic phrase-based passwords. We conduct a survey to gather user-generated passwords. We show the majority of survey respondents based their mnemonic passwords on phrases that can be found on the Internet, and we generate a mnemonic password dictionary as a proof of concept. Our 400,000-entry dictionary cracked 4% of mnemonic passwords; in comparison, a standard dictionary with 1.2 million entries cracked 11% of control passwords. The user-generated mnemonic passwords were also slightly more resistant to brute force attacks than control passwords. These results suggest that mnemonic passwords may be appropriate for some uses today. However, mnemonic passwords could become more vulnerable in the future and should not be treated as a panacea.",
"title": ""
},
{
"docid": "851de4b014dfeb6f470876896b0416b3",
"text": "The design of bioinspired systems for chemical sensing is an engaging line of research in machine olfaction. Developments in this line could increase the lifetime and sensitivity of artificial chemo-sensory systems. Such approach is based on the sensory systems known in live organisms, and the resulting developed artificial systems are targeted to reproduce the biological mechanisms to some extent. Sniffing behaviour, sampling odours actively, has been studied recently in neuroscience, and it has been suggested that the respiration frequency is an important parameter of the olfactory system, since the odour perception, especially in complex scenarios such as novel odourants exploration, depends on both the stimulus identity and the sampling method. In this work we propose a chemical sensing system based on an array of 16 metal-oxide gas sensors that we combined with an external mechanical ventilator to simulate the biological respiration cycle. The tested gas classes formed a relatively broad combination of two analytes, acetone and ethanol, in binary mixtures. Two sets of low-frequency and high-frequency features were extracted from the acquired signals to show that the high-frequency features contain information related to the gas class. In addition, such information is available at early stages of the measurement, which could make the technique suitable in early detection scenarios. The full data set is made publicly available to the community.",
"title": ""
},
{
"docid": "e9103d50d367787a5bfa68a38d6ea059",
"text": "This article proposes the develop of a dynamic virtual environment that with the consumption of real time data about the state of a place, offer an immersion to the tourist like be at the desired location. The development implements a communication and loader structure from many information sources, manual information data loaded from mobile devices and data loader from collecting equipment that get environmental and atmospheric data. The virtual reality application use Google Maps, and worldwide heightmap to get 3D geographic map models; HTC VIVE and Oculus SDK for support virtual reality experience; and weather API to show the weather information from the desired location in real time. In addition, the proposed virtual reality application emphasizes user interaction on the virtual environment by displaying dynamic and up-to-date information about tourism services.",
"title": ""
},
{
"docid": "9e0a28a8205120128938b52ba8321561",
"text": "Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.",
"title": ""
},
{
"docid": "a759ddc24cebbbf0ac71686b179962df",
"text": "Most proteins must fold into defined three-dimensional structures to gain functional activity. But in the cellular environment, newly synthesized proteins are at great risk of aberrant folding and aggregation, potentially forming toxic species. To avoid these dangers, cells invest in a complex network of molecular chaperones, which use ingenious mechanisms to prevent aggregation and promote efficient folding. Because protein molecules are highly dynamic, constant chaperone surveillance is required to ensure protein homeostasis (proteostasis). Recent advances suggest that an age-related decline in proteostasis capacity allows the manifestation of various protein-aggregation diseases, including Alzheimer's disease and Parkinson's disease. Interventions in these and numerous other pathological states may spring from a detailed understanding of the pathways underlying proteome maintenance.",
"title": ""
},
{
"docid": "941df83e65700bc2e5ee7226b96e4f54",
"text": "This paper presents design and analysis of a three phase induction motor drive using IGBTs at the inverter power stage with volts hertz control (V/F) in closed loop using dsPIC30F2010 as a controller. It is a 16 bit high-performance digital signal controller (DSC). DSC is a single chip embedded controller that integrates the controller attributes of a microcontroller with the computation and throughput capabilities of a DSP in a single core. A 1HP, 3-phase, 415V, 50Hz induction motor is used as load for the inverter. Digital Storage Oscilloscope Textronix TDS2024B is used to record and analyze the various waveforms. The experimental results for V/F control of 3-phase induction motor using dsPIC30F2010 chip clearly shows constant volts per hertz and stable inverter line to line output voltage. Keywords--DSC, constant volts per hertz, PWM inverter, ACIM.",
"title": ""
},
{
"docid": "cc1876cf1d71be6c32c75bd2ded25e65",
"text": "Traditional anomaly detection on social media mostly focuses on individual point anomalies while anomalous phenomena usually occur in groups. Therefore, it is valuable to study the collective behavior of individuals and detect group anomalies. Existing group anomaly detection approaches rely on the assumption that the groups are known, which can hardly be true in real world social media applications. In this article, we take a generative approach by proposing a hierarchical Bayes model: Group Latent Anomaly Detection (GLAD) model. GLAD takes both pairwise and point-wise data as input, automatically infers the groups and detects group anomalies simultaneously. To account for the dynamic properties of the social media data, we further generalize GLAD to its dynamic extension d-GLAD. We conduct extensive experiments to evaluate our models on both synthetic and real world datasets. The empirical results demonstrate that our approach is effective and robust in discovering latent groups and detecting group anomalies.",
"title": ""
},
{
"docid": "c6a36cd9165d073d037245505f1cf710",
"text": "Most drugs of abuse easily cross the placenta and can affect fetal brain development. In utero exposures to drugs thus can have long-lasting implications for brain structure and function. These effects on the developing nervous system, before homeostatic regulatory mechanisms are properly calibrated, often differ from their effects on mature systems. In this review, we describe current knowledge on how alcohol, nicotine, cocaine, amphetamine, Ecstasy, and opiates (among other drugs) produce alterations in neurodevelopmental trajectory. We focus both on animal models and available clinical and imaging data from cross-sectional and longitudinal human studies. Early studies of fetal exposures focused on classic teratological methods that are insufficient for revealing more subtle effects that are nevertheless very behaviorally relevant. Modern mechanistic approaches have informed us greatly as to how to potentially ameliorate the induced deficits in brain formation and function, but conclude that better delineation of sensitive periods, dose–response relationships, and long-term longitudinal studies assessing future risk of offspring to exhibit learning disabilities, mental health disorders, and limited neural adaptations are crucial to limit the societal impact of these exposures.",
"title": ""
}
] |
scidocsrr
|
d9045cce7af90cef04ea6d41238b7bd1
|
Low-loss 0.13-µm CMOS 50 – 70 GHz SPDT and SP4T switches
|
[
{
"docid": "b929cbcaf8de8e845d1cf7f59d3eca63",
"text": "This paper presents 35 GHz single-pole-single-throw (SPST) and single-pole-double-throw (SPDT) CMOS switches using a 0.13 mum BiCMOS process (IBM 8 HP). The CMOS transistors are designed to have a high substrate resistance to minimize the insertion loss and improve power handling capability. The SPST/SPDT switches have a insertion loss of 1.8 dB/2.2 dB, respectively, and an input 1-dB compression point (P1 dB) greater than 22 dBm. The isolation is greater than 30 dB at 35-40 GHz and is achieved using two parallel resonant networks. To our knowledge, this is the first demonstration of low-loss, high-isolation CMOS switches at Ka-band frequencies.",
"title": ""
}
] |
[
{
"docid": "e9e11d96e26708c380362847094113db",
"text": "Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectral-overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to server channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed.",
"title": ""
},
{
"docid": "e460b586a78b334f1faaab0ad77a2a82",
"text": "This paper introduces an allocation and scheduling algorithm that efficiently handles conditional execution in multi-rate embedded system. Control dependencies are introduced into the task graph model. We propose a mutual exclusion detection algorithm that helps the scheduling algorithm to exploit the resource sharing. Allocation and scheduling are performed simultaneously to take advantage of the resource sharing among those mutual exclusive tasks. The algorithm is fast and efficient,and so is suitable to be used in the inner loop of our hardware/software co-synthesis framework which must call the scheduling routine many times.",
"title": ""
},
{
"docid": "94a35547a45c06a90f5f50246968b77e",
"text": "In this paper we present a process called color transfer which can borrow one image's color characteristics from another. Recently Reinhard and his colleagues reported a pioneering work of color transfer. Their technology can produce very believable results, but has to transform pixel values from RGB to lαβ. Inspired by their work, we advise an approach which can directly deal with the color transfer in any 3D space.From the view of statistics, we consider pixel's value as a three-dimension stochastic variable and an image as a set of samples, so the correlations between three components can be measured by covariance. Our method imports covariance between three components of pixel values while calculate the mean along each of the three axes. Then we decompose the covariance matrix using SVD algorithm and get a rotation matrix. Finally we can scale, rotate and shift pixel data of target image to fit data points' cluster of source image in the current color space and get resultant image which takes on source image's look and feel. Besides the global processing, a swatch-based method is introduced in order to manipulate images' color more elaborately. Experimental results confirm the validity and usefulness of our method.",
"title": ""
},
{
"docid": "f6574fbbdd53b2bc92af485d6c756df0",
"text": "A comparative analysis between Nigerian English (NE) and American English (AE) is presented in this article. The study is aimed at highlighting differences in the speech parameters, and how they influence speech processing and automatic speech recognition (ASR). The UILSpeech corpus of Nigerian-Accented English isolated word recordings, read speech utterances, and video recordings are used as a reference for Nigerian English. The corpus captures the linguistic diversity of Nigeria with data collected from native speakers of Hausa, Igbo, and Yoruba languages. The UILSpeech corpus is intended to provide a unique opportunity for application and expansion of speech processing techniques to a limited resource language dialect. The acoustic-phonetic differences between American English (AE) and Nigerian English (NE) are studied in terms of pronunciation variations, vowel locations in the formant space, mean fundamental frequency, and phone model distances in the acoustic space, as well as through visual speech analysis of the speakers’ articulators. A strong impact of the AE–NE acoustic mismatch on ASR is observed. A combination of model adaptation and extension of the AE lexicon for newly established NE pronunciation variants is shown to substantially improve performance of the AE-trained ASR system in the new NE task. This study is a part of the pioneering efforts towards incorporating speech technology in Nigerian English and is intended to provide a development basis for other low resource language dialects and languages.",
"title": ""
},
{
"docid": "153a22e4477a0d6ce98b9a0fba2ab595",
"text": "Uninterruptible power supplies (UPSs) have been used in many installations for critical loads that cannot afford power failure or surge during operation. It is often difficult to upgrade the UPS system as the load grows over time. Due to lower cost and maintenance, as well as ease of increasing system capacity, the parallel operation of modularized small-power UPS has attracted much attention in recent years. In this paper, a new scheme for parallel operation of inverters is introduced. A multiple-input-multiple-output state-space model is developed to describe the parallel-connected inverters system, and a model-predictive-control scheme suitable for paralleled inverters control is proposed. In this algorithm, the control objectives of voltage tracking and current sharing are formulated using a weighted cost function. The effectiveness and the hot-swap capability of the proposed parallel-connected inverters system have been verified with experimental results.",
"title": ""
},
{
"docid": "d0d114e862c2b8aa81ba4c1815b00764",
"text": "It has been commonly acknowledged that the acceptance of a product depends on both its utilitarian and non-utilitarian properties. The non-utilitarian properties can elicit generally pleasurable and particularly playful experiences in the product’s users. Product design needs to improve the support of playful experiences in order to fit in with the users’ multi-faceted needs. However, designing for fun and pleasure is not an easy task, and there is an urgent need in user experience research and design practices to better understand the role of playfulness in overall user experience of the product. In this paper, we present an initial framework of playful experiences which are derived from studies in interactive art and videogames. We conducted a user study to verify that these experiences are valid. We interviewed 13 videogame players about their experiences with games and what triggers these experiences. The results indicate that the players are experiencing the videogames in many different ways which can be categorized using the framework. We propose that the framework could help the design of interactive products from an experience point of view and make them more engaging, attractive, and most importantly, more playful for the users.",
"title": ""
},
{
"docid": "bc1efec6824aae80c9cae7ea2b2c4842",
"text": "State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in crosslingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.",
"title": ""
},
{
"docid": "d0f9bf7511bcaced02838aa1c2d8785b",
"text": "A folksonomy consists of three basic entities, namely users, tags and resources. This kind of social tagging system is a good way to index information, facilitate searches and navigate resources. The main objective of this paper is to present a novel method to improve the quality of tag recommendation. According to the statistical analysis, we find that the total number of tags used by a user changes over time in a social tagging system. Thus, this paper introduces the concept of user tagging status, namely the growing status, the mature status and the dormant status. Then, the determining user tagging status algorithm is presented considering a user’s current tagging status to be one of the three tagging status at one point. Finally, three corresponding strategies are developed to compute the tag probability distribution based on the statistical language model in order to recommend tags most likely to be used by users. Experimental results show that the proposed method is better than the compared methods at the accuracy of tag recommendation.",
"title": ""
},
{
"docid": "00904281e8f6d5770e1ba3ff7febd20b",
"text": "This paper proposes a data-driven method for concept-to-text generation, the task of automatically producing textual output from non-linguistic input. A key insight in our approach is to reduce the tasks of content selection (“what to say”) and surface realization (“how to say”) into a common parsing problem. We define a probabilistic context-free grammar that describes the structure of the input (a corpus of database records and text describing some of them) and represent it compactly as a weighted hypergraph. The hypergraph structure encodes exponentially many derivations, which we rerank discriminatively using local and global features. We propose a novel decoding algorithm for finding the best scoring derivation and generating in this setting. Experimental evaluation on the ATIS domain shows that our model outperforms a competitive discriminative system both using BLEU and in a judgment elicitation study.",
"title": ""
},
{
"docid": "40043360644ded6950e1f46bd2caaf96",
"text": "Recently, there has been a rapidly growing interest in deep learning research and their applications to real-world problems. In this paper, we aim at evaluating and comparing LSTM deep learning architectures for short-and long-term prediction of financial time series. This problem is often considered as one of the most challenging real-world applications for time-series prediction. Unlike traditional recurrent neural networks, LSTM supports time steps of arbitrary sizes and without the vanishing gradient problem. We consider both bidirectional and stacked LSTM predictive models in our experiments and also benchmark them with shallow neural networks and simple forms of LSTM networks. The evaluations are conducted using a publicly available dataset for stock market closing prices.",
"title": ""
},
{
"docid": "9692ab0e46c6e370aeb171d3224f5d23",
"text": "With the advent technology of Remote Sensing (RS) and Geographic Information Systems (GIS), a network transportation (Road) analysis within this environment has now become a common practice in many application areas. But a main problem in the network transportation analysis is the less quality and insufficient maintenance policies. This is because of the lack of funds for infrastructure. This demand for information requires new approaches in which data related to transportation network can be identified, collected, stored, retrieved, managed, analyzed, communicated and presented, for the decision support system of the organization. The adoption of newly emerging technologies such as Geographic Information System (GIS) can help to improve the decision making process in this area for better use of the available limited funds. The paper reviews the applications of GIS technology for transportation network analysis.",
"title": ""
},
{
"docid": "4fa99994915bba8621e186a7e6804743",
"text": "We address the problem of synthesizing a robust data-extractor from a family of websites that contain the same kind of information. This problem is common when trying to aggregate information from many web sites, for example, when extracting information for a price-comparison site.\n Given a set of example annotated web pages from multiple sites in a family, our goal is to synthesize a robust data extractor that performs well on all sites in the family (not only on the provided example pages). The main challenge is the need to trade off precision for generality and robustness. Our key contribution is the introduction of forgiving extractors that dynamically adjust their precision to handle structural changes, without sacrificing precision on the training set.\n Our approach uses decision tree learning to create a generalized extractor and converts it into a forgiving extractor, inthe form of an XPath query. The forgiving extractor captures a series of pruned decision trees with monotonically decreasing precision, and monotonically increasing recall, and dynamically adjusts precision to guarantee sufficient recall. We have implemented our approach in a tool called TREEX and applied it to synthesize extractors for real-world large scale web sites. We evaluate the robustness and generality of the forgiving extractors by evaluating their precision and recall on: (i) different pages from sites in the training set (ii) pages from different versions of sites in the training set (iii) pages from different (unseen) sites. We compare the results of our synthesized extractor to those of classifier-based extractors, and pattern-based extractors, and show that TREEX significantly improves extraction accuracy.",
"title": ""
},
{
"docid": "2c3e6373feb4352a68ec6fd109df66e0",
"text": "A broadband transition design between broadside coupled stripline (BCS) and conductor-backed coplanar waveguide (CBCPW) is proposed and studied. The E-field of CBCPW is designed to be gradually changed to that of BCS via a simple linear tapered structure. Two back-to-back transitions are simulated, fabricated and measured. It is reported that maximum insertion loss of 2.3 dB, return loss of higher than 10 dB and group delay flatness of about 0.14 ns are obtained from 50 MHz to 20 GHz.",
"title": ""
},
{
"docid": "d59b7281c896bcd99902b8fb13951f98",
"text": "The History of financial services in Tanzania shows that there were poor financial services before and soon after the independence. The financial services improved slowly after the liberalization of financial services in the 1990s. This paper uses the empirical literature to compare the current financial services providers serving the middle and lower income groups in Tanzania. The analysis of findings indicates that cooperative financial institutions (VICOBA and SACCOS) and mobile money services serve the majority of Tanzanians both in rural and urban areas. The paper recommends that policymakers should favor the semi-formal MFIs to enable them to serve the majority of Tanzanians and the security of mobile monetary transactions should be strengthened since is the most reliable monetary services used by all categories of Tanzanians throughout the country.",
"title": ""
},
{
"docid": "8a4b1c87b85418ce934f16003a481f27",
"text": "Current parking space vacancy detection systems use simple trip sensors at the entry and exit points of parking lots. Unfortunately, this type of system fails when a vehicle takes up more than one spot or when a parking lot has different types of parking spaces. Therefore, I propose a camera-based system that would use computer vision algorithms for detecting vacant parking spaces. My algorithm uses a combination of car feature point detection and color histogram classification to detect vacant parking spaces in static overhead images.",
"title": ""
},
{
"docid": "d8c5ff196db9acbea12e923b2dcef276",
"text": "MoS<sub>2</sub>-graphene-based hybrid structures are biocompatible and useful in the field of biosensors. Herein, we propose a heterostructured MoS<sub>2</sub>/aluminum (Al) film/MoS<sub>2</sub>/graphene as a highly sensitive surface plasmon resonance (SPR) biosensor based on the Otto configuration. The sensitivity of the proposed biosensor is enhanced by using three methods. First, prisms of different refractive index have been discussed and it is found that sensitivity can be enhanced by using a low refractive index prism. Second, the influence of the thickness of the air layer on the sensitivity is analyzed and the optimal thickness of air is obtained. Finally, the sensitivity improvement and mechanism by using molybdenum disulfide (MoS<sub>2</sub>)–graphene hybrid structure is revealed. The maximum sensitivity ∼ 190.83°/RIU is obtained with six layers of MoS<sub>2</sub> coating on both surfaces of Al thin film.",
"title": ""
},
{
"docid": "b97c9e8238f74539e8a17dcffecdd35f",
"text": "This paper presents a novel approach to the task of automatic music genre classification which is based on multiple feature vectors and ensemble of classifiers. Multiple feature vectors are extracted from a single music piece. First, three 30-second music segments, one from the beginning, one from the middle and one from end part of a music piece are selected and feature vectors are extracted from each segment. Individual classifiers are trained to account for each feature vector extracted from each music segment. At the classification, the outputs provided by each individual classifier are combined through simple combination rules such as majority vote, max, sum and product rules, with the aim of improving music genre classification accuracy. Experiments carried out on a large dataset containing more than 3,000 music samples from ten different Latin music genres have shown that for the task of automatic music genre classification, the features extracted from the middle part of the music provide better results than using the segments from the beginning or end part of the music. Furthermore, the proposed ensemble approach, which combines the multiple feature vectors, provides better accuracy than using single classifiers and any individual music segment.",
"title": ""
},
{
"docid": "3e01af44d4819d8c78615e66f56e5983",
"text": "The amount of dynamic content on the web has been steadily increasing. Scripting languages such as JavaScript and browser extensions such as Adobe's Flash have been instrumental in creating web-based interfaces that are similar to those of traditional applications. Dynamic content has also become popular in advertising, where Flash is used to create rich, interactive ads that are displayed on hundreds of millions of computers per day. Unfortunately, the success of Flash-based advertisements and applications attracted the attention of malware authors, who started to leverage Flash to deliver attacks through advertising networks. This paper presents a novel approach whose goal is to automate the analysis of Flash content to identify malicious behavior. We designed and implemented a tool based on the approach, and we tested it on a large corpus of real-world Flash advertisements. The results show that our tool is able to reliably detect malicious Flash ads with limited false positives. We made our tool available publicly and it is routinely used by thousands of users.",
"title": ""
},
{
"docid": "f5ce4a13a8d081243151e0b3f0362713",
"text": "Despite the growing popularity of digital imaging devices, the problem of accurately estimating the spatial frequency response or optical transfer function (OTF) of these devices has been largely neglected. Traditional methods for estimating OTFs were designed for film cameras and other devices that form continuous images. These traditional techniques do not provide accurate OTF estimates for typical digital image acquisition devices because they do not account for the fixed sampling grids of digital devices . This paper describes a simple method for accurately estimating the OTF of a digital image acquisition device. The method extends the traditional knife-edge technique''3 to account for sampling. One of the principal motivations for digital imaging systems is the utility of digital image processing algorithms, many of which require an estimate of the OTF. Algorithms for enhancement, spatial registration, geometric transformations, and other purposes involve restoration—removing the effects of the image acquisition device. Nearly all restoration algorithms (e.g., the",
"title": ""
}
] |
scidocsrr
|
78140758171ada124132bbeac9aa671c
|
Low-Rank Matrix Completion by Riemannian Optimization
|
[
{
"docid": "19518604892789208000e970747d0c3d",
"text": "Given a partial symmetric matrixA with only certain elements specified, the Euclidean distance matrix completion problem (EDMCP) is to find the unspecified elements of A that makeA a Euclidean distance matrix (EDM). In this paper, we follow the successful approach in [20] and solve the EDMCP by generalizing the completion problem to allow for approximate completions. In particular, we introduce a primal-dual interiorpoint algorithm that solves an equivalent (quadratic objective function) semidefinite programming problem (SDP). Numerical results are included which illustrate the efficiency and robustness of our approach. Our randomly generated problems consistently resulted in low dimensional solutions when no completion existed.",
"title": ""
}
] |
[
{
"docid": "6b846d082123ca7319af0a4321f45a86",
"text": "Mutations that exaggerate signalling of the receptor tyrosine kinase fibroblast growth factor receptor 3 (FGFR3) give rise to achondroplasia, the most common form of dwarfism in humans. Here we review the clinical features, genetic aspects and molecular pathogenesis of achondroplasia and examine several therapeutic strategies designed to target the mutant receptor or its signalling pathways, including the use of kinase inhibitors, blocking antibodies, physiologic antagonists, RNAi and chaperone inhibitors. We conclude by discussing the challenges of treating growth plate disorders in children.",
"title": ""
},
{
"docid": "9b8ba583adc6df6e02573620587be68a",
"text": "BACKGROUND\nTraditional one-session exposure therapy (OST) in which a patient is gradually exposed to feared stimuli for up to 3 h in a one-session format has been found effective for the treatment of specific phobias. However, many individuals with specific phobia are reluctant to seek help, and access to care is lacking due to logistic challenges of accessing, collecting, storing, and/or maintaining stimuli. Virtual reality (VR) exposure therapy may improve upon existing techniques by facilitating access, decreasing cost, and increasing acceptability and effectiveness. The aim of this study is to compare traditional OST with in vivo spiders and a human therapist with a newly developed single-session gamified VR exposure therapy application with modern VR hardware, virtual spiders, and a virtual therapist.\n\n\nMETHODS/DESIGN\nParticipants with specific phobia to spiders (N = 100) will be recruited from the general public, screened, and randomized to either VR exposure therapy (n = 50) or traditional OST (n = 50). A behavioral approach test using in vivo spiders will serve as the primary outcome measure. Secondary outcome measures will include spider phobia questionnaires and self-reported anxiety, depression, and quality of life. Outcomes will be assessed using a non-inferiority design at baseline and at 1, 12, and 52 weeks after treatment.\n\n\nDISCUSSION\nVR exposure therapy has previously been evaluated as a treatment for specific phobias, but there has been a lack of high-quality randomized controlled trials. A new generation of modern, consumer-ready VR devices is being released that are advancing existing technology and have the potential to improve clinical availability and treatment effectiveness. The VR medium is also particularly suitable for taking advantage of recent phobia treatment research emphasizing engagement and new learning, as opposed to physiological habituation. 
This study compares a market-ready, gamified VR spider phobia exposure application, delivered using consumer VR hardware, with the current gold standard treatment. Implications are discussed.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov identifier NCT02533310. Registered on 25 August 2015.",
"title": ""
},
{
"docid": "646f6456904a6ffe968c0f79a5286f65",
"text": "Both ray tracing and point-based representations provide means to efficiently display very complex 3D models. Computational efficiency has been the main focus of previous work on ray tracing point-sampled surfaces. For very complex models efficient storage in the form of compression becomes necessary in order to avoid costly disk access. However, as ray tracing requires neighborhood queries, existing compression schemes cannot be applied because of their sequential nature. This paper introduces a novel acceleration structure called the quantized kd-tree, which offers both efficient traversal and storage. The gist of our new representation lies in quantizing the kd-tree splitting plane coordinates. We show that the quantized kd-tree reduces the memory footprint up to 18 times, not compromising performance. Moreover, the technique can also be employed to provide LOD (level-of-detail) to reduce aliasing problems, with little additional storage cost",
"title": ""
},
{
"docid": "59791087d518577c20708e544a5eec26",
"text": "This paper proposes an innovative fraud detection method, built upon existing fraud detection research and Minority Report, to deal with the data mining problem of skewed data distributions. This method uses backpropagation (BP), together with naive Bayesian (NB) and C4.5 algorithms, on data partitions derived from minority oversampling with replacement. Its originality lies in the use of a single meta-classifier (stacking) to choose the best base classifiers, and then combine these base classifiers' predictions (bagging) to improve cost savings (stacking-bagging). Results from a publicly available automobile insurance fraud detection data set demonstrate that stacking-bagging performs slightly better than the best performing bagged algorithm, C4.5, and its best classifier, C4.5 (2), in terms of cost savings. Stacking-bagging also outperforms the common technique used in industry (BP without both sampling and partitioning). Subsequently, this paper compares the new fraud detection method (meta-learning approach) against C4.5 trained using undersampling, oversampling, and SMOTEing without partitioning (sampling approach). Results show that, given a fixed decision threshold and cost matrix, the partitioning and multiple algorithms approach achieves marginally higher cost savings than varying the entire training data set with different class distributions. The most interesting find is confirming that the combination of classifiers to produce the best cost savings has its contributions from all three algorithms.",
"title": ""
},
{
"docid": "e7772ed75853d4d16641b41ad2abdcfe",
"text": "A 3D shape signature is a compact representation for some essence of a shape. Shape signatures are commonly utilized as a fast indexing mechanism for shape retrieval. Effective shape signatures capture some global geometric properties which are scale, translation, and rotation invariant. In this paper, we introduce an effective shape signature which is also pose-oblivious. This means that the signature is also insensitive to transformations which change the pose of a 3D shape such as skeletal articulations. Although some topology-based matching methods can be considered pose-oblivious as well, our new signature retains the simplicity and speed of signature indexing. Moreover, contrary to topology-based methods, the new signature is also insensitive to the topology change of the shape, allowing us to match similar shapes with different genus. Our shape signature is a 2D histogram which is a combination of the distribution of two scalar functions defined on the boundary surface of the 3D shape. The first is a definition of a novel function called the local-diameter function. This function measures the diameter of the 3D shape in the neighborhood of each vertex. The histogram of this function is an informative measure of the shape which is insensitive to pose changes. The second is the centricity function that measures the average geodesic distance from one vertex to all other vertices on the mesh. We evaluate and compare a number of methods for measuring the similarity between two signatures, and demonstrate the effectiveness of our pose-oblivious shape signature within a 3D search engine application for different databases containing hundreds of models",
"title": ""
},
{
"docid": "ecaf322e67c43b7d54a05de495a443eb",
"text": "Recently, considerable effort has been devoted to deep domain adaptation in computer vision and machine learning communities. However, most of existing work only concentrates on learning shared feature representation by minimizing the distribution discrepancy across different domains. Due to the fact that all the domain alignment approaches can only reduce, but not remove the domain shift, target domain samples distributed near the edge of the clusters, or far from their corresponding class centers are easily to be misclassified by the hyperplane learned from the source domain. To alleviate this issue, we propose to joint domain alignment and discriminative feature learning, which could benefit both domain alignment and final classification. Specifically, an instance-based discriminative feature learning method and a center-based discriminative feature learning method are proposed, both of which guarantee the domain invariant features with better intra-class compactness and inter-class separability. Extensive experiments show that learning the discriminative features in the shared feature space can significantly boost the performance of deep domain adaptation methods.",
"title": ""
},
{
"docid": "03097e1239e5540fe1ec45729d1cbbc2",
"text": "Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as ‘PGQ’, for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQ. In particular, we tested PGQ on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.",
"title": ""
},
{
"docid": "4bee6ec901c365f3780257ed62b7c020",
"text": "There is no explicitly known example of a triple (g, a, x), where g ≥ 3 is an integer, a a digit in {0, . . . , g − 1} and x a real algebraic irrational number, for which one can claim that the digit a occurs infinitely often in the g–ary expansion of x. In 1909 and later in 1950, É. Borel considered such questions and suggested that the g–ary expansion of any algebraic irrational number in any base g ≥ 2 satisfies some of the laws that are satisfied by almost all numbers. For instance, the frequency with which a given finite sequence of digits occurs should depend only on the base and on the length of the sequence. Hence there is a huge gap between the established theory and the expected state of the art. However, some progress has been made recently, mainly thanks to clever use of Schmidt's subspace theorem. We review some of these results.",
"title": ""
},
{
"docid": "d7e6b07fee74d6efd97733ac0b22f92c",
"text": "Low level optimisations from conventional compiler technology often give very poor results when applied to code from lazy functional languages, mainly because of the completely different structure of the code, unknown control flow, etc. A novel approach to compiling laziness is needed. We describe a complete back end for lazy functional languages, which uses various interprocedural optimisations to produce highly optimised code. The main features of our new back end are the following. It uses a monadic intermediate code, called GRIN (Graph Reduction Intermediate Notation). This code has a very functional flavour, making it well suited for analysis and program transformations, but at the same time provides the low level machinery needed to express many concrete implementation concerns. Using a heap points-to analysis, we are able to eliminate most unknown control flow due to evals (i.e., forcing of closures) and applications of higher order functions in the program. A transformation machinery uses many, each very simple, GRIN program transformations to optimise the intermediate code. Eventually, the GRIN code is translated into RISC machine code, and we apply an interprocedural register allocation algorithm, followed by many other low level optimisations. The elimination of unknown control flow, made earlier, will help a lot in making the low level optimisations work well. Preliminary measurements look very promising: we are currently twice as fast as the Glasgow Haskell Compiler for some small programs. Our approach still gives us many opportunities for further optimisations (though yet unexplored).",
"title": ""
},
{
"docid": "fca63f719115e863f5245f15f6b1be50",
"text": "Model-based testing (MBT) in a hardware-in-the-loop (HIL) platform is a simulation and testing environment for embedded systems, in which the test design automation provided by MBT is combined with HIL methodology. A HIL platform is a testing environment in which the embedded system under test (SUT) is assumed to be operating with real-world inputs and outputs. In this paper, we focus on presenting the novel methodologies and tools that were used to conduct the validation of the MBT in HIL platform. Another novelty of the validation approach is that it aims to provide a comprehensive and many-sided process view of validating MBT and HIL related systems, including different component, integration and system level testing activities. The research is based on the constructive method of the related scientific literature and testing technologies, and the results are derived through testing and validating the implemented MBT in HIL platform. The testing process used indicated that the functionality of the constructed MBT in HIL prototype platform was validated.",
"title": ""
},
{
"docid": "87f7c3cfe6ca262e1f8716bf8ee16d2b",
"text": "Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers. The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks. Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains. These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task.",
"title": ""
},
{
"docid": "4feab0c5f92502011ed17a425b0f800b",
"text": "This paper gives an insight into how we can store healthcare data, such as patients' records, digitally as Electronic Health Records (EHRs), and how we can generate useful information from these records by using analytics techniques and tools, which will help in saving the time and money of patients as well as doctors. This paper is focused on the Maharaja Yeshwantrao (M.Y.) Hospital located in Indore, Madhya Pradesh, India. M.Y. Hospital is central India's largest government hospital. It generates a large amount of heterogeneous data from different sources such as patient health records, laboratory test results, electronic medical equipment, health insurance data, social media, drug research, genome research, clinical outcomes, transactions, and the Mahatma Gandhi Memorial Medical College, which is under M.Y. Hospital. To manage this data, data analytics may be used to make it useful for retrieval. Hence the concept of \"big data\" can be applied. Big data is characterized as extremely large data sets that can be analysed computationally to find patterns, trends, and associations, supporting visualization, querying, information privacy and predictive analytics on large, widespread collections of data. Big data analytics can be done using Hadoop, which plays an effective role in performing meaningful real-time analysis on the large volume of this data to predict emergency situations before they happen. This paper also discusses EHRs and big data usage and analytics at M.Y. Hospital.",
"title": ""
},
{
"docid": "a5999023893d996f0485abcf991ffbe1",
"text": "In this paper, we address the issue of recovering and segmenting the apparent velocity field in sequences of images. As for motion estimation, we minimize an objective function involving two robust terms. The first one cautiously captures the optical flow constraint, while the second (a priori) term incorporates a discontinuity-preserving smoothness constraint. To cope with the nonconvex minimization problem thus defined, we design an efficient deterministic multigrid procedure. It converges fast toward estimates of good quality, while revealing the large discontinuity structures of flow fields. We then propose an extension of the model by attaching to it a flexible object-based segmentation device based on deformable closed curves (different families of curve equipped with different kinds of prior can be easily supported). Experimental results on synthetic and natural sequences are presented, including an analysis of sensitivity to parameter tuning.",
"title": ""
},
{
"docid": "c533f33f95fd993e3bceffab85e9d851",
"text": "Deep Convolutional Neural Networks (CNNs) offer remarkable classification and regression performance in many high-dimensional problems and have been widely utilized in real-world cognitive applications. However, the high computational cost of CNNs greatly hinders their deployment in resource-constrained applications, real-time systems and edge computing platforms. To overcome this challenge, we propose a novel filter-pruning framework, two-phase filter pruning based on conditional entropy, namely 2PFPCE, to compress CNN models and reduce the inference time with marginal performance degradation. In our proposed method, we formulate the filter pruning process as an optimization problem and propose a novel filter selection criterion measured by conditional entropy. Based on the assumption that the representation of neurons shall be evenly distributed, we also develop a maximum-entropy filter freeze technique that can reduce overfitting. Two filter pruning strategies – global and layer-wise – are compared. Our experimental results show that combining these two strategies can achieve a higher neural network compression ratio than applying only one of them under the same accuracy drop threshold. Two-phase pruning, that is, combining both global and layer-wise strategies, achieves ∼ 10× FLOPs reduction and 46% inference time reduction on VGG-16, with a 2% accuracy drop.",
"title": ""
},
{
"docid": "e5687e8ac3eb1fbac18d203c049d9446",
"text": "Information security policy compliance is one of the key concerns that face organizations today. Although, technical and procedural security measures help improve information security, there is an increased need to accommodate human, social and organizational factors. While employees are considered the weakest link in information security domain, they also are assets that organizations need to leverage effectively. Employees' compliance with Information Security Policies (ISPs) is critical to the success of an information security program. The purpose of this research is to develop a measurement tool that provides better measures for predicting and explaining employees' compliance with ISPs by examining the role of information security awareness in enhancing employees' compliance with ISPs. The study is the first to address compliance intention from a users' perspective. Overall, analysis results indicate strong support for the proposed instrument and represent an early confirmation for the validation of the underlying theoretical model.",
"title": ""
},
{
"docid": "97310173da47afec3cb3af2c3f985079",
"text": "While machine learning (ML) models are being increasingly trusted to make decisions in different and varying areas, the safety of systems using such models has become an increasing concern. In particular, ML models are often trained on data from potentially untrustworthy sources, providing adversaries with the opportunity to manipulate them by inserting carefully crafted samples into the training set. Recent work has shown that this type of attack, called a poisoning attack, allows adversaries to insert backdoors or trojans into the model, enabling malicious behavior with simple external backdoor triggers at inference time and only a blackbox perspective of the model itself. Detecting this type of attack is challenging because the unexpected behavior occurs only when a backdoor trigger, which is known only to the adversary, is present. Model users, either direct users of training data or users of pre-trained model from a catalog, may not guarantee the safe operation of their ML-based system. In this paper, we propose a novel approach to backdoor detection and removal for neural networks. Through extensive experimental results, we demonstrate its effectiveness for neural networks classifying text and images. To the best of our knowledge, this is the first methodology capable of detecting poisonous data crafted to insert backdoors and repairing the model that does not require a verified and trusted dataset.",
"title": ""
},
{
"docid": "1a143ebc85d6284c075dd1fc915a56c8",
"text": "Neural models assist in characterizing the processes carried out by cortical and hippocampal memory circuits. Recent models of memory have addressed issues including recognition and recall dynamics, sequences of activity as the unit of storage, and consolidation of intermediate-term episodic memory into long-term memory.",
"title": ""
},
{
"docid": "7d687eb0a853c2faed5d4109f3cdb023",
"text": "This paper presents a new method for vehicle logo detection and recognition from images of front and back views of vehicle. The proposed method is a two-stage scheme which combines Convolutional Neural Network (CNN) and Pyramid of Histogram of Gradient (PHOG) features. CNN is applied as the first stage for candidate region detection and recognition of the vehicle logos. Then, PHOG with Support Vector Machine (SVM) classifier is employed in the second stage to verify the results from the first stage. Experiments are performed with dataset of vehicle images collected from internet. The results show that the proposed method can accurately locate and recognize the vehicle logos with higher robustness in comparison with the other conventional schemes. The proposed methods can provide up to 100% in recall, 96.96% in precision and 99.99% in recognition rate in dataset of 20 classes of the vehicle logo.",
"title": ""
},
{
"docid": "eaa2ed7e15a3b0a3ada381a8149a8214",
"text": "This paper describes a new robust regular polygon detector. The regular polygon transform is posed as a mixture of regular polygons in a five dimensional space. Given the edge structure of an image, we derive the a posteriori probability for a mixture of regular polygons, and thus the probability density function for the appearance of a mixture of regular polygons. Likely regular polygons can be isolated quickly by discretising and collapsing the search space into three dimensions. The remaining dimensions may be efficiently recovered subsequently using maximum likelihood at the locations of the most likely polygons in the subspace. This leads to an efficient algorithm. Also the a posteriori formulation facilitates inclusion of additional a priori information leading to real-time application to road sign detection. The use of gradient information also reduces noise compared to existing approaches such as the generalised Hough transform. Results are presented for images with noise to show stability. The detector is also applied to two separate applications: real-time road sign detection for on-line driver assistance; and feature detection, recovering stable features in rectilinear environments.",
"title": ""
},
{
"docid": "610769d8ac53d5708f3a699f3f4436f9",
"text": "For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. However, it is not straightforward to model a polygon mesh from 2D images using neural networks because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of the integration of a mesh renderer into neural networks and the effectiveness of our proposed renderer.",
"title": ""
}
] |
scidocsrr
|
6d761f362f01fefc189f6720ef654d66
|
SafeRoute: Learning to Navigate Streets Safely in an Urban Environment
|
[
{
"docid": "2b57b32fcb378fe6a9a78699142d36c6",
"text": "Navigating through unstructured environments is a basic capability of intelligent creatures, and thus is of fundamental interest in the study and development of artificial intelligence. Long-range navigation is a complex cognitive task that relies on developing an internal representation of space, grounded by recognisable landmarks and robust visual processing, that can simultaneously support continuous self-localisation (“I am here”) and a representation of the goal (“I am going there”). Building upon recent research that applies deep reinforcement learning to maze navigation problems, we present an end-to-end deep reinforcement learning approach that can be applied on a city scale. Recognising that successful navigation relies on integration of general policies with locale-specific knowledge, we propose a dual pathway architecture that allows locale-specific features to be encapsulated, while still enabling transfer to multiple cities. A key contribution of this paper is an interactive navigation environment that uses Google Street View for its photographic content and worldwide coverage. Our baselines demonstrate that deep reinforcement learning agents can learn to navigate in multiple cities and to traverse to target destinations that may be kilometres away. The project webpage http://streetlearn.cc contains a video summarizing our research and showing the trained agent in diverse city environments and on the transfer task, the form to request the StreetLearn dataset and links to further resources. The StreetLearn environment code is available at https://github.com/deepmind/streetlearn.",
"title": ""
},
{
"docid": "318514ff3b6fc3d60fbb403c5db28687",
"text": "Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field is gaining attention recently due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction such as humanoid robots, self-driving vehicles, human computer interaction, and computer games, to name a few. However, specialized algorithms are needed to effectively and robustly learn models as learning by imitation poses its own set of challenges. In this article, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications, and highlight current and future research directions.",
"title": ""
},
{
"docid": "8092fcd0f4beae6f26fa40a78d1408aa",
"text": "Existing research studies on vision and language grounding for robot navigation focus on improving model-free deep reinforcement learning (DRL) models in synthetic environments. However, model-free DRL models do not consider the dynamics in the real-world environments, and they often fail to generalize to new scenes. In this paper, we take a radical approach to bridge the gap between synthetic studies and real-world practices—We propose a novel, planned-ahead hybrid reinforcement learning model that combines model-free and model-based reinforcement learning to solve a real-world vision-language navigation task. Our look-ahead module tightly integrates a look-ahead policy model with an environment model that predicts the next state and the reward. Experimental results suggest that our proposed method significantly outperforms the baselines and achieves the best on the real-world Room-toRoom dataset. Moreover, our scalable method is more generalizable when transferring to unseen environments.",
"title": ""
}
] |
[
{
"docid": "4eead577c1b3acee6c93a62aee8a6bb5",
"text": "The present study examined teacher attitudes toward dyslexia and the effects of these attitudes on teacher expectations and the academic achievement of students with dyslexia compared to students without learning disabilities. The attitudes of 30 regular education teachers toward dyslexia were determined using both an implicit measure and an explicit, self-report measure. Achievement scores for 307 students were also obtained. Implicit teacher attitudes toward dyslexia related to teacher ratings of student achievement on a writing task and also to student achievement on standardized tests of spelling but not math for those students with dyslexia. Self-reported attitudes of the teachers toward dyslexia did not relate to any of the outcome measures. Neither the implicit nor the explicit measures of teacher attitudes related to teacher expectations. The results show implicit attitude measures to be a more valuable predictor of the achievement of students with dyslexia than explicit, self-report attitude measures.",
"title": ""
},
{
"docid": "262be71d64eef2534fab547ec3db6b9a",
"text": "In the past few decades, the rise in attacks on communication devices in networks has resulted in a reduction of network functionality, throughput, and performance. To detect and mitigate these network attacks, researchers, academicians, and practitioners developed Intrusion Detection Systems (IDSs) with automatic response systems. The response system is considered an important component of IDS, since without a timely response IDSs may not function properly in countering various attacks, especially on a real-time basis. To respond appropriately, IDSs should select the optimal response option according to the type of network attack. This research study provides a complete survey of IDSs and Intrusion Response Systems (IRSs) on the basis of our in-depth understanding of the response option for different types of network attacks. Knowledge of the path from IDS to IRS can assist network administrators and network staff in understanding how to tackle different attacks with state-of-the-art technologies.",
"title": ""
},
{
"docid": "8c467cec76d31fee70e8206769b121c3",
"text": "Color preference is an important aspect of visual experience, but little is known about why people in general like some colors more than others. Previous research suggested explanations based on biological adaptations [Hurlbert AC, Ling YL (2007) Curr Biol 17:623-625] and color-emotions [Ou L-C, Luo MR, Woodcock A, Wright A (2004) Color Res Appl 29:381-389]. In this article we articulate an ecological valence theory in which color preferences arise from people's average affective responses to color-associated objects. An empirical test provides strong support for this theory: People like colors strongly associated with objects they like (e.g., blues with clear skies and clean water) and dislike colors strongly associated with objects they dislike (e.g., browns with feces and rotten food). Relative to alternative theories, the ecological valence theory both fits the data better (even with fewer free parameters) and provides a more plausible, comprehensive causal explanation of color preferences.",
"title": ""
},
{
"docid": "2c38b6af96d8393660c4c700b9322f7a",
"text": "According to what we call the Principle of Procreative Beneficence (PB), couples who decide to have a child have a significant moral reason to select the child who, given his or her genetic endowment, can be expected to enjoy the most well-being. In the first part of this paper, we introduce PB, explain its content, grounds, and implications, and defend it against various objections. In the second part, we argue that PB is superior to competing principles of procreative selection such as that of procreative autonomy. In the third part of the paper, we consider the relation between PB and disability. We develop a revisionary account of disability, in which disability is a species of instrumental badness that is context- and person-relative. Although PB instructs us to aim to reduce disability in future children whenever possible, it does not privilege the normal. What matters is not whether future children meet certain biological or statistical norms, but what level of well-being they can be expected to have.",
"title": ""
},
{
"docid": "7e994507b7d1986bbc02411b221e9223",
"text": "Users of online social networks voluntarily participate in different user groups or communities. Research suggests the presence of strong local community structure in these social networks, i.e., users tend to meet other people via mutual friendship. Recently, different approaches have considered community structure information for increasing link prediction accuracy. Nevertheless, these approaches assume that users belong to just one community. In this paper, we propose three measures for the link prediction task which take into account all the different communities that users belong to. We perform experiments for both unsupervised and supervised link prediction strategies. The evaluation method considers the link imbalance problem. Results show that our proposals outperform state-of-the-art unsupervised link prediction measures and help to improve the link prediction task approached as a supervised strategy.",
"title": ""
},
{
"docid": "5063a63d425b5ceebbadfbab14a0a75d",
"text": "Two studies investigated young infants' use of the word-learning principle Mutual Exclusivity. In Experiment 1, a linear relationship between age and performance was discovered. Seventeen-month-old infants successfully used Mutual Exclusivity to map novel labels to novel objects in a preferential looking paradigm. That is, when presented a familiar and a novel object (e.g. car and phototube) and asked to \"look at the dax\", 17-month-olds increased looking to the novel object (i.e. phototube) above baseline preference. On these trials, 16-month-olds were at chance. And, 14-month-olds systematically increased looking to the familiar object (i.e. car) in response to hearing the novel label \"dax\". Experiment 2 established that this increase in looking to the car was due solely to hearing the novel label \"dax\". Several possible interpretations of the surprising form of failure at 14 months are discussed.",
"title": ""
},
{
"docid": "fad6716fef303435fd3724364ebd2741",
"text": "Price and trust are considered to be two important factors that influence customer purchasing decisions in Internet shopping. This paper examines the relative influence they have on online purchasing decisions for both potential and repeat customers. The knowledge of their relative impacts and changes in their relative roles over customer transaction experience is useful in developing customized sales strategies to target different groups of customers. The results of this study revealed that perceived trust exerted a stronger effect than perceived price on purchase intentions for both potential and repeat customers of an online store. The results also revealed that perceived price exerted a stronger influence on purchase decisions of repeat customers as compared to that of potential customers. Perceived trust exerted a stronger influence on purchase decisions of potential customers as compared to that of repeat customers.",
"title": ""
},
{
"docid": "1245c626f26dd7fe799d862b6f56a6af",
"text": "The emergence of cloud services brings new possibilities for constructing and using HPC platforms. However, while cloud services provide the flexibility and convenience of customized, pay-as-you-go parallel computing, multiple previous studies in the past three years have indicated that cloud-based clusters need a significant performance boost to become a competitive choice, especially for tightly coupled parallel applications.\n In this work, we examine the feasibility of running HPC applications in clouds. This study distinguishes itself from existing investigations in several ways: 1) We carry out a comprehensive examination of issues relevant to the HPC community, including performance, cost, user experience, and range of user activities. 2) We compare an Amazon EC2-based platform built upon its newly available HPC-oriented virtual machines with typical local cluster and supercomputer options, using benchmarks and applications with scale and problem size unprecedented in previous cloud HPC studies. 3) We perform detailed performance and scalability analysis to locate the chief limiting factors of the state-of-the-art cloud based clusters. 4) We present a case study on the impact of per-application parallel I/O system configuration uniquely enabled by cloud services. Our results reveal that though the scalability of EC2-based virtual clusters still lags behind traditional HPC alternatives, they are rapidly gaining in overall performance and cost-effectiveness, making them feasible candidates for performing tightly coupled scientific computing. In addition, our detailed benchmarking and profiling discloses and analyzes several problems regarding the performance and performance stability on EC2.",
"title": ""
},
{
"docid": "77555e0c16077cfa50682de2669b9abd",
"text": "The demand for knowledge extraction has been increasing. With the growing amount of data being generated by global data sources (e.g., social media and mobile apps) and the popularization of context-specific data (e.g., the Internet of Things), companies and researchers need to connect all these data and extract valuable information. Machine learning has been gaining much attention in data mining, leveraging the birth of new solutions. This paper proposes an architecture to create a flexible and scalable machine learning as a service. An open source solution was implemented and presented. As a case study, a forecast of electricity demand was generated using real-world sensor and weather data by running different algorithms at the same time.",
"title": ""
},
{
"docid": "2a361656df03b330abe665e6f40559aa",
"text": "Sentiment analysis plays a big role in brand and product positioning, consumer attitude detection, market research and customer relationship management. An essential part of information-gathering for market research is to find the opinion of people about a product. With the availability and popularity of online review sites and personal blogs, more chances and challenges arise as people now can, and do, use information technologies to understand others' opinions. In this paper, a Multi-Layer Perceptron (MLP) is used to classify the features extracted from movie reviews. A Decision Tree-based Feature Ranking is proposed for feature selection. The ranking is based on the Manhattan Hierarchical Cluster Criterion. In the proposed feature selection, decision tree induction selects relevant features. Decision tree induction constructs a tree structure with internal nodes denoting an attribute test, branches representing test outcomes, and external nodes denoting class predictions. In this paper, a hybrid weight optimization algorithm based on Differential Evolution (DE) and Genetic Algorithm (GA) is proposed to optimize the MLPNN. The IMDb dataset is used to evaluate the proposed method. Experimental results showed that the MLP with the proposed feature selection improves the performance of the MLP significantly, by 3.96% to 6.56%. A classification accuracy of 81.25% was achieved when 70 or 90 features were selected.",
"title": ""
},
{
"docid": "148b3fa74867f67fa1a7196b3a10038a",
"text": "Sentiment analysis of customer reviews has a crucial impact on a business's development strategy. Despite the fact that a repository of reviews evolves over time, sentiment analysis often relies on offline solutions where training data is collected before the model is built. If we want to avoid retraining the entire model from time to time, incremental learning becomes the best alternative solution for this task. In this work, we present a variant of online random forests to perform sentiment analysis on customers' reviews. Our model is able to achieve accuracy similar to offline methods and comparable to other online models.",
"title": ""
},
{
"docid": "ea8290fda2918a4618b268db502b9e69",
"text": "Managing raw alerts generated by various sensors is becoming more significant to intrusion detection systems as more sensors with different capabilities are distributed spatially in the network. Alert correlation addresses this issue by reducing, fusing and correlating raw alerts to provide a condensed, yet more meaningful view of the network from the intrusion standpoint. Techniques from a diverse range of disciplines have been used by researchers for different aspects of correlation. This paper provides a survey of the state of the art in alert correlation techniques. Our main contribution is a two-fold classification of literature based on correlation framework and applied techniques. The previous works in each category have been described alongside their strengths and weaknesses from our viewpoint.",
"title": ""
},
{
"docid": "38624083e36ff9f2ea988de0eb685528",
"text": "We present Convolutional Oriented Boundaries (COB), which produces multiscale oriented contours and region hierarchies starting from generic image classification Convolutional Neural Networks (CNNs). COB is computationally efficient, because it requires a single CNN forward pass for contour detection and it uses a novel sparse boundary representation for hierarchical segmentation; it gives a significant leap in performance over the state-of-the-art, and it generalizes very well to unseen categories and datasets. Particularly, we show that learning to estimate not only contour strength but also orientation provides more accurate results. We perform extensive experiments on BSDS, PASCAL Context, PASCAL Segmentation, and MS-COCO, showing that COB provides state-of-the-art contours, region hierarchies, and object proposals in all datasets.",
"title": ""
},
{
"docid": "6ccb8a904748cbb263f9edb6cf82ff92",
"text": "IMPORTANCE\nThe Affordable Care Act is the most important health care legislation enacted in the United States since the creation of Medicare and Medicaid in 1965. The law implemented comprehensive reforms designed to improve the accessibility, affordability, and quality of health care.\n\n\nOBJECTIVES\nTo review the factors influencing the decision to pursue health reform, summarize evidence on the effects of the law to date, recommend actions that could improve the health care system, and identify general lessons for public policy from the Affordable Care Act.\n\n\nEVIDENCE\nAnalysis of publicly available data, data obtained from government agencies, and published research findings. The period examined extends from 1963 to early 2016.\n\n\nFINDINGS\nThe Affordable Care Act has made significant progress toward solving long-standing challenges facing the US health care system related to access, affordability, and quality of care. Since the Affordable Care Act became law, the uninsured rate has declined by 43%, from 16.0% in 2010 to 9.1% in 2015, primarily because of the law's reforms. Research has documented accompanying improvements in access to care (for example, an estimated reduction in the share of nonelderly adults unable to afford care of 5.5 percentage points), financial security (for example, an estimated reduction in debts sent to collection of $600-$1000 per person gaining Medicaid coverage), and health (for example, an estimated reduction in the share of nonelderly adults reporting fair or poor health of 3.4 percentage points). The law has also begun the process of transforming health care payment systems, with an estimated 30% of traditional Medicare payments now flowing through alternative payment models like bundled payments or accountable care organizations. These and related reforms have contributed to a sustained period of slow growth in per-enrollee health care spending and improvements in health care quality. Despite this progress, major opportunities to improve the health care system remain.\n\n\nCONCLUSIONS AND RELEVANCE\nPolicy makers should build on progress made by the Affordable Care Act by continuing to implement the Health Insurance Marketplaces and delivery system reform, increasing federal financial assistance for Marketplace enrollees, introducing a public plan option in areas lacking individual market competition, and taking actions to reduce prescription drug costs. Although partisanship and special interest opposition remain, experience with the Affordable Care Act demonstrates that positive change is achievable on some of the nation's most complex challenges.",
"title": ""
},
{
"docid": "a839d9e4a80d9a8715119bc53eddbce1",
"text": "Reliable and comprehensive measurement data from large-scale fire tests is needed for validation of computer fire models, but is subject to various uncertainties, including radiation errors in temperature measurement. Here, a simple method for post-processing thermocouple data is demonstrated, within the scope of a series of large-scale fire tests, in order to establish a well characterised dataset of physical parameter values which can be used with confidence in model validation. Sensitivity analyses reveal the relationship of the correction uncertainty to the assumed optical properties and the thermocouple distribution. The analysis also facilitates the generation of maps of an equivalent radiative flux within the fire compartment, a quantity which usefully characterises the thermal exposures of structural components. Large spatial and temporal variations are found, with regions of most severe exposures not being collocated with the peak gas temperatures; this picture is at variance with the assumption of uniform heating conditions often adopted for post-flashover fires.",
"title": ""
},
{
"docid": "d685e84f8ddc55f2391a9feffc88889f",
"text": "Little is known about how Agile developers and UX designers integrate their work on a day-to-day basis. While accounts in the literature attempt to integrate Agile development and UX design by combining their processes and tools, the contradicting claims found in the accounts complicate extracting advice from such accounts. This paper reports on three ethnographically-informed field studies of the day-today practice of developers and designers in organisational settings. Our results show that integration is achieved in practice through (1) mutual awareness, (2) expectations about acceptable behaviour, (3) negotiating progress and (4) engaging with each other. Successful integration relies on practices that support and maintain these four aspects in the day-to-day work of developers and designers.",
"title": ""
},
{
"docid": "c9f6de422e349ac1319b1017d2a6547b",
"text": "This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short-term impacts of increased openness appear mostly socially beneficial in expectation. The strategic implications of medium and long-term impacts are complex. The evaluation of long-term impacts, in particular, may depend on whether the objective is to benefit the present generation or to promote a time-neutral aggregate of well-being of future generations. Some forms of openness are plausibly positive on both counts (openness about safety measures, openness about goals). Others (openness about source code, science, and possibly capability) could lead to a tightening of the competitive situation around the time of the introduction of advanced AI, increasing the probability that winning the AI race is incompatible with using any safety method that incurs a delay or limits performance. We identify several key factors that must be taken into account by any well-founded opinion on the matter. Policy Implications • The global desirability of openness in AI development – sharing e.g. source code, algorithms, or scientific insights – depends on complex tradeoffs. • A central concern is that openness could exacerbate a racing dynamic: competitors trying to be the first to develop advanced (superintelligent) AI may accept higher levels of existential risk in order to accelerate progress. • Openness may reduce the probability of AI benefits being monopolized by a small group, but other potential political consequences are more problematic. • Partial openness that enables outsiders to contribute to an AI project's safety work and to supervise organizational plans and goals appears desirable. The goal of this paper is to conduct a preliminary analysis of the long-term strategic implications of openness in AI development. What effects would increased openness in AI development have, on the margin, on the long-term impacts of AI? Is the expected value for society of these effects positive or negative? Since it is typically impossible to provide definitive answers to this type of question, our ambition here is more modest: to introduce some relevant considerations and develop some thoughts on their weight and plausibility. Given recent interest in the topic of openness in AI and the absence (to our knowledge) of any academic work directly addressing this issue, even this modest ambition would offer scope for a worthwhile contribution. Openness in AI development can refer to various things. For example, we could use this phrase to refer to open source code, open science, open data, or to openness about safety techniques, capabilities, and organizational goals, or to a non-proprietary development regime generally. We will have something to say about each of those different aspects of openness – they do not all have the same strategic implications. But unless we specify otherwise, we will use the shorthand ‘openness’ to refer to the practice of releasing into the public domain (continuously and as promptly as is practicable) all relevant source code and platforms and publishing freely about algorithms and scientific insights and ideas gained in the course of the research. Currently, most leading AI developers operate with a high but not maximal degree of openness. AI researchers at Google, Facebook, Microsoft and Baidu regularly present their latest work at technical conferences and post it on preprint servers. So do researchers in academia. Sometimes, but not always, these publications are accompanied by a release of source code, which makes it easier for outside researchers to replicate the work and build on it. Each of the aforementioned companies has developed and released, under open source licences, source code for platforms that help researchers (and students and other interested folk) implement machine learning architectures. The movement of staff and interns is another important vector for the spread of ideas. The recently announced OpenAI initiative even has openness explicitly built into its brand identity. Global Policy (2017) doi: 10.1111/1758-5899.12403 © 2017 The Authors. Global Policy published by Durham University and John Wiley & Sons, Ltd. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.",
"title": ""
},
{
"docid": "260e574e9108e05b98df7e4ed489e5fc",
"text": "Why are we not yet living with robots? If robots are not common everyday objects, it is perhaps because we have looked for robotic applications without considering with sufficient attention what the experience of interacting with a robot could be. This article introduces the idea of a value profile, a notion intended to capture the general evolution of our experience with different kinds of objects. After discussing the value profiles of commonly used objects, it offers a rapid outline of the challenging issues that must be investigated concerning immediate, short-term and long-term experience with robots. Beyond classical science-fiction archetypes, the picture emerging from this analysis is one of versatile everyday robots, autonomously developing in interaction with humans, communicating with one another, changing shape and body in order to be adapted to their various contexts of use. To become everyday objects, robots will not necessarily have to be useful, but they will have to be at the origin of radically new forms of experience.",
"title": ""
},
{
"docid": "6a6f7493f38248b06fe67039143bda82",
"text": "Time series forecasting techniques have been widely applied in domains such as weather forecasting, electric power demand forecasting, earthquake forecasting, and financial market forecasting. Because of the fact that these time series are affected by a multitude of interrelating macroscopic and microscopic variables, the underlying models that generate these time series are nonlinear and extremely complex. Therefore, it is computationally infeasible to develop full-scale models with the present computing technology. Therefore, researchers have resorted to smaller-scale models that require frequent recalibration. Despite advances in forecasting technology over the past few decades, there have not been algorithms that can consistently produce accurate forecasts with statistical significance. This is mainly because state-of-the-art forecasting algorithms essentially perform single-horizon forecasts and produce continuous numbers as outputs. This paper proposes a novel multi-horizon ternary forecasting algorithm that forecasts whether a time series is heading for an uptrend or downtrend, or going sideways. The proposed system utilizes a cascade of support vector machines, each of which is trained to forecast a specific horizon. Individual forecasts of these support vector machines are combined to form an extrapolated time series. A higher level forecasting system then forward-runs the extrapolated time series and then forecasts the future trend of the input time series in accordance with some volatility measure. Experiments have been carried out on some datasets. Over these datasets, this system achieves accuracy rates well above the baseline accuracy rate, implying statistical significance. The experimental results demonstrate the efficacy of our framework.",
"title": ""
},
{
"docid": "2cfc7eeae3259a43a24ef56932d8b27f",
"text": "This paper presents Platener, a system that allows quickly fabricating intermediate design iterations of 3D models, a process also known as low-fidelity fabrication. Platener achieves its speed-up by extracting straight and curved plates from the 3D model and substituting them with laser-cut parts of the same size and thickness. Only the regions that are of relevance to the current design iteration are executed as full-detail 3D prints. Platener connects the parts it has created by automatically inserting joints. To support fast assembly, it engraves instructions. Platener allows users to customize substitution results by (1) specifying fidelity-speed tradeoffs, (2) choosing whether or not to convert curved surfaces to plates bent using heat, and (3) specifying the conversion of individual plates and joints interactively. Platener is designed to best preserve the fidelity of functional objects, such as casings and mechanical tools, all of which contain a large percentage of straight/rectilinear elements. Compared to other low-fab systems, such as faBrickator and WirePrint, Platener better preserves the stability and functionality of such objects: the resulting assemblies have fewer parts and the parts have the same size and thickness as in the 3D model. To validate our system, we converted 2,250 3D models downloaded from a 3D model site (Thingiverse). Platener achieves a speed-up of 10 or more for 39.5% of all objects.",
"title": ""
}
] |
scidocsrr
|
b72e173b5fad75e08ff6d4676fca5c3b
|
Bus Architectures for Safety-Critical Embedded Systems
|
[
{
"docid": "ed6d9fd7ef8ec0f2509b6dec0ea4f77b",
"text": "Avionics and control systems for aircraft use distributed, fault-tolerant computer systems to provide safety-critical functions such as flight and engine control. These systems are becoming modular, meaning that they are based on standardized architectures and components, and integrated, meaning that some of the components are shared by different functions—of possibly different criticality levels. The modular architectures that support these functions must provide mechanisms for coordinating the distributed components that provide a single function (e.g., distributing sensor readings and actuator commands appropriately, and assisting replicated components to perform the function in a fault-tolerant manner), while protecting functions from faults in each other. Such an architecture must tolerate hardware faults in its own components and must provide very strong guarantees on the correctness and reliability of its own mechanisms and services. One of the essential services provided by this kind of modular architecture is communication of information from one distributed component to another, so a (physical or logical) communication bus is one of its principal components, and the protocols used for control and communication on the bus are among its principal mechanisms. Consequently, these architectures are often referred to as buses (or databuses), although this term understates their complexity, sophistication, and criticality. The capabilities once found in aircraft buses are becoming available in buses aimed at the automobile market, where the economies of scale ensure low prices. The low price of the automobile buses then renders them attractive to certain aircraft applications—provided they can achieve the safety required. In this report, I describe and compare the architectures of two avionics and two automobile buses in the interest of deducing principles common to all of them, the main differences in their design choices, and the tradeoffs made. The avionics buses considered are the Honeywell SAFEbus (the backplane data bus used in the Boeing 777 Airplane Information Management System) and the NASA SPIDER (an architecture being developed as a demonstrator for certification under the new DO-254 guidelines); the automobile buses considered are the TTTech Time-Triggered Architecture (TTA), recently adopted by Audi for automobile applications, and by Honeywell for avionics and aircraft control functions, and FlexRay, which is being developed by a consortium of BMW, DaimlerChrysler, Motorola, and Philips. I consider these buses from the perspective of their fault hypotheses, mechanisms, services, and assurance.",
"title": ""
}
] |
[
{
"docid": "78bd1c7ea28a4af60991b56ccd658d7f",
"text": "The number of categories for action recognition is growing rapidly. It is thus becoming increasingly hard to collect sufficient training data to learn conventional models for each category. This issue may be ameliorated by the increasingly popular “zero-shot learning” (ZSL) paradigm. In this framework a mapping is constructed between visual features and a human interpretable semantic description of each category, allowing categories to be recognised in the absence of any training data. Existing ZSL studies focus primarily on image data, and attribute-based semantic representations. In this paper, we address zero-shot recognition in contemporary video action recognition tasks, using semantic word vector space as the common space to embed videos and category labels. This is more challenging because the mapping between the semantic space and space-time features of videos containing complex actions is more complex and harder to learn. We demonstrate that a simple self-training and data augmentation strategy can significantly improve the efficacy of this mapping. Experiments on human action datasets including HMDB51 and UCF101 demonstrate that our approach achieves the state-of-the-art zero-shot action recognition performance.",
"title": ""
},
{
"docid": "a73b9ce3d0808177c9f0739b67a1a3f3",
"text": "Multiword expressions (MWEs) are lexical items that can be decomposed into multiple component words, but have properties that are unpredictable with respect to their component words. In this paper we propose the first deep learning models for token-level identification of MWEs. Specifically, we consider a layered feedforward network, a recurrent neural network, and convolutional neural networks. In experimental results we show that convolutional neural networks are able to outperform the previous state-of-the-art for MWE identification, with a convolutional neural network with three hidden layers giving the best performance.",
"title": ""
},
{
"docid": "fc70a1820f838664b8b51b5adbb6b0db",
"text": "This paper presents a method for identifying an opinion with its holder and topic, given a sentence from online news media texts. We introduce an approach of exploiting the semantic structure of a sentence, anchored to an opinion bearing verb or adjective. This method uses semantic role labeling as an intermediate step to label an opinion holder and topic using data from FrameNet. We decompose our task into three phases: identifying an opinion-bearing word, labeling semantic roles related to the word in the sentence, and then finding the holder and the topic of the opinion word among the labeled semantic roles. For a broader coverage, we also employ a clustering technique to predict the most probable frame for a word which is not defined in FrameNet. Our experimental results show that our system performs significantly better than the baseline.",
"title": ""
},
{
"docid": "3a8f14166954036f85914183dd7a7ee4",
"text": "Abused and nonabused child witnesses to parental violence temporarily residing in a battered women's shelter were compared to children from a similar economic background on measures of self-esteem, anxiety, depression, and behavior problems, using mothers' and self-reports. Results indicated significantly more distress in the abused-witness children than in the comparison group, with nonabused witness children's scores falling between the two. Age of child and types of violence were mediating factors. Implications of the findings are discussed.",
"title": ""
},
{
"docid": "2f270908a1b4897b7d008d9673e3300b",
"text": "The implementation of 3D stereo matching in real time is an important problem for many vision applications and algorithms. The current work, extending previous results by the same authors, presents in detail an architecture which combines the methods of Absolute Differences, Census, and Belief Propagation in an integrated architecture suitable for implementation with Field Programmable Gate Array (FPGA) logic. Emphasis on the present work is placed on the justification of dimensioning the system, as well as detailed design and testing information for a fully placed and routed design to process 87 frames per sec (fps) in 1920 × 1200 resolution, and a fully implemented design for 400 × 320 which runs up to 1570 fps.",
"title": ""
},
{
"docid": "55fdf6b013aa8e4082137a4c84a2873d",
"text": "The Named Data Networking (NDN) project is emerging as one of the most promising information-centric future Internet architectures. Besides NDN's recognized potential as a content retrieval solution in wired and wireless domains, its innovative concepts, such as named content, name-based routing and in-network caching, particularly suit the requirements of the Internet of Things (IoT), which interconnects billions of heterogeneous objects. IoT highly differs from today's Internet due to resource-constrained devices, massive volumes of small exchanged data, and traffic type diversity. The study in this paper addresses the design of a high-level NDN architecture, whose main components are overhauled to specifically meet the IoT challenges.",
"title": ""
},
{
"docid": "608bf85fa593c7ddff211c5bcc7dd20a",
"text": "We introduce a composite deep neural network architecture for supervised and language-independent context-sensitive lemmatization. The proposed method considers the task as identifying the correct edit tree representing the transformation between a word-lemma pair. To find the lemma of a surface word, we exploit two successive bidirectional gated recurrent structures: the first one is used to extract the character-level dependencies, and the next one captures the contextual information of the given word. The key advantages of our model compared to the state-of-the-art lemmatizers such as Lemming and Morfette are (i) it is independent of human-decided features, and (ii) except the gold lemma, no other expensive morphological attribute is required for joint learning. We evaluate the lemmatizer on nine languages: Bengali, Catalan, Dutch, Hindi, Hungarian, Italian, Latin, Romanian and Spanish. It is found that except for Bengali, the proposed method outperforms Lemming and Morfette on the other languages. To train the model on Bengali, we develop a gold lemma annotated dataset (having 1,702 sentences with a total of 20,257 word tokens), which is an additional contribution of this work.",
"title": ""
},
{
"docid": "0d1f88dbd4a04748a83fe741a86518c1",
"text": "The focus of this paper is to investigate how writing computer programs can help children develop their storytelling and creative writing abilities. The process of writing a program---coding---has long been considered only in terms of computer science, but such coding is also reflective of the imaginative and narrative elements of fiction writing workshops. Writing to program can also serve as programming to write, in which a child learns the importance of sequence, structure, and clarity of expression---three aspects characteristic of effective coding and good storytelling alike. While there have been efforts examining how learning to write code can be facilitated by storytelling, there has been little exploration as to how such creative coding can also be directed to teach students about the narrative and storytelling process. Using the introductory programming language Scratch, this paper explores the potential of having children create their own digital stories with the software and how the narrative structure of these stories offers kids the opportunity to better understand the process of expanding an idea into the arc of a story.",
"title": ""
},
{
"docid": "592b959fb3beef020e9dbafd804d897f",
"text": "In this paper, we study the effectiveness of phishing blacklists. We used 191 fresh phish that were less than 30 minutes old to conduct two tests on eight anti-phishing toolbars. We found that 63% of the phishing campaigns in our dataset lasted less than two hours. Blacklists were ineffective when protecting users initially, as most of them caught less than 20% of phish at hour zero. We also found that blacklists were updated at different speeds and varied in coverage, as 47% to 83% of phish appeared on blacklists 12 hours from the initial test. We found that two tools using heuristics to complement blacklists caught significantly more phish initially than those using only blacklists. However, it took a long time for phish detected by heuristics to appear on blacklists. Finally, we tested the toolbars on a set of 13,458 legitimate URLs for false positives, and did not find any instance of mislabeling for either blacklists or heuristics. We present these findings and discuss ways in which anti-phishing tools can be improved.",
"title": ""
},
{
"docid": "eddeeb5b00dc7f82291b3880956e2f01",
"text": "This study aims at building a robust method for semiautomated information extraction of pavement markings detected from mobile laser scanning (MLS) point clouds. The proposed workflow consists of three components: 1) preprocessing, 2) extraction, and 3) classification. In preprocessing, the three-dimensional (3-D) MLS point clouds are converted into radiometrically corrected and enhanced two-dimensional (2-D) intensity imagery of the road surface. Then, the pavement markings are automatically extracted with the intensity using a set of algorithms, including Otsu's thresholding, neighbor-counting filtering, and region growing. Finally, the extracted pavement markings are classified with the geometric parameters by using a manually defined decision tree. A study was conducted by using the MLS dataset acquired in Xiamen, Fujian, China. The results demonstrated that the proposed workflow and method can achieve 92% in completeness, 95% in correctness, and 94% in F-score.",
"title": ""
},
{
"docid": "1386c523706fdd4535a8a75c33c4e615",
"text": "People have a basic need to maintain the integrity of the self, a global sense of personal adequacy. Events that threaten self-integrity arouse stress and self-protective defenses that can hamper performance and growth. However, an intervention known as self-affirmation can curb these negative outcomes. Self-affirmation interventions typically have people write about core personal values. The interventions bring about a more expansive view of the self and its resources, weakening the implications of a threat for personal integrity. Timely affirmations have been shown to improve education, health, and relationship outcomes, with benefits that sometimes persist for months and years. Like other interventions and experiences, self-affirmations can have lasting benefits when they touch off a cycle of adaptive potential, a positive feedback loop between the self-system and the social system that propagates adaptive outcomes over time. The present review highlights both connections with other disciplines and lessons for a social psychological understanding of intervention and change.",
"title": ""
},
{
"docid": "99ffc7cd601d1c43bbf7e3537632e95c",
"text": "Despite numerous advances in IT security, many computer users are still vulnerable to security-related risks because they do not comply with organizational policies and procedures. In a network setting, individual risk can extend to all networked users. Endpoint security refers to the set of organizational policies, procedures, and practices directed at securing the endpoint of the network connections – the individual end user. As such, the challenges facing IT managers in providing effective endpoint security are unique in that they often rely heavily on end user participation. But vulnerability can be minimized through modification of desktop security programs and increased vigilance on the part of the system administrator or CSO. The cost-prohibitive nature of these measures generally dictates targeting high-risk users on an individual basis. It is therefore important to differentiate between individuals who are most likely to pose a security risk and those who will likely follow most organizational policies and procedures.",
"title": ""
},
{
"docid": "45bb19cdb9508acf8796e9f43951571f",
"text": "The biomedical literature is expanding at ever-increasing rates, and it has become extremely challenging for researchers to keep abreast of new data and discoveries even in their own domains of expertise. We introduce PaperBot, a configurable, modular, open-source crawler to automatically find and efficiently index peer-reviewed publications based on periodic full-text searches across publisher web portals. PaperBot may operate stand-alone or it can be easily integrated with other software platforms and knowledge bases. Without user interactions, PaperBot retrieves and stores the bibliographic information (full reference, corresponding email contact, and full-text keyword hits) based on pre-set search logic from a wide range of sources including Elsevier, Wiley, Springer, PubMed/PubMedCentral, Nature, and Google Scholar. Although different publishing sites require different search configurations, the common interface of PaperBot unifies the process from the user perspective. Once saved, all information becomes web accessible allowing efficient triage of articles based on their actual relevance and seamless annotation of suitable metadata content. The platform allows the agile reconfiguration of all key details, such as the selection of search portals, keywords, and metadata dimensions. The tool also provides a one-click option for adding articles manually via digital object identifier or PubMed ID. The microservice architecture of PaperBot implements these capabilities as a loosely coupled collection of distinct modules devised to work separately, as a whole, or to be integrated with or replaced by additional software. All metadata is stored in a schema-less NoSQL database designed to scale efficiently in clusters by minimizing the impedance mismatch between relational model and in-memory data structures. As a testbed, we deployed PaperBot to help identify and manage peer-reviewed articles pertaining to digital reconstructions of neuronal morphology in support of the NeuroMorpho.Org data repository. PaperBot enabled the custom definition of both general and neuroscience-specific metadata dimensions, such as animal species, brain region, neuron type, and digital tracing system. Since deployment, PaperBot helped NeuroMorpho.Org more than quintuple the yearly volume of processed information while maintaining a stable personnel workforce.",
"title": ""
},
{
"docid": "73531bf62f19857e68e04e8b6470679e",
"text": "What will it take for drones—and the whole associated ecosystem—to take off? Arguably, infallible command and control (C&C) channels for safe and autonomous flying, and high-throughput links for multi-purpose live video streaming. And indeed, meeting these aspirations may entail a full cellular support, provided through 5G-and-beyond hardware and software upgrades by both mobile operators and manufacturers of these unmanned aerial vehicles (UAVs). In this article, we vouch for massive MIMO as the key building block to realize 5G-connected UAVs. Through the sheer evidence of 3GPPcompliant simulations, we demonstrate how massive MIMO can be enhanced by complementary network-based and UAVbased solutions, resulting in consistent UAV C&C support, large UAV uplink data rates, and harmonious coexistence with legacy ground users.",
"title": ""
},
{
"docid": "40cd4d0863ed757709530af59e928e3b",
"text": "Kynurenic acid (KYNA) is an endogenous antagonist of ionotropic glutamate receptors and the α7 nicotinic acetylcholine receptor, showing anticonvulsant and neuroprotective activity. In this study, the presence of KYNA in food and honeybee products was investigated. KYNA was found in all 37 tested samples of food and honeybee products. The highest concentration of KYNA was obtained from honeybee products’ samples, propolis (9.6 nmol/g), honey (1.0–4.8 nmol/g) and bee pollen (3.4 nmol/g). A high concentration was detected in fresh broccoli (2.2 nmol/g) and potato (0.7 nmol/g). Only traces of KYNA were found in some commercial baby products. KYNA administered intragastrically in rats was absorbed from the intestine into the blood stream and transported to the liver and to the kidney. In conclusion, we provide evidence that KYNA is a constituent of food and that it can be easily absorbed from the digestive system.",
"title": ""
},
{
"docid": "139d9d5866a1e455af954b2299bdbcf6",
"text": "1. Introduction. Reasoning about knowledge and belief has long been an issue of concern in philosophy and artificial intelligence (cf. [Hil],[MH],[Mo]). Recently we have argued that reasoning about knowledge is also crucial in understanding and reasoning about protocols in distributed systems, since messages can be viewed as changing the state of knowledge of a system [HM]; knowledge also seems to be of vital importance in cryptography theory [Me] and database theory. In order to formally reason about knowledge, we need a good semantic model. Part of the difficulty in providing such a model is that there is no agreement on exactly what the properties of knowledge are or should be. (This author's work was supported in part by DARPA contract N00039-82-C-0250.) For example, is it the case that you know what facts you know? Do you know what you don't know? Do you know only true things, or can something you \"know\" actually be false? Possible-worlds semantics provide a good formal tool for \"customizing\" a logic so that, by making minor changes in the semantics, we can capture different sets of axioms. The idea, first formalized by Hintikka [Hil], is that in each state of the world, an agent (or knower or player: we use all these words interchangeably) has other states or worlds that he considers possible. An agent knows p exactly if p is true in all the worlds that he considers possible. As Kripke pointed out [Kr], by imposing various conditions on this possibility relation, we can capture a number of interesting axioms. For example, if we require that the real world always be one of the possible worlds (which amounts to saying that the possibility relation is reflexive), then it follows that you can't know anything false. Similarly, we can show that if the relation is transitive, then you know what you know. If the relation is transitive and symmetric, then you also know what you don't know. 
(The one-knower model where the possibility relation is reflexive corresponds to the classical modal logic T, while the reflexive and transitive case corresponds to S4, and the reflexive, symmetric and transitive case corresponds to S5.) Once we have a general framework for modelling knowledge, a reasonable question to ask is how hard it is to reason about knowledge. In particular, how hard is it to decide if a given formula is valid or satisfiable? The answer to this question depends crucially on the choice of axioms. For example, in the one-knower case, Ladner [La] has shown that for T and S4 the problem of deciding satisfiability is complete in polynomial space, while for S5 it is NP-complete, and thus no harder than the satisfiability problem for propositional logic. Our aim in this paper is to reexamine the possible-worlds framework for knowledge and belief with four particular points of emphasis: (1) we show how general techniques for finding decision procedures and complete axiomatizations apply to models for knowledge and belief, (2) we show how sensitive the difficulty of the decision procedure is to such issues as the choice of modal operators and the axiom system, (3) we discuss how notions of common knowledge and implicit knowledge among a group of agents fit into the possible-worlds framework, and, finally, (4) we consider to what extent the possible-worlds approach is a viable one for modelling knowledge and belief. We begin in Section 2 by reviewing possible-worlds semantics in detail, and proving that the many-knower versions of T, S4, and S5 do indeed capture some of the more common axiomatizations of knowledge. In Section 3 we turn to complexity-theoretic issues. 
We review some standard notions from complexity theory, and then reprove and extend Ladner's results to show that the decision procedures for the many-knower versions of T, S4, and S5 are all complete in polynomial space. (A problem is said to be complete with respect to a complexity class if, roughly speaking, it is the hardest problem in that class; see Section 3 for more details.) This suggests that for S5, reasoning about many agents' knowledge is qualitatively harder than just reasoning about one agent's knowledge of the real world and of his own knowledge. In Section 4 we turn our attention to modifying the model so that it can deal with belief rather than knowledge, where one can believe something that is false. This turns out to be somewhat more complicated than dropping the assumption of reflexivity, but it can still be done in the possible-worlds framework. Results about decision procedures and complete axiomatizations for belief parallel those for knowledge. In Section 5 we consider what happens when operators for common knowledge and implicit knowledge are added to the language. A group has common knowledge of a fact p exactly when everyone knows that everyone knows that everyone knows ... that p is true. (Common knowledge is essentially what McCarthy's \"fool\" knows; cf. [MSHI].) A group has implicit knowledge of p if, roughly speaking, when the agents pool their knowledge together they can deduce p. (Note our usage of the notion of \"implicit knowledge\" here differs slightly from the way it is used in [Lev2] and [FH].) As shown in [HM1], common knowledge is an essential state for reaching agreements and coordinating action. For very similar reasons, common knowledge also seems to play an important role in human understanding of speech acts (cf. [CM]). 
The notion of implicit knowledge arises when reasoning about what states of knowledge a group can attain through communication, and thus is also crucial when reasoning about the efficacy of speech acts and about communication protocols in distributed systems. It turns out that adding an implicit knowledge operator to the language does not substantially change the complexity of deciding the satisfiability of formulas in the language, but this is not the case for common knowledge. Using standard techniques from PDL (Propositional Dynamic Logic; cf. [FL],[Pr]), we can show that when we add common knowledge to the language, the satisfiability problem for the resulting logic (whether it is based on T, S4, or S5) is complete in deterministic exponential time, as long as there are at least two knowers. Thus, adding a common knowledge operator renders the decision procedure qualitatively more complex. (Common knowledge does not seem to be of much interest in the case of one knower. In fact, in the case of S4 and S5, if there is only one knower, knowledge and common knowledge are identical.) We conclude in Section 6 with some discussion of the appropriateness of the possible-worlds approach for capturing knowledge and belief, particularly in light of our results on computational complexity. Detailed proofs of the theorems stated here, as well as further discussion of these results, can be found in the full paper ([HM2]). 2.2 Possible-worlds semantics: Following Hintikka [Hil], Sato [Sa], Moore [Mo], and others, we use a possible-worlds semantics to model knowledge. This provides us with a general framework for our semantical investigations of knowledge and belief. (Everything we say about \"knowledge\" in this subsection applies equally well to belief.) The essential idea behind possible-worlds semantics is that an agent's state of knowledge corresponds to the extent to which he can determine what world he is in. 
In a given world, we can associate with each agent the set of worlds that, according to the agent's knowledge, could possibly be the real world. An agent is then said to know a fact p exactly if p is true in all the worlds in this set; he does not know p if there is at least one world that he considers possible where p does not hold. (We discuss the ramifications of this point in Section 6. The name K(m) is inspired by the fact that for one knower, the system reduces to the well-known modal logic K.) All that can be said is that we are modelling a rather idealised reasoner, who knows all tautologies and all the logical consequences of his knowledge. If we take the classical interpretation of knowledge as true, justified belief, then an axiom such as A3 seems to be necessary. On the other hand, philosophers have shown that axiom A5 does not hold with respect to this interpretation ([Len]). However, the S5 axioms do capture an interesting interpretation of knowledge appropriate for reasoning about distributed systems (see [HM1] and Section 6). We continue here with our investigation of all these logics, deferring further comments on their appropriateness to Section 6. Theorem 3 implies that the provable formulas of K(m) correspond precisely to the formulas that are valid for Kripke worlds. As Kripke showed [Kr], there are simple conditions that we can impose on the possibility relations P_i so that the valid formulas of the resulting worlds are exactly the provable formulas of T(m), S4(m), and S5(m) respectively. We will try to motivate these conditions, but first we need a few definitions. (Since Lemma 4(b) says that a relation that is both reflexive and Euclidean must also be transitive, the reader may suspect that axiom A4 is redundant in S5. This indeed is the case.)",
"title": ""
},
{
"docid": "906c92a4e913d2b7e478155492a69013",
"text": "Most investigations into near-memory hardware accelerators for deep neural networks have primarily focused on inference, while the potential of accelerating training has received relatively little attention so far. Based on an in-depth analysis of the key computational patterns in state-of-the-art gradient-based training methods, we propose an efficient near-memory acceleration engine called NTX that can be used to train state-of-the-art deep convolutional neural networks at scale. Our main contributions are: (i) a loose coupling of RISC-V cores and NTX co-processors reducing offloading overhead by 7× over previously published results; (ii) an optimized IEEE 754 compliant data path for fast high-precision convolutions and gradient propagation; (iii) evaluation of near-memory computing with NTX embedded into residual area on the Logic Base die of a Hybrid Memory Cube; and (iv) a scaling analysis to meshes of HMCs in a data center scenario. We demonstrate a 2.7× energy efficiency improvement of NTX over contemporary GPUs at 4.4× less silicon area, and a compute performance of 1.2 Tflop/s for training large state-of-the-art networks with full floating-point precision. 
At the data center scale, a mesh of NTX achieves above 95 percent parallel and energy efficiency, while providing 2.1× energy savings or 3.1× performance improvement over a GPU-based system.",
"title": ""
},
{
"docid": "38419655a4a8fedfd9e0c3001741f165",
"text": "Convolutional Neural Networks (CNN) have achieved a great success in image recognition tasks by automatically learning a hierarchical feature representation from raw data. While the majority of Time-Series Classification (TSC) literature is focused on 1D signals, this paper uses Recurrence Plots (RP) to transform time-series into 2D texture images and then take advantage of the deep CNN classifier. Image representation of time-series introduces different feature types that are not available for 1D signals, and therefore TSC can be treated as a texture image recognition task. The CNN model also allows learning different levels of representations together with a classifier, jointly and automatically. Therefore, using RP and CNN in a unified framework is expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to the existing deep architectures, but also to the state-of-the-art TSC algorithms.",
"title": ""
},
{
"docid": "1328ced6939005175d3fbe2ef95fd067",
"text": "We present SNIPER, an algorithm for performing efficient multi-scale training in instance level visual recognition tasks. Instead of processing every pixel in an image pyramid, SNIPER processes context regions around ground-truth instances (referred to as chips) at the appropriate scale. For background sampling, these context-regions are generated using proposals extracted from a region proposal network trained with a short learning schedule. Hence, the number of chips generated per image during training adaptively changes based on the scene complexity. SNIPER only processes 30% more pixels compared to the commonly used single scale training at 800x1333 pixels on the COCO dataset. But, it also observes samples from extreme resolutions of the image pyramid, like 1400x2000 pixels. As SNIPER operates on resampled low resolution chips (512x512 pixels), it can have a batch size as large as 20 on a single GPU even with a ResNet-101 backbone. Therefore it can benefit from batch-normalization during training without the need for synchronizing batch-normalization statistics across GPUs. SNIPER brings training of instance level recognition tasks like object detection closer to the protocol for image classification and suggests that the commonly accepted guideline that it is important to train on high resolution images for instance level visual recognition tasks might not be correct. Our implementation based on Faster-RCNN with a ResNet-101 backbone obtains an mAP of 47.6% on the COCO dataset for bounding box detection and can process 5 images per second during inference with a single GPU. Code is available at https://github.com/mahyarnajibi/SNIPER/.",
"title": ""
},
{
"docid": "ae579fccab792401cbbd7b6225c17e1b",
"text": "The generalized assignment problem can be viewed as the following problem of scheduling parallel machines with costs. Each job is to be processed by exactly one machine; processing job j on machine i requires time p_ij and incurs a cost of c_ij; each machine i is available for T_i time units, and the objective is to minimize the total cost incurred. Our main result is as follows. There is a polynomial-time algorithm that, given a value C, either proves that no feasible schedule of cost C exists, or else finds a schedule of cost at most C where each machine i is used for at most 2T_i time units. We also extend this result to a variant of the problem where, instead of a fixed processing time p_ij, there is a range of possible processing times for each machine-job pair, and the cost linearly increases as the processing time decreases. We show that these results imply a polynomial-time 2-approximation algorithm to minimize a weighted sum of the cost and the makespan, i.e., the maximum job completion time. We also consider the objective of minimizing the mean job completion time. We show that there is a polynomial-time algorithm that, given values M and T, either proves that no schedule of mean job completion time M and makespan T exists, or else finds a schedule of mean job completion time at most M and makespan at most 2T.",
"title": ""
}
] |
scidocsrr
|
2119d665534a15b04e49f996db25ac47
|
The contribution of attentional bias to worry: Distinguishing the roles of selective engagement and disengagement
|
[
{
"docid": "1c7131fcb031497b2c1487f9b25d8d4e",
"text": "Biases in information processing undoubtedly play an important role in the maintenance of emotion and emotional disorders. In an attentional cueing paradigm, threat words and angry faces had no advantage over positive or neutral words (or faces) in attracting attention to their own location, even for people who were highly state-anxious. In contrast, the presence of threatening cues (words and faces) had a strong impact on the disengagement of attention. When a threat cue was presented and a target subsequently presented in another location, high state-anxious individuals took longer to detect the target relative to when either a positive or a neutral cue was presented. It is concluded that threat-related stimuli affect attentional dwell time and the disengage component of attention, leaving the question of whether threat stimuli affect the shift component of attention open to debate.",
"title": ""
}
] |
[
{
"docid": "42452d6df7372cdc9c2cdebd8f0475cb",
"text": "This paper presents SgxPectre Attacks that exploit the recently disclosed CPU bugs to subvert the confidentiality and integrity of SGX enclaves. Particularly, we show that when branch prediction of the enclave code can be influenced by programs outside the enclave, the control flow of the enclave program can be temporarily altered to execute instructions that lead to observable cache-state changes. An adversary observing such changes can learn secrets inside the enclave memory or its internal registers, thus completely defeating the confidentiality guarantee offered by SGX. To demonstrate the practicality of our SgxPectre Attacks, we have systematically explored the possible attack vectors of branch target injection, approaches to win the race condition during enclave’s speculative execution, and techniques to automatically search for code patterns required for launching the attacks. Our study suggests that any enclave program could be vulnerable to SgxPectre Attacks since the desired code patterns are available in most SGX runtimes (e.g., Intel SGX SDK, Rust-SGX, and Graphene-SGX). Most importantly, we have applied SgxPectre Attacks to steal seal keys and attestation keys from Intel signed quoting enclaves. The seal key can be used to decrypt sealed storage outside the enclaves and forge valid sealed data; the attestation key can be used to forge attestation signatures. For these reasons, SgxPectre Attacks practically defeat SGX’s security protection. This paper also systematically evaluates Intel’s existing countermeasures against SgxPectre Attacks and discusses the security implications.",
"title": ""
},
{
"docid": "7c2c987c2fc8ea0b18d8361072fa4e31",
"text": "Information Retrieval (IR) and Answer Extraction are often designed as isolated or loosely connected components in Question Answering (QA), with repeated overengineering on IR, and not necessarily performance gain for QA. We propose to tightly integrate them by coupling automatically learned features for answer extraction to a shallow-structured IR model. Our method is very quick to implement, and significantly improves IR for QA (measured in Mean Average Precision and Mean Reciprocal Rank) by 10%-20% against an uncoupled retrieval baseline in both document and passage retrieval, which further leads to a downstream 20% improvement in QA F1.",
"title": ""
},
{
"docid": "57261e77a6e8f6a0c984f5e199a71554",
"text": "We present a software framework for simulating the HCF Controlled Channel Access (HCCA) in an IEEE 802.11e system. The proposed approach allows for flexible integration of different scheduling algorithms with the MAC. The 802.11e system consists of three modules: Classifier, HCCA Scheduler, MAC. We define a communication interface exported by the MAC module to the HCCA Scheduler. A Scheduler module implementing the reference scheduler defined in the draft IEEE 802.11e document is also described. The software framework reported in this paper has been implemented using the Network Simulator 2 platform. A preliminary performance analysis of the reference scheduler is also reported.",
"title": ""
},
{
"docid": "fba0ff24acbe07e1204b5fe4c492ab72",
"text": "To ensure high quality software, it is crucial that non‐functional requirements (NFRs) are well specified and thoroughly tested in parallel with functional requirements (FRs). Nevertheless, in requirement specification the focus is mainly on FRs, even though NFRs have a critical role in the success of software projects. This study presents a systematic literature review of the NFR specification in order to identify the current state of the art and needs for future research. The systematic review summarizes the 51 relevant papers found and discusses them within seven major sub categories with “combination of other approaches” being the one with most prior results.",
"title": ""
},
{
"docid": "a7c2c2889b54a4f0e22b1cb09bbd8d6b",
"text": "In this paper we present an efficient algorithm for multi-layer depth peeling via bucket sort of fragments on GPU, which makes it possible to capture up to 32 layers simultaneously with correct depth ordering in a single geometry pass. We exploit multiple render targets (MRT) as storage and construct a bucket array of size 32 per pixel. Each bucket is capable of holding only one fragment, and can be concurrently updated using the MAX/MIN blending operation. During the rasterization, the depth range of each pixel location is divided into consecutive subintervals uniformly, and a linear bucket sort is performed so that fragments within each subintervals will be routed into the corresponding buckets. In a following fullscreen shader pass, the bucket array can be sequentially accessed to get the sorted fragments for further applications. Collisions will happen when more than one fragment is routed to the same bucket, which can be alleviated by multi-pass approach. We also develop a two-pass approach to further reduce the collisions, namely adaptive bucket depth peeling. In the first geometry pass, the depth range is redivided into non-uniform subintervals according to the depth distribution to make sure that there is only one fragment within each subinterval. In the following bucket sorting pass, there will be only one fragment routed into each bucket and collisions will be substantially reduced. Our algorithm shows up to 32 times speedup to the classical depth peeling especially for large scenes with high depth complexity, and the experimental results are visually faithful to the ground truth. Also it has no requirement of pre-sorting geometries or post-sorting fragments, and is free of read-modify-write (RMW) hazards.",
"title": ""
},
{
"docid": "af7736d4e796d3439613ed06ca4e4b72",
"text": "The past few years have witnessed the fast development of different regularization methods for deep learning models such as fully-connected deep neural networks (DNNs) and Convolutional Neural Networks (CNNs). Most previous methods mainly consider dropping features from input data and hidden layers, such as Dropout, Cutout and DropBlocks. DropConnect drops connections between fully-connected layers. By randomly discarding some features or connections, the above-mentioned methods control the overfitting problem and improve the performance of neural networks. In this paper, we propose two novel regularization methods, namely DropFilter and DropFilter-PLUS, for the learning of CNNs. Different from the previous methods, DropFilter and DropFilter-PLUS modify the convolution filters. For DropFilter-PLUS, we find a suitable way to accelerate the learning process based on theoretical analysis. Experimental results on MNIST show that using DropFilter and DropFilter-PLUS may improve performance on image classification tasks.",
"title": ""
},
{
"docid": "3ea9d312027505fb338a1119ff01d951",
"text": "Many experiments provide evidence that practicing retrieval benefits retention relative to conditions of no retrieval practice. Nearly all prior research has employed retrieval practice requiring overt responses, but a few experiments have shown that covert retrieval also produces retention advantages relative to control conditions. However, direct comparisons between overt and covert retrieval are scarce: Does covert retrieval-thinking of but not producing responses-on a first test produce the same benefit as overt retrieval on a criterial test given later? We report 4 experiments that address this issue by comparing retention on a second test following overt or covert retrieval on a first test. In Experiment 1 we used a procedure designed to ensure that subjects would retrieve on covert as well as overt test trials and found equivalent testing effects in the 2 cases. In Experiment 2 we replicated these effects using a procedure that more closely mirrored natural retrieval processes. In Experiment 3 we showed that overt and covert retrieval produced equivalent testing effects after a 2-day delay. Finally, in Experiment 4 we showed that covert retrieval benefits retention more than restudying. We conclude that covert retrieval practice is as effective as overt retrieval practice, a conclusion that contravenes hypotheses in the literature proposing that overt responding is better. This outcome has an important educational implication: Students can learn as much from covert self-testing as they would from overt responding.",
"title": ""
},
{
"docid": "7709df997c72026406d257c85dacb271",
"text": "This paper addresses the task of document retrieval based on the degree of document relatedness to the meanings of a query by presenting a semantic-enabled language model. Our model relies on the use of semantic linking systems for forming a graph representation of documents and queries, where nodes represent concepts extracted from documents and edges represent semantic relatedness between concepts. Based on this graph, our model adopts a probabilistic reasoning model for calculating the conditional probability of a query concept given values assigned to document concepts. We present an integration framework for interpolating other retrieval systems with the presented model in this paper. Our empirical experiments on a number of TREC collections show that the semantic retrieval has a synergetic impact on the results obtained through state of the art keyword-based approaches, and the consideration of semantic information obtained from entity linking on queries and documents can complement and enhance the performance of other retrieval models.",
"title": ""
},
{
"docid": "7d02f07418dc82b0645b6933a3fecfc0",
"text": "This article is part of a For-Discussion-Section of Methods of Information in Medicine about the paper \"Evidence-based Health Informatics: How Do We Know What We Know?\" written by Elske Ammenwerth [1]. It is introduced by an editorial. This article contains the combined commentaries invited to independently comment on the Ammenwerth paper. In subsequent issues the discussion can continue through letters to the editor. With these comments on the paper \"Evidence-based Health Informatics: How do we know what we know?\", written by Elske Ammenwerth [1], the journal seeks to stimulate a broad discussion on the challenges of evaluating information processing and information technology in health care. An international group of experts has been invited by the editor of Methods to comment on this paper. Each of the invited commentaries forms one section of this paper.",
"title": ""
},
{
"docid": "498eada57edb9120da164c5cb396198b",
"text": "We propose a passive blackbox-based technique for determining the type of access point (AP) connected to a network. Essentially, a stimulant (i.e., packet train) that emulates normal data transmission is sent through the access point. Since access points from different vendors are architecturally heterogeneous (e.g., chipset, firmware, driver), each AP will act upon the packet train differently. By applying wavelet analysis to the resultant packet train, a distinct but reproducible pattern is extracted allowing a clear classification of different AP types. This has two important applications: (1) as a system administrator, this technique can be used to determine if a rogue access point has connected to the network; and (2) as an attacker, fingerprinting the access point is necessary to launch driver/firmware specific attacks. Extensive experiments were conducted (over 60GB of data was collected) to differentiate 6 APs. We show that this technique can classify APs with a high accuracy (in some cases, we can classify successfully 100% of the time) with as little as 100000 packets. Further, we illustrate that this technique is independent of the stimulant traffic type (e.g., TCP or UDP). Finally, we show that the AP profile is stable across multiple models of the same AP.",
"title": ""
},
{
"docid": "dd1f8a5eae50d0a026387ba1b6695bef",
"text": "Cloud computing is one of the significant development that utilizes progressive computational power and upgrades data distribution and data storing facilities. With cloud information services, it is essential for information to be saved in the cloud and also distributed across numerous customers. Cloud information repository is involved with issues of information integrity, data security and information access by unapproved users. Hence, an autonomous reviewing and auditing facility is necessary to guarantee that the information is effectively accommodated and used in the cloud. In this paper, a comprehensive survey on the state-of-art techniques in data auditing and security are discussed. Challenging problems in information repository auditing and security are presented. Finally, directions for future research in data auditing and security have been discussed.",
"title": ""
},
{
"docid": "013c6f8931a8f9e0cff4fb291571e5bf",
"text": "Herrmann-Pillath, Carsten, Libman, Alexander, and Yu, Xiaofan—Economic integration in China: Politics and culture The aim of the paper is to explicitly disentangle the role of political and cultural boundaries as factors of fragmentation of economies within large countries. On the one hand, local protectionism plays a substantial role in many federations and decentralized states. On the other hand, if the country exhibits high level of cultural heterogeneity, it may also contribute to the economic fragmentation; however, this topic has received significantly less attention in the literature. This paper looks at the case of China and proxies the cultural heterogeneity by the heterogeneity of local dialects. It shows that the effect of politics clearly dominates that of culture: while provincial borders seem to have a strong influence disrupting economic ties, economic linkages across provinces, even if the regions fall into the same linguistic zone, are rather weak and, on the contrary, linguistic differences within provinces do not prevent economic integration. For some language zones we do, however, find a stronger effect on economic integration. Journal of Comparative Economics 42 (2) (2014) 470–492. Frankfurt School of Finance and Management, Germany; Russian Academy of Sciences, Russia. 2013 Association for Comparative Economic Studies Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "1eea111c3efcc67fcc1bb6f358622475",
"text": "Methyl Cellosolve (the monomethyl ether of ethylene glycol) has been widely used as the organic solvent in ninhydrin reagents for amino acid analysis; it has, however, properties that are disadvantageous in a reagent for everyday employment. The solvent is toxic and it is difficult to keep the ether peroxide-free. A continuing effort to arrive at a chemically preferable and relatively nontoxic substitute for methyl Cellosolve has led to experiments with dimethyl sulfoxide, which proves to be a better solvent for the reduced form of ninhydrin (hydrindantin) than is methyl Cellosolve. Dimethyl sulfoxide can replace the latter, volume for volume, in a ninhydrin reagent mixture that gives equal performance and has improved stability. The result is a ninhydrin-hydrindantin solution in 75% dimethyl sulfoxide-25% 4 M lithium acetate buffer at pH 5.2. This type of mixture, with appropriate hydrindantin concentrations, is recommended to replace methyl Cellosolve-containing reagents in the quantitative determination of amino acids by automatic analyzers and by the manual ninhydrin method.",
"title": ""
},
{
"docid": "11828571b57966958bd364947f41ad40",
"text": "A smart city is developed, deployed and maintained with the help of Internet of Things (IoT). The smart cities have become an emerging phenomena with rapid urban growth and boost in the field of information technology. However, the function and operation of a smart city is subject to the pivotal development of security architectures. The contribution made in this paper is twofold. Firstly, it aims to provide a detailed, categorized and comprehensive overview of the research on security problems and their existing solutions for smart cities. The categorization is based on several factors such as governance, socioeconomic and technological factors. This classification provides an easy and concise view of the security threats, vulnerabilities and available solutions for the respective technologies areas that are proposed over the period 2010-2015. Secondly, an IoT testbed for smart cities architecture, i.e., SmartSantander is also analyzed with respect to security threats and vulnerabilities to smart cities. The existing best practices regarding smart city security are discussed and analyzed with respect to their performance, which could be used by different stakeholders of the smart cities.",
"title": ""
},
{
"docid": "02c8093183af96808a71b93ee3103996",
"text": "The medical field stands to see significant benefits from the recent advances in deep learning. Knowing the uncertainty in the decision made by any machine learning algorithm is of utmost importance for medical practitioners. This study demonstrates the utility of using Bayesian LSTMs for classification of medical time series. Four medical time series datasets are used to show the accuracy improvement Bayesian LSTMs provide over standard LSTMs. Moreover, we show cherry-picked examples of confident and uncertain classifications of the medical time series. With simple modifications of the common practice for deep learning, significant improvements can be made for the medical practitioner and patient.",
"title": ""
},
{
"docid": "812687a5291d786ecda102adda03700c",
"text": "The overall goal is to show that conceptual spaces are more promising than other ways of modelling the semantics of natural language. In particular, I will show how they can be used to model actions and events. I will also outline how conceptual spaces provide a cognitive grounding for word classes, including nouns, adjectives, prepositions and verbs.",
"title": ""
},
{
"docid": "e573d85271e3f3cc54b774de8a5c6dd9",
"text": "This paper explores the use of a learned classifier for post-OCR text correction. Experiments with the Arabic language show that this approach, which integrates a weighted confusion matrix and a shallow language model, improves the vast majority of segmentation and recognition errors, the most frequent types of error on our dataset.",
"title": ""
},
{
"docid": "e87c93e13f94191450216e308215ff38",
"text": "Fair scheduling of delay and rate-sensitive packet flows over a wireless channel is not addressed effectively by most contemporary wireline fair scheduling algorithms because of two unique characteristics of wireless media: (a) bursty channel errors, and (b) location-dependent channel capacity and errors. Besides, in packet cellular networks, the base station typically performs the task of packet scheduling for both downlink and uplink flows in a cell; however a base station has only a limited knowledge of the arrival processes of uplink flows.In this paper, we propose a new model for wireless fair scheduling based on an adaptation of fluid fair queueing to handle location-dependent error bursts. We describe an ideal wireless fair scheduling algorithm which provides a packetized implementation of the fluid model while assuming full knowledge of the current channel conditions. For this algorithm, we derive the worst-case throughput and delay bounds. Finally, we describe a practical wireless scheduling algorithm which approximates the ideal algorithm. Through simulations, we show that the algorithm achieves the desirable properties identified in the wireless fluid fair queueing model.",
"title": ""
},
{
"docid": "bb01b5e24d7472ab52079dcb8a65358d",
"text": "There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms.",
"title": ""
}
] |
scidocsrr
|
41e5cf3b4e9ff8becbcee94599f49029
|
Adaptive assist-as-needed controller to improve gait symmetry in robot-assisted gait training
|
[
{
"docid": "ec230707da4dc2085863fffb990e5259",
"text": "We propose a novel method for movement assistance that is based on adaptive oscillators, i.e., mathematical tools that are capable of extracting the high-level features (amplitude, frequency, and offset) of a periodic signal. Such an oscillator acts like a filter on these features, but keeps its output in phase with respect to the input signal. Using a simple inverse model, we predicted the torque produced by human participants during rhythmic flexion extension of the elbow. Feeding back a fraction of this estimated torque to the participant through an elbow exoskeleton, we were able to prove the assistance efficiency through a marked decrease of the biceps and triceps electromyography. Importantly, since the oscillator adapted to the movement imposed by the user, the method flexibly allowed us to change the movement pattern and was still efficient during the nonstationary epochs. This method holds promise for the development of new robot-assisted rehabilitation protocols because it does not require prespecifying a reference trajectory and does not require complex signal sensing or single-user calibration: the only signal that is measured is the position of the augmented joint. In this paper, we further demonstrate that this assistance was very intuitive for the participants who adapted almost instantaneously.",
"title": ""
},
{
"docid": "d42f5fdbcaf8933dc97b377a801ef3e0",
"text": "Bodyweight supported treadmill training has become a prominent gait rehabilitation method in leading rehabilitation centers. This type of locomotor training has many functional benefits but the labor costs are considerable. To reduce therapist effort, several groups have developed large robotic devices for assisting treadmill stepping. A complementary approach that has not been adequately explored is to use powered lower limb orthoses for locomotor training. Recent advances in robotic technology have made lightweight powered orthoses feasible and practical. An advantage to using powered orthoses as rehabilitation aids is they allow practice starting, turning, stopping, and avoiding obstacles during overground walking.",
"title": ""
}
] |
[
{
"docid": "499ad54ed1b02115fd42d2f0972c7abb",
"text": "This paper has been commissioned by the World Bank Group for the \"Social Dimensions of Climate Change\" workshop. Views represented are those of the authors, and do not represent an official position of the World Bank Group or those of the Executive Directors of the World Bank or the overnments they represent. The World Bank does not guarantee the accuracy of data presented in g this paper. *This paper was written under contract 7145451 between PRIO and the World Bank for the program on 'Exploring the Social Dimensions of Climate Change'. The opinions expressed in this document represent the views of the authors and do not necessarily state or reflect those of the World Bank. Bibliography 41 iii Executive Summary Climate change is expected to bring about significant changes in migration patterns throughout the developing world. Increases in the frequency and severity of chronic environmental hazards and sudden onset disasters are projected to alter the typical migration patterns of communities and entire countries. We examine evidence for such claims and roundly conclude that large scale community relocation due to either chronic or sudden onset hazards is and continues to be an unlikely response. We propose an alternate framework through which to examine the likely consequences of increased hazards. It is built upon the five major conclusions of this paper: First, disasters vary considerably in their potential to instigate migration. Moreover, individual, community and national vulnerabilities shape responses as much as disaster effects do. Focussing on how people are vulnerable as a function of political, economic and social forces leads to an in-depth understanding of post-disaster human security. Second, individuals and communities in the developing world incorporate environmental risk into their livelihoods. Their ability to do so effectively is contingent upon their available assets. 
Diversifying income streams is the predominant avenue through which people mitigate increased hazards from climate changes. Labour migration to rural and urban areas is a common component of diversified local economies. In lesser developed countries, labour migration is typically internal, temporary and circular. Third, during periods of chronic environmental degradation, such as increased soil salinization or land degradation, the most common responses by individuals and communities is to intensify labour migration patterns. By doing so, families increase remittances and lessen immediate burdens to provide. Fourth, with the onset of a sudden disaster or the continued presence of a chronic disaster (i.e. drought or famine), communities engage in …",
"title": ""
},
{
"docid": "41a0b9797c556368f84e2a05b80645f3",
"text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.",
"title": ""
},
{
"docid": "7caf6388da49eafe48ce70b205c6223d",
"text": "Competitive Computer Games, such as StarCraft II, remain a largely unexplored and active application of Machine Learning, Artificial Intelligence, and Computer Vision. These games are highly complex as they typically 1) involve incomplete information, 2) include multiple strategies and elements that usually happen concurrently, and 3) run in real-time. For this project, we dive into a minigame for StarCraft II that involves many engagement skills such as focus fire, splitting, and kiting to win battles. This paper goes into the details of implementing an algorithm using behavioral cloning, a subset of imitation learning, to tackle the problem. Human expert replay data is used to train different systems that are evaluated on the minigame. Supervised learning, Convolutional Neural Networks, and Combined Loss Functions are all used in this project. While we have created an agent that shows some basic understanding of the game, the strategies performed are rather primitive. Nevertheless, this project establishes a useful framework that can be used for future expansion. (This project was completed in tandem with a related CS221 project.)",
"title": ""
},
{
"docid": "90c3543eca7a689188725e610e106ce9",
"text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. 
This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. 
END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.",
"title": ""
},
{
"docid": "b3dcbd8a41e42ae6e748b07c18dbe511",
"text": "There is inconclusive evidence whether practicing tasks with computer agents improves people’s performance on these tasks. This paper studies this question empirically using extensive experiments involving bilateral negotiation and three-player coordination tasks played by hundreds of human subjects. We used different training methods for subjects, including practice interactions with other human participants, interacting with agents from the literature, and asking participants to design an automated agent to serve as their proxy in the task. Following training, we compared the performance of subjects when playing state-of-the-art agents from the literature. The results revealed that in the negotiation settings, in most cases, training with computer agents increased people’s performance as compared to interacting with people. In the three-player coordination game, training with computer agents increased people’s performance when matched with the state-of-the-art agent. These results demonstrate the efficacy of using computer agents as tools for improving people’s skills when interacting in strategic settings, saving considerable effort and providing better performance than when interacting with human counterparts.",
"title": ""
},
{
"docid": "956cf3bf67aa60391b7c96162a5013bd",
"text": "Transferring artistic styles onto everyday photographs has become an extremely popular task in both academia and industry. Recently, offline training has replaced online iterative optimization, enabling nearly real-time stylization. When those stylization networks are applied directly to high-resolution images, however, the style of localized regions often appears less similar to the desired artistic style. This is because the transfer process fails to capture small, intricate textures and maintain correct texture scales of the artworks. Here we propose a multimodal convolutional neural network that takes into consideration faithful representations of both color and luminance channels, and performs stylization hierarchically with multiple losses of increasing scales. Compared to state-of-the-art networks, our network can also perform style transfer in nearly real-time by performing much more sophisticated training offline. By properly handling style and texture cues at multiple scales using several modalities, we can transfer not just large-scale, obvious style cues but also subtle, exquisite ones. That is, our scheme can generate results that are visually pleasing and more similar to multiple desired artistic styles with color and texture cues at multiple scales.",
"title": ""
},
{
"docid": "903a5b7fb82d3d46b02e720b2db9c982",
"text": "A heuristic recursive algorithm for the two-dimensional rectangular strip packing problem is presented. It is based on a recursive structure combined with branch-and-bound techniques. Several lengths are tried to determine the minimal plate length to hold all the items. Initially the plate is taken as a block. For the current block considered, the algorithm selects an item, puts it at the bottom-left corner of the block, and divides the unoccupied region into two smaller blocks with an orthogonal cut. The dividing cut is vertical if the block width is equal to the plate width; otherwise it is horizontal. Both lower and upper bounds are used to prune unpromising branches. The computational results on a class of benchmark problems indicate that the algorithm performs better than several recently published algorithms. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2c04fd272c90a8c0a74a16980fcb5b03",
"text": "We propose a multimodal, decomposable model for articulated human pose estimation in monocular images. A typical approach to this problem is to use a linear structured model, which struggles to capture the wide range of appearance present in realistic, unconstrained images. In this paper, we instead propose a model of human pose that explicitly captures a variety of pose modes. Unlike other multimodal models, our approach includes both global and local pose cues and uses a convex objective and joint training for mode selection and pose estimation. We also employ a cascaded mode selection step which controls the trade-off between speed and accuracy, yielding a 5x speedup in inference and learning. Our model outperforms state-of-the-art approaches across the accuracy-speed trade-off curve for several pose datasets. This includes our newly-collected dataset of people in movies, FLIC, which contains an order of magnitude more labeled data for training and testing than existing datasets.",
"title": ""
},
{
"docid": "3bebd1c272b1cba24f6aeeabaa5c54d2",
"text": "Cloacal anomalies occur when failure of the urogenital septum to separate the cloacal membrane results in the urethra, vagina, rectum and anus opening into a single common channel. The reported incidence is 1:50,000 live births. Short-term paediatric outcomes of surgery are well reported and survival into adulthood is now usual, but long-term outcome data are less comprehensive. Chronic renal failure is reported to occur in 50 % of patients with cloacal anomalies, and 26–72 % (dependant on the length of the common channel) of patients experience urinary incontinence in adult life. Defaecation is normal in 53 % of patients, with some managed by methods other than surgery, including medication, washouts, stoma and antegrade continent enema. Gynaecological anomalies are common and can necessitate reconstructive surgery at adolescence for menstrual obstruction. No data are currently available on sexual function and little on the quality of life. Pregnancy is extremely rare and highly risky. Patient care should be provided by a multidisciplinary team with experience in managing these and other related complex congenital malformations. However, there is an urgent need for a well-planned, collaborative multicentre prospective study on the urological, gastrointestinal and gynaecological aspects of this rare group of complex conditions.",
"title": ""
},
{
"docid": "c3325bcfa1b1a9c9012c50fe0bd11161",
"text": "We consider the problem of identifying authoritative users in Yahoo! Answers. A common approach is to use link analysis techniques in order to provide a ranked list of users based on their degree of authority. A major problem for such an approach is determining how many users should be chosen as authoritative from a ranked list. To address this problem, we propose a method for automatic identification of authoritative actors. In our approach, we propose to model the authority scores of users as a mixture of gamma distributions. The number of components in the mixture is estimated by the Bayesian Information Criterion (BIC) while the parameters of each component are estimated using the Expectation-Maximization (EM) algorithm. This method allows us to automatically discriminate between authoritative and non-authoritative users. The suitability of our proposal is demonstrated in an empirical study using datasets from Yahoo! Answers.",
"title": ""
},
{
"docid": "a57bdfa9c48a76d704258f96874ea700",
"text": "BACKGROUND\nPrevious state-of-the-art systems on Drug Name Recognition (DNR) and Clinical Concept Extraction (CCE) have focused on a combination of text \"feature engineering\" and conventional machine learning algorithms such as conditional random fields and support vector machines. However, developing good features is inherently heavily time-consuming. Conversely, more modern machine learning approaches such as recurrent neural networks (RNNs) have proved capable of automatically learning effective features from either random assignments or automated word \"embeddings\".\n\n\nOBJECTIVES\n(i) To create a highly accurate DNR and CCE system that avoids conventional, time-consuming feature engineering. (ii) To create richer, more specialized word embeddings by using health domain datasets such as MIMIC-III. (iii) To evaluate our systems over three contemporary datasets.\n\n\nMETHODS\nTwo deep learning methods, namely the Bidirectional LSTM and the Bidirectional LSTM-CRF, are evaluated. A CRF model is set as the baseline to compare the deep learning systems to a traditional machine learning approach. The same features are used for all the models.\n\n\nRESULTS\nWe have obtained the best results with the Bidirectional LSTM-CRF model, which has outperformed all previously proposed systems. The specialized embeddings have helped to cover unusual words in DrugBank and MedLine, but not in the i2b2/VA dataset.\n\n\nCONCLUSIONS\nWe present a state-of-the-art system for DNR and CCE. Automated word embeddings has allowed us to avoid costly feature engineering and achieve higher accuracy. Nevertheless, the embeddings need to be retrained over datasets that are adequate for the domain, in order to adequately cover the domain-specific vocabulary.",
"title": ""
},
{
"docid": "e66ae650db7c4c75a88ee6cf1ea8694d",
"text": "Traditional recommender systems minimize prediction error with respect to users' choices. Recent studies have shown that recommender systems have a positive effect on the provider's revenue.\n In this paper we show that by providing a set of recommendations different than the one perceived best according to user acceptance rate, the recommendation system can further increase the business' utility (e.g. revenue), without any significant drop in user satisfaction. Indeed, the recommendation system designer should have in mind both the user, whose taste we need to reveal, and the business, which wants to promote specific content.\n We performed a large body of experiments comparing a commercial state-of-the-art recommendation engine with a modified recommendation list, which takes into account the utility (or revenue) which the business obtains from each suggestion that is accepted by the user. We show that the modified recommendation list is more desirable for the business, as the end result gives the business a higher utility (or revenue). To study possible reduce in satisfaction by providing the user worse suggestions, we asked the users how they perceive the list of recommendation that they received. Differences in user satisfaction between the lists is negligible, and not statistically significant.\n We also uncover a phenomenon where movie consumers prefer watching and even paying for movies that they have already seen in the past than movies that are new to them.",
"title": ""
},
{
"docid": "4c64b652d9135dae74de4f167c61e896",
"text": "An important task in computational statistics and machine learning is to approximate a posterior distribution p(x) with an empirical measure supported on a set of representative points {x_i}_{i=1}^n. This paper focuses on methods where the selection of points is essentially deterministic, with an emphasis on achieving accurate approximation when n is small. To this end, we present Stein Points. The idea is to exploit either a greedy or a conditional gradient method to iteratively minimise a kernel Stein discrepancy between the empirical measure and p(x). Our empirical results demonstrate that Stein Points enable accurate approximation of the posterior at modest computational cost. In addition, theoretical results are provided to establish convergence of the method.",
"title": ""
},
{
"docid": "cea53ea6ff16808a2dbc8680d3ef88ee",
"text": "Applying deep reinforcement learning (RL) on real systems suffers from slow data sampling. We propose an enhanced generative adversarial network (EGAN) to initialize an RL agent in order to achieve faster learning. The EGAN utilizes the relation between states and actions to enhance the quality of data samples generated by a GAN. Pre-training the agent with the EGAN shows a steeper learning curve, with a 20% improvement in training time at the beginning of learning compared to no pre-training, and an improvement of about 5%, with smaller variations, compared to training with a GAN. For real-time systems with sparse and slow data sampling, the EGAN could be used to speed up the early phases of the training process.",
"title": ""
},
{
"docid": "314e10ba42a13a84b40a1b0367bd556e",
"text": "How do users behave in online chatrooms, where they instantaneously read and write posts? We analyzed about 2.5 million posts covering various topics in Internet relay channels, and found that user activity patterns follow known power-law and stretched exponential distributions, indicating that online chat activity is not different from other forms of communication. Analysing the emotional expressions (positive, negative, neutral) of users, we revealed a remarkable persistence both for individual users and channels. That is, despite their anonymity, users tend to follow social norms in repeated interactions in online chats, which results in a specific emotional \"tone\" of the channels. We provide an agent-based model of emotional interaction, which recovers qualitatively both the activity patterns in chatrooms and the emotional persistence of users and channels. While our assumptions about agents' emotional expressions are rooted in psychology, the model allows testing different hypotheses regarding their emotional impact in online communication.",
"title": ""
},
{
"docid": "ba695228c0fbaf91d6db972022095e98",
"text": "This study evaluated the critical period hypothesis for second language (L2) acquisition. The participants were 240 native speakers of Korean who differed according to age of arrival (AOA) in the United States (1 to 23 years), but were all experienced in English (mean length of residence = 15 years). The native Korean participants' pronunciation of English was evaluated by having listeners rate their sentences for overall degree of foreign accent; knowledge of English morphosyntax was evaluated using a 144-item grammaticality judgment test. As AOA increased, the foreign accents grew stronger, and the grammaticality judgment test scores decreased steadily. However, unlike the case for the foreign accent ratings, the effect of AOA on the grammaticality judgment test scores became nonsignificant when variables confounded with AOA were controlled. This suggested that the observed decrease in morphosyntax scores was not the result of passing a maturationally defined critical period. Additional analyses showed that the score for sentences testing knowledge of rule based, generalizable aspects of English morphosyntax varied as a function of how much education the Korean participants had received in the United States. The scores for sentences testing lexically based aspects of English morphosyntax, on the other hand, depended on how much the Koreans used English. © 1999 Academic Press",
"title": ""
},
{
"docid": "3a897419e218dc20e71a596cbe4c9c58",
"text": "This paper is the first of a two-part series analyzing human grasping behavior during a wide range of unstructured tasks. The results help clarify overall characteristics of human hand to inform many domains, such as the design of robotic manipulators, targeting rehabilitation toward important hand functionality, and designing haptic devices for use by the hand. It investigates the properties of objects grasped by two housekeepers and two machinists during the course of almost 10,000 grasp instances and correlates the grasp types used to the properties of the object. We establish an object classification that assigns each object properties from a set of seven classes, including mass, shape and size of the grasp location, grasped dimension, rigidity, and roundness. The results showed that 55 percent of grasped objects had at least one dimension larger than 15 cm, suggesting that more than half of objects cannot physically be grasped using their largest axis. Ninety-two percent of objects had a mass of 500 g or less, implying that a high payload capacity may be unnecessary to accomplish a large subset of human grasping behavior. In terms of grasps, 96 percent of grasp locations were 7 cm or less in width, which can help to define requirements for hand rehabilitation and defines a reasonable grasp aperture size for a robotic hand. Subjects grasped the smallest overall major dimension of the object in 94 percent of the instances. This suggests that grasping the smallest axis of an object could be a reliable default behavior to implement in grasp planners.",
"title": ""
},
{
"docid": "224ec7b58d17f4ffb9753ac85bf29456",
"text": "This paper presents Venus, a service for securing user interaction with untrusted cloud storage. Specifically, Venus guarantees integrity and consistency for applications accessing a key-based object store service, without requiring trusted components or changes to the storage provider. Venus completes all operations optimistically, guaranteeing data integrity. It then verifies operation consistency and notifies the application. Whenever either integrity or consistency is violated, Venus alerts the application. We implemented Venus and evaluated it with Amazon S3 commodity storage service. The evaluation shows that it adds no noticeable overhead to storage operations.",
"title": ""
},
{
"docid": "94a6d693d3b3b9273335ef35a61d9f2f",
"text": "Twitter is one of the most popular social platforms for online users to share trendy information and views on any event. Twitter reports an event faster than any other medium and contains enormous amounts of information and views regarding an event. Consequently, Twitter topic summarization is one of the most convenient ways to get an instant gist of any event. However, the information shared on Twitter is often full of nonstandard abbreviations, acronyms, out-of-vocabulary (OOV) words and grammatical mistakes, which create challenges in finding reliable and useful information related to any event. Undoubtedly, Twitter event summarization is a challenging task where traditional text summarization methods do not work well. In the last decade, various research works introduced different approaches for automatic Twitter topic summarization. The main aim of this survey work is to provide a broad overview of promising summarization approaches on a Twitter topic. We also focus on automatic evaluation of summarization techniques by surveying recent evaluation methodologies. At the end of the survey, we emphasize both current and future research challenges in this domain through an in-depth analysis of the most recent summarization approaches.",
"title": ""
}
] |
scidocsrr
|
f11c3d4c30f5c3fc47836c033ce8ea87
|
Reconfigurable circularly polarized antenna for short-range communication systems
|
[
{
"docid": "dcafec84cfcfad2c9c679e43eb87949a",
"text": "A novel design of a microstrip patch antenna with switchable slots (PASS) is proposed to achieve circular polarization diversity. Two orthogonal slots are incorporated into the patch and two pin diodes are utilized to switch the slots on and off. By turning the diodes on or off, this antenna can radiate with either right hand circular polarization (RHCP) or left hand circular polarization (LHCP) using the same feeding probe. Experimental results validate this concept. This design demonstrates useful features for wireless communication applications and future planetary missions.",
"title": ""
}
] |
[
{
"docid": "88f43c85c32254a5c2859e983adf1c43",
"text": "This study observed naturally occurring emergent leadership behavior in distributed virtual teams. The goal of the study was to understand how leadership behaviors emerge and are distributed in these kinds of teams. Archived team interaction captured during the course of a virtual collaboration exercise was analyzed using an a priori content analytic scheme derived from behaviorally-based leadership theory to capture behavior associated with leadership in virtual environments. The findings lend support to the notion that behaviorally-based leadership theory can provide insights into emergent leadership in virtual environments. This study also provides additional insights into the patterns of leadership that emerge in virtual environments and relationship to leadership behaviors.",
"title": ""
},
{
"docid": "46ddd7d456553927f8522802f7fb4cc2",
"text": "An effective supplier selection process is very important to the success of any manufacturing organization. The main objective of the supplier selection process is to reduce purchase risk, maximize overall value to the purchaser, and develop closeness and long-term relationships between buyers and suppliers in today's competitive industrial scenario. The literature on supplier selection criteria and methods is full of various analytical and heuristic approaches. Some researchers have developed hybrid models by combining more than one type of selection method. It is felt that supplier selection criteria and methods are still a critical issue for the manufacturing industries; therefore, in the present paper the literature has been thoroughly reviewed and critically analyzed to address the issue. Keywords—Supplier selection, AHP, ANP, TOPSIS, Mathematical Programming.",
"title": ""
},
{
"docid": "4d964a5cfd5b21c6196a31f4b204361d",
"text": "Edge detection is a fundamental tool in the field of image processing. An edge indicates a sudden change in the intensity level of image pixels. By detecting edges in the image, one can preserve its features and eliminate useless information. In recent years, especially in the field of Computer Vision, edge detection has emerged as a key technique for image processing. There are various gradient based edge detection algorithms such as Roberts, Prewitt, Sobel, and Canny, which can be used for this purpose. This paper reviews all these gradient based edge detection techniques and provides a comparative analysis. MATLAB/Simulink is used as a simulation tool. The system is designed by configuring the ISE Design Suite with MATLAB. Hardware Description Language (HDL) is generated using Xilinx System Generator. HDL code is synthesized and implemented using a Field Programmable Gate Array (FPGA).",
"title": ""
},
{
"docid": "512c0d3d9ad6d6a4d139a5e7e0bd3a4e",
"text": "The epidermal growth factor receptor (EGFR) contributes to the pathogenesis of head&neck squamous cell carcinoma (HNSCC). However, only a subset of HNSCC patients benefit from anti-EGFR targeted therapy. By performing an unbiased proteomics screen, we found that the calcium-activated chloride channel ANO1 interacts with EGFR and facilitates EGFR-signaling in HNSCC. Using structural mutants of EGFR and ANO1 we identified the trans/juxtamembrane domain of EGFR to be critical for the interaction with ANO1. Our results show that ANO1 and EGFR form a functional complex that jointly regulates HNSCC cell proliferation. Expression of ANO1 affected EGFR stability, while EGFR-signaling elevated ANO1 protein levels, establishing a functional and regulatory link between ANO1 and EGFR. Co-inhibition of EGFR and ANO1 had an additive effect on HNSCC cell proliferation, suggesting that co-targeting of ANO1 and EGFR could enhance the clinical potential of EGFR-targeted therapy in HNSCC and might circumvent the development of resistance to single agent therapy. HNSCC cell lines with amplification and high expression of ANO1 showed enhanced sensitivity to Gefitinib, suggesting ANO1 overexpression as a predictive marker for the response to EGFR-targeting agents in HNSCC therapy. Taken together, our results introduce ANO1 as a promising target and/or biomarker for EGFR-directed therapy in HNSCC.",
"title": ""
},
{
"docid": "8b51bcd5d36d9e15419d09b5fc8995b5",
"text": "In this technical report, we study estimator inconsistency in Vision-aided Inertial Navigation Systems (VINS) from a standpoint of system observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, resulting in smaller uncertainties, larger estimation errors, and divergence. We support our claim with an analytical study of the Observability Gramian, along with its right nullspace, which constitutes the basis of the unobservable directions of the system. We develop an Observability-Constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information gain and reducing inconsistency. Our analysis, along with the proposed method for reducing inconsistency, are extensively validated with simulation trials and real-world experimentation.",
"title": ""
},
{
"docid": "3fd747a983ef1a0e5eff117b8765d4b3",
"text": "We study centrality in urban street patterns of different world cities represented as networks in geographical space. The results indicate that a spatial analysis based on a set of four centrality indices allows an extended visualization and characterization of the city structure. A hierarchical clustering analysis based on the distributions of centrality has a certain capacity to distinguish different classes of cities. In particular, self-organized cities exhibit scale-free properties similar to those found in nonspatial networks, while planned cities do not.",
"title": ""
},
{
"docid": "e17558c5a39f3e231aa6d09c8e2124fc",
"text": "Surveys of child sexual abuse in large nonclinical populations of adults have been conducted in at least 19 countries in addition to the United States and Canada, including 10 national probability samples. All studies have found rates in line with comparable North American research, ranging from 7% to 36% for women and 3% to 29% for men. Most studies found females to be abused at 1 1/2 to 3 times the rate for males. Few comparisons among countries are possible because of methodological and definitional differences. However, they clearly confirm sexual abuse to be an international problem.",
"title": ""
},
{
"docid": "f27ad6bf5c65fdea1a98b118b1a43c85",
"text": "Localization is one of the problems that often appears in the world of robotics. Monte Carlo Localization (MCL) is one of the popular algorithms for localization because it is easy to implement for Global Localization problems. This algorithm uses particles to represent the robot position. MCL can be simulated in the Robot Operating System (ROS) using the Pioneer3-dx robot. In this paper we discuss this algorithm on ROS, analyzing the influence of the number of particles used for localization of the actual robot position.",
"title": ""
},
{
"docid": "8edc51b371d7551f9f7e69149cd4ece0",
"text": "Though many previous studies have proved the importance of trust from various perspectives, research on online consumers' trust is fragmented in nature and still needs more attention from academics. Lack of consumer trust in online systems is a critical impediment to the success of e-Commerce. Therefore, it is important to explore the critical factors that affect the formation of users' trust in online environments. The main objective of this paper is to analyze the effects of various antecedents of online trust and to predict users' intention to engage in online transactions based on their trust in the information systems. This study is conducted among Asian online consumers, and the results are later compared with those from non-Asian regions. Another objective of this paper is to integrate the DeLone and McLean model of IS success and the Technology Acceptance Model (TAM) for measuring the significance of online trust in e-Commerce adoption. The results of this study show that perceived security, perceived privacy, vendor familiarity, system quality and service quality are significant antecedents of online trust in a B2C e-Commerce context.",
"title": ""
},
{
"docid": "52796981853b05fb29dcfd223a732866",
"text": "OBJECTIVE\nTo investigate whether intrapericardial urokinase irrigation along with pericardiocentesis could prevent pericardial constriction in patients with infectious exudative pericarditis.\n\n\nMETHODS\nA total of 94 patients diagnosed as infectious exudative pericarditis (34 patients with purulent pericarditis and 60 with tuberculous pericarditis, the disease courses of all patients were less than 1 month), 44 males and 50 females, aged from 9 to 66 years (mean 45.4 +/- 14.7 years), were consecutively recruited from 1993 to 2002. All individuals were randomly given either intrapericardial urokinase along with conventional treatment in study group, or conventional treatment alone (including pericardiocentesis and drainage) in control group. The dosage of urokinase ranged from 200000 to 600000 U (mean 320000 +/- 70000 U). The immediate effects were detected by pericardiography with sterilized air and diatrizoate meglumine as contrast media. The long-term investigation depended on the telephonic survey and echocardiographic examination. The duration of following-up ranged from 8 to 120 months (mean 56.8 +/- 29.0 months).\n\n\nRESULTS\nPercutaneous intrapericardial urokinase irrigation promoted complete drainage of pericardial effusion, significantly reduced the thickness of pericardium (from 3.1 +/- 1.6 mm to 1.6 +/- 1.0 mm in study group, P < 0.001; from 3.4 +/- 1.6 mm to 3.2 +/- 1.8 mm in control group, P > 0.05, respectively), and alleviated the adhesion. Intrapericardial bleeding related to fibrinolysis was found in 6 of 47 patients with non-blood pericardial effusion and no systemic bleeding and severe puncture-related complication was observed. In follow-up, there was no cardiac death, and pericardial constriction events were observed in 9 (19.1%) of study group and 27 (57.4%) of control group. Cox analysis illustrated that urokinase could significantly reduce the occurrence of pericardial constriction (relative hazard coefficient = 0.185, P < 0.0001).\n\n\nCONCLUSION\nThe early employment of intrapericardial fibrinolysis with urokinase and pericardiocentesis appears to be safe and effective in preventing the development of pericardial constriction in patients with infectious exudative pericarditis.",
"title": ""
},
{
"docid": "f833db8a1e61634f1ff20be721bd7c64",
"text": "Low-rank modeling has many important applications in computer vision and machine learning. While the matrix rank is often approximated by the convex nuclear norm, the use of nonconvex low-rank regularizers has demonstrated better empirical performance. However, the resulting optimization problem is much more challenging. Recent state-of-the-art requires an expensive full SVD in each iteration. In this paper, we show that for many commonly-used nonconvex low-rank regularizers, the singular values obtained from the proximal operator can be automatically thresholded. This allows the proximal operator to be efficiently approximated by the power method. We then develop a fast proximal algorithm and its accelerated variant with an inexact proximal step. It can be guaranteed that the squared distance between consecutive iterates converges at a rate of O(1/T), where T is the number of iterations. Furthermore, we show the proposed algorithm can be parallelized, and the resultant algorithm achieves nearly linear speedup w.r.t. the number of threads. Extensive experiments are performed on matrix completion and robust principal component analysis. Significant speedup over the state-of-the-art is observed.",
"title": ""
},
{
"docid": "42d3adba03f835f120404cfe7571a532",
"text": "This study investigated the psychometric properties of the Arabic version of the SMAS. SMAS is a variant of IAT customized to measure addiction to social media instead of the Internet as a whole. Using a self-report instrument on a cross-sectional sample of undergraduate students, the results revealed the following. First, the exploratory factor analysis showed that a three-factor model fits the data well. Second, concurrent validity analysis showed the SMAS to be a valid measure of social media addiction. However, further studies and data should verify the hypothesized model. Finally, this study showed that the Arabic version of the SMAS is a valid and reliable instrument for use in measuring social media addiction in the Arab world.",
"title": ""
},
{
"docid": "149073f577d0e1fb380ae395ff1ca0c5",
"text": "A complete kinematic model of the 5-DOF Mitsubishi RV-M1 manipulator is presented in this paper. The forward kinematic model is based on the Modified Denavit-Hartenberg notation, and the inverse one is derived in closed form by fixing the orientation of the tool. A graphical interface is developed using MATHEMATICA software to illustrate the forward and inverse kinematics, allowing a student or researcher to gain hands-on experience with a virtual graphical model that fully describes both the robot's geometry and the robot's motion in its workspace before tackling any real task.",
"title": ""
},
{
"docid": "3db1c2e951f464238b887b4ceda470a4",
"text": "Assuming that migration threat is multi-dimensional, this article seeks to investigate how various types of threats associated with immigration affect attitudes towards immigration and civil liberties. Through experimentation, the study unpacks the ‘securitization of migration’ discourse by disaggregating the nature of immigration threat, and its impact on policy positions and ideological patterns at the individual level. Based on framing and attitudinal analysis, we argue that physical security in distinction from cultural insecurity is enough to generate important ideological variations stemming from strategic input (such as framing and issue-linkage). We expect then that as immigration shifts from a cultural to a physical threat, immigration issues may become more politically salient but less politicized and subject to consensus. Interestingly, however, the findings reveal that the effects of threat framing are not ubiquitous, and may be conditional upon ideology. Liberals were much more susceptible to the frames than were conservatives. Potential explanations for the ideological effects of framing, as well as their implications, are explored.",
"title": ""
},
{
"docid": "0dc0565b364defdd1c23c4367a4bb87e",
"text": "A procedure involving reverse transcription followed by the polymerase chain reaction (RT-PCR) using a single primer pair was developed for the detection of five tobamovirus species which are related serologically. Either with a subsequent restriction enzyme analysis (RT-PCR-RFLP) or with an RT-PCR using species specific primers the five species can be differentiated. To differentiate those species by serological means is time consuming and might give ambiguous results. With the example of the isolate OHIO V, which is known to break the resistance in a selection of Lycopersicon peruvianum, the suitability of the RT-PCR-RFLP technique to detect variability at the species level was shown. In sequence analysis 47 codons of the coat protein gene of this isolate were found to be mutated compared to a tobacco mosaic virus (TMV) coat protein gene sequence. Forty of these mutations were silent and did not change the amino acid sequence. Both procedures are suitable to detect mixed infections. In addition, the RT-PCR-RFLP gives information on the relative amounts of the viruses that are present in a doubly infected plant. The RT-PCR-RFLP using general primers as well as the RT-PCR using species specific primers were proven to be useful for the diagnosis and control of the disease and will be helpful for resistance breeding, epidemiological investigations and plant virus collections.",
"title": ""
},
{
"docid": "56e520f27f7411979e901318c5979fcf",
"text": "With the development of intelligent devices and social media, the data bulk on the Internet has grown at high speed. As an important aspect of image processing, object detection has become one of the most popular international research fields. In recent years, the powerful feature-learning and transfer-learning ability of the Convolutional Neural Network (CNN) has received growing interest within the computer vision community, thus making a series of important breakthroughs in object detection. So it is significant to survey how to apply CNN to object detection for better performance. First, the paper introduces the basic concept and architecture of CNN. Secondly, the methods for solving the existing problems of conventional object detection are surveyed, mainly analyzing detection algorithms based on region proposals and based on regression. Thirdly, it mentions some means that improve the performance of object detection. Then the paper introduces some public datasets for object detection and the concept of evaluation criteria. Finally, it combs the current research achievements and thoughts on object detection, summarizing the important progress and discussing the future directions.",
"title": ""
},
{
"docid": "a526cf2212f8233be7c8e20c9619ec31",
"text": "Patients with rheumatoid arthritis can be divided into two major subsets characterized by the presence versus absence of antibodies to citrullinated protein antigens (ACPAs) and of rheumatoid factor (RF). The antibody-positive subset of disease, also known as seropositive rheumatoid arthritis, constitutes approximately two-thirds of all cases of rheumatoid arthritis and generally has a more severe disease course. ACPAs and RF are often present in the blood long before any signs of joint inflammation, which suggests that the triggering of autoimmunity may occur at sites other than the joints (for example, in the lung). This Review summarizes recent progress in our understanding of this gradual disease development in seropositive patients. We also emphasize the implications of this new understanding for the development of preventive and therapeutic strategies. Similar temporal and spatial separation of immune triggering and clinical manifestations, with novel opportunities for early intervention, may also occur in other immune-mediated diseases.",
"title": ""
},
{
"docid": "933623750ec9ebbbb79a5fea3b03fae1",
"text": "It is natural to ask if one can perform a computational task considerably faster by using a different architecture (i.e., a different computational model). The answer to this question is a resounding yes. A cute example is the Macaroni sort. We are given a set S = {s_1, ..., s_n} of n real numbers in the range (say) [1, 2]. We get a lot of Macaroni (these are longish and very narrow tubes of pasta), and cut the ith piece to be of length s_i, for i = 1, ..., n. Next, take all these pieces of pasta in your hand, make them stand up vertically, with their bottom end lying on a horizontal surface. Next, lower your hand till it hits the first (i.e., tallest) piece of pasta. Take it out, measure its height, write down its number, and continue in this fashion till you have extracted all the pieces of pasta. Clearly, this is a sorting algorithm that works in linear time. But we know that sorting takes Ω(n log n) time. Thus, this algorithm is much faster than the standard sorting algorithms. This faster algorithm was achieved by changing the computation model. We allowed new \"strange\" operations (cutting a piece of pasta into a certain length, picking the longest one in constant time, and measuring the length of a pasta piece in constant time). Using these operations we can sort in linear time. If this was all we could do with this approach, that would have only been a curiosity. However, interestingly enough, there are natural computation models which are considerably stronger than the standard model of computation. Indeed, consider the task of computing the output of the circuit on the right (here, the input is boolean values on the input wires on the left, and the output is the single output on the right). Clearly, this can be solved by ordering the gates in the \"right\" order (this can be done by topological sorting), and then computing the value of the gates one by one in this order.",
"title": ""
},
{
"docid": "8a7ea746acbfd004d03d4918953d283a",
"text": "Sentiment analysis is an important current research area. This paper combines rule-based classification, supervised learning and machine learning into a new combined method. This method is tested on movie reviews, product reviews and MySpace comments. The results show that a hybrid classification can improve the classification effectiveness in terms of micro- and macro-averaged F1. F1 is a measure that takes both the precision and recall of a classifier's effectiveness into account. In addition, we propose a semi-automatic, complementary approach in which each classifier can contribute to other classifiers to achieve a good level of effectiveness.",
"title": ""
}
] |
scidocsrr
|
8c1163ec955b50f7e1e0b02c08b57b5c
|
Building an Argument Search Engine for the Web
|
[
{
"docid": "fdc01b87195272f8dec8ed32dfe8e664",
"text": "Future search engines are expected to deliver pro and con arguments in response to queries on controversial topics. While argument mining is now in the focus of research, the question of how to retrieve the relevant arguments remains open. This paper proposes a radical model to assess relevance objectively at web scale: the relevance of an argument’s conclusion is decided by what other arguments reuse it as a premise. We build an argument graph for this model that we analyze with a recursive weighting scheme, adapting key ideas of PageRank. In experiments on a large ground-truth argument graph, the resulting relevance scores correlate with human average judgments. We outline what natural language challenges must be faced at web scale in order to stepwise bring argument relevance to web search engines.",
"title": ""
}
] |
[
{
"docid": "123b35d403447a29eaf509fa707eddaa",
"text": "Technology is a vital criterion for boosting the quality of life for everyone from newborns to senior citizens. Thus, any technology that enhances society's quality of life has a value that is priceless. Nowadays, Smart Wearable Technology (SWT) innovation has been reaching different sectors and is gaining momentum to be implemented in everyday objects. The successful adoption of SWTs by consumers will allow the production of new generations of innovative and high value-added products. The study attempts to predict the dynamics that play a role in the process through which consumers accept wearable technology. The research builds an integrated model based on UTAUT2 and some external variables in order to investigate the direct and moderating effects of human expectation and behaviour on the awareness and adoption of smart products such as smart watches and fitness wristbands. A survey will be used to test our model on consumers. In addition, our study focuses on the different rates of adoption and expectation differences between early adopters and the early majority, in order to explore those differences and propose techniques to successfully cross the chasm between these two groups according to \"Chasm theory\". For this aim, and due to the lack of prior research, semi-structured focus groups will be used to obtain qualitative data for our research. Originality/value: To date, little research exists addressing the adoption of smart wearable technologies. Therefore, the examination of consumer behaviour towards SWTs may provide orientations into the future that are useful for managers who can monitor how consumers make choices, how manufacturers should design successful market strategies, and how regulators can proscribe manipulative behaviour in this industry.",
"title": ""
},
{
"docid": "5fce5ef4a25f242d60aff766e1d7ba1c",
"text": "Mental toughness (MT) is an umbrella term that entails positive psychological resources, which are crucial across a wide range of achievement contexts and in the domain of mental health. We systematically review empirical studies that explored the associations between the concept of MT and individual differences in learning, educational and work performance, psychological well-being, personality, and other psychological attributes. Studies that explored the genetic and environmental contributions to individual differences in MT are also reviewed. The findings suggest that MT is associated with various positive psychological traits, more efficient coping strategies and positive outcomes in education and mental health. Approximately 50% of the variation in MT can be accounted for by genetic factors. Furthermore, the associations between MT and psychological traits can be explained mainly by either common genetic or non-shared environmental factors. Taken together, our findings suggest a 'mental toughness advantage' with possible implications for developing interventions to facilitate achievement in a variety of settings.",
"title": ""
},
{
"docid": "851fd19525da9dc5a46e3146948109df",
"text": "As computation becomes increasingly limited by data movement and energy consumption, exploiting locality throughout the memory hierarchy becomes critical for maintaining the performance scaling that many have come to expect from the computing industry. Moving computation closer to main memory presents an opportunity to reduce the overheads associated with data movement. We explore the potential of using 3D die stacking to move memory-intensive computations closer to memory. This approach to processing-in-memory addresses some drawbacks of prior research on in-memory computing and appears commercially viable in the foreseeable future. We show promising early results from this approach and identify areas that are in need of research to unlock its full potential.",
"title": ""
},
{
"docid": "85c124fd317dc7c2e5999259d26aa1db",
    "text": "This paper presents a method for extracting rotation-invariant features from images of handwriting samples that can be used to perform writer identification. The proposed features are based on the Hinge feature [1], but incorporate the derivative between several points along the ink contours. Finally, we concatenate the proposed features into one feature vector to characterize the writing styles of the given handwritten text. The proposed method has been evaluated on the Firemaker and IAM datasets in writer identification, showing promising performance gains.",
"title": ""
},
{
"docid": "893c7a1694596d0c8d58b819500ff9f9",
    "text": "A recently introduced deep neural network (DNN) has achieved some unprecedented gains in many challenging automatic speech recognition (ASR) tasks. In this paper, a deep neural network hidden Markov model (DNN-HMM) acoustic model is introduced to phonotactic language recognition; it outperforms artificial neural network hidden Markov model (ANN-HMM) and Gaussian mixture model hidden Markov model (GMM-HMM) acoustic models. Experimental results have confirmed that a phonotactic language recognition system using the DNN-HMM acoustic model yields relative equal error rate reductions of 28.42%, 14.06%, 18.70% and 12.55%, 7.20%, 2.47% for the 30s, 10s and 3s conditions compared with the ANN-HMM and GMM-HMM approaches, respectively, on National Institute of Standards and Technology language recognition evaluation (NIST LRE) 2009 tasks.",
"title": ""
},
{
"docid": "37a838344c441bcb8bc1c1f233b2f0e7",
"text": "Cloud computing platforms enable applications to offer low latency access to user data by offering storage services in several geographically distributed data centers. In this paper, we identify the high tail latency problem in cloud CDN via analyzing a large-scale dataset collected from 783,944 users in a major cloud CDN. We find that the data downloading latency in cloud CDN is highly variable, which may significantly degrade the user experience of applications. To address the problem, we present TailCutter, a workload scheduling mechanism that aims at optimizing the tail latency while meeting the cost constraint given by application providers. We further design the Maximum Tail Minimization Algorithm (MTMA) working in TailCutter mechanism to optimally solve the Tail Latency Minimization (TLM) problem in polynomial time. We implement TailCutter across data centers of Amazon S3 and Microsoft Azure. Our extensive evaluation using large-scale real world data traces shows that TailCutter can reduce up to 68% 99th percentile user-perceived latency in comparison with alternative solutions under cost constraints.",
"title": ""
},
{
"docid": "36357f48cbc3ed4679c679dcb77bdd81",
"text": "In this paper, we review research and applications in the area of mediated or remote social touch. Whereas current communication media rely predominately on vision and hearing, mediated social touch allows people to touch each other over a distance by means of haptic feedback technology. Overall, the reviewed applications have interesting potential, such as the communication of simple ideas (e.g., through Hapticons), establishing a feeling of connectedness between distant lovers, or the recovery from stress. However, the beneficial effects of mediated social touch are usually only assumed and have not yet been submitted to empirical scrutiny. Based on social psychological literature on touch, communication, and the effects of media, we assess the current research and design efforts and propose future directions for the field of mediated social touch.",
"title": ""
},
{
"docid": "f3fdc63904e2bf79df8b6ca30a864fd3",
    "text": "Although the potential benefits of a powered ankle-foot prosthesis have been well documented, no one has successfully developed and verified that such a prosthesis can improve amputee gait compared to a conventional passive-elastic prosthesis. One of the main hurdles that hinder such a development is the challenge of building an ankle-foot prosthesis that matches the size and weight of the intact ankle, but still provides a sufficiently large instantaneous power output and torque to propel an amputee. In this paper, we present a novel, powered ankle-foot prosthesis that overcomes these design challenges. The prosthesis comprises a unidirectional spring, configured in parallel with a force-controllable actuator with series elasticity. With this architecture, the ankle-foot prosthesis matches the size and weight of the human ankle, and is shown to satisfy the restrictive design specifications dictated by normal human ankle walking biomechanics.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "0b06586502303b6796f1f512129b5cbe",
"text": "This paper introduces an extension of collocational analysis that takes into account grammatical structure and is specifically geared to investigating the interaction of lexemes and the grammatical constructions associated with them. The method is framed in a construction-based approach to language, i.e. it assumes that grammar consists of signs (form-meaning pairs) and is thus not fundamentally different from the lexicon. The method is applied to linguistic expressions at various levels of abstraction (words, semi-fixed phrases, argument structures, tense, aspect and mood). The method has two main applications: first, to increase the adequacy of grammatical description by providing an objective way of identifying the meaning of a grammatical construction and determining the degree to which particular slots in it prefer or are restricted to a particular set of lexemes; second, to provide data for linguistic theory-building.",
"title": ""
},
{
"docid": "9b6a16b84d4aadf582c16a8adb4e4830",
"text": "This paper presents a new in-vehicle real-time vehicle detection strategy which hypothesizes the presence of vehicles in rectangular sub-regions based on the robust classification of features vectors result of a combination of multiple morphological vehicle features. One vector is extracted for each region of the image likely containing vehicles as a multidimensional likelihood measure with respect to a simplified vehicle model. A supervised training phase set the representative vectors of the classes vehicle and non-vehicle, so that the hypothesis is verified or not according to the Mahalanobis distance between the feature vector and the representative vectors. Excellent results have been obtained in several video sequences accurately detecting vehicles with very different aspect-ratio, color, size, etc, while minimizing the number of missing detections and false alarms.",
"title": ""
},
{
"docid": "78ec5db757e26ce5cd1f594839169573",
    "text": "Thailand and an additional Australian study. Synthesis report by Vittorio di Martino (2002), Workplace violence in the health sector. Foreword: Violence at work has become an alarming phenomenon worldwide. The real size of the problem is largely unknown and recent information shows that the current knowledge is only the tip of the iceberg. The enormous cost of violence at work for the individual, the workplace and the community at large is becoming more and more apparent. Although incidents of violence are known to occur in all work environments, some employment sectors are particularly exposed to it. Violence includes both physical and non-physical violence. Violence is defined as being destructive towards another person. It finds its expression in physical assault, homicide, verbal abuse, bullying, sexual harassment and threat. Violence at work is often considered to be just a reflection of the more general and increasing phenomenon of violence in many areas of social life which has to be dealt with at the level of the whole society. Its prevalence has, however, increased at the workplace, traditionally viewed as a violence-free environment. Employers and workers are equally interested in the prevention of violence at the workplace. Society at large has a stake in preventing violence spreading to working life and recognizing the potential of the workplace by removing such obstacles to productivity, development and peace. Violence is common to such an extent among workers who have direct contact with people in distress that it may be considered an inevitable part of the job. This is often the case in the health sector (violence in this sector may constitute almost a quarter of all violence at work). While ambulance staff are reported to be at greatest risk, nurses are three times more likely on average to experience violence in the workplace than other occupational groups. Since the large majority of the health workforce is female, the gender dimension of the problem is very evident. Besides concern about the human right of health workers to have a decent work environment, there is concern about the consequences of violence at work. These have a significant impact on the effectiveness of health systems, particularly in developing countries. The equal access of people to primary health care is endangered if a scarce human resource, the health workers, feel under threat in certain geographical and social environments, in situations of general conflict, in work situations where transport …",
"title": ""
},
{
"docid": "65bc99201599ec17347d3fe0857cd39a",
"text": "Many children strive to attain excellence in sport. However, although talent identification and development programmes have gained popularity in recent decades, there remains a lack of consensus in relation to how talent should be defined or identified and there is no uniformly accepted theoretical framework to guide current practice. The success rates of talent identification and development programmes have rarely been assessed and the validity of the models applied remains highly debated. This article provides an overview of current knowledge in this area with special focus on problems associated with the identification of gifted adolescents. There is a growing agreement that traditional cross-sectional talent identification models are likely to exclude many, especially late maturing, 'promising' children from development programmes due to the dynamic and multidimensional nature of sport talent. A conceptual framework that acknowledges both genetic and environmental influences and considers the dynamic and multidimensional nature of sport talent is presented. The relevance of this model is highlighted and recommendations for future work provided. It is advocated that talent identification and development programmes should be dynamic and interconnected taking into consideration maturity status and the potential to develop rather than to exclude children at an early age. Finally, more representative real-world tasks should be developed and employed in a multidimensional design to increase the efficacy of talent identification and development programmes.",
"title": ""
},
{
"docid": "f1bd4f301583725c492dcea6f1870d76",
"text": "ISSN: 1750-984X (Print) 1750-9858 (Online) Journal homepage: http://www.tandfonline.com/loi/rirs20 20 years later: deliberate practice and the development of expertise in sport Joseph Baker & Bradley Young To cite this article: Joseph Baker & Bradley Young (2014) 20 years later: deliberate practice and the development of expertise in sport, International Review of Sport and Exercise Psychology, 7:1, 135-157, DOI: 10.1080/1750984X.2014.896024 To link to this article: http://dx.doi.org/10.1080/1750984X.2014.896024 Published online: 01 Apr 2014.",
"title": ""
},
{
"docid": "c71ab03cdfd8b6a3c62b18103f449764",
    "text": "BACKGROUND\nHealth worker shortage in rural areas is one of the biggest problems of the health sector in Ghana and many developing countries. This may be due to fewer incentives and support systems available to attract and retain health workers at the rural level. This study explored the willingness of community health officers (CHOs) to accept and hold rural and community job postings in Ghana.\n\n\nMETHODS\nA discrete choice experiment was used to estimate the motivation and incentive preferences of CHOs in Ghana. All CHOs working in three Health and Demographic Surveillance System sites in Ghana, 200 in total, were interviewed between December 2012 and January 2013. Respondents were asked to choose from choice sets of job preferences. Four mixed logit models were used for the estimation. The first model considered (a) only the main effect. The other models included interaction terms for (b) gender, (c) number of children under 5 in the household, and (d) years worked at the same community. Moreover, a choice probability simulation was performed.\n\n\nRESULTS\nMixed logit analyses of the data project a shorter time frame before study leave as the most important motivation for most CHOs (β 2.03; 95 % CI 1.69 to 2.36). This is also confirmed by the largest simulated choice probability (29.1 %). The interaction effect of the number of children was significant for education allowance for children (β 0.58; 95 % CI 0.24 to 0.93), salary increase (β 0.35; 95 % CI 0.03 to 0.67), and housing provision (β 0.16; 95 % CI -0.02 to 0.60). Male CHOs had a high affinity for early opportunity to go on study leave (β 0.78; 95 % CI -0.06 to 1.62). CHOs who had worked at the same place for a long time greatly valued salary increase (β 0.28; 95 % CI 0.09 to 0.47).\n\n\nCONCLUSIONS\nTo reduce health worker shortage in rural settings, policymakers could provide \"needs-specific\" motivational packages. They should include career development opportunities, such as a shorter period of work before study leave, and financial policies in the form of salary increases to recruit and retain them.",
"title": ""
},
{
"docid": "5798d93d03b9ab2b10b5bea7ccbb58ce",
"text": "A wealth of information is available only in web pages, patents, publications etc. Extracting information from such sources is challenging, both due to the typically complex language processing steps required and to the potentially large number of texts that need to be analyzed. Furthermore, integrating extracted data with other sources of knowledge often is mandatory for subsequent analysis. In this demo, we present the AliBaba system for scalable information extraction from biomedical documents. Unlike many other systems, AliBaba performs both entity extraction and relationship extraction and graphically visualizes the resulting network of inter-connected objects. It leverages the PubMed search engine for selection of relevant documents. The technical novelty of AliBaba is twofold: (a) its ability to automatically learn language patterns for relationship extraction without an annotated corpus, and (b) its high performance pattern matching algorithm. We show that a simple yet effective pattern filtering technique improves the runtime of the system drastically without harming its extraction effectiveness. Although AliBaba has been implemented for biomedical texts, its underlying principles should also be applicable in any other domain.",
"title": ""
},
{
"docid": "e5aed574fbe4560a794cf8b77fb84192",
"text": "Warping is one of the basic image processing techniques. Directly applying existing monocular image warping techniques to stereoscopic images is problematic as it often introduces vertical disparities and damages the original disparity distribution. In this paper, we show that these problems can be solved by appropriately warping both the disparity map and the two images of a stereoscopic image. We accordingly develop a technique for extending existing image warping algorithms to stereoscopic images. This technique divides stereoscopic image warping into three steps. Our method first applies the user-specified warping to one of the two images. Our method then computes the target disparity map according to the user specified warping. The target disparity map is optimized to preserve the perceived 3D shape of image content after image warping. Our method finally warps the other image using a spatially-varying warping method guided by the target disparity map. Our experiments show that our technique enables existing warping methods to be effectively applied to stereoscopic images, ranging from parametric global warping to non-parametric spatially-varying warping.",
"title": ""
},
{
"docid": "c1981c3b0ccd26d4c8f02c2aa5e71c7a",
"text": "Functional genomics studies have led to the discovery of a large amount of non-coding RNAs from the human genome; among them are long non-coding RNAs (lncRNAs). Emerging evidence indicates that lncRNAs could have a critical role in the regulation of cellular processes such as cell growth and apoptosis as well as cancer progression and metastasis. As master gene regulators, lncRNAs are capable of forming lncRNA–protein (ribonucleoprotein) complexes to regulate a large number of genes. For example, lincRNA-RoR suppresses p53 in response to DNA damage through interaction with heterogeneous nuclear ribonucleoprotein I (hnRNP I). The present study demonstrates that hnRNP I can also form a functional ribonucleoprotein complex with lncRNA urothelial carcinoma-associated 1 (UCA1) and increase the UCA1 stability. Of interest, the phosphorylated form of hnRNP I, predominantly in the cytoplasm, is responsible for the interaction with UCA1. Moreover, although hnRNP I enhances the translation of p27 (Kip1) through interaction with the 5′-untranslated region (5′-UTR) of p27 mRNAs, the interaction of UCA1 with hnRNP I suppresses the p27 protein level by competitive inhibition. In support of this finding, UCA1 has an oncogenic role in breast cancer both in vitro and in vivo. Finally, we show a negative correlation between p27 and UCA in the breast tumor cancer tissue microarray. Together, our results suggest an important role of UCA1 in breast cancer.",
"title": ""
},
{
"docid": "79811b3cfec543470941e9529dc0ab24",
    "text": "We present a novel method for learning and predicting the affordances of an object based on its physical and visual attributes. Affordance prediction is a key task in autonomous robot learning, as it allows a robot to reason about the actions it can perform in order to accomplish its goals. Previous approaches to affordance prediction have either learned direct mappings from visual features to affordances, or have introduced object categories as an intermediate representation. In this paper, we argue that physical and visual attributes provide a more appropriate mid-level representation for affordance prediction, because they support information-sharing between affordances and objects, resulting in superior generalization performance. In particular, affordances are more likely to be correlated with the attributes of an object than they are with its visual appearance or a linguistically-derived object category. We provide preliminary validation of our method experimentally, and present empirical comparisons to both the direct and category-based approaches of affordance prediction. Our encouraging results suggest the promise of the attribute-based approach to affordance prediction.",
"title": ""
},
{
"docid": "c380f89ac91ce532b9f0250ce487fe5e",
    "text": "Starting in the seventies, face recognition has become one of the most researched topics in computer vision and biometrics. Traditional methods based on hand-crafted features and traditional machine learning techniques have recently been superseded by deep neural networks trained with very large datasets. In this paper we provide a comprehensive and up-to-date literature review of popular face recognition methods including both traditional (geometry-based, holistic, feature-based and hybrid methods) and deep learning methods.",
"title": ""
}
] |
scidocsrr
|
f2c9ea56e9dd7f0f4eb93cfcc7bf50e2
|
Feasibility Investigation of Low Cost Substrate Integrated Waveguide (SIW) Directional Couplers
|
[
{
"docid": "39e332a58625a12ef3e14c1a547a8cad",
    "text": "This paper presents an overview of the recent achievements in the field of substrate integrated waveguide (SIW) technology, with particular emphasis on the modeling strategy and design considerations of millimeter-wave integrated circuits as well as the physical interpretation of the operation principles and loss mechanisms of these structures. The most common numerical methods for modeling both SIW interconnects and circuits are presented. Some considerations and guidelines for designing SIW structures, interconnects and circuits are discussed, along with the physical interpretation of the major issues related to radiation leakage and losses. Examples of SIW circuits and components operating in the microwave and millimeter wave bands are also reported, with numerical and experimental results.",
"title": ""
}
] |
[
{
"docid": "c25ed65511cb0a22301896bbf4ebd84d",
"text": "This paper surveys the field of machine vision from a computer science perspective. It is written to act as an introduction to the field and presents the reader with references to specific implementations. Machine vision is a complex and developing field that can be broken into the three stages: stereo correspondence, scene reconstruction, and object recognition. We present the techniques and general approaches to each of these stages and summarize the future direction of research.",
"title": ""
},
{
"docid": "89b8f3b7efa011065cf28647b9984f4d",
"text": "Due to the abundance of 2D product images from the internet, developing efficient and scalable algorithms to recover the missing depth information is central to many applications. Recent works have addressed the single-view depth estimation problem by utilizing convolutional neural networks. In this paper, we show that exploring symmetry information, which is ubiquitous in man made objects, can significantly boost the quality of such depth predictions. Specifically, we propose a new convolutional neural network architecture to first estimate dense symmetric correspondences in a product image and then propose an optimization which utilizes this information explicitly to significantly improve the quality of single-view depth estimations. We have evaluated our approach extensively, and experimental results show that this approach outperforms state-of-the-art depth estimation techniques.",
"title": ""
},
{
"docid": "6871d514bca855a9f948939a3e8a02f7",
"text": "The problem of tracking targets in the presence of reflections from sea or ground is addressed. Both types of reflections (specular and diffuse) are considered. Specular reflection causes large peak errors followed by an approximately constant bias in the monopulse ratio, while diffuse reflection has random variations which on the average generate a bias in the monopulse ratio. Expressions for the average error (bias) in the monopulse ratio due to specular and diffuse reflections and the corresponding variance in the presence of noise in the radar channels are derived. A maximum maneuver-based filter and a multiple model estimator are used for tracking. Simulation results for five scenarios, typical of sea skimmers, with Swerling III fluctuating radar cross sections (RCSs) indicate the significance and efficiency of the technique developed in this paper-a 65% reduction of the rms error in the target height estimate.",
"title": ""
},
{
"docid": "1389323613225897330d250e9349867b",
    "text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever-increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today's big data world. The author demonstrates how to leverage a company's existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining. By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining.",
"title": ""
},
{
"docid": "4c8ff8cf19292475b724d7036ed8b75c",
"text": "The purpose of this study was to examine intratester reliability of a test designed to measure the standing pelvic-tilt angle, active posterior and anterior pelvic-tilt angles and ranges of motion, and the total pelvic-tilt range of motion (ROM). After an instruction session, the pelvic-tilt angles of the right side of 20 men were calculated using trigonometric functions. Ranges of motion were determined from the pelvic-tilt angles. Intratester reliability coefficients (Pearson r) for test and retest measurements were .88 for the standing pelvic-tilt angle, .88 for the posterior pelvic-tilt angle, .92 for the anterior pelvic-tilt angle, .62 for the posterior pelvic-tilt ROM, .92 for the anterior pelvic-tilt ROM, and .87 for the total ROM. We discuss the factors that may have influenced the reliability of the measurements and the clinical implications and limitations of the test. We suggest additional research to examine intratester reliability of measuring the posterior pelvic-tilt ROM, intertester reliability of measuring all angles and ROM, and the pelvic tilt of many types of subjects.",
"title": ""
},
{
"docid": "962831a1fa8771c68feb894dc2c63943",
    "text": "San Francisco in the US and Natal in Brazil are two coastal cities known for their tech scene and natural beauty rather than for their criminal activities. We analyze characteristics of the urban environment in these two cities, deploying a machine learning model to detect categories and hotspots of criminal activities. We propose an extensive set of spatio-temporal & urban features which can significantly improve the accuracy of machine learning models for these tasks, one of which achieved Top 1% performance on a Crime Classification Competition by kaggle.com. Extensive evaluation on several years of crime records from both cities shows how some features — such as the street network — carry important information about criminal activities.",
"title": ""
},
{
"docid": "5bc8c2bc2a0ac668c256ad802f191288",
"text": "Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to that of the state-of-the-art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.",
"title": ""
},
{
"docid": "b845aaa999c1ed9d99cb9e75dff11429",
"text": "We present a new space-efficient approach, (SparseDTW ), to compute the Dynamic Time Warping (DTW ) distance between two time series that always yields the optimal result. This is in contrast to other known approaches which typically sacrifice optimality to attain space efficiency. The main idea behind our approach is to dynamically exploit the existence of similarity and/or correlation between the time series. The more the similarity between the time series the less space required to compute the DTW between them. To the best of our knowledge, all other techniques to speedup DTW, impose apriori constraints and do not exploit similarity characteristics that may be present in the data. We conduct experiments and demonstrate that SparseDTW outperforms previous approaches.",
"title": ""
},
{
"docid": "5deed8c53f2b28f23d8f06cdc446209a",
"text": "Natural Language Inference (NLI) is a fundamentally important task in natural language processing that has many applications. It is concerned with classifying the logical relation between two sentences. In this paper, we propose attention memory networks (AMNs) to recognize entailment and contradiction between two sentences. In our model, an attention memory neural network (AMNN) has a variable sized encoding memory and supports semantic compositionality. AMNN captures sentence level semantics and reasons relation between the sentence pairs; then we use a Sparsemax layer over the output of the generated matching vectors (sentences) for classification. Our experiments on the Stanford Natural Language Inference (SNLI) Corpus show that our model outperforms the state of the art, achieving an accuracy of 87.4% on the test data.",
"title": ""
},
{
"docid": "77bb711327befd3f4169b4548cc5a85d",
    "text": "We present a new technique for learning visual-semantic embeddings for cross-modal retrieval. Inspired by hard negative mining, the use of hard negatives in structured prediction, and ranking loss functions, we introduce a simple change to common loss functions used for multi-modal embeddings. That, combined with fine-tuning and use of augmented data, yields significant gains in retrieval performance. We showcase our approach, VSE++, on MS-COCO and Flickr30K datasets, using ablation studies and comparisons with existing methods. On MS-COCO our approach outperforms state-of-the-art methods by 8.8% in caption retrieval and 11.3% in image retrieval (at R@1).",
"title": ""
},
{
"docid": "dd40063dd10027f827a65976261c8683",
"text": "Many software process methods and tools presuppose the existence of a formal model of a process. Unfortunately, developing a formal model for an on-going, complex process can be difficult, costly, and error prone. This presents a practical barrier to the adoption of process technologies, which would be lowered by automated assistance in creating formal models. To this end, we have developed a data analysis technique that we term process discovery. Under this technique, data describing process events are first captured from an on-going process and then used to generate a formal model of the behavior of that process. In this article we describe a Markov method that we developed specifically for process discovery, as well as describe two additional methods that we adopted from other domains and augmented for our purposes. The three methods range from the purely algorithmic to the purely statistical. We compare the methods and discuss their application in an industrial case study.",
"title": ""
},
{
"docid": "df0e13e1322a95046a91fb7c867d968a",
"text": "Taking into consideration both external (i.e. technology acceptance factors, website service quality) as well as internal factors (i.e. specific holdup cost), this research explores how the customers' satisfaction and loyalty, when shopping and purchasing on the internet, can be associated with each other and how they are affected by the above dynamics. This research adopts the Structural Equation Model (SEM) as the main analytical tool. It investigates those who used to have shopping experiences in major shopping websites of Taiwan. The research results point out the following: First, customer satisfaction will positively influence customer loyalty directly; second, technology acceptance factors will positively influence customer satisfaction and loyalty directly; third, website service quality can positively influence customer satisfaction and loyalty directly; and fourth, specific holdup cost can positively influence customer loyalty directly, but cannot positively influence customer satisfaction directly. This paper draws on the research results for implications of managerial practice, and then suggests some empirical tactics in order to help enhance management performance for the website shopping industry.",
"title": ""
},
{
"docid": "cb61f83a28b87a4974bc53c92bb72cfc",
"text": "The main issue of a billet heater using induction heating is to avoid billets that were not heated at a desired temperature. In order to improve the induction heating system, it is necessary to clarify the heating property of an object due to eddy current loss and to investigate the temperature distribution in an object by the magneto-thermal coupled analysis. In this paper, the eddy current and temperature distribution of a billet heater is analyzed considering the heat emission, heat conduction, and temperature dependence of magnetic characteristics of the billet. It is shown that the calculated values of temperature in the center and surface of a billet are in good agreement with measured values. The precise analysis is possible by considering the temperature dependence of magnetic characteristics, heat conductivity, etc. The detailed behavior of the heat generation in the billet is clarified. The skin depth is increased because the resistivity of the billet is increased and the permeability is decreased at high temperature. As a result, the flux in the billet is reduced, and then the power (eddy current loss) in the billet is decreased.",
"title": ""
},
{
"docid": "ce21a811ea260699c18421d99221a9f2",
"text": "Medical image processing is a challenging and emerging field, and the processing of MRI images is one part of it. The quantitative analysis of MRI brain tumours allows obtaining useful key indicators of disease progression. This is a computer-aided diagnosis system for detecting malignant texture in biological study. This paper presents an approach in computer-aided diagnosis for early prediction of brain cancer using texture features and neuro classification logic. It describes the proposed strategy for detection, extraction and classification of brain tumours from MRI scan images of the brain, which incorporates segmentation and morphological functions, the basic functions of image processing. Here we detect the tumour, segment the tumour and calculate its area. The severity of the disease can be determined through brain tumour classes, assigned by a neuro-fuzzy classifier, and a user-friendly environment is created using a GUI in MATLAB. In this paper, cases of 10 patients are considered, the severity of the disease is shown and different features of the images are calculated.",
"title": ""
},
{
"docid": "4ac8435b96c020231c775c4625b5ff0a",
"text": "This article addresses the issue of student writing in higher education. It draws on the findings of an Economic and Social Research Council funded project which examined the contrasting expectations and interpretations of academic staff and students regarding undergraduate students' written assignments. It is suggested that the implicit models that have generally been used to understand student writing do not adequately take account of the importance of issues of identity and the institutional relationships of power and authority that surround, and are embedded within, diverse student writing practices across the university. A contrasting and therefore complementary perspective is used to present debates about 'good' and 'poor' student writing. The article outlines an 'academic literacies' framework which can take account of the conflicting and contested nature of writing practices, and may therefore be more valuable for understanding student writing in today's higher education than traditional models and approaches.",
"title": ""
},
{
"docid": "04d9f96fcd218e61f41412518c18cf31",
"text": "Squeak is an open, highly-portable Smalltalk implementation whose virtual machine is written entirely in Smalltalk, making it easy to debug, analyze, and change. To achieve practical performance, a translator produces an equivalent C program whose performance is comparable to commercial Smalltalks. Other noteworthy aspects of Squeak include: a compact object format that typically requires only a single word of overhead per object; a simple yet efficient incremental garbage collector for 32-bit direct pointers; efficient bulk-mutation of objects; extensions of BitBlt to handle color of any depth and anti-aliased image rotation and scaling; and real-time sound and music synthesis written entirely in Smalltalk.",
"title": ""
},
{
"docid": "de2294753031935ca4729a729ac23283",
"text": "We propose a novel system TEXplorer that integrates keyword-based object ranking with the aggregation and exploration power of OLAP in a text database with rich structured attributes available, e.g., a product review database. TEXplorer can be implemented within a multi-dimensional text database, where each row is associated with structural dimensions (attributes) and text data (e.g., a document). The system utilizes the text cube data model, where a cell aggregates a set of documents with matching values in a subset of dimensions. Cells in a text cube capture different levels of summarization of the documents, and can represent objects at different conceptual levels.\n Users query the system by submitting a set of keywords. Instead of returning a ranked list of all the cells, we propose a keyword-based interactive exploration framework that could offer flexible OLAP navigational guides and help users identify the levels and objects they are interested in. A novel significance measure of dimensions is proposed based on the distribution of IR relevance of cells. During each interaction stage, dimensions are ranked according to their significance scores to guide drilling down; and cells in the same cuboids are ranked according to their relevance to guide exploration. We propose efficient algorithms and materialization strategies for ranking top-k dimensions and cells. Finally, extensive experiments on real datasets demonstrate the efficiency and effectiveness of our approach.",
"title": ""
},
{
"docid": "b55a314aea8914db8705cd3974c862bb",
"text": "This study examines the mediating effect of perceived usefulness on the relationship between tax service quality (correctness, response time, system support) and continuance usage intention of the e-filing system in Malaysia. A total of 116 responses were analysed using the Partial Least Squares (PLS) method. The results showed that perceived usefulness has a partial mediating effect on the relationship between tax service quality (correctness, response time) and continuance usage intention, and that tax service quality (correctness) has a significant positive relationship with continuance usage intention. Perceived usefulness was found to be the most important predictor of continuance usage intention.",
"title": ""
},
{
"docid": "030d09fd465d76f96cea06ff4f4ed24e",
"text": "Several large technology companies including Apple, Google, and Samsung are entering the expanding market of population health with the introduction of wearable devices. This technology, worn in clothing or accessories, is part of a larger movement often referred to as the “quantified self.” The notion is that by recording and reporting information about behaviors such as physical activity or sleep patterns, these devices can educate and motivate individuals toward better habits and better health. The gap between recording information and changing behavior is substantial, however, and while these devices are increasing in popularity, little evidence suggests that they are bridging that gap. Only 1% to 2% of individuals in the United States have used a wearable device, but annual sales are projected to increase to more than $50 billion by 2018.1 Some of these devices aim at individuals already motivated to change their health behaviors. Others are being considered by health care organizations, employers, insurers, and clinicians who see promise in using these devices to better engage less motivated individuals. Some of these devices may justify that promise, but less because of their technology and more because of the behavioral change strategies that can be designed around them. Most health-related behaviors such as eating well and exercising regularly could lead to meaningful improvements in population health only if they are sustained. If wearable devices are to be part of the solution, they either need to create enduring new habits, turning external motivations into internal ones (which is difficult), or they need to sustain their external motivation (which is also difficult). 
This requirement of sustained behavior change is a major challenge, but many mobile health applications have not yet leveraged principles from theories of health behavior.2 Feedback loops could be better designed around wearable devices to sustain engagement by using concepts from behavioral economics.3 Individuals are often motivated by the experience of past rewards and the prospect of future rewards. Lottery-based designs leverage the fact that individuals tend to assign undue weight to small probabilities and are more engaged by intermittent variable rewards than with constant reinforcement. Anticipated regret, an individual’s concern or anxiety over the reward he or she might not win, can have a significant effect on decision making. Feedback could be designed to use this concept by informing individuals what they would have won had they been adherent to the new behavior. Building new habits may be best facilitated by presenting frequent feedback with appropriate framing and by using a trigger that captures the individual’s attention at those moments when he or she is most likely to take action. Identifying and Addressing the Gaps Using wearable devices to effectively promote health behavior change is a complex, multistep process. First, a person must be motivated enough to want a device and be able to afford it; this is a challenge, because some devices can cost hundreds of dollars. Perhaps for these reasons, wearable devices seem to appeal to groups that might need them least. In a survey of wearable device users, 75% described themselves as “early adopters of technology,” 48% were younger than 35 years, and 29% reportedly earn more than $100 000 annually.4 The individuals who might have the most to gain from these devices are likely to be older and less affluent. To better engage these individuals, wearable devices must be more affordable, or new funding mechanisms are needed. 
For example, employers and insurers might pay for a device that helps individuals better adhere to their medications, potentially yielding significant downstream health care savings. Or, devices that demonstrate effectiveness could be financed in a manner similar to that for prescription drugs. Second, once a device is acquired, a person needs to remember to wear it and occasionally recharge it— additional behaviors required from individuals who may have a difficult time already. Many wearable devices require data to be sent to a phone or computer, adding additional steps and more equipment. According to one survey (n = 6223), more than half of individuals who purchased a wearable device stop using it and, of these, onethird did so before 6 months.5 One potential solution might be to better leverage smartphones; most people with these phones carry them often. Ideally, using a smartphone does not require any effort beyond setup— like an app that gets its power from the phone that people are already accustomed to regularly charging. Because data can be transmitted passively via a cellular connection, there is no need for individuals to actively update their data. Although smartphones are expensive, many people already have them, and the reach of these devices is increasing. Third, the device must be able to accurately track its targeted behavior. Accelerometers, commonly found within wearable devices, have been well studied for tracking step counts. However, newer technologies, such as those that measure heart rate or sleep patterns, have not been well validated. Similar to mobile health applications, the increase in the availability and types of wearable devices has not been matched by appropriate testing or oversight to make sure they work.6 Wearable devices are unlikely to have the same capabilities as home devices that measure blood pressure or track medication adherence. 
However, a smartwatch may facilitate feedback from these devices.",
"title": ""
},
{
"docid": "8415585161d51b500f99aa36650a67d9",
"text": "A brain-computer interface (BCI) is a communication system that can help users interact with the outside environment by translating brain signals into machine commands. The use of electroencephalographic (EEG) signals has become the most common approach for a BCI because of their usability and strong reliability. Many EEG-based BCI devices have been developed with traditional wet- or micro-electro-mechanical-system (MEMS)-type EEG sensors. However, those traditional sensors have the disadvantage of being uncomfortable and of requiring conductive gel and skin preparation on the part of the user. Therefore, acquiring the EEG signals in a comfortable and convenient manner is an important factor that should be incorporated into a novel BCI device. In the present study, a wearable, wireless and portable EEG-based BCI device with dry foam-based EEG sensors was developed and was demonstrated using a gaming control application. The dry EEG sensors operated without conductive gel; however, they were able to provide good conductivity and were able to acquire EEG signals effectively by adapting to irregular skin surfaces and by maintaining proper skin-sensor impedance on the forehead site. We have also demonstrated a real-time cognitive stage detection application of gaming control using the proposed portable device. The results of the present study indicate that using this portable EEG-based BCI device to conveniently and effectively control the outside world provides an approach for researching rehabilitation engineering.",
"title": ""
}
] |
scidocsrr
|
e4c19c7df02bdc0ed409bdf36d5d8066
|
Self-Presentation and Deception Looks and Lies: The Role of Physical Attractiveness in Online Dating
|
[
{
"docid": "62c93d1c3033208a609e4fc14a42a493",
"text": "Evolutionary-related hypotheses about gender differences in mate selection preferences were derived from Trivers's parental investment model, which contends that women are more likely than men to seek a mate who possesses nonphysical characteristics that maximize the survival or reproductive prospects of their offspring, and were examined in a meta-analysis of mate selection research (questionnaire studies, analyses of personal advertisements). As predicted, women accorded more weight than men to socioeconomic status, ambitiousness, character, and intelligence, and the largest gender differences were observed for cues to resource acquisition (status, ambitiousness). Also as predicted, gender differences were not found in preferences for characteristics unrelated to progeny survival (sense of humor, \"personality\"). Where valid comparisons could be made, the findings were generally invariant across generations, cultures, and research paradigms.",
"title": ""
},
{
"docid": "51a859f71bd2ec82188826af18204f02",
"text": "This study examines the accuracy of 54 online dating photographs posted by heterosexual daters. We report data on (a) online daters' self-reported accuracy, (b) independent judges' perceptions of accuracy, and (c) inconsistencies in the profile photograph identified by trained coders. While online daters rated their photos as relatively accurate, independent judges rated approximately 1/3 of the photographs as not accurate. Female photographs were judged as less accurate than male photographs, and were more likely to be older, to be retouched or taken by a professional photographer, and to contain inconsistencies, including changes in hair style and skin quality. The findings are discussed in terms of the tensions experienced by online daters to (a) enhance their physical attractiveness and (b) present a photograph that would not be judged deceptive in subsequent face-to-face meetings. The paper extends the theoretical concept of selective self-presentation to online photographs, and discusses issues of self-deception and social desirability bias.",
"title": ""
},
{
"docid": "6210a0a93b97a12c2062ac78953f3bd1",
"text": "This article proposes a contextual-evolutionary theory of human mating strategies. Both men and women are hypothesized to have evolved distinct psychological mechanisms that underlie short-term and long-term strategies. Men and women confront different adaptive problems in short-term as opposed to long-term mating contexts. Consequently, different mate preferences become activated from their strategic repertoires. Nine key hypotheses and 22 predictions from Sexual Strategies Theory are outlined and tested empirically. Adaptive problems sensitive to context include sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment. Discussion summarizes 6 additional sources of behavioral data, outlines adaptive problems common to both sexes, and suggests additional contexts likely to cause shifts in mating strategy.",
"title": ""
},
{
"docid": "7440cb90073c8d8d58e28447a1774b2c",
"text": "Common maxims about beauty suggest that attractiveness is not important in life. In contrast, both fitness-related evolutionary theory and socialization theory suggest that attractiveness influences development and interaction. In 11 meta-analyses, the authors evaluate these contradictory claims, demonstrating that (a) raters agree about who is and is not attractive, both within and across cultures; (b) attractive children and adults are judged more positively than unattractive children and adults, even by those who know them; (c) attractive children and adults are treated more positively than unattractive children and adults, even by those who know them; and (d) attractive children and adults exhibit more positive behaviors and traits than unattractive children and adults. Results are used to evaluate social and fitness-related evolutionary theories and the veracity of maxims about beauty.",
"title": ""
}
] |
[
{
"docid": "affbc18a3ba30c43959e37504b25dbdc",
"text": "Abstraction for Falsification. Thomas Ball (Microsoft Research, Redmond, WA, USA), Orna Kupferman (Hebrew University, Jerusalem, Israel), and Greta Yorsh (Tel-Aviv University, Israel). Microsoft Research Technical Report MSR-TR-2005-50. Abstract: Abstraction is traditionally used in the process of verification. There, an abstraction of a concrete system is sound if properties of the abstract system also hold in the concrete system. Specifically, if an abstract state satisfies a property ψ then all the concrete states that correspond to it satisfy ψ too. Since the ideal goal of proving a system correct involves many obstacles, the primary use of formal methods nowadays is falsification. There, as in testing, the goal is to detect errors, rather than to prove correctness. In the falsification setting, we can say that an abstraction is sound if errors of the abstract system exist also in the concrete system. Specifically, if an abstract state a violates a property ψ, then there exists a concrete state that corresponds to a and violates ψ too. An abstraction that is sound for falsification need not be sound for verification. This suggests that existing frameworks for abstraction for verification may be too restrictive when used for falsification, and that a new framework is needed in order to take advantage of the weaker definition of soundness in the falsification setting.
We present such a framework, show that it is indeed stronger (than other abstraction frameworks designed for verification), demonstrate that it can be made even stronger by parameterizing its transitions by predicates, and describe how it can be used for falsification of branching-time and linear-time temporal properties, as well as for generating testing goals for a concrete system by reasoning about its abstraction.",
"title": ""
},
{
"docid": "a5d16384d928da7bcce7eeac45f59e2e",
"text": "Innovative rechargeable batteries that can effectively store renewable energy, such as solar and wind power, urgently need to be developed to reduce greenhouse gas emissions. All-solid-state batteries with inorganic solid electrolytes and electrodes are promising power sources for a wide range of applications because of their safety, long-cycle lives and versatile geometries. Rechargeable sodium batteries are more suitable than lithium-ion batteries, because they use abundant and ubiquitous sodium sources. Solid electrolytes are critical for realizing all-solid-state sodium batteries. Here we show that stabilization of a high-temperature phase by crystallization from the glassy state dramatically enhances the Na(+) ion conductivity. An ambient temperature conductivity of over 10(-4) S cm(-1) was obtained in a glass-ceramic electrolyte, in which a cubic Na(3)PS(4) crystal with superionic conductivity was first realized. All-solid-state sodium batteries, with a powder-compressed Na(3)PS(4) electrolyte, functioned as a rechargeable battery at room temperature.",
"title": ""
},
{
"docid": "98d766b3756d1fe6634996fd91169c19",
"text": "Kratom (Mitragyna speciosa) is a widely abused herbal drug preparation in Southeast Asia. It is often consumed as a substitute for heroin, but imposes unknown harms and addictive burdens on its users. Mitragynine is the major psychostimulant constituent of kratom that has recently been reported to induce morphine-like behavioural and cognitive effects in rodents. The effects of chronic consumption on non-drug related behaviours are still unclear. In the present study, we investigated the effects of chronic mitragynine treatment on spontaneous activity, reward-related behaviour and cognition in mice in an IntelliCage® system, and compared them with those of morphine and Δ-9-tetrahydrocannabinol (THC). We found that chronic mitragynine treatment significantly potentiated horizontal exploratory activity. It enhanced spontaneous sucrose preference and also its persistence when the preference had aversive consequences. Furthermore, mitragynine impaired place learning and its reversal. Thereby, mitragynine effects closely resembled those of morphine and THC sensitisation. These findings suggest that chronic mitragynine exposure enhances spontaneous locomotor activity and the preference for natural rewards, but impairs learning and memory. These findings confirm the pleiotropic effects of mitragynine (kratom) on human lifestyle, but may also support the recognition of the drug's harm potential.",
"title": ""
},
{
"docid": "3ab85b8f58e60f4e59d6be49648ce290",
"text": "It is basically a solved problem for a server to authenticate itself to a client using standard methods of Public Key cryptography. The Public Key Infrastructure (PKI) supports the SSL protocol which in turn enables this functionality. The single-point-of-failure in PKI, and hence the focus of attacks, is the Certification Authority. However this entity is commonly off-line, well defended, and not easily got at. For a client to authenticate itself to the server is much more problematical. The simplest and most common mechanism is Username/Password. Although not at all satisfactory, the only onus on the client is to generate and remember a password and the reality is that we cannot expect a client to be sufficiently sophisticated or well organised to protect larger secrets. However Username/Password as a mechanism is breaking down. So-called zero-day attacks on servers commonly recover files containing information related to passwords, and unless the passwords are of sufficiently high entropy they will be found. The commonly applied patch is to insist that clients adopt long, complex, hard-to-remember passwords. This is essentially a second line of defence imposed on the client to protect them in the (increasingly likely) event that the authentication server will be successfully hacked. Note that in an ideal world a client should be able to use a low entropy password, as a server can limit the number of attempts the client can make to authenticate itself. The often proposed alternative is the adoption of multifactor authentication. In the simplest case the client must demonstrate possession of both a token and a password. The banks have been to the forefront of adopting such methods, but the token is invariably a physical device of some kind. Cryptography's embarrassing secret is that to date no completely satisfactory means has been discovered to implement two-factor authentication entirely in software. In this paper we propose such a scheme.",
"title": ""
},
{
"docid": "9f8e9c5e617db7f4281f0a20f5527c70",
"text": "We have developed a normally-off GaN-based transistor using conductivity modulation, which we call a gate injection transistor (GIT). This new device principle utilizes hole-injection from the p-AlGaN to the AlGaN/GaN heterojunction, which simultaneously increases the electron density in the channel, resulting in a dramatic increase of the drain current owing to the conductivity modulation. The fabricated GIT exhibits a threshold voltage of 1.0 V with a maximum drain current of 200 mA/mm, in which a forward gate voltage of up to 6 V can be applied. The obtained specific ON-state resistance (RON . A) and the OFF-state breakdown voltage (BV ds) are 2.6 mOmega . cm2 and 800 V, respectively. The developed GIT is advantageous for power switching applications.",
"title": ""
},
{
"docid": "00c19e68020aff7fd86aa7e514cc0668",
"text": "Network forensic techniques help in tracking different types of cyber attack by monitoring and inspecting network traffic. However, with the high speed and large sizes of current networks, and the sophisticated philosophy of attackers, in particular mimicking normal behaviour and/or erasing traces to avoid detection, investigating such crimes demands intelligent network forensic techniques. This paper suggests a real-time collaborative network Forensic scheme (RCNF) that can monitor and investigate cyber intrusions. The scheme includes three components of capturing and storing network data, selecting important network features using chi-square method and investigating abnormal events using a new technique called correntropy-variation. We provide a case study using the UNSW-NB15 dataset for evaluating the scheme, showing its high performance in terms of accuracy and false alarm rate compared with three recent state-of-the-art mechanisms.",
"title": ""
},
{
"docid": "b11331341448f108fb1b503ab8ecd7b8",
"text": "Repairing defects of the auricle requires an appreciation of the underlying 3-dimensional framework, the flexible properties of the cartilages, and the healing contractile tendencies of the surrounding soft tissue. In the analysis of auricular defects and planning of their reconstruction, it is helpful to divide the auricle into subunits for which different techniques may offer better functional and aesthetic outcomes. This article reviews many of the reconstructive options for defects of the various auricular subunits.",
"title": ""
},
{
"docid": "2f9de2e94c6af95e9c2e9eb294a7696c",
"text": "The rapid growth of Electronic Health Records (EHRs), as well as the accompanying opportunities in Data-Driven Healthcare (DDH), has been attracting widespread interest and attention. Recent progress in the design and applications of deep learning methods has shown promising results and is forcing massive changes in healthcare academia and industry, but most of these methods rely on massive labeled data. In this work, we propose a general deep learning framework which is able to boost risk prediction performance with limited EHR data. Our model uses a modified generative adversarial network, namely ehrGAN, which can provide plausible labeled EHR data by mimicking real patient records, to augment the training dataset in a semi-supervised learning manner. We use this generative model together with a convolutional neural network (CNN) based prediction model to improve the onset prediction performance. Experiments on two real healthcare datasets demonstrate that our proposed framework produces realistic data samples and achieves significant improvements on classification tasks with the generated data over several state-of-the-art baselines.",
"title": ""
},
{
"docid": "e43242ed17a0b2fa9fca421179135ce1",
"text": "Direct digital synthesis (DDS) is a useful tool for generating periodic waveforms. In this two-part article, the basic idea of this synthesis technique is presented, with a focus on the quality of the sinewave a DDS can create and an introduction to the SFDR quality parameter. Effective methods to increase the SFDR are then presented, including sinewave approximations and hardware schemes such as dithering and noise shaping, along with an extensive list of references. When the desired output is a digital signal, the signal's characteristics can be accurately predicted using the formulas given in this article. When the desired output is an analog signal, the reader should keep in mind that the performance of the DDS is eventually limited by the performance of the digital-to-analog converter and the follow-on analog filter. We hope that this article encourages engineers to use DDS, either as integrated-circuit DDS or as software-implemented DDS. From the author's experience, this technique has proven valuable when frequency resolution is the challenge, particularly when using low-cost microcontrollers.",
"title": ""
},
{
"docid": "b7b2f1c59dfc00ab6776c6178aff929c",
"text": "Over the past four years, the Big Data and Exascale Computing (BDEC) project organized a series of five international workshops that aimed to explore the ways in which the new forms of data-centric discovery introduced by the ongoing revolution in high-end data analysis (HDA) might be integrated with the established, simulation-centric paradigm of the high-performance computing (HPC) community. Based on those meetings, we argue that the rapid proliferation of digital data generators, the unprecedented growth in the volume and diversity of the data they generate, and the intense evolution of the methods for analyzing and using that data are radically reshaping the landscape of scientific computing. The most critical problems involve the logistics of wide-area, multistage workflows that will move back and forth across the computing continuum, between the multitude of distributed sensors, instruments and other devices at the network's edge, and the centralized resources of commercial clouds and HPC centers. We suggest that the prospects for the future integration of technological infrastructures and research ecosystems need to be considered at three different levels. First, we discuss the convergence of research applications and workflows that establish a research paradigm that combines both HPC and HDA, where ongoing progress is already motivating efforts at the other two levels. Second, we offer an account of some of the problems involved with creating a converged infrastructure for peripheral environments, that is, a shared infrastructure that can be deployed throughout the network in a scalable manner to meet the highly diverse requirements for processing, communication, and buffering/storage of massive data workflows of many different scientific domains. Third, we focus on some opportunities for software ecosystem convergence in big, logically centralized facilities that execute large-scale simulations and models and/or perform large-scale data analytics. We close by offering some conclusions and recommendations for future investment and policy review.",
"title": ""
},
{
"docid": "18dd421bb233c1de8dd56674bacfe521",
"text": "The coordination of directional overcurrent relays (DOCR) is treated in this paper using particle swarm optimization (PSO), a recently proposed optimizer that utilizes the swarm behavior in searching for an optimum. PSO gained a lot of interest for its simplicity, robustness, and easy implementation. The problem of setting DOCR is a highly constrained optimization problem that has been stated and solved as a linear programming (LP) problem. To deal with such constraints a modification to the standard PSO algorithm is introduced. Three case studies are presented, and the results are compared to those of LP technique to demonstrate the effectiveness of the proposed methodology.",
"title": ""
},
{
"docid": "6b6099ee6f04f1b490b7e483de3087ff",
"text": "International Electrotechnical Commission (IEC) standard 61850 proposes the Ethernet-based communication networks for protection and automation within the power substation. Major manufacturers are currently developing products for the process bus in compliance with IEC 61850 part 9-2. For the successful implementation of the IEC 61850-9-2 process bus, it is important to analyze the performance of time-critical messages for the substation protection and control functions. This paper presents the performance evaluation of the IEC 61850-9-2 process bus for a typical 345 kV/230 kV substation by studying the time-critical sampled value messages delay and loss by using the OPNET simulation tool in the first part of this paper. In the second part, this paper presents a corrective measure to address the issues with the several sampled value messages lost and/or delayed by proposing the sampled value estimation algorithm for any digital substation relaying. Finally, the proposed sampled value estimation algorithm has been examined for various power system scenarios with the help of PSCAD/EMTDC and MATLAB simulation tools.",
"title": ""
},
{
"docid": "e4944af5f589107d1b42a661458fcab5",
"text": "This document summarizes the major milestones in mobile Augmented Reality between 1968 and 2014. Mobile Augmented Reality has largely evolved over the last decade, as well as the interpretation itself of what is Mobile Augmented Reality. The first instance of Mobile AR can certainly be associated with the development of wearable AR, in a sense of experiencing AR during locomotion (mobile as a motion). With the transformation and miniaturization of physical devices and displays, the concept of mobile AR evolved towards the notion of ”mobile device”, aka AR on a mobile device. In this history of mobile AR we considered both definitions and the evolution of the term over time. Major parts of the list were initially compiled by the member of the Christian Doppler Laboratory for Handheld Augmented Reality in 2009 (author list in alphabetical order) for the ISMAR society. More recent work was added in 2013 and during preparation of this report. Permission is granted to copy and modify. Please email the first author if you find any errors.",
"title": ""
},
{
"docid": "5af83f822ac3d9379c7b477ff1d32a97",
"text": "Sprout is an end-to-end transport protocol for interactive applications that desire high throughput and low delay. Sprout works well over cellular wireless networks, where link speeds change dramatically with time, and current protocols build up multi-second queues in network gateways. Sprout does not use TCP-style reactive congestion control; instead the receiver observes the packet arrival times to infer the uncertain dynamics of the network path. This inference is used to forecast how many bytes may be sent by the sender, while bounding the risk that packets will be delayed inside the network for too long. In evaluations on traces from four commercial LTE and 3G networks, Sprout, compared with Skype, reduced self-inflicted end-to-end delay by a factor of 7.9 and achieved 2.2× the transmitted bit rate on average. Compared with Google’s Hangout, Sprout reduced delay by a factor of 7.2 while achieving 4.4× the bit rate, and compared with Apple’s Facetime, Sprout reduced delay by a factor of 8.7 with 1.9× the bit rate. Although it is end-to-end, Sprout matched or outperformed TCP Cubic running over the CoDel active queue management algorithm, which requires changes to cellular carrier equipment to deploy. We also tested Sprout as a tunnel to carry competing interactive and bulk traffic (Skype and TCP Cubic), and found that Sprout was able to isolate client application flows from one another.",
"title": ""
},
{
"docid": "66423bc00bb724d1d0c616397d898dd0",
"text": "Background\nThere is a growing trend for patients to seek the least invasive treatments with less risk of complications and downtime for facial rejuvenation. Thread embedding acupuncture has become popular as a minimally invasive treatment. However, there is little clinical evidence in the literature regarding its effects.\n\n\nMethods\nThis single-arm, prospective, open-label study recruited participants who were women aged 40-59 years, with Glogau photoaging scale III-IV. Fourteen participants received thread embedding acupuncture once and were assessed before the procedure and 1 week afterward. The primary outcome was the jowl-to-subnasale vertical distance. The secondary outcomes were facial wrinkle distances, global esthetic improvement scale, Alexiades-Armenakas laxity scale, and patient-oriented self-assessment scale.\n\n\nResults\nFourteen participants underwent thread embedding acupuncture alone, and 12 participants revisited for follow-up outcome measures. For the primary outcome measure, both jowls were elevated in vertical height by 1.87 mm (left) and 1.43 mm (right). Distances of both melolabial and nasolabial folds showed significant improvement. On the Alexiades-Armenakas laxity scale, the evaluators rated four and nine participants, respectively, as improved by 0.5 grades. In the global aesthetic improvement scale, improvement was graded as 1 and 2 in nine and five cases, respectively. The most common adverse events were mild bruising, swelling, and pain. Adverse events did occur, although they were mostly minor and of short duration.\n\n\nConclusion\nIn this study, thread embedding acupuncture showed clinical potential for facial wrinkles and laxity. However, further large-scale trials with a controlled design and objective measurements are needed.",
"title": ""
},
{
"docid": "5be35d2aa81cc1e15b857892f376fbf0",
"text": "This paper proposes a new method for fabric defect classification by incorporating the design of a wavelet frames based feature extractor with the design of a Euclidean distance based classifier. Channel variances at the outputs of the wavelet frame decomposition are used to characterize each nonoverlapping window of the fabric image. A feature extractor using linear transformation matrix is further employed to extract the classification-oriented features. With a Euclidean distance based classifier, each nonoverlapping window of the fabric image is then assigned to its corresponding category. Minimization of the classification error is achieved by incorporating the design of the feature extractor with the design of the classifier based on minimum classification error (MCE) training method. The proposed method has been evaluated on the classification of 329 defect samples containing nine classes of fabric defects, and 328 nondefect samples, where 93.1% classification accuracy has been achieved.",
"title": ""
},
{
"docid": "f93dac471e3d7fa79c740b35fbde0558",
"text": "In settings where only unlabeled speech data is available, speech technology needs to be developed without transcriptions, pronunciation dictionaries, or language modelling text. A similar problem is faced when modeling infant language acquisition. In these cases, categorical linguistic structure needs to be discovered directly from speech audio. We present a novel unsupervised Bayesian model that segments unlabeled speech and clusters the segments into hypothesized word groupings. The result is a complete unsupervised tokenization of the input speech in terms of discovered word types. In our approach, a potential word segment (of arbitrary length) is embedded in a fixed-dimensional acoustic vector space. The model, implemented as a Gibbs sampler, then builds a whole-word acoustic model in this space while jointly performing segmentation. We report word error rates in a small-vocabulary connected digit recognition task by mapping the unsupervised decoded output to ground truth transcriptions. The model achieves around 20% error rate, outperforming a previous HMM-based system by about 10% absolute. Moreover, in contrast to the baseline, our model does not require a pre-specified vocabulary size.",
"title": ""
},
{
"docid": "14dc7c8065adad3fc3c67f5a8e35298b",
"text": "This paper describes a method for maximum power point tracking (MPPT) control while searching for optimal parameters corresponding to weather conditions at that time. The conventional method has problems in that it is impossible to quickly acquire the generation power at the maximum power (MP) point in low solar radiation (irradiation) regions. It is found theoretically and experimentally that the maximum output power and the optimal current, which give this maximum, have a linear relation at a constant temperature. Furthermore, it is also shown that linearity exists between the short-circuit current and the optimal current. MPPT control rules are created based on the findings from solar arrays that can respond at high speeds to variations in irradiation. The proposed MPPT control method sets the output current track on the line that gives the relation between the MP and the optimal current so as to acquire the MP that can be generated at that time by dividing the power and current characteristics into two fields. The method is based on the generated power being a binary function of the output current. Considering the experimental fact that linearity is maintained only at low irradiation below half the maximum irradiation, the proportionality coefficient (voltage coefficient) is compensated for only in regions with more than half the rated optimal current, which correspond to the maximum irradiation. At high irradiation, the voltage coefficient needed to perform the proposed MPPT control is acquired through the hill-climbing method. The effectiveness of the proposed method is verified through experiments under various weather conditions",
"title": ""
},
{
"docid": "ed23845ded235d204914bd1140f034c3",
"text": "We propose a general framework to learn deep generative models via Variational Gradient Flow (VGrow) on probability spaces. The evolving distribution that asymptotically converges to the target distribution is governed by a vector field, which is the negative gradient of the first variation of the f-divergence between them. We prove that the evolving distribution coincides with the pushforward distribution through the infinitesimal time composition of residual maps that are perturbations of the identity map along the vector field. The vector field depends on the density ratio of the pushforward distribution and the target distribution, which can be consistently learned from a binary classification problem. Connections of our proposed VGrow method with other popular methods, such as VAE, GAN and flow-based methods, have been established in this framework, gaining new insights into deep generative learning. We also evaluated several commonly used divergences, including Kullback-Leibler, Jensen-Shannon, and Jeffrey divergences, as well as our newly discovered “logD” divergence which serves as the objective function of the logD-trick GAN. Experimental results on benchmark datasets demonstrate that VGrow can generate high-fidelity images in a stable and efficient manner, achieving competitive performance with state-of-the-art GANs.",
"title": ""
},
{
"docid": "503101a7b0f923f8fecb6dc9bb0bde37",
"text": "In-vehicle electronic equipment aims to increase safety by detecting risk factors and taking/suggesting corrective actions. This paper presents a knowledge-based framework for assisting a driver via her PDA. Car data extracted under the On Board Diagnostics (OBD-II) protocol, data acquired from PDA embedded micro-devices, and information retrieved from the Web are properly combined: a simple data fusion algorithm has been devised to collect and semantically annotate relevant safety events. Finally, a logic-based matchmaking allows potential risk factors to be inferred, enabling the system to issue accurate and timely warnings. The proposed approach has been implemented in a prototypical application for the Apple iPhone platform, in order to provide experimental evaluation in real-world test drives for corroborating the approach. Keywords-Semantic Web; On Board Diagnostics; Ubiquitous Computing; Data Fusion; Intelligent Transportation Systems",
"title": ""
}
] |
scidocsrr
|
47bffba44fe4f14cc440205f9f574c1b
|
AckSeer: a repository and search engine for automatically extracted acknowledgments from digital libraries
|
[
{
"docid": "2210176bcb0f139e3f7f7716447f3920",
"text": "Automatic metadata generation provides scalability and usability for digital libraries and their collections. Machine learning methods offer robust and adaptable automatic metadata extraction. We describe a Support Vector Machine classification-based method for metadata extraction from header part of research papers and show that it outperforms other machine learning methods on the same task. The method first classifies each line of the header into one or more of 15 classes. An iterative convergence procedure is then used to improve the line classification by using the predicted class labels of its neighbor lines in the previous round. Further metadata extraction is done by seeking the best chunk boundaries of each line. We found that discovery and use of the structural patterns of the data and domain based word clustering can improve the metadata extraction performance. An appropriate feature normalization also greatly improves the classification performance. Our metadata extraction method was originally designed to improve the metadata extraction quality of the digital libraries Citeseer [17] and EbizSearch[24]. We believe it can be generalized to other digital libraries.",
"title": ""
},
{
"docid": "4eaf40cdef12d0d2be1d3c6a96c94841",
"text": "Acknowledgements in research publications, like citations, indicate influential contributions to scientific work; however, large-scale acknowledgement analyses have traditionally been impractical due to the high cost of manual information extraction. In this paper we describe a mixture method for automatically mining acknowledgements from research documents using a combination of a Support Vector Machine and regular expressions. The algorithm has been implemented as a plug-in to the CiteSeer Digital Library and the extraction results have been integrated with the traditional metadata and citation index of the CiteSeer system. As a demonstration, we use CiteSeer's autonomous citation indexing (ACI) feature to measure the relative impact of acknowledged entities, and present the top twenty acknowledged entities within the archive.",
"title": ""
}
] |
[
{
"docid": "cd811b8c1324ca0fef6a25e1ca5c4ce9",
"text": "This commentary discusses why most IS academic research today lacks relevance to practice and suggests tactics, procedures, and guidelines that the IS academic community might follow in their research efforts and articles to introduce relevance to practitioners. The commentary begins by defining what is meant by relevancy in the context of academic research. It then explains why there is a lack of attention to relevance within the IS scholarly literature. Next, actions that can be taken to make relevance a more central aspect of IS research and to communicate implications of IS research more effectively to IS professionals are suggested.",
"title": ""
},
{
"docid": "3b4ad43c44d824749da5487b34f31291",
"text": "Recent terrorist attacks carried out on behalf of ISIS on American and European soil by lone wolf attackers or sleeper cells remind us of the importance of understanding the dynamics of radicalization mediated by social media communication channels. In this paper, we shed light on the social media activity of a group of twenty-five thousand users whose association with ISIS online radical propaganda has been manually verified. By using a computational tool known as dynamical activity-connectivity maps, based on network and temporal activity patterns, we investigate the dynamics of social influence within ISIS supporters. We finally quantify the effectiveness of ISIS propaganda by determining the adoption of extremist content in the general population and draw a parallel between radical propaganda and epidemics spreading, highlighting that information broadcasters and influential ISIS supporters generate highly-infectious cascades of information contagion. Our findings will help generate effective countermeasures to combat the group and other forms of online extremism.",
"title": ""
},
{
"docid": "9a7e491e4d4490f630b55a94703a6f00",
"text": "Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform stateof-the-art methods on multiple datasets by large margins.",
"title": ""
},
{
"docid": "db9f0c0ab08b07ac3b05d97e580c4aae",
"text": "Our objective is to identify requirements (i.e., quality attributes and functional requirements) for software visualization tools. We especially focus on requirements for research tools that target the domains of visualization for software maintenance, reengineering, and reverse engineering. The requirements are identified with a comprehensive literature survey based on relevant publications in journals, conference proceedings, and theses. The literature survey has identified seven quality attributes (i.e., rendering scalability, information scalability, interoperability, customizability, interactivity, usability, and adoptability) and seven functional requirements (i.e., views, abstraction, search, filters, code proximity, automatic layouts, and undo/history). The identified requirements are useful for researchers in the software visualization field to build and evaluate tools, and to reason about the domain of software visualization.",
"title": ""
},
{
"docid": "295decfc6cbfe44ee20455fd551c0a45",
"text": "Ultraviolet (UV) photodetectors have drawn extensive attention owing to their applications in industrial, environmental and even biological fields. Compared to UV-enhanced Si photodetectors, a new generation of wide bandgap semiconductors, such as (Al, In) GaN, diamond, and SiC, have the advantages of high responsivity, high thermal stability, robust radiation hardness and high response speed. On the other hand, one-dimensional (1D) nanostructure semiconductors with a wide bandgap, such as β-Ga2O3, GaN, ZnO, or other metal-oxide nanostructures, also show their potential for high-efficiency UV photodetection. In some cases such as flame detection, high-temperature thermally stable detectors with high performance are required. This article provides a comprehensive review on the state-of-the-art research activities in the UV photodetection field, including not only semiconductor thin films, but also 1D nanostructured materials, which are attracting more and more attention in the detection field. A special focus is given on the thermal stability of the developed devices, which is one of the key characteristics for the real applications.",
"title": ""
},
{
"docid": "65580dfc9bdf73ef72b6a133ab19ccdd",
"text": "A rotary piezoelectric motor design with simple structural components and the potential for miniaturization using a pretwisted beam stator is demonstrated in this paper. The beam acts as a vibration converter to transform axial vibration input from a piezoelectric element into combined axial-torsional vibration. The axial vibration of the stator modulates the torsional friction forces transmitted to the rotor. Prototype stators measuring 6.5 × 6.5 × 67.5 mm were constructed using aluminum (2024-T6) twisted beams with rectangular cross-section and multilayer piezoelectric actuators. The stall torque and no-load speed attained for a rectangular beam with an aspect ratio of 1.44 and pretwist helix angle of 17.7deg were 0.17 mNm and 840 rpm with inputs of 184.4 kHz and 149 mW, respectively. Operation in both clockwise and counterclockwise directions was obtained by choosing either 70.37 or 184.4 kHz for the operating frequency. The effects of rotor preload and power input on motor performance were investigated experimentally. The results suggest that motor efficiency is higher at low power input, and that efficiency increases with preload to a maximum beyond which it begins to drop.",
"title": ""
},
{
"docid": "035b2296835a9c4a7805ba446760071e",
"text": "Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of intrusions, defined as attempts to compromise the confidentiality, integrity, availability, or to bypass the security mechanisms of a computer or network. This paper proposes the development of an Intrusion Detection Program (IDP) which could detect known attack patterns. An IDP does not eliminate the use of any preventive mechanism but it works as the last defensive mechanism in securing the system. Three variants of genetic programming techniques namely Linear Genetic Programming (LGP), Multi-Expression Programming (MEP) and Gene Expression Programming (GEP) were evaluated to design IDP. Several indices are used for comparisons and a detailed analysis of MEP technique is provided. Empirical results reveal that genetic programming technique could play a major role in developing IDP, which are light weight and accurate when compared to some of the conventional intrusion detection systems based on machine learning paradigms.",
"title": ""
},
{
"docid": "ae8e043f980d313499433d49aa90467c",
"text": "During the last few years, Convolutional Neural Networks are slowly but surely becoming the default method to solve many computer vision-related problems. This is mainly due to the continuous success that they have achieved when applied to certain tasks such as image, speech, or object recognition. Despite all the efforts, object class recognition methods based on deep learning techniques still have room for improvement. Most of the current approaches do not fully exploit 3D information, which has been proven to effectively improve the performance of other traditional object recognition methods. In this work, we propose PointNet, a new approach inspired by VoxNet and 3D ShapeNets, as an improvement over the existing methods by using density occupancy grid representations for the input data, and integrating them into a supervised Convolutional Neural Network architecture. An extensive experimentation was carried out, using ModelNet - a large-scale 3D CAD models dataset - to train and test the system, to prove that our approach is on par with state-of-the-art methods in terms of accuracy while being able to perform recognition under real-time constraints.",
"title": ""
},
{
"docid": "78d7c61f7ca169a05e9ae1393712cd69",
"text": "Designing an automatic solver for math word problems has been considered as a crucial step towards general AI, with the ability of natural language understanding and logical inference. The state-of-the-art performance was achieved by enumerating all the possible expressions from the quantities in the text and customizing a scoring function to identify the one with the maximum probability. However, it incurs exponential search space with the number of quantities and beam search has to be applied to trade accuracy for efficiency. In this paper, we make the first attempt of applying deep reinforcement learning to solve arithmetic word problems. The motivation is that deep Q-network has witnessed success in solving various problems with big search space and achieves promising performance in terms of both accuracy and running time. To fit the math problem scenario, we propose our MathDQN that is customized from the general deep reinforcement learning framework. Technically, we design the states, actions, reward function, together with a feed-forward neural network as the deep Q-network. Extensive experimental results validate our superiority over state-of-the-art methods. Our MathDQN yields remarkable improvement on most of the datasets and boosts the average precision among all the benchmark datasets by 15%.",
"title": ""
},
{
"docid": "0e2d5444d16f7c710039f6145473131c",
"text": "In this paper, a novel design approach for the development of robot hands is presented. This approach, which can be considered an alternative to the “classical” one, takes into consideration compliant structures instead of rigid ones. Compliance effects, which were considered in the past as a “defect” to be mechanically eliminated, can instead be regarded as desired features and can be properly controlled in order to achieve desired properties from the robotic device. In particular, this is true for robot hands, where the mechanical complexity of “classical” design solutions has always originated complicated structures, often with low reliability and high costs. In this paper, an alternative solution to the design of a dexterous robot hand is illustrated, considering a “mechatronic approach” for the integration of the mechanical structure, the sensory and electronic system, the control and the actuation part. Moreover, the preliminary experimental activity on a first prototype is reported and discussed. The results obtained so far, considering also reliability, costs and development time, are very encouraging, and allow us to foresee a wider diffusion of dexterous hands for robotic applications.",
"title": ""
},
{
"docid": "21ca1c1fce82a764e9dc7b31e11cb0fa",
"text": "We describe an approach to learning from long-tailed, imbalanced datasets that are prevalent in real-world settings. Here, the challenge is to learn accurate “few-shot” models for classes in the tail of the class distribution, for which little data is available. We cast this problem as transfer learning, where knowledge from the data-rich classes in the head of the distribution is transferred to the data-poor classes in the tail. Our key insights are as follows. First, we propose to transfer meta-knowledge about learning-to-learn from the head classes. This knowledge is encoded with a meta-network that operates on the space of model parameters, that is trained to predict many-shot model parameters from few-shot model parameters. Second, we transfer this meta-knowledge in a progressive manner, from classes in the head to the “body”, and from the “body” to the tail. That is, we transfer knowledge in a gradual fashion, regularizing meta-networks for few-shot regression with those trained with more training data. This allows our final network to capture a notion of model dynamics, that predicts how model parameters are likely to change as more training data is gradually added. We demonstrate results on image classification datasets (SUN, Places, and ImageNet) tuned for the long-tailed setting, that significantly outperform common heuristics, such as data resampling or reweighting.",
"title": ""
},
{
"docid": "b740f07b95041e764bfe8cb5a59b14a8",
"text": "We present in this paper a statistical model for language-independent bi-directional conversion between spelling and pronunciation, based on joint grapheme/phoneme units extracted from automatically aligned data. The model is evaluated on spelling-to-pronunciation and pronunciation-to-spelling conversion on the NetTalk database and the CMU dictionary. We also study the effect of including lexical stress in the pronunciation. Although a direct comparison is difficult to make, our model’s performance appears to be as good or better than that of other data-driven approaches that have been applied to the same tasks.",
"title": ""
},
{
"docid": "26787002ed12cc73a3920f2851449c5e",
"text": "This article brings together three current themes in organizational behavior: (1) a renewed interest in assessing person-situation interactional constructs, (2) the quantitative assessment of organizational culture, and (3) the application of \"Q-sort,\" or template-matching, approaches to assessing person-situation interactions. Using longitudinal data from accountants and M.B.A. students and cross-sectional data from employees of government agencies and public accounting firms, we developed and validated an instrument for assessing personorganization fit, the Organizational Culture Profile (OCP). Results suggest that the dimensionality of individual preferences for organizational cultures and the existence of these cultures are interpretable. Further, person-organization fit predicts job satisfaction and organizational commitment a year after fit was measured and actual turnover after two years. This evidence attests to the importance of understanding the fit between individuals' preferences and organizational cultures.",
"title": ""
},
{
"docid": "60d90ae1407c86559af63f20536202dc",
"text": "TCP Westwood (TCPW) is a sender-side modification of the TCP congestion window algorithm that improves upon the performance of TCP Reno in wired as well as wireless networks. The improvement is most significant in wireless networks with lossy links. In fact, TCPW performance is not very sensitive to random errors, while TCP Reno is equally sensitive to random loss and congestion loss and cannot discriminate between them. Hence, the tendency of TCP Reno to overreact to errors. An important distinguishing feature of TCP Westwood with respect to previous wireless TCP “extensions” is that it does not require inspection and/or interception of TCP packets at intermediate (proxy) nodes. Rather, TCPW fully complies with the end-to-end TCP design principle. The key innovative idea is to continuously measure at the TCP sender side the bandwidth used by the connection via monitoring the rate of returning ACKs. The estimate is then used to compute congestion window and slow start threshold after a congestion episode, that is, after three duplicate acknowledgments or after a timeout. The rationale of this strategy is simple: in contrast with TCP Reno which “blindly” halves the congestion window after three duplicate ACKs, TCP Westwood attempts to select a slow start threshold and a congestion window which are consistent with the effective bandwidth used at the time congestion is experienced. We call this mechanism faster recovery. The proposed mechanism is particularly effective over wireless links where sporadic losses due to radio channel problems are often misinterpreted as a symptom of congestion by current TCP schemes and thus lead to an unnecessary window reduction. Experimental studies reveal improvements in throughput performance, as well as in fairness. In addition, friendliness with TCP Reno was observed in a set of experiments showing that TCP Reno connections are not starved by TCPW connections. Most importantly, TCPW is extremely effective in mixed wired and wireless networks where throughput improvements of up to 550% are observed. Finally, TCPW performs almost as well as localized link layer approaches such as the popular Snoop scheme, without incurring the overhead of a specialized link layer protocol.",
"title": ""
},
{
"docid": "b8f23ec8e704ee1cf9dbe6063a384b09",
"text": "The Dirichlet distribution and its compound variant, the Dirichlet-multinomial, are two of the most basic models for proportional data, such as the mix of vocabulary words in a text document. Yet the maximum-likelihood estimate of these distributions is not available in closed-form. This paper describes simple and efficient iterative schemes for obtaining parameter estimates in these models. In each case, a fixed-point iteration and a Newton-Raphson (or generalized Newton-Raphson) iteration is provided. 1 The Dirichlet distribution The Dirichlet distribution is a model of how proportions vary. Let p denote a random vector whose elements sum to 1, so that pk represents the proportion of item k. Under the Dirichlet model with parameter vector α, the probability density at p is p(p) ∼ D(α1, ..., αK) = Γ( ∑ k αk) ∏ k Γ(αk) ∏ k pk k (1) where pk > 0 (2)",
"title": ""
},
{
"docid": "4bfb6e5b039dd434e0c8aed461536acf",
"text": "In many applications transactions between the elements of an information hierarchy occur over time. For example, the product offers of a department store can be organized into product groups and subgroups to form an information hierarchy. A market basket consisting of the products bought by a customer forms a transaction. Market baskets of one or more customers can be ordered by time into a sequence of transactions. Each item in a transaction is associated with a measure, for example, the amount paid for a product.\n In this paper we present a novel method for visualizing sequences of these kinds of transactions in information hierarchies. It uses a tree layout to draw the hierarchy and a timeline to represent progression of transactions in the hierarchy. We have developed several interaction techniques that allow the users to explore the data. Smooth animations help them to track the transitions between views. The usefulness of the approach is illustrated by examples from several very different application domains.",
"title": ""
},
{
"docid": "7a3573bfb32dc1e081d43fe9eb35a23b",
"text": "Collections of relational paraphrases have been automatically constructed from large text corpora, as a WordNet counterpart for the realm of binary predicates and their surface forms. However, these resources fall short in their coverage of hypernymy links (subsumptions) among the synsets of phrases. This paper closes this gap by computing a high-quality alignment between the relational phrases of the Patty taxonomy, one of the largest collections of this kind, and the verb senses of WordNet. To this end, we devise judicious features and develop a graph-based alignment algorithm by adapting and extending the SimRank random-walk method. The resulting taxonomy of relational phrases and verb senses, coined HARPY, contains 20,812 synsets organized into a Directed Acyclic Graph (DAG) with 616,792 hypernymy links. Our empirical assessment indicates that the alignment links between Patty and WordNet have high accuracy, with Mean Reciprocal Rank (MRR) score 0.7 and Normalized Discounted Cumulative Gain (NDCG) score 0.73. As an additional extrinsic value, HARPY provides fine-grained lexical types for the arguments of verb senses in WordNet.",
"title": ""
},
{
"docid": "43831e29e62c574a93b6029409690bfe",
"text": "We present a convolutional network that is equivariant to rigid body motions. The model uses scalar-, vector-, and tensor fields over 3D Euclidean space to represent data, and equivariant convolutions to map between such representations. These SE(3)-equivariant convolutions utilize kernels which are parameterized as a linear combination of a complete steerable kernel basis, which is derived analytically in this paper. We prove that equivariant convolutions are the most general equivariant linear maps between fields over R^3. Our experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.",
"title": ""
},
{
"docid": "fb1724b8baf76ceec32647fc6e5f2039",
"text": "The formation of informal settlements in and around urban complexes has largely been ignored in the context of procedural city modeling. However, many cities in South Africa and globally can attest to the presence of such settlements. This paper analyses the phenomenon of informal settlements from a procedural modeling perspective. Aerial photography from two South African urban complexes, namely Johannesburg and Cape Town is used as a basis for the extraction of various features that distinguish different types of settlements. In particular, the road patterns which have formed within such settlements are analysed, and various procedural techniques proposed (including Voronoi diagrams, subdivision and L-systems) to replicate the identified features. A qualitative assessment of the procedural techniques is provided, and the most suitable combination of techniques identified for unstructured and structured settlements. In particular it is found that a combination of Voronoi diagrams and subdivision provides the closest match to unstructured informal settlements. A combination of L-systems, Voronoi diagrams and subdivision is found to produce the closest pattern to a structured informal settlement.",
"title": ""
},
{
"docid": "d4d24bee47b97e1bf4aadad0f3993e78",
"text": "An aircraft landed safely is the result of a huge organizational effort required to cope with a complex system made up of humans, technology and the environment. The aviation safety record has improved dramatically over the years to reach an unprecedented low in terms of accidents per million take-offs, without ever achieving the “zero accident” target. The introduction of automation on board airplanes must be acknowledged as one of the driving forces behind the decline in the accident rate down to the current level.",
"title": ""
}
] |
scidocsrr
|
b61352b48264876b641fe9f23310e6df
|
Terrorism Event Classification Using Fuzzy Inference Systems
|
[
{
"docid": "08634303d285ec95873e003eeac701eb",
"text": "This paper describes the application of adaptive neuro-fuzzy inference system (ANFIS) model for classification of electroencephalogram (EEG) signals. Decision making was performed in two stages: feature extraction using the wavelet transform (WT) and the ANFIS trained with the backpropagation gradient descent method in combination with the least squares method. Five types of EEG signals were used as input patterns of the five ANFIS classifiers. To improve diagnostic accuracy, the sixth ANFIS classifier (combining ANFIS) was trained using the outputs of the five ANFIS classifiers as input data. The proposed ANFIS model combined the neural network adaptive capabilities and the fuzzy logic qualitative approach. Some conclusions concerning the saliency of features on classification of the EEG signals were obtained through analysis of the ANFIS. The performance of the ANFIS model was evaluated in terms of training performance and classification accuracies and the results confirmed that the proposed ANFIS model has potential in classifying the EEG signals.",
"title": ""
}
] |
[
{
"docid": "ac5c015aa485084431b8dba640f294b5",
"text": "In human sentence processing, cognitive load can be defined many ways. This report considers a definition of cognitive load in terms of the total probability of structural options that have been disconfirmed at some point in a sentence: the surprisal of word wi given its prefix w0...i−1 on a phrase-structural language model. These loads can be efficiently calculated using a probabilistic Earley parser (Stolcke, 1995) which is interpreted as generating predictions about reading time on a word-by-word basis. Under grammatical assumptions supported by corpusfrequency data, the operation of Stolcke’s probabilistic Earley parser correctly predicts processing phenomena associated with garden path structural ambiguity and with the subject/object relative asymmetry.",
"title": ""
},
{
"docid": "fb426b89d1a65c597d190582393254eb",
"text": "The amount of data of all kinds available electronically has increased dramatically in recent years. The data resides in different forms, ranging from unstructured data in file systems to highly structured in relational database systems. Data is accessible through a variety of interfaces including Web browsers, database query languages, application-specific interfaces, or data exchange formats. Some of this data is raw data, e.g., images or sound. Some of it has structure even if the structure is often implicit, and not as rigid or regular as that found in standard database systems. Sometimes the structure exists but has to be extracted from the data. Sometimes also it exists but we prefer to ignore it for certain purposes such as browsing. We call here semi-structured data this data that is (from a particular viewpoint) neither raw data nor strictly typed, i.e., not table-oriented as in a relational model or sorted-graph as in object databases. As will be seen later when the notion of semi-structured data is more precisely defined, the need for semi-structured data arises naturally in the context of data integration, even when the data sources are themselves well-structured. Although data integration is an old topic, the need to integrate a wider variety of data formats (e.g., SGML or ASN.1 data) and data found on the Web has brought the topic of semi-structured data to the forefront of research. The main purpose of the paper is to isolate the essential aspects of semi-structured data. We also survey some proposals of models and query languages for semi-structured data. In particular, we consider recent works at Stanford U. and U. Penn on semi-structured data. In both cases, the motivation is found in the integration of heterogeneous data. The \"lightweight\" data models they use (based on labelled graphs) are very similar. As we shall see, the topic of semi-structured data has no precise boundary. Furthermore, a theory of semi-structured data is still missing. We will try to highlight some important issues in this context. The paper is organized as follows. In Section 2, we discuss the particularities of semi-structured data. In Section 3, we consider the issue of the data structure and in Section 4, the issue of the query language.",
"title": ""
},
{
"docid": "e8a144ec1c58f8fa07b518a754d97fc7",
"text": "Smart Cities appeared in literature in late ‘90s and various approaches have been developed so far. Until today, smart city does not describe a city with particular attributes but it is used to describe different cases in urban spaces: web portals that virtualize cities or city guides; knowledge bases that address local needs; agglomerations with Information and Communication Technology (ICT) infrastructure that attract business relocation; metropolitan-wide ICT infrastructures that deliver e-services to the citizens; ubiquitous environments; and recently ICT infrastructure for ecological use. Researchers, practicians, businessmen and policy makers consider smart city from different perspectives and most of them agree on a model that measures urban economy, mobility, environment, living, people and governance. On the other hand, ICT and construction industries stress to capitalize smart city and a new market seems to be generated in this domain. This chapter aims to perform a literature review, discover and classify the particular schools of thought, universities and research centres as well as companies that deal with smart city domain and discover alternative approaches, models, architecture and frameworks with this regard.",
"title": ""
},
{
"docid": "4597ab07ac630eb5e256f57530e2828e",
"text": "This paper presents novel QoS extensions to distributed control plane architectures for multimedia delivery over large-scale, multi-operator Software Defined Networks (SDNs). We foresee that large-scale SDNs shall be managed by a distributed control plane consisting of multiple controllers, where each controller performs optimal QoS routing within its domain and shares summarized (aggregated) QoS routing information with other domain controllers to enable inter-domain QoS routing with reduced problem dimensionality. To this effect, this paper proposes (i) topology aggregation and link summarization methods to efficiently acquire network topology and state information, (ii) a general optimization framework for flow-based end-to-end QoS provision over multi-domain networks, and (iii) two distributed control plane designs by addressing the messaging between controllers for scalable and secure inter-domain QoS routing. We apply these extensions to streaming of layered videos and compare the performance of different control planes in terms of received video quality, communication cost and memory overhead. Our experimental results show that the proposed distributed solution closely approaches the global optimum (with full network state information) and nicely scales to large networks.",
"title": ""
},
{
"docid": "894eac11da60a5d81c437b3953d16408",
"text": "Abstraction Levels: Behavior (Function), Structure (Netlist), Physical (Layout); Logic, Circuit, Processor, System",
"title": ""
},
{
"docid": "5a573ae9fad163c6dfe225f59b246b7f",
"text": "The sharp increase of plastic wastes results in great social and environmental pressures, and recycling, as an effective way currently available to reduce the negative impacts of plastic wastes, represents one of the most dynamic areas in the plastics industry today. Froth flotation is a promising method to solve the key problem of recycling process, namely separation of plastic mixtures. This review surveys recent literature on plastics flotation, focusing on specific features compared to ores flotation, strategies, methods and principles, flotation equipments, and current challenges. In terms of separation methods, plastics flotation is divided into gamma flotation, adsorption of reagents, surface modification and physical regulation.",
"title": ""
},
{
"docid": "a0fc4982c5d63191ab1b15deff4e65d6",
"text": "Sentiment classification is an important subject in text mining research, which concerns the application of automatic methods for predicting the orientation of sentiment present on text documents, with many applications on a number of areas including recommender and advertising systems, customer intelligence and information retrieval. In this paper, we provide a survey and comparative study of existing techniques for opinion mining including machine learning and lexicon-based approaches, together with evaluation metrics. Also cross-domain and cross-lingual approaches are explored. Experimental results show that supervised machine learning methods, such as SVM and naive Bayes, have higher precision, while lexicon-based methods are also very competitive because they require few effort in human-labeled document and isn't sensitive to the quantity and quality of the training dataset.",
"title": ""
},
{
"docid": "7eeb2bf2aaca786299ebc8507482e109",
"text": "In this paper we argue that question-answering (QA) over technical domains is distinctly different from TREC-based QA or Web-based QA and it cannot benefit from data-intensive approaches. Technical questions arise in situations where concrete problems require specific answers and explanations. Finding a justification of the answer in the context of the document is essential if we have to solve a real-world problem. We show that NLP techniques can be used successfully in technical domains for high-precision access to information stored in documents. We present ExtrAns, an answer extraction system over technical domains, its architecture, its use of logical forms for answer extractions and how terminology extraction becomes an important part of the system.",
"title": ""
},
{
"docid": "47dcffdb6d8543034784bebabf3a17a9",
"text": "This research tends to explore relationship between brand equity as a whole construct comprising (brand association & brand awareness, perceived service quality and service loyalty) with purchase intention. Questionnaire has been designed from previous research settings and modified according to Pakistani context in order to ensure validity and reliability of the developed instrument. Convenience sampling comprising a sample size of 150 (non-student) has been taken in this research. Research type is causal correlational and cross sectional in nature. In order to accept or reject hypothesis correlation and regression techniques were applied. Results indicated significant and positive relationship between brand equity and purchase intention, while partial mediation has been proved for brand performance. Only three dimensions of brand equity (perceived service quality, brand association & awareness and service loyalty) have been measured. Other dimensions as brand personality have been ignored. English not being the primary language may have hampered the response rate. As far as the practical implications are concerned practitioners can get benefits from this research as the contribution of brand equity has more than 50% towards purchase intention.",
"title": ""
},
{
"docid": "62309d3434c39ea5f9f901f8eb635539",
"text": "The flap design according to Karaca et al., used during surgery for removal of impacted third molars, prevents complications related to second molar periodontal status [125]. Suarez et al. believe that this design influences primary healing [122], which prevents wound dehiscence, and evaluated the suture technique used to achieve this closure. Sanchis et al. [124] believe that primary closure avoids drainage of the socket and worse postoperative inflammation and pain, and others chose to place drains, obtaining a less painful postoperative course [127].",
"title": ""
},
{
"docid": "2f5d428b8da4d5b5009729fc1794e53d",
"text": "The resolution of a synthetic aperture radar (SAR) image, in range and azimuth, is determined by the transmitted bandwidth and the synthetic aperture length, respectively. Various superresolution techniques for improving resolution have been proposed, and we have proposed an algorithm that we call polarimetric bandwidth extrapolation (PBWE). To apply PBWE to a radar image, one needs to first apply PBWE in the range direction and then in the azimuth direction, or vice versa . In this paper, PBWE is further extended to the 2-D case. This extended case (2D-PBWE) utilizes a 2-D polarimetric linear prediction model and expands the spatial frequency bandwidth in range and azimuth directions simultaneously. The performance of the 2D-PBWE is shown through a simulated radar image and a real polarimetric SAR image",
"title": ""
},
{
"docid": "59616ff3673ecfab0ff6e8224bb87f9c",
"text": "The tremendous growth in wireless Internet use is showing no signs of slowing down. Existing cellular networks are starting to be insufficient in meeting this demand, in part due to their inflexible and expensive equipment as well as complex and non-agile control plane. Software-defined networking is emerging as a natural solution for next generation cellular networks as it enables further network function virtualization opportunities and network programmability. In this article, we advocate an all-SDN network architecture with hierarchical network control capabilities to allow for different grades of performance and complexity in offering core network services and provide service differentiation for 5G systems. As a showcase of this architecture, we introduce a unified approach to mobility, handoff, and routing management and offer connectivity management as a service (CMaaS). CMaaS is offered to application developers and over-the-top service providers to provide a range of options in protecting their flows against subscriber mobility at different price levels.",
"title": ""
},
{
"docid": "cf02044b2f0c02fff666282a6e1bf68e",
"text": "A rapid method for the measurement of serum and/or plasma, lipid-associated sialic acid levels has been developed. This test has been applied to 850 human sera of which 670 came from patients with nine categories of malignant disease, 80 from persons with benign disorders, and 100 from normal individuals. Lipid-associated sialic acid concentrations were found to be significantly increased (p less than 0.001) in all groups of cancer patients as compared to both those with benign diseases and normal controls. Test sensitivity in the detection of cancer ranged from 77 to 97%. Specificity was, respectively, 81 and 93% for the benign and normal groups. In small samples of patients, no association between test values and tumor burden was found. This test compares favorably with the most widely used tumor marker test, that for carcinoembryonic antigen.",
"title": ""
},
{
"docid": "a671c6eff981b5e3a0466e53f22c4521",
"text": "This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate adversarial risk as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as obscurity to an adversary, and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.",
"title": ""
},
{
"docid": "07c817e8c2e2d195d56621d7031850ac",
"text": "Traditionally, a full-mouth rehabilitation based on full-crown coverage has been recommended treatment for patients affected by severe dental erosion. Nowadays, thanks to improved adhesive techniques, the indications for crowns have decreased and a more conservative approach may be proposed. Even though adhesive treatments simplify both the clinical and laboratory procedures, restoring such patients still remains a challenge due to the great amount of tooth destruction. To facilitate the clinician's task during the planning and execution of a full-mouth adhesive rehabilitation, an innovative concept has been developed: the three-step technique. Three laboratory steps are alternated with three clinical steps, allowing the clinician and the laboratory technician to constantly interact to achieve the most predictable esthetic and functional outcome. During the first step, an esthetic evaluation is performed to establish the position of the plane of occlusion. In the second step, the patient's posterior quadrants are restored at an increased vertical dimension. Finally, the third step reestablishes the anterior guidance. Using the three-step technique, the clinician can transform a full-mouth rehabilitation into a rehabilitation for individual quadrants. The present article focuses on the second step, explaining all the laboratory and clinical steps necessary to restore the posterior quadrants with a defined occlusal scheme at an increased vertical dimension. A brief summary of the first step is also included.",
"title": ""
},
{
"docid": "0a05cfa04d520fcf1db6c4aafb9b65b6",
"text": "Motor learning can be defined as changing performance so as to optimize some function of the task, such as accuracy. The measure of accuracy that is optimized is called a loss function and specifies how the CNS rates the relative success or cost of a particular movement outcome. Models of pointing in sensorimotor control and learning usually assume a quadratic loss function in which the mean squared error is minimized. Here we develop a technique for measuring the loss associated with errors. Subjects were required to perform a task while we experimentally controlled the skewness of the distribution of errors they experienced. Based on the change in the subjects' average performance, we infer the loss function. We show that people use a loss function in which the cost increases approximately quadratically with error for small errors and significantly less than quadratically for large errors. The system is thus robust to outliers. This suggests that models of sensorimotor control and learning that have assumed minimizing squared error are a good approximation but tend to penalize large errors excessively.",
"title": ""
},
{
"docid": "efd2843175ad0b860ad1607f337addc5",
"text": "We demonstrate the usefulness of the uniform resource locator (URL) alone in performing web page classification. This approach is faster than typical web page classification, as the pages do not have to be fetched and analyzed. Our approach segments the URL into meaningful chunks and adds component, sequential and orthographic features to model salient patterns. The resulting features are used in supervised maximum entropy modeling. We analyze our approach's effectiveness on two standardized domains. Our results show that in certain scenarios, URL-based methods approach the performance of current state-of-the-art full-text and link-based methods.",
"title": ""
},
{
"docid": "32f49e1ec3ac3cdd435111e7cfa146bd",
"text": "Semantic lexicons such as WordNet and PPDB have been used to improve the vector-based semantic representations of words by adjusting the word vectors. However, such lexicons lack semantic intensity information, inhibiting adjustment of vector spaces to better represent semantic intensity scales. In this work, we adjust word vectors using the semantic intensity information in addition to synonyms and antonyms from WordNet and PPDB, and show improved performance on judging semantic intensity orders of adjective pairs on three different human annotated datasets.",
"title": ""
},
{
"docid": "3e0a731c76324ad0cea438a1d9907b68",
"text": "Due in large measure to the prodigious research efforts of Rhoades and his colleagues at the George E. Brown, Jr., Salinity Laboratory over the past two decades, soil electrical conductivity (EC), measured using electrical resistivity and electromagnetic induction (EM), is among the most useful and easily obtained spatial properties of soil that influences crop productivity. As a result, soil EC has become one of the most frequently used measurements to characterize field variability for application to precision agriculture. The value of spatial measurements of soil EC to precision agriculture is widely acknowledged, but soil EC is still often misunderstood and misinterpreted. To help clarify misconceptions, a general overview of the application of soil EC to precision agriculture is presented. The following areas are discussed with particular emphasis on spatial EC measurements: a brief history of the measurement of soil salinity with EC, the basic theories and principles of the soil EC measurement and what it actually measures, an overview of the measurement of soil salinity with various EC measurement techniques and equipment (specifically, electrical resistivity with the Wenner array and EM), examples of spatial EC surveys and their interpretation, applications and value of spatial measurements of soil EC to precision agriculture, and current and future developments. Precision agriculture is an outgrowth of technological developments, such as the soil EC measurement, which facilitate a spatial understanding of soil–water–plant relationships. The future of precision agriculture rests on the reliability, reproducibility, and understanding of these technologies. The predominant mechanism causing the salt accumulation in irrigated agricultural soils is evapotranspiration. The salt contained in the irrigation water is left behind in the soil as the pure water passes back to the atmosphere through the processes of evaporation and plant transpiration. The effects of salinity are manifested in loss of stand, reduced rates of plant growth, reduced yields, and in severe cases, total crop failure (Rhoades and Loveday, 1990). Salinity limits water uptake by plants by reducing the osmotic potential and thus the total soil water potential. Salinity may also cause specific ion toxicity or upset the nutritional balance. In addition, the salt composition of the soil water influences the composition of cations on the exchange complex of soil particles, which influences soil permeability and tilth, depending on salinity level and exchangeable cation composition. Aside from decreasing crop yield and impacting soil hydraulics, salinity can detrimentally impact ground water, and in areas where tile drainage occurs, drainage water can become a disposal problem as demonstrated in the southern San Joaquin Valley of central California. From a global perspective, irrigated agriculture makes an essential contribution to the food needs of the world. While only 15% of the world’s farmland is irrigated, roughly 35 to 40% of the total supply of food and fiber comes from irrigated agriculture (Rhoades and Loveday, 1990). However, vast areas of irrigated land are threatened by salinization. Although accurate worldwide data are not available, it is estimated that roughly half of all existing irrigation systems (totaling about 250 million ha) are affected by salinity and waterlogging (Rhoades and Loveday, 1990). Salinity within irrigated soils clearly limits productivity in vast areas of the USA and other parts of the world. It is generally accepted that the extent of salt-affected soil is increasing. In spite of the fact that salinity buildup on irrigated lands is responsible for the declining resource base for agriculture, we do not know the exact extent to which soils in our country are salinized, the degree to which productivity is being reduced by salinity, the increasing or decreasing trend in soil salinity development, and the location of contributory sources of salt loading to ground and drainage waters. Suitable soil inventories do not exist and until recently, neither did practical techniques to monitor salinity or assess the Published in Agron. J. 95:455–471 (2003).",
"title": ""
},
{
"docid": "436900539406faa9ff34c1af12b6348d",
"text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic prqfiiples and assumptions underlying the PATH work are identified, ‘followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.",
"title": ""
}
] |
scidocsrr
|
a510d90536db787fcd5133959a390a74
|
Text Summarization within the Latent Semantic Analysis Framework: Comparative Study
|
[
{
"docid": "64fc1433249bb7aba59e0a9092aeee5e",
"text": "In this paper, we propose two generic text summarization methods that create text summaries by ranking and extracting sentences from the original documents. The first method uses standard IR methods to rank sentence relevances, while the second method uses the latent semantic analysis technique to identify semantically important sentences, for summary creations. Both methods strive to select sentences that are highly ranked and different from each other. This is an attempt to create a summary with a wider coverage of the document's main content and less redundancy. Performance evaluations on the two summarization methods are conducted by comparing their summarization outputs with the manual summaries generated by three independent human evaluators. The evaluations also study the influence of different VSM weighting schemes on the text summarization performances. Finally, the causes of the large disparities in the evaluators' manual summarization results are investigated, and discussions on human text summarization patterns are presented.",
"title": ""
},
{
"docid": "f4380a5acaba5b534d13e1a4f09afe4f",
"text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.",
"title": ""
},
{
"docid": "9747be055df9acedfdfe817eb7e1e06e",
"text": "Text summarization solves the problem of extracting important information from huge amount of text data. There are various methods in the literature that aim to find out well-formed summaries. One of the most commonly used methods is the Latent Semantic Analysis (LSA). In this paper, different LSA based summarization algorithms are explained and two new LSA based summarization algorithms are proposed. The algorithms are evaluated on Turkish documents, and their performances are compared using their ROUGE-L scores. One of our algorithms produces the best scores.",
"title": ""
}
] |
[
{
"docid": "d52efc862c68ec09a5ae3395464996ed",
"text": "The growth of digital video has given rise to a need for computational methods for evaluating the visual quality of digital video. We have developed a new digital video quality metric, which we call DVQ (Digital Video Quality). Here we provide a brief description of the metric, and give a preliminary report on its performance. DVQ accepts a pair of digital video sequences, and computes a measure of the magnitude of the visible difference between them. The metric is based on the Discrete Cosine Transform. It incorporates aspects of early visual processing, including light adaptation, luminance and chromatic channels, spatial and temporal filtering, spatial frequency channels, contrast masking, and probability summation. It also includes primitive dynamics of light adaptation and contrast masking. We have applied the metric to digital video sequences corrupted by various typical compression artifacts, and compared the results to quality ratings made by human observers.",
"title": ""
},
{
"docid": "c0ec2818c7f34359b089acc1df5478c6",
"text": "Methods We searched Medline from Jan 1, 2009, to Nov 19, 2013, limiting searches to phase 3, randomised trials of patients with atrial fi brillation who were randomised to receive new oral anticoagulants or warfarin, and trials in which both effi cacy and safety outcomes were reported. We did a prespecifi ed meta-analysis of all 71 683 participants included in the RE-LY, ROCKET AF, ARISTOTLE, and ENGAGE AF–TIMI 48 trials. The main outcomes were stroke and systemic embolic events, ischaemic stroke, haemorrhagic stroke, all-cause mortality, myocardial infarction, major bleeding, intracranial haemorrhage, and gastrointestinal bleeding. We calculated relative risks (RRs) and 95% CIs for each outcome. We did subgroup analyses to assess whether diff erences in patient and trial characteristics aff ected outcomes. We used a random-eff ects model to compare pooled outcomes and tested for heterogeneity.",
"title": ""
},
{
"docid": "7cc94fa6dbad97f11b2da591936a73ee",
"text": "\n Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through team work. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.\n",
"title": ""
},
{
"docid": "e912abc2da4eb1158c6a6c84245d13f8",
"text": "Social media hype has created a lot of speculation among educators on how these media can be used to support learning, but there have been rather few studies so far. Our explorative interview study contributes by critically exploring how campus students perceive using social media to support their studies and the perceived benefits and limitations compared with other means. Although the vast majority of the respondents use social media frequently, a “digital dissonance” can be noted, because few of them feel that they use such media to support their studies. The interviewees mainly put forth e-mail and instant messaging, which are used among students to ask questions, coordinate group work and share files. Some of them mention using Wikipedia and YouTube for retrieving content and Facebook to initiate contact with course peers. Students regard social media as one of three key means of the educational experience, alongside face-to-face meetings and using the learning management systems, and are mainly used for brief questions and answers, and to coordinate group work. In conclusion, we argue that teaching strategy plays a key role in supporting students in moving from using social media to support coordination and information retrieval to also using such media for collaborative learning, when appropriate.",
"title": ""
},
{
"docid": "c6b32d5182842b1bd933de186a47d326",
"text": "Grouping of strokes into semantically meaningful diagram elements is a difficult problem. Yet such grouping is needed if truly natural sketching is to be supported in intelligent sketch tools. Using a machine learning approach, we propose a number of new paired-stroke features for grouping and evaluate the suitability of a range of algorithms. Our evaluation shows the new features and algorithms produce promising results that are statistically better than the existing machine learning grouper.",
"title": ""
},
{
"docid": "a06c9d681bb8a8b89a8ee64a53e3b344",
"text": "This paper introduces CIEL, a universal execution engine for distributed data-flow programs. Like previous execution engines, CIEL masks the complexity of distributed programming. Unlike those systems, a CIEL job can make data-dependent control-flow decisions, which enables it to compute iterative and recursive algorithms. We have also developed Skywriting, a Turingcomplete scripting language that runs directly on CIEL. The execution engine provides transparent fault tolerance and distribution to Skywriting scripts and highperformance code written in other programming languages. We have deployed CIEL on a cloud computing platform, and demonstrate that it achieves scalable performance for both iterative and non-iterative algorithms.",
"title": ""
},
{
"docid": "ba4637dd5033fa39d1cb09edb42481ec",
"text": "In this paper we introduce a framework for best first search of minimax trees. Existing best first algorithms like SSS* and DUAL* are formulated as instances of this framework. The framework is built around the Alpha-Beta procedure. Its instances are highly practical, and readily implementable. Our reformulations of SSS* and DUAL* solve the perceived drawbacks of these algorithms. We prove their suitability for practical use by presenting test results with a tournament level chess program. In addition to reformulating old best first algorithms, we introduce an improved instance of the framework: MTD(ƒ). This new algorithm outperforms NegaScout, the current algorithm of choice of most chess programs. Again, these are not simulation results, but results of tests with an actual chess program, Phoenix.",
"title": ""
},
{
"docid": "5527521d567290192ea26faeb6e7908c",
"text": "With the rapid development of spectral imaging techniques, classification of hyperspectral images (HSIs) has attracted great attention in various applications such as land survey and resource monitoring in the field of remote sensing. A key challenge in HSI classification is how to explore effective approaches to fully use the spatial–spectral information provided by the data cube. Multiple kernel learning (MKL) has been successfully applied to HSI classification due to its capacity to handle heterogeneous fusion of both spectral and spatial features. This approach can generate an adaptive kernel as an optimally weighted sum of a few fixed kernels to model a nonlinear data structure. In this way, the difficulty of kernel selection and the limitation of a fixed kernel can be alleviated. Various MKL algorithms have been developed in recent years, such as the general MKL, the subspace MKL, the nonlinear MKL, the sparse MKL, and the ensemble MKL. The goal of this paper is to provide a systematic review of MKL methods, which have been applied to HSI classification. We also analyze and evaluate different MKL algorithms and their respective characteristics in different cases of HSI classification cases. Finally, we discuss the future direction and trends of research in this area.",
"title": ""
},
{
"docid": "3dd36e800bc9135c59f04dfa1d1e5f42",
"text": "A gamma radiation-resistant, Gram reaction-positive, aerobic and chemoorganotrophic actinobacterium, initially designated Geodermatophilus obscurus subsp. dictyosporus G-5T, was not validly named at the time of initial publication (1968). G-5T formed black-colored colonies on GYM agar. The optimal growth range was 25–35 °C, at pH 6.5–9.5 and in the absence of NaCl. Chemotaxonomic and molecular characteristics of the isolate matched those described for members of the genus Geodermatophilus. The DNA G + C content of the strain was 75.3 mol %. The peptidoglycan contained meso-diaminopimelic acid as diagnostic diamino acid. The main polar lipids were phosphatidylcholine, diphosphatidylglycerol, phosphatidylinositol, phosphatidylethanolamine and one unspecified glycolipid; MK-9(H4) was the dominant menaquinone and galactose was detected as a diagnostic sugar. The major cellular fatty acids were branched-chain saturated acids, iso-C16:0 and iso-C15:0. The 16S rRNA gene showed 94.8–98.4 % sequence identity with the members of the genus Geodermatophilus. Based on phenotypic results and 16S rRNA gene sequence analysis, strain G-5T is proposed to represent a novel species, Geodermatophilus dictyosporus and the type strain is G-5T (=DSM 43161T = CCUG 62970T = MTCC 11558T = ATCC 25080T = CBS 234.69T = IFO 13317T = KCC A-0154T = NBRC 13317T). The INSDC accession number is HF970584.",
"title": ""
},
{
"docid": "81fa6a7931b8d5f15d55316a6ed1d854",
"text": "The objective of the study is to compare skeletal and dental changes in class II patients treated with fixed functional appliances (FFA) that pursue different biomechanical concepts: (1) FMA (Functional Mandibular Advancer) from first maxillary molar to first mandibular molar through inclined planes and (2) Herbst appliance from first maxillary molar to lower first bicuspid through a rod-and-tube mechanism. Forty-two equally distributed patients were treated with FMA (21) and Herbst appliance (21), following a single-step advancement protocol. Lateral cephalograms were available before treatment and immediately after removal of the FFA. The lateral cephalograms were analyzed with customized linear measurements. The actual therapeutic effect was then calculated through comparison with data from a growth survey. Additionally, the ratio of skeletal and dental contributions to molar and overjet correction for both FFA was calculated. Data was analyzed by means of one-sample Student’s t tests and independent Student’s t tests. Statistical significance was set at p < 0.05. Although differences between FMA and Herbst appliance were found, intergroup comparisons showed no statistically significant differences. Almost all measurements resulted in comparable changes for both appliances. Statistically significant dental changes occurred with both appliances. Dentoalveolar contribution to the treatment effect was ≥70%, thus always resulting in ≤30% for skeletal alterations. FMA and Herbst appliance usage results in comparable skeletal and dental treatment effects despite different biomechanical approaches. Treatment leads to overjet and molar relationship correction that is mainly caused by significant dentoalveolar changes.",
"title": ""
},
{
"docid": "ed0f4616a36a2dffb6120bccd7539d0c",
"text": "Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to \"model-free\" and \"model-based\" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, do not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand.",
"title": ""
},
{
"docid": "51db8011d3dfd60b7808abc6868f7354",
"text": "Security issue in cloud environment is one of the major obstacle in cloud implementation. Network attacks make use of the vulnerability in the network and the protocol to damage the data and application. Cloud follows distributed technology; hence it is vulnerable for intrusions by malicious entities. Intrusion detection systems (IDS) has become a basic component in network protection infrastructure and a necessary method to defend systems from various attacks. Distributed denial of service (DDoS) attacks are a great problem for a user of computers linked to the Internet. Data mining techniques are widely used in IDS to identify attacks using the network traffic. This paper presents and evaluates a Radial basis function neural network (RBF-NN) detector to identify DDoS attacks. Many of the training algorithms for RBF-NNs start with a predetermined structure of the network that is selected either by means of a priori knowledge or depending on prior experience. The resultant network is frequently inadequate or needlessly intricate and a suitable network structure could be configured only by trial and error method. This paper proposes Bat algorithm (BA) to configure RBF-NN automatically. Simulation results demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "99e1ae882a1b74ffcbe5e021eb577e49",
"text": "This paper studies the problem of recognizing gender from full body images. This problem has not been addressed before, partly because of the variant nature of human bodies and clothing that can bring tough difficulties. However, gender recognition has high application potentials, e.g. security surveillance and customer statistics collection in restaurants, supermarkets, and even building entrances. In this paper, we build a system of recognizing gender from full body images, taken from frontal or back views. Our contributions are three-fold. First, to handle the variety of human body characteristics, we represent each image by a collection of patch features, which model different body parts and provide a set of clues for gender recognition. To combine the clues, we build an ensemble learning algorithm from those body parts to recognize gender from fixed view body images (frontal or back). Second, we relax the fixed view constraint and show the possibility to train a flexible classifier for mixed view images with the almost same accuracy as the fixed view case. At last, our approach is shown to be robust to small alignment errors, which is preferred in many applications.",
"title": ""
},
{
"docid": "0738367dec2b7f1c5687ce1a15c8ac28",
"text": "There is a high demand for qualified information and communication technology (ICT) practitioners in the European labour market, but the problem at many universities is a high dropout rate among ICT students, especially during the first study year. The solution might be to focus more on improving students’ computational thinking (CT) before starting university studies. Therefore, research is needed to find the best methods for learning CT already at comprehensive school level to raise the interest in and awareness of studying computer science. Doing so requires a clear understanding of CT and a model to improve it at comprehensive schools. Through the analysis of the articles found in EBSCO Discovery Search tool, this study gives an overview of the definition of CT and presents three models of CT. The models are analysed to find out their similarities and differences in order to gather together the core elements of CT and form a revised model of learning CT in comprehensive school ICT lessons or integrating CT in other subjects.",
"title": ""
},
{
"docid": "f2521fbfd566fcf31b5810695e748ba0",
"text": "A facile approach for coating red fluoride phosphors with a moisture-resistant alkyl phosphate layer with a thickness of 50-100 nm is reported. K2 SiF6 :Mn(4+) particles were prepared by co-precipitation and then coated by esterification of P2 O5 with alcohols (methanol, ethanol, and isopropanol). This route was adopted to encapsulate the prepared phosphors using transition-metal ions as cross-linkers between the alkyl phosphate moieties. The coated phosphor particles exhibited a high water tolerance and retained approximately 87 % of their initial external quantum efficiency after aging under high-humidity (85 %) and high-temperature (85 °C) conditions for one month. Warm white-light-emitting diodes that consisted of blue InGaN chips, the prepared K2 SiF6 :Mn(4+) phosphors, and either yellow Y3 Al5 O12 :Ce(3+) phosphors or green β-SiAlON: Eu(2+) phosphors showed excellent color rendition.",
"title": ""
},
{
"docid": "5deae44a9c14600b1a2460836ed9572d",
"text": "Grasping an object in a cluttered, unorganized environment is challenging because of unavoidable contacts and interactions between the robot and multiple immovable (static) and movable (dynamic) obstacles in the environment. Planning an approach trajectory for grasping in such situations can benefit from physics-based simulations that describe the dynamics of the interaction between the robot manipulator and the environment. In this work, we present a physics-based trajectory optimization approach for planning grasp approach trajectories. We present novel cost objectives and identify failure modes relevant to grasping in cluttered environments. Our approach uses rollouts of physics-based simulations to compute the gradient of the objective and of the dynamics. Our approach naturally generates behaviors such as choosing to push objects that are less likely to topple over, recognizing and avoiding situations which might cause a cascade of objects to fall over, and adjusting the manipulator trajectory to push objects aside in a direction orthogonal to the grasping direction. We present results in simulation for grasping in a variety of cluttered environments with varying levels of density of obstacles in the environment. Our experiments in simulation indicate that our approach outperforms a baseline approach that considers multiple straight-line trajectories modified to account for static obstacles by an aggregate success rate of 14% with varying degrees of object clutter.",
"title": ""
},
{
"docid": "90faa9a8dc3fd87614a61bfbdf24cab6",
"text": "The methods proposed recently for specializing word embeddings according to a particular perspective generally rely on external knowledge. In this article, we propose Pseudofit, a new method for specializing word embeddings according to semantic similarity without any external knowledge. Pseudofit exploits the notion of pseudo-sense for building several representations for each word and uses these representations for making the initial embeddings more generic. We illustrate the interest of Pseudofit for acquiring synonyms and study several variants of Pseudofit according to this perspective.",
"title": ""
},
{
"docid": "24b5c8aee05ac9be61d9217a49e3d3b0",
"text": "People have different intents in using online platforms. They may be trying to accomplish specific, short-term goals, or less well-defined, longer-term goals. While understanding user intent is fundamental to the design and personalization of online platforms, little is known about how intent varies across individuals, or how it relates to their behavior. Here, we develop a framework for understanding intent in terms of goal specificity and temporal range. Our methodology combines survey-based methodology with an observational analysis of user activity. Applying this framework to Pinterest, we surveyed nearly 6000 users to quantify their intent, and then studied their subsequent behavior on the web site. We find that goal specificity is bimodal – users tend to be either strongly goal-specific or goalnonspecific. Goal-specific users search more and consume less content in greater detail than goal-nonspecific users: they spend more time using Pinterest, but are less likely to return in the near future. Users with short-term goals are also more focused and more likely to refer to past saved content than users with long-term goals, but less likely to save content for the future. Further, intent can vary by demographic, and with the topic of interest. Last, we show that user’s intent and activity are intimately related by building a model that can predict a user’s intent for using Pinterest after observing their activity for only two minutes. Altogether, this work shows how intent can be predicted from user behavior.",
"title": ""
},
{
"docid": "b5c15cbfdf35aabe7d8f2f237ecd4de6",
"text": "In this correspondence, we investigate the physical layer security for cooperative nonorthogonal multiple access (NOMA) systems, where both amplify-and-forward (AF) and decode-and-forward (DF) protocols are considered. More specifically, some analytical expressions are derived for secrecy outage probability (SOP) and strictly positive secrecy capacity. Results show that AF and DF almost achieve the same secrecy performance. Moreover, asymptotic results demonstrate that the SOP tends to a constant at high signal-to-noise ratio. Finally, our results show that the secrecy performance of considered NOMA systems is independent of the channel conditions between the relay and the poor user.",
"title": ""
},
{
"docid": "c6c9643816533237a29dd93fd420018f",
"text": "We present an algorithm for finding a meaningful vertex-to-vertex correspondence between two 3D shapes given as triangle meshes. Our algorithm operates on embeddings of the two shapes in the spectral domain so as to normalize them with respect to uniform scaling and rigid-body transformation. Invariance to shape bending is achieved by relying on geodesic point proximities on a mesh to capture its shape. To deal with stretching, we propose to use non-rigid alignment via thin-plate splines in the spectral domain. This is combined with a refinement step based on the geodesic proximities to improve dense correspondence. We show empirically that our algorithm outperforms previous spectral methods, as well as schemes that compute correspondence in the spatial domain via non-rigid iterative closest points or the use of local shape descriptors, e.g., 3D shape context",
"title": ""
}
] |
scidocsrr
|
e2323ba98c3bcdcf7b8d3e403af205da
|
What makes great teaching ? Review of the underpinning research
|
[
{
"docid": "983ec9cdd75d0860c96f89f3c9b2f752",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "698dca642840f47081b1e9a54775c5cc",
"text": "Background: Many popular educational programmes claim to be ‘brain-based’, despite pleas from the neuroscience community that these neuromyths do not have a basis in scientific evidence about the brain. Purpose: The main aim of this paper is to examine several of the most popular neuromyths in the light of the relevant neuroscientific and educational evidence. Examples of neuromyths include: 10% brain usage, leftand right-brained thinking, VAK learning styles and multiple intelligences Sources of evidence: The basis for the argument put forward includes a literature review of relevant cognitive neuroscientific studies, often involving neuroimaging, together with several comprehensive education reviews of the brain-based approaches under scrutiny. Main argument: The main elements of the argument are as follows. We use most of our brains most of the time, not some restricted 10% brain usage. This is because our brains are densely interconnected, and we exploit this interconnectivity to enable our primitively evolved primate brains to live in our complex modern human world. Although brain imaging delineates areas of higher (and lower) activation in response to particular tasks, thinking involves coordinated interconnectivity from both sides of the brain, not separate leftand right-brained thinking. High intelligence requires higher levels of inter-hemispheric and other connected activity. The brain’s interconnectivity includes the senses, especially vision and hearing. We do not learn by one sense alone, hence VAK learning styles do not reflect how our brains actually learn, nor the individual differences we observe in classrooms. Neuroimaging studies do not support multiple intelligences; in fact, the opposite is true. Through the activity of its frontal cortices, among other areas, the human brain seems to operate with general intelligence, applied to multiple areas of endeavour. 
Studies of educational effectiveness of applying any of these ideas in the classroom have failed to find any educational benefits. Conclusions: The main conclusions arising from the argument are that teachers should seek independent scientific validation before adopting brain-based products in their classrooms. A more sceptical approach to educational panaceas could contribute to an enhanced professionalism of the field.",
"title": ""
},
{
"docid": "6c4a7a6d21c85f3f2f392fbb1621cc51",
"text": "The International Academy of Education (IAE) is a not-for-profit scientific association that promotes educational research, and its dissemination and implementation. Founded in 1986, the Academy is dedicated to strengthening the contributions of research, solving critical educational problems throughout the world, and providing better communication among policy makers, researchers, and practitioners. The general aim of the IAE is to foster scholarly excellence in all fields of education. Towards this end, the Academy provides timely syntheses of research-based evidence of international importance. The Academy also provides critiques of research and of its evidentiary basis and its application to policy. This booklet about teacher professional learning and development has been prepared for inclusion in the Educational Practices Series developed by the International Academy of Education and distributed by the International Bureau of Education and the Academy. As part of its mission, the Academy provides timely syntheses of research on educational topics of international importance. This is the eighteenth in a series of booklets on educational practices that generally improve learning. This particular booklet is based on a synthesis of research evidence produced for the New Zealand Ministry of Education's Iterative Best Evidence Synthesis (BES) Programme, which is designed to be a catalyst for systemic improvement and sustainable development in education. This synthesis, and others in the series, are available electronically at www.educationcounts.govt.nz/themes/BES. All BESs are written using a collaborative approach that involves the writers, teacher unions, principal groups, teacher educators, academics, researchers, policy advisers, and other interested parties. To ensure its rigour and usefulness, each BES follows national guidelines developed by the Ministry of Education. Professor Helen Timperley was lead writer for the Teacher Professional Learning and Development: Best Evidence Synthesis Iteration [BES], assisted by teacher educators Aaron Wilson and Heather Barrar and research assistant Irene Fung, all of the University of Auckland. The BES is an analysis of 97 studies of professional development that led to improved outcomes for the students of the participating teachers. Most of these studies came from the United States, New Zealand, the Netherlands, the United Kingdom, Canada, and Israel. Dr Lorna Earl provided formative quality assurance for the synthesis; Professor John Hattie and Dr Gavin Brown oversaw the analysis of effect sizes. Helen Timperley is Professor of Education at the University of Auckland. The primary focus of her research is promotion of professional and organizational learning in schools for the purpose of improving student learning. She has …",
"title": ""
}
] |
[
{
"docid": "1a0d2b5a7421bcca3ee44885b2940a19",
"text": "Genome editing is a powerful technique for genome modification in molecular research and crop breeding, and has the great advantage of imparting novel desired traits to genetic resources. However, the genome editing of fruit tree plantlets remains to be established. In this study, we describe induction of a targeted gene mutation in the endogenous apple phytoene desaturase (PDS) gene using the CRISPR/Cas9 system. Four guide RNAs (gRNAs) were designed and stably transformed with Cas9 separately in apple. Clear and partial albino phenotypes were observed in 31.8% of regenerated plantlets for one gRNA, and bi-allelic mutations in apple PDS were confirmed by DNA sequencing. In addition, an 18-bp gRNA also induced a targeted mutation. These CRISPR/Cas9-induced mutations in the apple genome suggest activation of the NHEJ pathway, but with some involvement also of the HR pathway. Our results demonstrate that genome editing can be practically applied to modify the apple genome.",
"title": ""
},
{
"docid": "306a833c0130678e1b2ece7e8b824d5e",
"text": "In many natural languages, there are clear syntactic and/or intonational differences between declarative sentences, which are primarily used to provide information, and interrogative sentences, which are primarily used to request information. Most logical frameworks restrict their attention to the former. Those that are concerned with both usually assume a logical language that makes a clear syntactic distinction between declaratives and interrogatives, and usually assign different types of semantic values to these two types of sentences. A different approach has been taken in recent work on inquisitive semantics. This approach does not take the basic syntactic distinction between declaratives and interrogatives as its starting point, but rather a new notion of meaning that captures both informative and inquisitive content in an integrated way. The standard way to treat the logical connectives in this approach is to associate them with the basic algebraic operations on these new types of meanings. For instance, conjunction and disjunction are treated as meet and join operators, just as in classical logic. This gives rise to a hybrid system, where sentences can be both informative and inquisitive at the same time, and there is no clearcut division between declaratives and interrogatives. It may seem that these two general approaches in the existing literature are quite incompatible. The main aim of this paper is to show that this is not the case. We develop an inquisitive semantics for a logical language that has a clearcut division between declaratives and interrogatives. We show that this language coincides in expressive power with the hybrid language that is standardly assumed in inquisitive semantics, we establish a sound and complete axiomatization for the associated logic, and we consider a natural enrichment of the system with presuppositional interrogatives.",
"title": ""
},
{
"docid": "33285813f1b3f2c13c711447199ed75d",
"text": "This paper describes the dotplot data visualization technique and its potential for contributing to the identification of design patterns. Pattern languages have been used in architectural design and urban planning to codify related rules-of-thumb for constructing vernacular buildings and towns. When applied to software design, pattern languages promote reuse while allowing novice designers to learn from the insights of experts. Dotplots have been used in biology to study similarity in genetic sequences. When applied to software, dotplots identify patterns that range in abstraction from the syntax of programming languages to the organizational uniformity of large, multi-component systems. Dotplots are useful for design by successive abstraction—replacing duplicated code with macros, subroutines, or classes. Dotplots reveal a pervasive design pattern for simplifying algorithms by increasing the complexity of initializations. Dotplots also reveal patterns of wordiness in languages—one example inspired a design pattern for a new programming language. In addition, dotplots of data associated with programs identify dynamic usage patterns—one example identifies a design pattern used in the construction of a UNIX(tm) file system.",
"title": ""
},
{
"docid": "064c86deca4955f09e12d9b9d0afc4e8",
"text": "This paper presents a new classification framework for brain-computer interface (BCI) based on motor imagery. This framework involves the concept of Riemannian geometry in the manifold of covariance matrices. The main idea is to use spatial covariance matrices as EEG signal descriptors and to rely on Riemannian geometry to directly classify these matrices using the topology of the manifold of symmetric and positive definite (SPD) matrices. This framework allows the spatial information contained in EEG signals to be extracted without using spatial filtering. Two methods are proposed and compared with a reference method [multiclass Common Spatial Pattern (CSP) and Linear Discriminant Analysis (LDA)] on the multiclass dataset IIa from the BCI Competition IV. The first method, named minimum distance to Riemannian mean (MDRM), is an implementation of the minimum distance to mean (MDM) classification algorithm using the Riemannian distance and Riemannian mean. This simple method shows comparable results with the reference method. The second method, named tangent space LDA (TSLDA), maps the covariance matrices onto the Riemannian tangent space where matrices can be vectorized and treated as Euclidean objects. Then, a variable selection procedure is applied in order to decrease dimensionality and a classification by LDA is performed. This latter method outperforms the reference method, increasing the mean classification accuracy from 65.1% to 70.2%.",
"title": ""
},
{
"docid": "8bde43670fd9c68bbb359531938f9b55",
"text": "An 8b 1GS/s ADC is presented that interleaves two 2b/cycle SARs. To enhance speed and save power, the prototype utilizes segmentation switching and custom-designed DAC array with high density in a low parasitic layout structure. It operates at 1GS/s from 1V supply without interleaving calibration and consumes 3.8mW of power, exhibiting a FoM of 24fJ/conversion step. The ADC occupies an active area of 0.013mm2 in 65nm CMOS including on-chip offset calibration.",
"title": ""
},
{
"docid": "d1ba8ad56a6227f771f9cef8139e9f15",
"text": "We study sentiment analysis beyond the typical granularity of polarity and instead use Plutchik’s wheel of emotions model. We introduce RBEM-Emo as an extension to the Rule-Based Emission Model algorithm to deduce such emotions from human-written messages. We evaluate our approach on two different datasets and compare its performance with the current state-of-the-art techniques for emotion detection, including a recursive autoencoder. The results of the experimental study suggest that RBEM-Emo is a promising approach advancing the current state-of-the-art in emotion detection.",
"title": ""
},
{
"docid": "3e9f98a1aa56e626e47a93b7973f999a",
"text": "This paper presents a sociocultural knowledge ontology (OntoSOC) modeling approach. The OntoSOC modeling approach is based on Engeström's Human Activity Theory (HAT). That theory allowed us to identify fundamental concepts and the relationships between them. A top-down process has been used to define the different sub-concepts. The modeled vocabulary permits us to organise data and to facilitate information retrieval by introducing a semantic layer into the social web platform architecture we plan to implement. This platform can be considered as a “collective memory” and a Participative and Distributed Information System (PDIS), which will allow Cameroonian communities to share and co-construct knowledge on permanent organized activities.",
"title": ""
},
{
"docid": "fb588b5df4e8167153f3f45be5cf4b6c",
"text": "This paper is a study of consumer resistance among active abstainers of the Facebook social network site. I analyze the discourses invoked by individuals who consciously choose to abstain from participation on the ubiquitous Facebook platform. This discourse analysis draws from approximately 100 web and print publications from 2006 to early 2012, as well as personal interviews conducted with 20 Facebook abstainers. I conceptualize Facebook abstention as a performative mode of resistance, which must be understood within the context of a neoliberal consumer culture, in which subjects are empowered to act through consumption choices – or in this case non-consumption choices – and through the public display of those choices. I argue that such public displays are always at risk of misinterpretation due to the dominant discursive frameworks through which abstention is given meaning. This paper gives particular attention to the ways in which connotations of taste and distinction are invoked by refusers through their conspicuous displays of non-consumption. This has the effect of framing refusal as a performance of elitism, which may work against observers interpreting conscientious refusal as a persuasive and emulable practice of critique. The implication of this is that refusal is a limited tactic of political engagement where media platforms are concerned.",
"title": ""
},
{
"docid": "3a1d66cdc06338857fc685a2bdc8b068",
"text": "UNLABELLED\nThe WARM study is a longitudinal cohort study following infants of mothers with schizophrenia, bipolar disorder, depression and control from pregnancy to infant 1 year of age.\n\n\nBACKGROUND\nChildren of parents diagnosed with complex mental health problems including schizophrenia, bipolar disorder and depression, are at increased risk of developing mental health problems compared to the general population. Little is known regarding the early developmental trajectories of infants who are at ultra-high risk and in particular the balance of risk and protective factors expressed in the quality of early caregiver-interaction.\n\n\nMETHODS/DESIGN\nWe are establishing a cohort of pregnant women with a lifetime diagnosis of schizophrenia, bipolar disorder, major depressive disorder and a non-psychiatric control group. Factors in the parents, the infant and the social environment will be evaluated at 1, 4, 16 and 52 weeks in terms of evolution of very early indicators of developmental risk and resilience focusing on three possible environmental transmission mechanisms: stress, maternal caregiver representation, and caregiver-infant interaction.\n\n\nDISCUSSION\nThe study will provide data on very early risk developmental status and associated psychosocial risk factors, which will be important for developing targeted preventive interventions for infants of parents with severe mental disorder.\n\n\nTRIAL REGISTRATION\nNCT02306551, date of registration November 12, 2014.",
"title": ""
},
{
"docid": "fd48614d255b7c7bc7054b4d5de69a15",
"text": "Article history: Received 31 December 2007 Received in revised form 12 December 2008 Accepted 3 January 2009",
"title": ""
},
{
"docid": "8c729366391133065c3d7a9b2b22fe23",
"text": "Mobile devices generate massive amounts of data that is used to get an insight into the user behavior by enterprise systems. Data privacy is a concern in such systems as users have little control over the data that is generated by them. Blockchain systems offer ways to ensure privacy and security of the user data with the implementation of an access control mechanism. In this demonstration, we present ChainMOB, a mobility analytics application that is built on top of blockchain and addresses the fundamental privacy and security concerns in enterprise systems. Further, the extent of data sharing along with the intended audience is also controlled by the user. Another exciting feature is that user is part of the business model and is incentivized for sharing the personal mobility data. The system also supports queries that can be used in a variety of application domains.",
"title": ""
},
{
"docid": "455e3f0c6f755d78ecafcdff14c46014",
"text": "BACKGROUND\nIn neonatal and early childhood surgeries such as meningomyelocele repairs, closing deep wounds and oncological treatment, tensor fasciae lata (TFL) flaps are used. However, there are not enough data about structural properties of TFL in foetuses, which can be considered as the closest to neonates in terms of sampling. This study's main objective is to gather data about morphological structures of TFL in human foetuses to be used in newborn surgery.\n\n\nMATERIALS AND METHODS\nFifty formalin-fixed foetuses (24 male, 26 female) with gestational age ranging from 18 to 30 weeks (mean 22.94 ± 3.23 weeks) were included in the study. TFL samples were obtained by bilateral dissection and then surface area, width and length parameters were recorded. Digital callipers were used for length and width measurements whereas surface area was calculated using digital image analysis software.\n\n\nRESULTS\nNo statistically significant differences were found in terms of numerical value of parameters between sides and sexes (p > 0.05). Linear functions for TFL surface area, width, anterior and posterior margin lengths were calculated as y = -225.652 + 14.417 × age (weeks), y = -5.571 + 0.595 × age (weeks), y = -4.276 + 0.909 × age (weeks), and y = -4.468 + 0.779 × age (weeks), respectively.\n\n\nCONCLUSIONS\nLinear functions for TFL surface area, width and lengths can be used in designing TFL flap dimensions in newborn surgery. In addition, using those described linear functions can also be beneficial in prediction of TFL flap dimensions in autopsy studies.",
"title": ""
},
{
"docid": "5959decfd357faa3ea76fe72e6197344",
"text": "Deep Learning architectures, such as deep neural networks, are currently among the hottest emerging areas of data science, especially for Big Data. Deep Learning could be effectively exploited to address some major issues of Big Data, including extracting complex patterns from huge volumes of data, fast information retrieval, data classification, semantic indexing and so on. In this work, we designed and implemented a framework to train deep neural networks using Spark, a fast and general data flow engine for large-scale data processing. The design is similar to Google's software framework called DistBelief, which can utilize computing clusters with thousands of machines to train large-scale deep networks. Training Deep Learning models requires extensive data and computation. Our proposed framework can accelerate the training time by distributing the model replicas, via stochastic gradient descent, among cluster nodes for data residing on HDFS.",
"title": ""
},
{
"docid": "b4fd6a1a3424e983928be16e76262913",
"text": "In this paper, a common grounded Z-source dc-dc converter with high voltage gain is proposed for photovoltaic (PV) applications, which require a relatively high output-input voltage conversion ratio. Compared with the traditional Z-source dc-dc converter, the proposed converter, which employs a conventional Z-source network, can obtain higher voltage gain and provide the common ground for the input and output without any additional components, which results in low cost and small size. Moreover, the proposed converter features low voltage stresses of the switch and diodes. Therefore, the efficiency and reliability of the proposed converter can be improved. The operating principle, parameters design, and comparison with other converters are analyzed. Simulation and experimental results are given to verify the aforementioned characteristics and theoretical analysis of the proposed converter.",
"title": ""
},
{
"docid": "8cf02bf19145df237e77273e70babc1d",
"text": "Micro-facial expressions are spontaneous, involuntary movements of the face when a person experiences an emotion but attempts to hide their facial expression, most likely in a high-stakes environment. Recently, research in this field has grown in popularity, however publicly available datasets of micro-expressions have limitations due to the difficulty of naturally inducing spontaneous micro-expressions. Other issues include lighting, low resolution and low participant diversity. We present a newly developed spontaneous micro-facial movement dataset with diverse participants and coded using the Facial Action Coding System. The experimental protocol addresses the limitations of previous datasets, including eliciting emotional responses from stimuli tailored to each participant. Dataset evaluation was completed by running preliminary experiments to classify micro-movements from non-movements. Results were obtained using a selection of spatio-temporal descriptors and machine learning. We further evaluate the dataset on emerging methods of feature difference analysis and propose an Adaptive Baseline Threshold that uses individualised neutral expression to improve the performance of micro-movement detection. In contrast to machine learning approaches, we outperform the state of the art with a recall of 0.91. The outcomes show the dataset can become a new standard for micro-movement data, with future work expanding on data representation and analysis.",
"title": ""
},
{
"docid": "476bd671b982450d6d1f6c8d7936bcb5",
"text": "Walter Thiel developed the method that enables preservation of the body with natural colors in 1992. It consists in the application of an intravascular injection formula, and maintaining the corpse submerged for a determinate period of time in the immersion solution in the pool. After immersion, it is possible to maintain the corpse in a hermetically sealed container, thus avoiding dehydration outside the pool. The aim of this work was to review the Thiel method, searching all scientific articles describing this technique from its development point of view, and its application in anatomy and morphology teaching, as well as in clinical and surgical practice. Most of these studies were carried out in Europe. We used the PubMed, Ebsco and Embase databases with the terms “Thiel cadaver”, “Thiel embalming”, “Thiel embalming method” and we searched for papers that cited Thiel's work. In comparison with methods commonly used with high concentrations of formaldehyde, this method lacks the emanation of noxious or irritating gases; gives the corpse important passive joint mobility without stiffness; and maintains color, flexibility and tissue plasticity at a level equivalent to that of a living body. Furthermore, it allows vascular repletion at the capillary level. All this makes for a great advantage over formalin-fixed and fresh material. Its multiple uses are applicable in anatomy teaching and research: teaching for undergraduates (prosection and dissection) and training in surgical techniques for graduates and specialists (laparoscopies, arthroscopies, endoscopies).",
"title": ""
},
{
"docid": "516ef94fad7f7e5801bf1ef637ffb136",
"text": "With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.",
"title": ""
},
{
"docid": "c26f06abb768c7b6d1a22172078aaf00",
"text": "In complex conversation tasks, people react to their interlocutor’s state, such as uncertainty and engagement to improve conversation effectiveness [2]. If a conversational system reacts to a user’s state, would that lead to a better conversation experience? To test this hypothesis, we designed and implemented a dialog system that tracks and reacts to a user’s state, such as engagement, in real time. We designed and implemented a conversational job interview task based on the proposed framework. The system acts as an interviewer and reacts to user’s disengagement in real-time with positive feedback strategies designed to re-engage the user in the job interview process. Experiments suggest that users speak more while interacting with the engagement-coordinated version of the system as compared to a noncoordinated version. Users also reported the former system as being more engaging and providing a better user experience.",
"title": ""
},
{
"docid": "fb05091e8badfc8e60f69441da1eb60d",
"text": "Learning-based methods have demonstrated clear advantages in controlling robot tasks, such as information fusion abilities, strong robustness, and high accuracy. Meanwhile, the on-board systems of robots have limited computation and energy resources, which are contradictory with state-of-the-art learning approaches. They are either too lightweight to solve complex problems or too heavyweight to be used for mobile applications. On the other hand, training spiking neural networks (SNNs) with biological plausibility has great potential for fast computation and energy efficiency. However, the lack of effective learning rules for SNNs impedes their wide usage in mobile robot applications. This paper addresses the problem by introducing an end-to-end learning approach of spiking neural networks for a lane-keeping vehicle. We consider reward-modulated spike-timing-dependent plasticity (R-STDP) as a promising solution for training SNNs, since it combines the advantages of both reinforcement learning and the well-known STDP. We test our approach in three scenarios in which a Pioneer robot is controlled to keep lanes based on an SNN. Specifically, the lane information is encoded by the event data from a neuromorphic vision sensor. The SNN is constructed using R-STDP synapses in an all-to-all fashion. We demonstrate the advantages of our approach in terms of lateral localization accuracy by comparing with other state-of-the-art learning algorithms based on SNNs.",
"title": ""
}
] |
scidocsrr
|
1737341bfdc3a0973a3443b95f779552
|
Observation-Level Interaction with Clustering and Dimension Reduction Algorithms
|
[
{
"docid": "f6266e5c4adb4fa24cc353dccccaf6db",
"text": "Clustering plays an important role in many large-scale data analyses, providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widely-used topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With the help of these visualization modules, we can interactively refine the clustering results in various ways.",
"title": ""
},
{
"docid": "cff44da2e1038c8e5707cdde37bc5461",
"text": "Visual analytics emphasizes sensemaking of large, complex datasets through interactively exploring visualizations generated by statistical models. For example, dimensionality reduction methods use various similarity metrics to visualize textual document collections in a spatial metaphor, where similarities between documents are approximately represented through their relative spatial distances to each other in a 2D layout. This metaphor is designed to mimic analysts' mental models of the document collection and support their analytic processes, such as clustering similar documents together. However, in current methods, users must interact with such visualizations using controls external to the visual metaphor, such as sliders, menus, or text fields, to directly control underlying model parameters that they do not understand and that do not relate to their analytic process occurring within the visual metaphor. In this paper, we present the opportunity for a new design space for visual analytic interaction, called semantic interaction, which seeks to enable analysts to spatially interact with such models directly within the visual metaphor using interactions that derive from their analytic process, such as searching, highlighting, annotating, and repositioning documents. Further, we demonstrate how semantic interactions can be implemented using machine learning techniques in a visual analytic tool, called ForceSPIRE, for interactive analysis of textual data within a spatial visualization. Analysts can express their expert domain knowledge about the documents by simply moving them, which guides the underlying model to improve the overall layout, taking the user's feedback into account.",
"title": ""
},
{
"docid": "0ee744ad3c75f7bb9695c47165d87043",
"text": "Clustering is a critical component of many data analysis tasks, but is exceedingly difficult to fully automate. To better incorporate domain knowledge, researchers in machine learning, human-computer interaction, visualization, and statistics have independently introduced various computational tools to engage users through interactive clustering. In this work-in-progress paper, we present a cross-disciplinary literature survey, and find that existing techniques often do not meet the needs of real-world data analysis. Semi-supervised machine learning algorithms often impose prohibitive user interaction costs or fail to account for external analysis requirements. Human-centered approaches and user interface designs often fall short because of their insufficient statistical modeling capabilities. Drawing on effective approaches from each field, we identify five characteristics necessary to support effective human-in-the-loop interactive clustering: iterative, multi-objective, local updates that can operate on any initial clustering and a dynamic set of features. We outline key aspects of our technique currently under development, and share our initial evidence suggesting that all five design considerations can be incorporated into a single algorithm. We plan to demonstrate our technique on three data analysis tasks: feature engineering for classification, exploratory analysis of biomedical data, and multi-document summarization.",
"title": ""
}
] |
[
{
"docid": "948ac7d5527cfcb978087f1465a918e6",
"text": "We investigate automatic analysis of teachers' instructional strategies from audio recordings collected in live classrooms. We collected a data set of teacher audio and human-coded instructional activities (e.g., lecture, question and answer, group work) in 76 middle school literature, language arts, and civics classes from eleven teachers across six schools. We automatically segment teacher audio to analyze speech vs. rest patterns, generate automatic transcripts of the teachers' speech to extract natural language features, and compute low-level acoustic features. We train supervised machine learning models to identify occurrences of five key instructional segments (Question & Answer, Procedures and Directions, Supervised Seatwork, Small Group Work, and Lecture) that collectively comprise 76% of the data. Models are validated independently of teacher in order to increase generalizability to new teachers from the same sample. We were able to identify the five instructional segments above chance levels with F1 scores ranging from 0.64 to 0.78. We discuss key findings in the context of teacher modeling for formative assessment and professional development.",
"title": ""
},
{
"docid": "51d950dfb9f71b9c8948198c147b9884",
"text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.",
"title": ""
},
{
"docid": "ddb70c486a7974f7ba1dc3e5ca623fc0",
"text": "Activity recognition from on-body sensors is affected by sensor degradation, interconnection failures, and jitter in sensor placement and orientation. We investigate how this may be balanced by exploiting redundant sensors distributed on the body. We recognize activities by a meta-classifier that fuses the information of simple classifiers operating on individual sensors. We investigate the robustness to faults and the sensor scalability which follow from classifier fusion. We compare a reference majority voting scheme and a naive Bayesian fusion scheme. We validate this approach by recognizing a set of 10 activities carried out by workers in the quality assurance checkpoint of a car assembly line. Results show that classification accuracy greatly increases with additional sensors (50% with 1 sensor, 80% and 98% with 3 and 57 sensors), and that sensor fusion implicitly allows compensation for typical faults up to high fault rates. These results highlight the benefit of large on-body sensor networks rather than a minimum set of sensors for activity recognition and prompt further investigation.",
"title": ""
},
{
"docid": "8a6e062d17ee175e00288dd875603a9c",
"text": "Code summarization, aiming to generate succinct natural language description of source code, is extremely useful for code search and code comprehension. It has played an important role in software maintenance and evolution. Previous approaches generate summaries by retrieving summaries from similar code snippets. However, these approaches heavily rely on whether similar code snippets can be retrieved, how similar the snippets are, and fail to capture the API knowledge in the source code, which carries vital information about the functionality of the source code. In this paper, we propose a novel approach, named TL-CodeSum, which successfully uses API knowledge learned in a different but related task to code summarization. Experiments on large-scale real-world industry Java projects indicate that our approach is effective and outperforms the state-of-the-art in code summarization.",
"title": ""
},
{
"docid": "edfcb2f1f2afcfd2656c2985898867df",
"text": "AJAX is a web development technique for building responsive web applications. The paper gives an overview of the AJAX technique and explores ideas for teaching this technique in modules related to Internet technologies and web development. Appropriate examples for use in lab sessions are also suggested.",
"title": ""
},
{
"docid": "2c56891c1c9f128553bab35d061049b8",
"text": "RISC vs. CISC wars raged in the 1980s when chip area and processor design complexity were the primary constraints and desktops and servers exclusively dominated the computing landscape. Today, energy and power are the primary design constraints and the computing landscape is significantly different: growth in tablets and smartphones running ARM (a RISC ISA) is surpassing that of desktops and laptops running x86 (a CISC ISA). Further, the traditionally low-power ARM ISA is entering the high-performance server market, while the traditionally high-performance x86 ISA is entering the mobile low-power device market. Thus, the question of whether ISA plays an intrinsic role in performance or energy efficiency is becoming important, and we seek to answer this question through a detailed measurement based study on real hardware running real applications. We analyze measurements on the ARM Cortex-A8 and Cortex-A9 and Intel Atom and Sandybridge i7 microprocessors over workloads spanning mobile, desktop, and server computing. Our methodical investigation demonstrates the role of ISA in modern microprocessors' performance and energy efficiency. We find that ARM and x86 processors are simply engineering design points optimized for different levels of performance, and there is nothing fundamentally more energy efficient in one ISA class or the other. The ISA being RISC or CISC seems irrelevant.",
"title": ""
},
{
"docid": "878cd4545931099ead5df71076afc731",
"text": "The pioneer deep neural networks (DNNs) have emerged to be deeper or wider for improving their accuracy in various applications of artificial intelligence. However, DNNs are often too heavy to deploy in practice, and it is often required to control their architectures dynamically given a computing resource budget, i.e., anytime prediction. While most existing approaches have focused on training multiple shallow sub-networks jointly, we study training thin sub-networks instead. To this end, we first build many inclusive thin sub-networks (of the same depth) under a minor modification of existing multi-branch DNNs, and found that they can significantly outperform the state-of-the-art dense architecture for anytime prediction. This is remarkable due to their simplicity and effectiveness, but training many thin sub-networks jointly faces a new challenge on training complexity. To address the issue, we also propose a novel DNN architecture by forcing a certain sparsity pattern on multi-branch network parameters, making them train efficiently for the purpose of anytime prediction. In our experiments on the ImageNet dataset, its sub-networks have up to 43.3% smaller sizes (FLOPs) compared to those of the state-of-the-art anytime model at the same accuracy. Finally, we also propose an alternative task under the proposed architecture using a hierarchical taxonomy, which brings a new angle for anytime prediction.",
"title": ""
},
{
"docid": "1212637c91d8c57299c922b6bde91ce8",
"text": "BACKGROUND\nIn the late 1980's, occupational science was introduced as a basic discipline that would provide a foundation for occupational therapy. As occupational science grows and develops, some question its relationship to occupational therapy and criticize the direction and extent of its growth and development.\n\n\nPURPOSE\nThis study was designed to describe and critically analyze the growth and development of occupational science and characterize how this has shaped its current status and relationship to occupational therapy.\n\n\nMETHOD\nUsing a mixed methods design, 54 occupational science documents published in the years 1990 and 2000 were critically analyzed to describe changes in the discipline between two points in time. Data describing a range of variables related to authorship, publication source, stated goals for occupational science and type of research were collected.\n\n\nRESULTS\nDescriptive statistics, themes and future directions are presented and discussed.\n\n\nPRACTICE IMPLICATIONS\nThrough the support of a discipline that is dedicated to the pursuit of a full understanding of occupation, occupational therapy will help to create a new and complex body of knowledge concerning occupation. However, occupational therapy must continue to make decisions about how knowledge produced within occupational science and other disciplines can be best used in practice.",
"title": ""
},
{
"docid": "ae7405600f7cf3c7654cc2db73a22340",
"text": "The usual approach for automatic summarization is sentence extraction, where key sentences from the input documents are selected based on a suite of features. While word frequency often is used as a feature in summarization, its impact on system performance has not been isolated. In this paper, we study the contribution to summarization of three factors related to frequency: content word frequency, composition functions for estimating sentence importance from word frequency, and adjustment of frequency weights based on context. We carry out our analysis using datasets from the Document Understanding Conferences, studying not only the impact of these features on automatic summarizers, but also their role in human summarization. Our research shows that a frequency based summarizer can achieve performance comparable to that of state-of-the-art systems, but only with a good composition function; context sensitivity improves performance and significantly reduces repetition.",
"title": ""
},
{
"docid": "aad3945a69f57049c052bcb222f1b772",
"text": "Chapter 1, on Social Media and Social Computing, documents the nature and characteristics of social networks and community detection. An explanation of the emergence of social networks and their properties constitutes this chapter, followed by a discussion of social communities. The nodes, ties, and influence in social networks are the core of the discussion in the second chapter. Centrality is the central topic here, and degree centrality and its measurement are explained. Understanding network topology is required for social network concepts.",
"title": ""
},
{
"docid": "1623cdb614ad63675d982e8396e4ff01",
"text": "Recognizing textual entailment is a fundamental task in a variety of text mining or natural language processing applications. This paper proposes a simple neural model for the RTE problem. It first matches each word in the hypothesis with its most-similar word in the premise, producing an augmented representation of the hypothesis conditioned on the premise as a sequence of word pairs. An LSTM model is then used to model this augmented sequence, and the final output from the LSTM is fed into a softmax layer to make the prediction. Besides the base model, in order to enhance its performance, we also propose three techniques: the integration of multiple word-embedding libraries, bi-way integration, and an ensemble based on model averaging. Experimental results on the SNLI dataset have shown that the three techniques are effective in boosting the predictive accuracy and that our method outperforms several state-of-the-art ones.",
"title": ""
},
{
"docid": "64fbffe75209359b540617fac4930c44",
"text": "Recent developments in information technology have enabled collection and processing of vast amounts of personal data, such as criminal records, shopping habits, credit and medical history, and driving records. This information is undoubtedly very useful in many areas, including medical research, law enforcement and national security. However, there is an increasing public concern about individuals' privacy. Privacy is commonly seen as the right of individuals to control information about themselves. The appearance of technology for Knowledge Discovery and Data Mining (KDDM) has revitalized concern about the following general privacy issues: • secondary use of personal information, • handling misinformation, and • granulated access to personal information. These issues demonstrate that existing privacy laws and policies are well behind the developments in technology and no longer offer adequate protection. We also discuss new privacy threats posed by KDDM, which include massive data collection, data warehouses, statistical analysis and deductive learning techniques. KDDM uses vast amounts of data to generate hypotheses and discover general patterns, and thereby poses new challenges to privacy.",
"title": ""
},
{
"docid": "b0b11a794a35bec71f88cc1ef8405dc4",
"text": "In this work, we present a novel method for capturing human body shape from a single scaled silhouette. We combine deep correlated features capturing different 2D views, and embedding spaces based on 3D cues in a novel convolutional neural network (CNN) based architecture. We first train a CNN to find a richer body shape representation space from pose invariant 3D human shape descriptors. Then, we learn a mapping from silhouettes to this representation space, with the help of a novel architecture that exploits correlation of multi-view data during training time, to improve prediction at test time. We extensively validate our results on synthetic and real data, demonstrating significant improvements in accuracy as compared to the state-of-the-art, and providing a practical system for detailed human body measurements from a single image.",
"title": ""
},
{
"docid": "9a9dc194e0ca7d1bb825e8aed5c9b4fe",
"text": "In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.",
"title": ""
},
{
"docid": "cf015ef9181bf2fcf39eb41f7fa9196e",
"text": "Channel estimation is useful in millimeter wave (mmWave) MIMO communication systems. Channel state information allows optimized designs of precoders and combiners under different metrics such as mutual information or signal-to-interference-plus-noise ratio (SINR). At mmWave, MIMO precoders and combiners are usually hybrid, since this architecture provides a means to trade off power consumption and achievable rate. Channel estimation is challenging when using these architectures, however, since there is no direct access to the outputs of the different antenna elements in the array. The MIMO channel can only be observed through the analog combining network, which acts as a compression stage for the received signal. Most prior work on channel estimation for hybrid architectures assumes a frequency-flat mmWave channel model. In this paper, we consider a frequency-selective mmWave channel and propose compressed-sensing-based strategies to estimate the channel in the frequency domain. We evaluate different algorithms and compute their complexity to expose trade-offs in complexity, overhead, and performance as compared to those of previous approaches. This work was partially funded by the Agencia Estatal de Investigación (Spain) and the European Regional Development Fund (ERDF) under project MYRADA (TEC2016-75103-C2-2-R), the U.S. Department of Transportation through the Data-Supported Transportation Operations and Planning (D-STOP) Tier 1 University Transportation Center, by the Texas Department of Transportation under Project 0-6877 entitled Communications and Radar-Supported Transportation Operations and Planning (CAR-STOP) and by the National Science Foundation under Grant NSF-CCF-1319556 and NSF-CCF-1527079. arXiv:1704.08572v1 [cs.IT] 27 Apr 2017",
"title": ""
},
{
"docid": "f9be959b4c2392f7fc1dff2a1bde4dae",
"text": "This paper presents a new Web-based system, Mooshak, to handle programming contests. The system acts as a full contest manager as well as an automatic judge for programming contests. Mooshak innovates in a number of aspects: it has a scalable architecture that can be used from small single server contests to complex multi-site contests with simultaneous public online contests and redundancy; it has a robust data management system favoring simple procedures for storing, replicating, backing up data and failure recovery using persistent objects; it has automatic judging capabilities to assist human judges in the evaluation of programs; it has built-in safety measures to prevent users from interfering with the normal progress of contests. Mooshak is an open system implemented on the Linux operating system using the Apache HTTP server and the Tcl scripting language. This paper starts by describing the main features of the system and its architecture with reference to the automated judging, data management based on the replication of persistent objects over a network. Finally, we describe our experience using this system for managing two official programming contests. Copyright c © 2003 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "31f838fb0c7db7e8b58fb1788d5554c8",
"text": "Today’s smartphones operate independently of each other, using only local computing, sensing, networking, and storage capabilities and functions provided by remote Internet services. It is generally difficult or expensive for one smartphone to share data and computing resources with another. Data is shared through centralized services, requiring expensive uploads and downloads that strain wireless data networks. Collaborative computing is only achieved using ad hoc approaches. Coordinating smartphone data and computing would allow mobile applications to utilize the capabilities of an entire smartphone cloud while avoiding global network bottlenecks. In many cases, processing mobile data in-place and transferring it directly between smartphones would be more efficient and less susceptible to network limitations than offloading data and processing to remote servers. We have developed Hyrax, a platform derived from Hadoop that supports cloud computing on Android smartphones. Hyrax allows client applications to conveniently utilize data and execute computing jobs on networks of smartphones and heterogeneous networks of phones and servers. By scaling with the number of devices and tolerating node departure, Hyrax allows applications to use distributed resources abstractly, oblivious to the physical nature of the cloud. The design and implementation of Hyrax is described, including experiences in porting Hadoop to the Android platform and the design of mobilespecific customizations. The scalability of Hyrax is evaluated experimentally and compared to that of Hadoop. Although the performance of Hyrax is poor for CPU-bound tasks, it is shown to tolerate node-departure and offer reasonable performance in data sharing. A distributed multimedia search and sharing application is implemented to qualitatively evaluate Hyrax from an application development perspective.",
"title": ""
},
{
"docid": "7adffc2dd1d6412b4bb01b38ced51c24",
"text": "With the popularity of the Internet and mobile intelligent terminals, the number of mobile applications is exploding. Mobile intelligent terminals are replacing PCs as the mainstream way people work and live online. While mobile application systems provide convenience, they inevitably bring security problems and have become a main target of hackers. Therefore, it is imminent to strengthen the security detection of mobile applications. This paper divides mobile application security detection into client security detection and server security detection. We propose a combined static and dynamic security detection method for the client side. We provide a method to obtain network information about the server by capturing and analyzing mobile application traffic, and propose a fuzz testing method based on the HTTP protocol to detect server-side security vulnerabilities. Finally, on this basis, an automated platform for security detection of mobile application systems is developed. Experiments show that the platform can detect the vulnerabilities of the mobile application client and server effectively, and realize the automation of mobile application security detection. It can also reduce the cost of mobile security detection and enhance the security of mobile applications.",
"title": ""
},
{
"docid": "26d5237c912977223e0ba45c0f949e3d",
"text": "Generally speaking, 'education' is used in three senses: as knowledge, as a subject, and as a process. When a person achieves a degree up to a certain level, we do not call that education. For example, if a person has secured a Master's degree, then we use 'education' in a very narrow sense and say that the person has achieved education up to the Master's level. In the second sense, education is used to mean a discipline. For example, if a person has taken education as a paper or as a discipline during his study in any institution, then we use 'education' as a subject. In the third sense, education is used as a process. In fact, when we talk of education, we talk in this third sense, i.e., education as a process. Thus we ask: what is education as a process, and what is its importance? The following discussion will treat education in this sense, i.e., education as a process.",
"title": ""
},
{
"docid": "7e75bbbf5e86edc396aaa9d9db02c509",
"text": "Background: In recent years, blockchain technology has attracted considerable attention. It records cryptographic transactions in a public ledger that is difficult to alter and compromise because of the distributed consensus. As a result, blockchain is believed to resist fraud and hacking. Results: This work explores the types of fraud and malicious activities that can be prevented by blockchain technology and identifies attacks to which blockchain remains vulnerable. Conclusions: This study recommends appropriate defensive measures and calls for further research into the techniques for fighting malicious activities related to blockchains.",
"title": ""
}
],
"subset": "scidocsrr"
}